Sample records for camera link interface

  1. Adaptation of the Camera Link Interface for Flight-Instrument Applications

    NASA Technical Reports Server (NTRS)

    Randall, David P.; Mahoney, John C.

    2010-01-01

    COTS (commercial-off-the-shelf) hardware using an industry-standard Camera Link interface is proposed to accomplish the task of designing, building, assembling, and testing electronics for an airborne spectrometer that would be low-cost but sustain the required data speed and volume. The focal plane electronics were designed to support that hardware standard. Analysis was done to determine how these COTS electronics could be interfaced with space-qualified camera electronics. Interfaces available for spaceflight applications do not support the industry-standard Camera Link interface, but with careful design, COTS EGSE (electronics ground support equipment), including camera interfaces and camera simulators, can still be used.

  2. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are now widely used in electro-optical tracking, electro-optical measurement, fire control and electro-optical countermeasure applications, but the output timing (sequence) of most IR cameras used in engineering projects is complex, and the timing documents supplied by the manufacturers lack detail. Because downstream image transmission and image processing systems need the detailed timing of the IR camera, a sequence measurement system for the IR camera was designed and a detailed measurement method for the applied camera is presented. FPGA programming combined with online observation using the SignalTap tool is employed; the precise timing of the IR camera's output signal is obtained, and detailed documentation is supplied to the downstream image transmission and image processing systems. The sequence measurement system consists of a Camera Link input interface, an LVDS input interface, an FPGA, and a Camera Link output interface, of which the FPGA is the key component. The system accepts video signals in both Camera Link and LVDS formats, and because image processing and image memory cards commonly use Camera Link as their input interface, the output of the measurement system is likewise designed as a Camera Link interface. The system thus performs timing measurement of the IR camera and, for some cameras, interface conversion at the same time. Inside the FPGA, the timing measurement program, the pixel clock modification, the SignalTap file configuration and the SignalTap online observation are integrated to realize precise measurement of the IR camera. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and also determines the line offset and row offset of the image. Despite the complex output timing of the IR camera, the system accurately measures the timing of the camera used in the project, supplies detailed timing documents to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the system obtains precise timing measurements and works stably, laying a foundation for the downstream systems.
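
    The line- and pixel-counting at the heart of such a measurement can be sketched in software before committing it to FPGA fabric. Below is a Python model of the counters; the fval/lval names follow the abstract, while the sampling scheme and toy waveform are illustrative assumptions:

```python
def measure_timing(fval, lval):
    """Count lines per frame and pixels per line from frame-valid
    (fval) and line-valid (lval) signals sampled once per pixel
    clock -- a software model of the FPGA counters."""
    lines_per_frame = 0
    pixels_per_line = 0
    pixel_count = 0
    prev_lval = 0
    for f, l in zip(fval, lval):
        if not f:                # outside the frame: reset edge state
            prev_lval = 0
            continue
        if l and not prev_lval:  # rising edge of lval: a new line starts
            lines_per_frame += 1
            pixel_count = 0
        if l:
            pixel_count += 1     # one valid pixel per clock while lval is high
        elif prev_lval:          # falling edge of lval: line complete
            pixels_per_line = pixel_count
        prev_lval = l
    return lines_per_frame, pixels_per_line

# Toy waveform: 3 lines of 4 pixels, with 2 idle clocks between lines.
lval = ([1] * 4 + [0] * 2) * 3
fval = [1] * len(lval)
print(measure_timing(fval, lval))  # (3, 4)
```

    The same edge-detection logic, expressed in Verilog against the real pixclk, yields the frame geometry and the line/row offsets the abstract reports.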

  3. Packet based serial link realized in FPGA dedicated for high resolution infrared image transmission

    NASA Astrophysics Data System (ADS)

    Bieszczad, Grzegorz

    2015-05-01

    This article describes the external digital interface designed for a thermographic camera built at the Military University of Technology. The aim is to illustrate challenges encountered during the design of a thermal vision camera, especially those related to infrared data processing and transmission. The article explains the main requirements for an interface transferring infrared or video digital data and describes the solution we elaborated, based on the Low Voltage Differential Signaling (LVDS) physical layer and signaling scheme. The link for image transmission is built on an FPGA with built-in high-speed serial transceivers achieving up to 2.5 Gbps throughput. Image transmission is realized using a proprietary packet protocol; the transmission protocol engine was described in VHDL and tested in FPGA hardware. The link can transmit 1280x1024@60Hz 24-bit video data over a single signal pair and was tested by transmitting the thermal-vision camera picture to a remote monitor. Compared to solutions with ASIC-based encoders and decoders realizing video links such as DVI or packet-based DisplayPort, the dedicated video link reduces power consumption while also reducing the wiring needed to establish the link to one pair. The article describes the functions of the modules integrated in the FPGA design: synchronization to the video source, video stream packetization, transceiver interfacing, and dynamic clock generation for video standard conversion.
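
    A quick arithmetic check shows why one pair suffices for the quoted video mode. Taking the transceiver line rate as 2.5 Gbps and assuming 8b/10b line coding (a common choice for FPGA serial transceivers, not stated in the abstract):

```python
# Payload rate for the video mode quoted in the abstract.
width, height, bpp, fps = 1280, 1024, 24, 60
payload_bps = width * height * bpp * fps      # raw pixel data only

# Assumption: 8b/10b coding, so 80% of the line rate carries payload.
line_rate_bps = 2_500_000_000
effective_bps = line_rate_bps * 8 // 10

print(payload_bps)                   # 1887436800 (about 1.9 Gbps)
print(payload_bps < effective_bps)   # True: the mode fits on one pair
```

    This leaves roughly 100 Mbps of headroom for the proprietary packet framing.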

  4. Design of video interface conversion system based on FPGA

    NASA Astrophysics Data System (ADS)

    Zhao, Heng; Wang, Xiang-jun

    2014-11-01

    This paper presents an FPGA-based video interface conversion system that enables inter-conversion between digital and analog video. A Cyclone IV series EP4CE22F17C chip from Altera Corporation is used as the main video processing chip, and a single-chip microcontroller serves as the information interaction control unit between the FPGA and the PC. The system is able to encode/decode messages from the PC. Technologies including the video decoding/encoding circuits, the bus communication protocol, data stream de-interleaving and de-interlacing, color space conversion and the Camera Link timing generator module of the FPGA are introduced. The system converts the Composite Video Broadcast Signal (CVBS) from a CCD camera into Low Voltage Differential Signaling (LVDS), which is then collected by a video processing unit with a Camera Link interface. The processed video signals are then fed to the system output board and displayed on the monitor. Experiments show that the system achieves high-quality video conversion with a minimal board size.
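
    Of the conversion stages listed, color space conversion is the easiest to illustrate. A minimal sketch assuming full-range BT.601 coefficients (the abstract does not specify which matrix the FPGA pipeline uses):

```python
def rgb_to_ycbcr(r, g, b):
    """Full-range BT.601 RGB -> YCbCr -- the kind of color space
    conversion an FPGA video pipeline implements with fixed-point
    multipliers (floating point is used here for clarity)."""
    y  =  0.299    * r + 0.587    * g + 0.114    * b
    cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
    cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
    return round(y), round(cb), round(cr)

print(rgb_to_ycbcr(255, 255, 255))  # (255, 128, 128) for white
print(rgb_to_ycbcr(0, 0, 0))        # (0, 128, 128) for black
```

    In hardware the same matrix is typically quantized to 8- or 10-bit fixed-point coefficients so each output component costs three multipliers and two adders per clock.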

  5. High-performance dual-speed CCD camera system for scientific imaging

    NASA Astrophysics Data System (ADS)

    Simpson, Raymond W.

    1996-03-01

    Traditionally, scientific camera systems were partitioned with a `camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.

  6. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

    EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera based on the TX285 EMCCD. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks for the CCD are provided by the driving board, while the timing generator is located on the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. The ADC and the data transfer circuit are also on the control board and are controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber, and the image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock driver, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

  7. Multifunctional microcontrollable interface module

    NASA Astrophysics Data System (ADS)

    Spitzer, Mark B.; Zavracky, Paul M.; Rensing, Noa M.; Crawford, J.; Hockman, Angela H.; Aquilino, P. D.; Girolamo, Henry J.

    2001-08-01

    This paper reports the development of a complete eyeglass-mounted computer interface system including display, camera and audio subsystems. The display system provides an SVGA image with a 20 degree horizontal field of view. The camera system has been optimized for face recognition and provides a 19 degree horizontal field of view. A microphone and built-in pre-amp optimized for voice recognition and a speaker on an articulated arm are included for audio. An important feature of the system is a high degree of adjustability and reconfigurability. The system has been developed for testing by the Military Police, in a complete system comprising the eyeglass-mounted interface, a wearable computer, and an RF link. Details of the design, construction, and performance of the eyeglass-based system are discussed.

  8. BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications

    NASA Astrophysics Data System (ADS)

    Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard

    2013-06-01

    Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full-rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-M™ imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through the video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers and an optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field-upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.

  9. A CMOS high speed imaging system design based on FPGA

    NASA Astrophysics Data System (ADS)

    Tang, Hong; Wang, Huawei; Cao, Jianzhong; Qiao, Mingrui

    2015-10-01

    CMOS sensors have advantages over traditional CCD sensors, and imaging systems based on CMOS have become a hot spot in research and development. In order to achieve real-time data acquisition and high-speed transmission, we designed a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and we take advantage of a Camera Link interface and the AM41V4 CMOS image sensor to transmit and acquire image data. The AM41V4 is a 4-megapixel, high-speed (500 frames per second) CMOS image sensor with a global shutter and a 4/3" optical format; it uses column-parallel A/D converters to digitize the images. The Camera Link interface adopts the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. The reflected light of objects is captured by the CMOS detector, converted into electronic signals and sent to the FPGA. The FPGA processes the data it receives and transmits it through the Camera Link interface, configured in full mode, to acquisition cards in the host computer, where the images are then stored, visualized and processed. The paper explains the structure and principle of the system and introduces its hardware and software design. The FPGA provides the drive clock for the CMOS sensor; the data from the sensor is converted to LVDS signals and transmitted to the data acquisition cards. After simulation, the paper presents a row transfer timing sequence of the CMOS sensor. The system realizes real-time image acquisition and external control.
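
    The DS90CR287's 28-bit-to-four-stream conversion amounts to 7:1 serialization per channel. A sketch of the idea (the bit-to-channel mapping below is illustrative, not the chip's exact assignment):

```python
def serialize_28_to_4(word28):
    """Camera Link style 7:1 serialization: one 28-bit parallel word
    is split into four 7-bit groups, each shifted out serially at
    7x the pixel clock on its own LVDS pair."""
    assert 0 <= word28 < 1 << 28
    return [(word28 >> (7 * ch)) & 0x7F for ch in range(4)]

def deserialize_4_to_28(streams):
    """Receiver side (frame grabber): reassemble the 28-bit word."""
    word = 0
    for ch, bits in enumerate(streams):
        word |= (bits & 0x7F) << (7 * ch)
    return word

# Round trip: any 28-bit pixel/control word survives the link.
w = 0x0ABCDEF
assert deserialize_4_to_28(serialize_28_to_4(w)) == w
```

    A fifth pair carries the reference clock, which is why a base-configuration Camera Link cable needs only five differential pairs for data and clock.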

  10. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks where low-level processing and feature extraction tasks can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA device is a medium-size part equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
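
    The Laplacian-of-Gaussian pipeline mentioned above is typically prototyped in software before being mapped to FPGA fabric. A pure-Python sketch (the kernel sizes, threshold and test image are illustrative assumptions):

```python
# Gaussian smoothing (kernel sums to 16) followed by a Laplacian,
# then thresholding -- the classic LoG edge-detection pipeline.
GAUSS = [[1, 2, 1], [2, 4, 2], [1, 2, 1]]
LAPLACE = [[0, 1, 0], [1, -4, 1], [0, 1, 0]]

def convolve(img, kernel, scale=1):
    """3x3 convolution over the image interior (borders left at 0),
    mirroring a line-buffered hardware convolver."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            acc = 0
            for ky in range(3):
                for kx in range(3):
                    acc += kernel[ky][kx] * img[y + ky - 1][x + kx - 1]
            out[y][x] = acc // scale
    return out

def log_edges(img, threshold=8):
    smoothed = convolve(img, GAUSS, scale=16)
    lap = convolve(smoothed, LAPLACE)
    return [[1 if abs(v) > threshold else 0 for v in row] for row in lap]

# A vertical step edge: left half dark, right half bright.
img = [[0] * 4 + [100] * 4 for _ in range(8)]
edges = log_edges(img)
```

    In the FPGA version each 3x3 window comes from two line buffers, so the whole pipeline runs at one pixel per clock; running it at a second image scale gives the multi-scale variant the abstract describes.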

  11. In-camera video-stream processing for bandwidth reduction in web inspection

    NASA Astrophysics Data System (ADS)

    Jullien, Graham A.; Li, QiuPing; Hajimowlana, S. Hossain; Morvay, J.; Conflitti, D.; Roberts, James W.; Doody, Brian C.

    1996-02-01

    Automated machine vision systems are now widely used for industrial inspection tasks where video-stream data is taken in by the camera and then sent out to the inspection system for further processing. In this paper we describe a prototype system for on-line programming of arbitrary real-time video data stream bandwidth reduction algorithms; the output of the camera only contains information that has to be further processed by a host computer. The processing system is built into a DALSA CCD camera and uses a microcontroller interface to download bit-stream data to a Xilinx FPGA. The FPGA is directly connected to the video data stream and outputs data to a low-bandwidth output bus. The camera communicates with a host computer via an RS-232 link to the microcontroller. Static memory is used both to provide a FIFO interface for buffering defect burst data and for off-line examination of defect detection data. In addition to providing arbitrary FPGA architectures, the internal program of the microcontroller can also be changed via the host computer and a ROM monitor. This paper describes a prototype system board, mounted inside a DALSA camera, and discusses some of the algorithms currently being implemented for web inspection applications.

  12. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570
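
    The recursive temporal filtering named among the program's capabilities is commonly a first-order lag filter. A sketch under that assumption (the filter form and alpha value are illustrative, not taken from the paper):

```python
def recursive_filter(frames, alpha=0.25):
    """First-order recursive temporal filter of the kind used for
    fluoroscopy noise reduction: out = alpha*new + (1-alpha)*prev.
    Frames are flat lists of pixel values; for a stationary scene
    the output noise variance drops by roughly alpha/(2-alpha)."""
    out = list(frames[0])
    history = [list(out)]
    for frame in frames[1:]:
        out = [alpha * x + (1 - alpha) * y for x, y in zip(frame, out)]
        history.append(list(out))
    return history

# A static pixel corrupted by alternating noise settles toward its mean.
frames = [[100], [120], [100], [120], [100], [120]]
filtered = recursive_filter(frames)
```

    The trade-off is motion lag: a smaller alpha averages more frames and suppresses more quantum noise, but moving stents and guidewires smear, which is why such filters expose alpha as a user-adjustable control.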

  13. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charged-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control the camera modes such as gain and pixel binning as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  14. Automated geo/ortho registered aerial imagery product generation using the mapping system interface card (MSIC)

    NASA Astrophysics Data System (ADS)

    Bratcher, Tim; Kroutil, Robert; Lanouette, André; Lewis, Paul E.; Miller, David; Shen, Sylvia; Thomas, Mark

    2013-05-01

    The development concept paper for the MSIC system was first introduced in August 2012 by these authors. This paper describes the final assembly, testing, and commercial availability of the Mapping System Interface Card (MSIC). The 2.3 kg MSIC is a self-contained, compact, variable-configuration, low-cost, real-time precision metadata annotator with embedded INS/GPS, designed specifically for use in small aircraft. The MSIC was specifically designed to convert commercial-off-the-shelf (COTS) digital cameras and imaging/non-imaging spectrometers with Camera Link standard data streams into mapping systems for airborne emergency response and scientific remote sensing applications. COTS digital cameras and imaging/non-imaging spectrometers covering the ultraviolet through long-wave infrared wavelengths are important tools now readily available and affordable for use by emergency responders and scientists. The MSIC will significantly enhance the capability of emergency responders and scientists by providing a direct transformation of these important COTS sensor tools into low-cost real-time aerial mapping systems.

  15. Metal/Ceramic Interfaces: Relationships Between Structure and Chemistry

    DTIC Science & Technology

    1992-12-31

    using an Eikonix camera linked to the VAX Station 3200. § 3. RESULTS AND DISCUSSION In this section, results are reported and discussed for four aspects...

  16. A USB 2.0 computer interface for the UCO/Lick CCD cameras

    NASA Astrophysics Data System (ADS)

    Wei, Mingzhi; Stover, Richard J.

    2004-09-01

    The new UCO/Lick Observatory CCD camera uses a 200 MHz fiber optic cable to transmit image data and an RS232 serial line for low speed bidirectional command and control. Increasingly RS232 is a legacy interface supported on fewer computers. The fiber optic cable requires either a custom interface board that is plugged into the mainboard of the image acquisition computer to accept the fiber directly or an interface converter that translates the fiber data onto a widely used standard interface. We present here a simple USB 2.0 interface for the UCO/Lick camera. A single USB cable connects to the image acquisition computer and the camera's RS232 serial and fiber optic cables plug into the USB interface. Since most computers now support USB 2.0 the Lick interface makes it possible to use the camera on essentially any modern computer that has the supporting software. No hardware modifications or additions to the computer are needed. The necessary device driver software has been written for the Linux operating system which is now widely used at Lick Observatory. The complete data acquisition software for the Lick CCD camera is running on a variety of PC style computers as well as an HP laptop.

  17. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource

    PubMed Central

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E.; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data. PMID:21045053

  18. Community cyberinfrastructure for Advanced Microbial Ecology Research and Analysis: the CAMERA resource.

    PubMed

    Sun, Shulei; Chen, Jing; Li, Weizhong; Altintas, Ilkay; Lin, Abel; Peltier, Steve; Stocks, Karen; Allen, Eric E; Ellisman, Mark; Grethe, Jeffrey; Wooley, John

    2011-01-01

    The Community Cyberinfrastructure for Advanced Microbial Ecology Research and Analysis (CAMERA, http://camera.calit2.net/) is a database and associated computational infrastructure that provides a single system for depositing, locating, analyzing, visualizing and sharing data about microbial biology through an advanced web-based analysis portal. CAMERA collects and links metadata relevant to environmental metagenome data sets with annotation in a semantically-aware environment allowing users to write expressive semantic queries against the database. To meet the needs of the research community, users are able to query metadata categories such as habitat, sample type, time, location and other environmental physicochemical parameters. CAMERA is compliant with the standards promulgated by the Genomic Standards Consortium (GSC), and sustains a role within the GSC in extending standards for content and format of the metagenomic data and metadata and its submission to the CAMERA repository. To ensure wide, ready access to data and annotation, CAMERA also provides data submission tools to allow researchers to share and forward data to other metagenomics sites and community data archives such as GenBank. It has multiple interfaces for easy submission of large or complex data sets, and supports pre-registration of samples for sequencing. CAMERA integrates a growing list of tools and viewers for querying, analyzing, annotating and comparing metagenome and genome data.

  19. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables each Slave to monitor the Master's line and frame period and adjust its own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
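
    The voltage-based frequency regulation described above is, in essence, a feedback loop. A sketch with a toy plant model (the proportional control law, gain, and the period-versus-voltage relation are illustrative assumptions, not the paper's FPGA implementation):

```python
def line_period(v):
    """Toy plant model (assumption): a self-timed sensor's line
    period shrinks as its supply voltage rises."""
    return 2000.0 / v          # arbitrary time units

def sync_camera(v=1.8, target=1000.0, gain=1e-4, steps=200):
    """Proportional control of the supply voltage until the measured
    line period matches the target, mimicking the per-camera
    frequency regulation the abstract describes."""
    for _ in range(steps):
        error = line_period(v) - target
        v += gain * error      # period too long (camera slow) -> raise voltage
    return v, line_period(v)

v, period = sync_camera()
print(round(period, 1))        # converges to the 1000-unit target
```

    Frequency lock alone is not enough; the Master-Slave comparison of frame phase then trims each Slave's voltage so the frames also start together.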

  20. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables each Slave to monitor the Master's line and frame period and adjust its own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.

  1. Eye gaze tracking for endoscopic camera positioning: an application of a hardware/software interface developed to automate Aesop.

    PubMed

    Ali, S M; Reisner, L A; King, B; Cao, A; Auner, G; Klein, M; Pandya, A K

    2008-01-01

    A redesigned motion control system for the medical robot Aesop allows automating and programming its movements. An IR eye tracking system has been integrated with this control interface to implement an intelligent, autonomous eye gaze-based laparoscopic positioning system. A laparoscopic camera held by Aesop can be moved based on the data from the eye tracking interface to keep the user's gaze point region at the center of a video feedback monitor. This system setup provides autonomous camera control that works around the surgeon, providing an optimal robotic camera platform.

  2. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

    Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies that consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; their findings therefore cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and the head position is then used to interact with mobile apps. First, we evaluate the interface as a pointing device to study its accuracy, configuration factors such as gain and device orientation, and the optimal target size for the interface. Second, we present an in-the-wild study evaluating usage and user perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we record usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  3. Optical links in handheld multimedia devices

    NASA Astrophysics Data System (ADS)

    van Geffen, S.; Duis, J.; Miller, R.

    2008-04-01

    Ever-emerging applications in handheld multimedia devices such as mobile phones, laptop computers, portable video games, and digital cameras require increased screen resolutions and are driving higher aggregate bitrates between the host processor and display(s), enabling services such as mobile video conferencing, video on demand, and TV broadcasting. Larger displays and smaller phones require complex mechanical 3D hinge configurations that strive to combine maximum functionality with compact building volumes. Conventional galvanic interconnections such as Micro-Coax and FPC carrying parallel digital data between the host processor and display module may produce Electromagnetic Interference (EMI) and suffer bandwidth limitations caused by small cable size and tight cable bends. To reduce the number of signals through a hinge, the mobile phone industry, organized in the MIPI (Mobile Industry Processor Interface) alliance, is currently defining an electrical interface transmitting serialized digital data at speeds >1 Gbps. This interface allows for electrical or optical interconnects. Above 1 Gbps, optical links may offer a cost-effective alternative because of their flexibility, increased bandwidth, and immunity to EMI. This paper describes the development of optical links for handheld communication devices. A cable assembly based on a special Plastic Optical Fiber (POF), selected for its mechanical durability, is terminated with a small-form-factor molded lens assembly that interfaces between an 850 nm VCSEL transmitter and a receiving device on the printed circuit board of the display module. A statistical approach based on a Lean Design For Six Sigma (LDFSS) roadmap for new product development is used to find an optimum link definition that is robust, low cost, and meets the power-consumption requirements of battery-operated systems.

  4. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
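    The cited frame rates can be sanity-checked with a quick data-rate calculation. Packing each 13-bit time stamp into a 16-bit word is an assumption for this sketch, not a detail from the abstract:

```python
def ladar_data_rate(rows=32, cols=32, bits_per_pixel=16, frame_rate_hz=186_000):
    """Raw payload rate, in bytes per second, for a Geiger-mode FPA
    streaming time-stamp frames (13-bit stamps assumed padded to
    16-bit words)."""
    return rows * cols * (bits_per_pixel // 8) * frame_rate_hz

rate = ladar_data_rate()  # about 381 MB/s for the 32 x 32 array at 186 kHz
```

    This falls within the full-configuration Camera Link bandwidth, consistent with the interface choice described above.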

  5. General-Purpose Serial Interface For Remote Control

    NASA Technical Reports Server (NTRS)

    Busquets, Anthony M.; Gupton, Lawrence E.

    1990-01-01

    Computer controls remote television camera. General-purpose controller developed to serve as interface between host computer and pan/tilt/zoom/focus functions on series of automated video cameras. Interface port based on 8251 programmable communications-interface circuit configured for tristated outputs, and connects controller system to any host computer with RS-232 input/output (I/O) port. Accepts byte-coded data from host, compares them with prestored codes in read-only memory (ROM), and closes or opens appropriate switches. Six output ports control opening and closing of as many as 48 switches. Operator controls remote television camera by speaking commands, in system including general-purpose controller.
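    The byte-code-to-switch lookup described above can be sketched as a small table-driven handler. The specific codes and switch assignments below are hypothetical, standing in for the controller's prestored ROM codes:

```python
# Hypothetical command table: byte codes received over RS-232 mapped to
# (switch index, close/open), mirroring the ROM lookup in the controller.
COMMAND_ROM = {
    0x10: (0, True),    # close switch 0 (e.g., pan left)
    0x11: (0, False),   # open switch 0
    0x20: (5, True),    # close switch 5 (e.g., zoom in)
}

def handle_byte(code, switches):
    """Compare an incoming byte with the prestored codes and close or
    open the corresponding switch; unknown codes are ignored."""
    if code in COMMAND_ROM:
        idx, closed = COMMAND_ROM[code]
        switches[idx] = closed
    return switches

state = handle_byte(0x10, [False] * 48)  # 48 switches, as in the controller
```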

  6. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  7. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amy E. Lewis, Craig Hollabaugh

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-ins. Automation software can also access camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  8. STRIPE: Remote Driving Using Limited Image Data

    NASA Technical Reports Server (NTRS)

    Kay, Jennifer S.

    1997-01-01

    Driving a vehicle, either directly or remotely, is an inherently visual task. When heavy fog limits visibility, we reduce our car's speed to a slow crawl, even along very familiar roads. In teleoperation systems, an operator's view is limited to images provided by one or more cameras mounted on the remote vehicle. Traditional methods of vehicle teleoperation require that a real time stream of images is transmitted from the vehicle camera to the operator control station, and the operator steers the vehicle accordingly. For this type of teleoperation, the transmission link between the vehicle and operator workstation must be very high bandwidth (because of the high volume of images required) and very low latency (because delayed images can cause operators to steer incorrectly). In many situations, such a high-bandwidth, low-latency communication link is unavailable or even technically impossible to provide. Supervised TeleRobotics using Incremental Polyhedral Earth geometry, or STRIPE, is a teleoperation system for a robot vehicle that allows a human operator to accurately control the remote vehicle across very low bandwidth communication links, and communication links with large delays. In STRIPE, a single image from a camera mounted on the vehicle is transmitted to the operator workstation. The operator uses a mouse to pick a series of 'waypoints' in the image that define a path that the vehicle should follow. These 2D waypoints are then transmitted back to the vehicle, where they are used to compute the appropriate steering commands while the next image is being transmitted. STRIPE requires no advance knowledge of the terrain to be traversed, and can be used by novice operators with only minimal training. STRIPE is a unique combination of computer and human control. The computer must determine the 3D world path designated by the 2D waypoints and then accurately control the vehicle over rugged terrain. 
The human issues involve accurate path selection, and the prevention of disorientation, a common problem across all types of teleoperation systems. STRIPE is the only semi-autonomous teleoperation system that can accurately follow paths designated in monocular images on varying terrain. The thesis describes the STRIPE algorithm for tracking points using the incremental geometry model, insight into the design and redesign of the interface, an analysis of the effects of potential errors, details of the user studies, and hints on how to improve both the algorithm and interface for future designs.
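    The core geometric step in STRIPE, turning a clicked 2D waypoint into a ground point, can be illustrated with a flat-ground pinhole back-projection. This is a simplification of the incremental polyhedral Earth geometry, and the camera parameters used here are hypothetical:

```python
def image_to_ground(u, v, f_px, cx, cy, cam_height_m):
    """Back-project a clicked pixel (u, v) onto a flat ground plane below
    a level, forward-looking camera at height cam_height_m. STRIPE's
    incremental polyhedral model refines this flat-earth first guess as
    the vehicle moves. Returns (forward, lateral) in metres, or None for
    pixels at or above the horizon."""
    dy = v - cy              # pixels below the principal point
    if dy <= 0:
        return None          # ray never intersects the ground plane
    forward = cam_height_m * f_px / dy
    lateral = (u - cx) * forward / f_px
    return forward, lateral
```

    The resulting ground points, chained together, form the path the vehicle's steering controller follows between image transmissions.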

  9. Design of a CAN bus interface for photoelectric encoder in the spaceflight camera

    NASA Astrophysics Data System (ADS)

    Sun, Ying; Wan, Qiu-hua; She, Rong-hong; Zhao, Chang-hai; Jiang, Yong

    2009-05-01

    In order to make a photoelectric encoder usable in a spaceflight camera that adopts the CAN bus as its communication method, a CAN bus interface for the photoelectric encoder is designed in this paper. The CAN bus interface hardware circuit consists of the CAN bus controller SJA1000, the CAN bus transceiver TJA1050, and a single-chip microcontroller. The CAN bus interface control software is written in C. A ten-meter shielded twisted-pair line is used as the transmission medium in the spaceflight camera, at a bit rate of 600 kbps. The experiments show that the photoelectric encoder with a CAN bus interface offers improved reliability, real-time performance, transfer rate, and transfer distance, overcoming the communication-line shortcomings of classical photoelectric encoder systems. The system works well in an automatic measuring and control system.
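    A sketch of how an encoder reading might be packed into the 8-byte data field of a CAN frame. The field layout and checksum below are illustrative assumptions, not the protocol from the paper:

```python
import struct

def encoder_can_payload(count, turns=0, status=0):
    """Pack a single-turn position count, a turn counter, and status
    flags into an 8-byte CAN data field: little-endian 32-bit count,
    16-bit turns, 1 status byte, 1 XOR checksum byte."""
    body = struct.pack("<IHB", count & 0xFFFFFFFF, turns & 0xFFFF, status & 0xFF)
    checksum = 0
    for b in body:
        checksum ^= b
    return body + bytes([checksum])

frame = encoder_can_payload(0x00123456, turns=3)  # exactly 8 bytes
```

    The 8-byte limit is what makes the CAN bus attractive here: one standard data frame carries a complete encoder sample plus integrity check.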

  10. CMOS Camera Array With Onboard Memory

    NASA Technical Reports Server (NTRS)

    Gat, Nahum

    2009-01-01

    A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.

  11. Single software platform used for high speed data transfer implementation in a 65k pixel camera working in single photon counting mode

    NASA Astrophysics Data System (ADS)

    Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.

    2015-12-01

    Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produce a huge amount of data as a result of the high number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data is received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k pixel camera to a personal computer.
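    A quick feasibility check of the cited 800 MB/s Camera Link limit against a 65k pixel frame. The 12-bit counter depth is an assumed value for this sketch, not a figure from the abstract:

```python
def max_frame_rate(pixels=65536, bits_per_pixel=12, link_bytes_per_s=800_000_000):
    """Upper bound on the frame rate that the Camera Link full
    configuration (800 MB/s, as cited) can sustain for raw photon-count
    read-out from a single-photon-counting pixel matrix."""
    bytes_per_frame = pixels * bits_per_pixel / 8
    return link_bytes_per_s / bytes_per_frame

limit_fps = max_frame_rate()  # roughly 8,000 frames per second
```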

  12. KSC-07pd2202

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Commander Pamela A. Melroy, Pilot George D. Zamka, Mission Specialist Scott E. Parazynski (back to camera), Mission Specialist Douglas H. Wheelock and Mission Specialist Paolo A. Nespoli (holding camera), a European Space Agency astronaut from Italy, are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  13. KSC-07pd2201

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Pilot George D. Zamka, Commander Pamela A. Melroy, Mission Specialist Scott E. Parazynski (holding camera) and Mission Specialist Douglas H. Wheelock are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  14. Computerized digital dermoscopy.

    PubMed

    Gewirtzman, A J; Braun, R P

    2003-01-01

    Within the past 15 years, dermoscopy has become a widely used non-invasive technique that lets physicians better visualize pigmented lesions, and it has helped trained physicians to diagnose such lesions more accurately. Now, the digital revolution is beginning to enhance standard dermoscopic procedures. Using digital dermoscopy, physicians are better able to document pigmented lesions for patient follow-up and to get second opinions, either through teledermoscopy with an expert colleague or by using computer-assisted diagnosis. As the market for digital dermoscopy products grows, so does the number of decisions physicians need to make when choosing a system to fit their needs. The current market for digital dermoscopy includes two varieties of relatively simple and cheap attachments that can convert a consumer digital camera into a digital dermoscope. A coupling adapter acts as a fastener between the camera and an ordinary dermoscope, whereas a dermoscopy attachment includes the dermoscope optics and light source and can be attached directly to the camera. Other options for digital dermoscopy include complete dermoscopy systems that use a hand-held video camera linked directly to a computer. These systems differ from each other in whether or not they are calibrated, as well as in the quality of the camera and software interface. Another option in digital skin imaging involves spectral analysis rather than dermoscopy. This article serves as a guide to the current systems available and their capabilities.

  15. Software Graphical User Interface For Analysis Of Images

    NASA Technical Reports Server (NTRS)

    Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn

    1992-01-01

    CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.

  16. The advanced linked extended reconnaissance and targeting technology demonstration project

    NASA Astrophysics Data System (ADS)

    Cruickshank, James; de Villers, Yves; Maheux, Jean; Edwards, Mark; Gains, David; Rea, Terry; Banbury, Simon; Gauthier, Michelle

    2007-06-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing key operational needs of the future Canadian Army's Surveillance and Reconnaissance forces by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. We discuss concepts for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as beyond line-of-sight systems such as a mini-UAV and unattended ground sensors. The authors address technical issues associated with the use of fully digital IR and day video cameras and discuss video-rate image processing developed to assist the operator in recognizing poorly visible targets. Automatic target detection and recognition algorithms processing both IR and visible-band images have been investigated to draw the operator's attention to possible targets. The machine-generated information display requirements are presented together with the human-factors engineering aspects of the user interface in this complex environment, with a view to establishing user trust in the automation. The paper concludes with a summary of achievements to date and steps to project completion.

  17. Viking lander camera radiometry calibration report, volume 2

    NASA Technical Reports Server (NTRS)

    Wolf, M. R.; Atwood, D. L.; Morrill, M. E.

    1977-01-01

    The requirements, performance validation, and interfaces for the RADCAM program, which converts Viking lander camera image data to radiometric units, were established. A proposed algorithm is described, and an appendix summarizing the planned reduction of camera test data is included.

  18. From Antarctica to space: Use of telepresence and virtual reality in control of remote vehicles

    NASA Technical Reports Server (NTRS)

    Stoker, Carol; Hine, Butler P., III; Sims, Michael; Rasmussen, Daryl; Hontalas, Phil; Fong, Terrence W.; Steele, Jay; Barch, Don; Andersen, Dale; Miles, Eric

    1994-01-01

    In the Fall of 1993, NASA Ames deployed a modified Phantom S2 Remotely Operated underwater Vehicle (ROV) into an ice-covered sea environment near McMurdo Science Station, Antarctica. This deployment was part of the Antarctic Space Analog Program, a joint program between NASA and the National Science Foundation to demonstrate technologies relevant to space exploration in a realistic field setting in the Antarctic. The goal of the mission was to operationally test the use of telepresence and virtual-reality technology in the operator interface to a remote vehicle while performing a benthic ecology study. The vehicle was operated both locally, from above a dive hole in the ice through which it was launched, and remotely over a satellite communications link from a control room at NASA's Ames Research Center. Local control of the vehicle was accomplished using the standard Phantom control box containing joysticks and switches, with the operator viewing stereo video camera images on a stereo display monitor. Remote control of the vehicle over the satellite link was accomplished using the Virtual Environment Vehicle Interface (VEVI) control software developed at NASA Ames. The remote operator interface included either a stereo display monitor similar to that used locally or a stereo head-mounted, head-tracked display. The compressed video signal from the vehicle was transmitted to NASA Ames over a 768 Kbps satellite channel. Another channel provided a bi-directional Internet link to the vehicle control computer, through which the command and telemetry signals traveled, along with a bi-directional telephone service. In addition to the live stereo video from the satellite link, the operator could view a computer-generated graphic representation of the underwater terrain, modeled from the vehicle's sensors.
The virtual environment contained an animated graphic model of the vehicle that reflected the state of the actual vehicle, along with ancillary information such as the vehicle track, science markers, and locations of video snapshots. The actual vehicle could be driven either from within the virtual environment or through a telepresence interface. All vehicle functions could be controlled remotely over the satellite link.

  19. ARINC 818 adds capabilities for high-speed sensors and systems

    NASA Astrophysics Data System (ADS)

    Keller, Tim; Grunwald, Paul

    2014-06-01

    ARINC 818, titled Avionics Digital Video Bus (ADVB), is the standard for cockpit video that has gained wide acceptance in both commercial and military cockpits, including the Boeing 787, the A350XWB, the A400M, the KC-46A, and many others. Initially conceived for cockpit displays, ARINC 818 is now propagating into high-speed sensors, such as infrared and optical cameras, due to its high bandwidth and high reliability. The ARINC 818 specification, initially released in 2006, has recently undergone a major update that enhances its applicability as a high-speed sensor interface. The ARINC 818-2 specification was published in December 2013. The revisions to the specification include: video switching, stereo and 3D provisions, color sequential implementations, regions of interest, data-only transmissions, multi-channel implementations, bi-directional communication, higher link rates to 32 Gbps, synchronization signals, options for high-speed coax interfaces, and optical interface details. The additions to the specification are especially appealing for high-bandwidth, multi-sensor systems that have issues with throughput bottlenecks and SWaP concerns. ARINC 818 is implemented on either copper or fiber-optic high-speed physical layers and allows for time multiplexing multiple sensors onto a single link. This paper discusses each of the new capabilities in the ARINC 818-2 specification and the benefits for ISR and countermeasures implementations; several examples are provided.

  20. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  1. View From Camera Not Used During Curiosity's First Six Months on Mars

    NASA Image and Video Library

    2017-12-08

    This view of Curiosity's left-front and left-center wheels and of marks made by wheels on the ground in the "Yellowknife Bay" area comes from one of six cameras used on Mars for the first time more than six months after the rover landed. The left Navigation Camera (Navcam) linked to Curiosity's B-side computer took this image during the 223rd Martian day, or sol, of Curiosity's work on Mars (March 22, 2013). The wheels are 20 inches (50 centimeters) in diameter. Curiosity carries a pair of main computers, redundant to each other, in order to have a backup available if one fails. Each of the computers, A-side and B-side, also has other redundant subsystems linked to just that computer. Curiosity operated on its A-side from before the August 2012 landing until Feb. 28, when engineers commanded a switch to the B-side in response to a memory glitch on the A-side. One set of activities after switching to the B-side computer has been to check the six engineering cameras that are hard-linked to that computer. The rover's science instruments, including five science cameras, can each be operated by either the A-side or B-side computer, whichever is active. However, each of Curiosity's 12 engineering cameras is linked to just one of the computers. The engineering cameras are the Navigation Camera (Navcam), the Front Hazard-Avoidance Camera (Front Hazcam) and Rear Hazard-Avoidance Camera (Rear Hazcam). Each of those three named cameras has four cameras as part of it: two stereo pairs of cameras, with one pair linked to each computer. Only the pairs linked to the active computer can be used, and the A-side computer was active from before landing, in August, until Feb. 28. All six of the B-side engineering cameras have been used during March 2013 and checked out OK. Image Credit: NASA/JPL-Caltech

  2. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
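    The master/slave message exchange between the MCM and the subsystem control modules can be sketched as a simple registry-and-dispatch pattern. Class, module, and message names here are hypothetical, not from the CCS design:

```python
class MasterControlModule:
    """Toy sketch of the Master/Slave pattern described for the LSST CCS:
    subsystem control modules register with the MCM, which sends them
    commands and collects their status replies."""

    def __init__(self):
        self.modules = {}

    def register(self, name, handler):
        # handler: callable taking a message and returning a status reply
        self.modules[name] = handler

    def command(self, name, msg):
        return self.modules[name](msg)

    def broadcast(self, msg):
        # coordinate all subsystems with one message, gather all replies
        return {name: h(msg) for name, h in self.modules.items()}

mcm = MasterControlModule()
mcm.register("shutter", lambda m: f"shutter:{m}")
mcm.register("cryo", lambda m: f"cryo:{m}")
```

    In the real system each handler would be a long-lived server process on an embedded computer; the dictionary dispatch stands in for the message-passing interface between the CCS and its subsystems.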

  3. Optomechanical Design of Ten Modular Cameras for the Mars Exploration Rovers

    NASA Technical Reports Server (NTRS)

    Ford, Virginia G.; Karlmann, Paul; Hagerott, Ed; Scherr, Larry

    2003-01-01

    This viewgraph presentation reviews the design and fabrication of the modular cameras for the Mars Exploration Rovers. The 2003 mission included 2 landers and 2 rovers, each carrying 10 cameras. Views of the camera design, the lens design, the lens interface with the detector assembly, the detector assembly, and the electronics assembly are shown.

  4. WADeG Cell Phone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2009-09-01

    The software on the cell phone captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process terminates when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. The received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. CovertSmartDevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
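    The Transfer step (collect the files in the designated directory, open a socket to the server, send each file) can be sketched as below. This is a hedged Python sketch, not the actual covertSmartDevice C++ code; the host, port, file extension, and the absence of any per-file framing are all assumptions made here for illustration.

```python
import pathlib
import socket

def transfer_files(directory, host="127.0.0.1", port=9000):
    """Send every captured file in the designated directory to the server.
    Illustrative only: the real protocol and file framing are not described
    in the abstract, so files are simply streamed back-to-back."""
    paths = sorted(pathlib.Path(directory).glob("*.bmp"))
    with socket.create_connection((host, port)) as sock:
        for path in paths:
            sock.sendall(path.read_bytes())
    return [p.name for p in paths]
```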

  5. Positive-Buoyancy Rover for Under Ice Mobility

    NASA Technical Reports Server (NTRS)

    Leichty, John M.; Klesh, Andrew T.; Berisford, Daniel F.; Matthews, Jaret B.; Hand, Kevin P.

    2013-01-01

    A buoyant rover has been developed to traverse the underside of ice-covered lakes and seas. The rover operates at the ice/water interface and permits direct observation and measurement of processes affecting freeze-over and thaw events in lake and marine environments. Operating along the 2-D ice-water interface simplifies many aspects of underwater exploration, especially when compared to submersibles, which have difficulty in station-keeping and precision mobility. The buoyant rover consists of an all-aluminum body with two aluminum sawtooth wheels. The two independent body segments are sandwiched between four actuators that permit isolation of wheel movement from movement of the central tether spool. For normal operations, the wheels move while the tether spool feeds out line and the cameras on each segment maintain a user-controlled fixed position. Typically one camera targets the ice/water interface and one camera looks down to the lake floor to identify seep sources. Each wheel can be operated independently for precision turning and adjustments. The rover is controlled by a touch-tablet interface and wireless goggles enable real-time viewing of video streamed from the rover cameras. The buoyant rover was successfully deployed and tested during an October 2012 field campaign to investigate methane trapped in ice in lakes along the North Slope of Alaska.

  6. A design of camera simulator for photoelectric image acquisition system

    NASA Astrophysics Data System (ADS)

    Cai, Guanghui; Liu, Wen; Zhang, Xin

    2015-02-01

    In the process of developing photoelectric image acquisition equipment, its function and performance must be verified. In order to let the photoelectric device replay previously recorded image data during debugging and testing, a design scheme for a camera simulator is presented. In this system, with an FPGA as the control core, the image data are saved in NAND flash through the USB 2.0 bus. Because the access rate of the NAND flash is too slow to meet the requirements of the system, the pipeline technique and a high-bandwidth bus technique are applied in the design to improve the storage rate. Image data are read out from flash by the control logic in the FPGA and output separately through three different interfaces, Camera Link, LVDS and PAL, which can provide image data for the debugging and algorithm validation of photoelectric image acquisition equipment. However, because the standard PAL image resolution is 720*576, which differs from the resolution of the input image, the image is output after a resolution conversion. The experimental results demonstrate that the camera simulator outputs the three image-sequence formats correctly, which can be captured and displayed by a frame grabber. The three-format image data can meet the test requirements of most equipment, shorten debugging time and improve test efficiency.
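    The resolution conversion to the PAL frame size can be sketched as a simple nearest-neighbour rescale. The paper does not state which conversion method the simulator uses, so this NumPy sketch is illustrative only.

```python
import numpy as np

def resize_nearest(image, out_h=576, out_w=720):
    """Nearest-neighbour rescale of a 2-D image to the 720x576 PAL frame.
    A hypothetical stand-in for the simulator's conversion logic."""
    in_h, in_w = image.shape
    # Map each output row/column back to its nearest source row/column.
    rows = np.arange(out_h) * in_h // out_h
    cols = np.arange(out_w) * in_w // out_w
    return image[rows[:, None], cols]

# Example: a 1024x1280 16-bit sensor frame rescaled for the PAL output path.
frame = np.arange(1024 * 1280, dtype=np.uint16).reshape(1024, 1280)
pal = resize_nearest(frame)
```

    In the FPGA the same index mapping would be realized with counters and a line buffer rather than array indexing, but the arithmetic is identical.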

  7. KSC-07pd2203

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, Expedition 16 Flight Engineer Daniel M. Tani is given the opportunity to operate a camera that will fly on the mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  8. KSC-07pd2200

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. From left in blue flight suits, STS-120 Mission Specialist Douglas H. Wheelock, Commander Pamela A. Melroy and Mission Specialist Scott E. Parazynski receive instruction in Orbiter Processing Facility bay 3 on the operation of cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  9. Sitting in the Pilot's Seat; Optimizing Human-Systems Interfaces for Unmanned Aerial Vehicles

    NASA Technical Reports Server (NTRS)

    Queen, Steven M.; Sanner, Kurt Gregory

    2011-01-01

    One of the pilot-machine interfaces (the forward-viewing camera display) for an Unmanned Aerial Vehicle called the DROID (Dryden Remotely Operated Integrated Drone) will be analyzed for optimization. The goal is to create a visual display for the pilot that resembles an out-the-window view as closely as possible. There are currently no standard guidelines for designing pilot-machine interfaces for UAVs. Typically, UAV camera views have a narrow field, which limits the situational awareness (SA) of the pilot. Also, at this time, pilot-UAV interfaces often use displays that have a diagonal length of around 20". Using a small display may result in a distorted and disproportional view for UAV pilots. Making use of a larger display and a camera lens with a wider field of view may minimize the occurrences of pilot error associated with the inability to see "out the window" as in a manned airplane. It is predicted that the pilot will have a less distorted view of the DROID's surroundings, quicker response times and more stable vehicle control. If the experimental results validate this concept, other UAV pilot-machine interfaces will be improved with this design methodology.

  10. Projection Mapping User Interface for Disabled People

    PubMed Central

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities. PMID:29686827
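    The core of a camera-projector calibration is a mapping from camera pixels to projector pixels. For a planar tabletop that mapping is a 3x3 homography, which can be estimated from point correspondences with the direct linear transform (DLT). This NumPy sketch is a simplified stand-in only; the paper's actual procedure also involves the depth sensor and is not reproduced here.

```python
import numpy as np

def fit_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst from 4+ point pairs
    via the DLT: each pair contributes two rows of a homogeneous system,
    solved by taking the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def project(H, pt):
    """Apply the homography to a 2-D point (homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical correspondences: camera pixels of four markers on the
# tabletop, and the projector pixels that should land on them.
cam = [(0, 0), (640, 0), (640, 480), (0, 480)]
proj = [(100, 50), (900, 80), (880, 700), (120, 680)]
H = fit_homography(cam, proj)
```

    Once H is known, any object detected in the camera image can be highlighted by warping its outline through H before rendering it on the projector.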

  11. Projection Mapping User Interface for Disabled People.

    PubMed

    Gelšvartas, Julius; Simutis, Rimvydas; Maskeliūnas, Rytis

    2018-01-01

    Difficulty in communicating is one of the key challenges for people suffering from severe motor and speech disabilities. Often such a person can communicate and interact with the environment only using assistive technologies. This paper presents a multifunctional user interface designed to improve communication efficiency and personal independence. The main component of this interface is a projection mapping technique used to highlight objects in the environment. Projection mapping makes it possible to create a natural augmented reality information presentation method. The user interface combines a depth sensor and a projector to create a camera-projector system. We provide a detailed description of the camera-projector system calibration procedure. The described system performs tabletop object detection and automatic projection mapping. Multiple user input modalities have been integrated into the multifunctional user interface. Such a system can be adapted to the needs of people with various disabilities.

  12. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12 with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.

  13. Human-Robot Interaction

    NASA Technical Reports Server (NTRS)

    Sandor, Aniko; Cross, E. Vincent, II; Chang, Mai Lee

    2015-01-01

    Human-robot interaction (HRI) is a discipline investigating the factors affecting the interactions between humans and robots. It is important to evaluate how the design of interfaces affect the human's ability to perform tasks effectively and efficiently when working with a robot. By understanding the effects of interface design on human performance, workload, and situation awareness, interfaces can be developed to appropriately support the human in performing tasks with minimal errors and with appropriate interaction time and effort. Thus, the results of research on human-robot interfaces have direct implications for the design of robotic systems. For efficient and effective remote navigation of a rover, a human operator needs to be aware of the robot's environment. However, during teleoperation, operators may get information about the environment only through a robot's front-mounted camera causing a keyhole effect. The keyhole effect reduces situation awareness which may manifest in navigation issues such as higher number of collisions, missing critical aspects of the environment, or reduced speed. One way to compensate for the keyhole effect and the ambiguities operators experience when they teleoperate a robot is adding multiple cameras and including the robot chassis in the camera view. Augmented reality, such as overlays, can also enhance the way a person sees objects in the environment or in camera views by making them more visible. Scenes can be augmented with integrated telemetry, procedures, or map information. Furthermore, the addition of an exocentric (i.e., third-person) field of view from a camera placed in the robot's environment may provide operators with the additional information needed to gain spatial awareness of the robot. 
Two research studies investigated possible mitigation approaches to address the keyhole effect: 1) combining the inclusion of the robot chassis in the camera view with augmented reality overlays, and 2) modifying the camera frame of reference. The first study investigated the effects of inclusion and exclusion of the robot chassis along with superimposing a simple arrow overlay onto the video feed of operator task performance during teleoperation of a mobile robot in a driving task. In this study, the front half of the robot chassis was made visible through the use of three cameras, two side-facing and one forward-facing. The purpose of the second study was to compare operator performance when teleoperating a robot from an egocentric-only and combined (egocentric plus exocentric camera) view. Camera view parameters that are found to be beneficial in these laboratory experiments can be implemented on NASA rovers and tested in a real-world driving and navigation scenario on-site at the Johnson Space Center.

  14. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling and feature extraction modules were modeled at Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. Image component labeling and feature extraction modules were running in parallel having a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization and power consumption for the designed system. The FPGA based machine vision system that we propose has high frame speed, low latency and a power consumption that is much lower compared to commercially available smart camera solutions.
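    The distance-and-angle computation itself can be illustrated with a pinhole-camera model. The paper does not give its exact equations, so the focal length, the known reference-point separation, and the geometry below are all assumptions made for this sketch: distance follows from the pixel separation of two reference points of known physical spacing, and the bearing angle from the offset of their midpoint relative to the optical axis.

```python
import math

# Assumed camera parameters (illustrative, not from the paper):
F_PIX = 1000.0        # focal length expressed in pixels
BASELINE_MM = 200.0   # known physical separation of the two reference points

def distance_mm(p1, p2):
    """Range to the reference points from their apparent pixel separation:
    range = f * baseline / pixel_separation (similar triangles)."""
    d_pix = math.hypot(p2[0] - p1[0], p2[1] - p1[1])
    return F_PIX * BASELINE_MM / d_pix

def angle_deg(p1, p2, cx=640.0):
    """Horizontal bearing of the midpoint of the two points relative to the
    optical axis (cx = assumed principal point column)."""
    mid_x = (p1[0] + p2[0]) / 2.0
    return math.degrees(math.atan2(mid_x - cx, F_PIX))

# Reference points imaged 100 px apart, midpoint 10 px right of centre.
d = distance_mm((600, 400), (700, 400))   # -> 2000 mm
a = angle_deg((600, 400), (700, 400))     # -> roughly 0.57 degrees
```

    On the real system these two small formulas are exactly the kind of floating-point work pushed onto the MicroBlaze, while the pixel coordinates feeding them come from the hardware labeling and feature extraction pipeline.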

  15. Comparison of three different techniques for camera and motion control of a teleoperated robot.

    PubMed

    Doisy, Guillaume; Ronen, Adi; Edan, Yael

    2017-01-01

    This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head-tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) a condition with classical joystick control of both the movements of the robot and the robot camera, 2) a condition where the robot movements were controlled by a joystick and the robot camera was controlled by the user's head orientation, and 3) a condition where the movements of the robot were controlled by hand gestures and the robot camera was controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Implementation of a Serial Delay Insertion Type Loop Communication for a Real Time Multitransputer System.

    DTIC Science & Technology

    1985-06-01

    just pass the message WAIT NOW AFTER Rlock + timeout -- if time is out write.screen("TIME IS OUT") MAIN PROGRAM: CHAN link1, link2, link3, link4 ... PAR D.I.Loop.Interface (link4, link1, 1) D.I.Loop.Interface (link1, link2, 2) D.I.Loop.Interface (link2, link3, 3) D.I.Loop.Interface (link3, link4, 4)

  17. Real time mitigation of atmospheric turbulence in long distance imaging using the lucky region fusion algorithm with FPGA and GPU hardware acceleration

    NASA Astrophysics Data System (ADS)

    Jackson, Christopher Robert

    "Lucky-region" fusion (LRF) is a synthetic imaging technique that has proven successful in enhancing the quality of images distorted by atmospheric turbulence. The LRF algorithm selects sharp regions of an image obtained from a series of short exposure frames, and fuses the sharp regions into a final, improved image. In previous research, the LRF algorithm had been implemented on a PC using the C programming language. However, the PC did not have sufficient sequential processing power to handle the real-time extraction, processing and reduction required when the LRF algorithm was applied to real-time video from fast, high-resolution image sensors. This thesis describes two hardware implementations of the LRF algorithm to achieve real-time image processing. The first was created with a VIRTEX-7 field programmable gate array (FPGA). The other was developed using the graphics processing unit (GPU) of an NVIDIA GeForce GTX 690 video card. The novelty in the FPGA approach is the creation of a "black box" LRF video processing system with a general Camera Link input, a user controller interface, and a Camera Link video output. We also describe a custom hardware simulation environment we have built to test the FPGA LRF implementation. The advantage of the GPU approach is significantly improved development time, integration of image stabilization into the system, and comparable atmospheric turbulence mitigation.
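    The select-and-fuse idea can be shown in a toy NumPy sketch: split each frame into tiles, score each tile with a sharpness proxy, and assemble the output from the sharpest tile across frames. The real algorithm's sharpness metric, region shapes, and blending are more sophisticated; tile variance is used here only as a simple stand-in.

```python
import numpy as np

def fuse_lucky_regions(frames, tile=8):
    """Toy lucky-region fusion: for each tile position, keep the tile with
    the highest variance (a crude sharpness proxy) across all frames."""
    h, w = frames[0].shape
    out = np.empty((h, w), dtype=frames[0].dtype)
    for r in range(0, h, tile):
        for c in range(0, w, tile):
            candidates = [f[r:r + tile, c:c + tile] for f in frames]
            best = max(candidates, key=lambda t: t.var())
            out[r:r + tile, c:c + tile] = best
    return out

# A frame with detail everywhere versus a featureless (zero-variance) frame:
sharp = np.arange(256, dtype=float).reshape(16, 16)
blurry = np.full((16, 16), 128.0)
fused = fuse_lucky_regions([blurry, sharp])   # every tile comes from `sharp`
```

    The FPGA and GPU implementations parallelize exactly this per-tile scoring and selection across the incoming video stream.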

  18. Rapid prototyping 3D virtual world interfaces within a virtual factory environment

    NASA Technical Reports Server (NTRS)

    Kosta, Charles Paul; Krolak, Patrick D.

    1993-01-01

    On-going work into user requirements analysis using CLIPS (NASA/JSC) expert systems as an intelligent event simulator has led to research into three-dimensional (3D) interfaces. Previous work involved CLIPS and two-dimensional (2D) models. Integral to this work was the development of the University of Massachusetts Lowell parallel version of CLIPS, called PCLIPS. This allowed us to create both a Software Bus and a group problem-solving environment for expert systems development. By shifting the PCLIPS paradigm to use the VEOS messaging protocol we have merged VEOS (HITL/Seattle) and CLIPS into a distributed virtual worlds prototyping environment (VCLIPS). VCLIPS uses the VEOS protocol layer to allow multiple experts to cooperate on a single problem. We have begun to look at the control of a virtual factory. In the virtual factory there are actors and objects as found in our Lincoln Logs Factory of the Future project. In this artificial reality architecture there are three VCLIPS entities in action. One entity is responsible for display and user events in the 3D virtual world. Another is responsible for either simulating the virtual factory or communicating with the real factory. The third is a user interface expert. The interface expert maps user input levels, within the current prototype, to control information for the factory. The interface to the virtual factory is based on a camera paradigm. The graphics subsystem generates camera views of the factory on standard X-Window displays. The camera allows for view control and object control. Control of the factory is accomplished by the user reaching into the camera views to perform object interactions. All communication between the separate CLIPS expert systems is done through VEOS.

  19. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12...and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by IQeye 720 camera. Once an image was

  20. KSC-07pd2213

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 crew members get a close look at hardware in Discovery's payload bay. In the bucket at left is Mission Specialist Paolo A. Nespoli, who is a European Space Agency astronaut from Italy. The object with the shiny gold surface is a payload bay bulkhead camera. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  1. KSC-07pd2195

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles on space shuttle Discovery in Orbiter Processing Facility bay 3 are Mission Specialists Douglas H. Wheelock and Paolo A. Nespoli, a European Space Agency astronaut from Italy, and Expedition 16 Flight Engineer Daniel M. Tani (with camera). Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  2. A high resolution IR/visible imaging system for the W7-X limiter

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.

    2016-11-01

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen that surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (˜1-4.5 MW/m2), during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFO's can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.
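    The calorimetric estimate mentioned above follows from the equilibrated bulk temperature rise: energy input per tile is E = m · c_p · ΔT. The abstract gives only the result (on the order of 30 kJ/tile), so the tile mass, temperature rise, and room-temperature specific heat of graphite used below are assumptions chosen to illustrate the arithmetic.

```python
# Back-of-envelope tile calorimetry: E = m * c_p * dT.
# c_p for graphite near room temperature is roughly 710 J/(kg K);
# the tile mass and temperature rise here are illustrative assumptions.
C_P_GRAPHITE = 710.0  # J/(kg K)

def tile_energy_kj(mass_kg, delta_t_k):
    """Energy absorbed by one tile, in kJ, from its bulk temperature rise."""
    return mass_kg * C_P_GRAPHITE * delta_t_k / 1e3

# A ~1.4 kg tile equilibrating 30 K warmer implies roughly 30 kJ of input,
# consistent in magnitude with the figure quoted in the abstract.
e = tile_energy_kj(mass_kg=1.4, delta_t_k=30.0)
```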

  3. A high resolution IR/visible imaging system for the W7-X limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A., E-mail: wurden@lanl.gov; Dunn, J. P.; Stephey, L. A.

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen that surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (∼1–4.5 MW/m²), during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFO’s can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.

  4. Virtual displays for 360-degree video

    NASA Astrophysics Data System (ADS)

    Gilbert, Stephen; Boonsuk, Wutthigrai; Kelly, Jonathan W.

    2012-03-01

    In this paper we describe a novel approach for comparing users' spatial cognition when using different depictions of 360- degree video on a traditional 2D display. By using virtual cameras within a game engine and texture mapping of these camera feeds to an arbitrary shape, we were able to offer users a 360-degree interface composed of four 90-degree views, two 180-degree views, or one 360-degree view of the same interactive environment. An example experiment is described using these interfaces. This technique for creating alternative displays of wide-angle video facilitates the exploration of how compressed or fish-eye distortions affect spatial perception of the environment and can benefit the creation of interfaces for surveillance and remote system teleoperation.

  5. Geolocating thermal binoculars based on a software defined camera core incorporating HOT MCT grown by MOVPE

    NASA Astrophysics Data System (ADS)

    Pillans, Luke; Harmer, Jack; Edwards, Tim; Richardson, Lee

    2016-05-01

    Geolocation is the process of calculating a target position based on bearing and range relative to the known location of the observer. A high performance thermal imager with integrated geolocation functions is a powerful long range targeting device. Firefly is a software defined camera core incorporating a system-on-a-chip processor running the Android™ operating system. The processor has a range of industry standard serial interfaces which were used to interface to peripheral devices including a laser rangefinder and a digital magnetic compass. The core has built in Global Positioning System (GPS) which provides the third variable required for geolocation. The graphical capability of Firefly allowed flexibility in the design of the man-machine interface (MMI), so the finished system can give access to extensive functionality without appearing cumbersome or over-complicated to the user. This paper covers both the hardware and software design of the system, including how the camera core influenced the selection of peripheral hardware, and the MMI design process which incorporated user feedback at various stages.
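    The geolocation computation itself combines the three inputs named above: observer position (GPS), bearing (digital magnetic compass), and range (laser rangefinder). A minimal sketch using a flat-earth local approximation, valid only for short ranges, is shown below; the actual Firefly implementation is not described in the abstract and may use more exact geodesics.

```python
import math

EARTH_R = 6371000.0  # mean Earth radius in metres

def geolocate(lat_deg, lon_deg, bearing_deg, range_m):
    """Target lat/lon from observer position, bearing (deg from true north,
    clockwise), and slant range. Flat-earth local approximation."""
    lat_rad = math.radians(lat_deg)
    brg_rad = math.radians(bearing_deg)
    north_m = range_m * math.cos(brg_rad)
    east_m = range_m * math.sin(brg_rad)
    tgt_lat = lat_deg + math.degrees(north_m / EARTH_R)
    tgt_lon = lon_deg + math.degrees(east_m / (EARTH_R * math.cos(lat_rad)))
    return tgt_lat, tgt_lon

# A target lased 1 km due north of an observer at 51.0 N, 1.0 W:
lat, lon = geolocate(51.0, -1.0, bearing_deg=0.0, range_m=1000.0)
```

    For long-range targeting, the flat-earth step would be replaced with a proper direct geodesic solution, and the rangefinder's slant range corrected for elevation angle.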

  6. The South African Astronomical Observatory instrumentation software architecture and the SHOC instruments

    NASA Astrophysics Data System (ADS)

    van Gend, Carel; Lombaard, Briehan; Sickafoose, Amanda; Whittal, Hamish

    2016-07-01

    Until recently, software for instruments on the smaller telescopes at the South African Astronomical Observatory (SAAO) has not been designed for remote accessibility and has frequently not followed modern software best practice. We describe a software architecture we have implemented for use with new and upgraded instruments at the SAAO. The architecture was designed to accommodate multiple components; to be fast, reliable, and remotely operable; to support different user interfaces; to employ as much non-proprietary software as possible; and to take future-proofing into consideration. Individual component drivers exist as standalone processes, communicating over a network. A controller layer coordinates the various components and allows a variety of user interfaces to be used. The Sutherland High-speed Optical Cameras (SHOC) instruments incorporate an Andor electron-multiplying CCD camera, a GPS unit for accurate timing, and a pair of filter wheels. We have applied the new architecture to the SHOC instruments, with the camera driver developed using Andor's software development kit, and have used it to develop an innovative web-based user interface to the instrument.

  7. The PanCam Instrument for the ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Coates, A. J.; Jaumann, R.; Griffiths, A. D.; Leff, C. E.; Schmitz, N.; Josset, J.-L.; Paar, G.; Gunn, M.; Hauber, E.; Cousins, C. R.; Cross, R. E.; Grindrod, P.; Bridges, J. C.; Balme, M.; Gupta, S.; Crawford, I. A.; Irwin, P.; Stabbins, R.; Tirsch, D.; Vago, J. L.; Theodorou, T.; Caballo-Perucha, M.; Osinski, G. R.; PanCam Team

    2017-07-01

    The scientific objectives of the ExoMars rover are designed to answer several key questions in the search for life on Mars. In particular, the unique subsurface drill will address some of these, such as the possible existence and stability of subsurface organics. PanCam will establish the surface geological and morphological context for the mission, working in collaboration with other context instruments. Here, we describe the PanCam scientific objectives in geology, atmospheric science, and 3-D vision. We discuss the design of PanCam, which includes a stereo pair of Wide Angle Cameras (WACs), each of which has an 11-position filter wheel, and a High Resolution Camera (HRC) for high-resolution investigations of rock texture at a distance. The cameras and electronics are housed in an optical bench that provides the mechanical interface to the rover mast and a planetary protection barrier. The electronic interface is via the PanCam Interface Unit (PIU), and power conditioning is via a DC-DC converter. PanCam also includes a calibration target mounted on the rover deck for radiometric calibration, fiducial markers for geometric calibration, and a rover inspection mirror.

  8. A compact high-definition low-cost digital stereoscopic video camera for rapid robotic surgery development.

    PubMed

    Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C

    2012-01-01

    Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.

  9. mREST Interface Specification

    NASA Technical Reports Server (NTRS)

    McCartney, Patrick; MacLean, John

    2012-01-01

    mREST is an implementation of the REST architecture specific to the management and sharing of data in a system of logical elements. The purpose of this document is to clearly define the mREST interface protocol. The interface protocol covers all of the interaction between mREST clients and mREST servers. System-level requirements are not specifically addressed. In an mREST system, there are typically some backend interfaces between a Logical System Element (LSE) and the associated hardware/software system. For example, a network camera LSE would have a backend interface to the camera itself. These interfaces are specific to each type of LSE and are not covered in this document. There are also frontend interfaces that may exist in certain mREST manager applications. For example, an electronic procedure execution application may have a specialized interface for configuring the procedures. This interface would be application-specific and outside the scope of this document. mREST is intended to be a generic protocol that can be used in a wide variety of applications. A few scenarios are discussed to provide additional clarity, but, in general, application-specific implementations of mREST are not specifically addressed. In short, this document is intended to provide all of the information necessary for an application developer to create mREST interface agents. This includes both mREST clients (mREST manager applications) and mREST servers (logical system elements, or LSEs).

  10. Gigavision - A weatherproof, multibillion pixel resolution time-lapse camera system for recording and tracking phenology in every plant in a landscape

    NASA Astrophysics Data System (ADS)

    Brown, T.; Borevitz, J. O.; Zimmermann, C.

    2010-12-01

    We have developed a camera system that can record hourly, gigapixel (multi-billion pixel) scale images of an ecosystem in a 360x90 degree panorama. The “Gigavision” camera system is solar-powered and can wirelessly stream data to a server. Quantitative data collection from multiyear timelapse gigapixel images is facilitated through an innovative web-based toolkit for recording time-series data on developmental stages (phenology) from any plant in the camera’s field of view. Gigapixel images enable time-series recording of entire landscapes with a resolution sufficient to record phenology from a majority of individuals in entire populations of plants. When coupled with next generation sequencing, quantitative population genomics can be performed in a landscape context linking ecology and evolution in situ and in real time. The Gigavision camera system achieves gigapixel image resolution by recording rows and columns of overlapping megapixel images. These images are stitched together into a single gigapixel resolution image using commercially available panorama software. Hardware consists of a 5-18 megapixel resolution DSLR or Network IP camera mounted on a pair of heavy-duty servo motors that provide pan-tilt capabilities. The servos and camera are controlled with a low-power Windows PC. Servo movement, power switching, and system status monitoring are enabled with Phidgets-brand sensor boards. System temperature, humidity, power usage, and battery voltage are all monitored at 5 minute intervals. All sensor data are uploaded via cellular or 802.11 wireless to an interactive online interface for easy remote monitoring of system status. Systems with direct internet connections upload the full-sized images directly to our automated stitching server, where they are stitched and available online for viewing within an hour of capture.
Systems with cellular wireless upload an 80 megapixel “thumbnail” of each larger panorama, and full-sized images are manually retrieved at bi-weekly intervals. Our longer-term goal is to make gigapixel time-lapse datasets available online in an interactive interface that layers plant-level phenology data with gigapixel resolution images, genomic sequence data from individual plants, and weather and other abiotic sensor data. Co-visualization of all of these data types provides researchers with a powerful new tool for examining complex ecological interactions across scales from the individual to the ecosystem. We will present detailed phenostage data from more than 100 plants of multiple species from our Gigavision timelapse camera at our “Big Blowout East” field site in the Indiana Dunes State Park, IN. This camera has been recording three to four 700 million pixel images a day since February 28, 2010. The camera field of view covers an area of about 7 hectares, resulting in an average image resolution of about 1 pixel per centimeter over the entire site. We will also discuss some of the many technological challenges of developing and maintaining these types of hardware systems, collecting quantitative data from gigapixel resolution time-lapse data, and effectively managing terabyte-sized datasets of millions of images.
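
    The quoted figure of about 1 pixel per centimeter follows directly from the numbers in this abstract (a roughly 700-million-pixel panorama covering about 7 hectares); a quick sanity check of the arithmetic:

```python
pixels = 700e6                  # ~700 million pixels per panorama
area_cm2 = 7 * 10000 * 10000    # 7 hectares in cm^2 (1 ha = 10^4 m^2, 1 m^2 = 10^4 cm^2)
density = pixels / area_cm2     # pixels per cm^2 of ground
# -> 1.0 pixel per square centimetre, i.e. ~1 pixel per centimetre on the ground
```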

  11. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle, an adjustment that allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.

  12. Camera systems in human motion analysis for biomedical applications

    NASA Astrophysics Data System (ADS)

    Chin, Lim Chee; Basah, Shafriza Nisha; Yaacob, Sazali; Juan, Yeap Ewe; Kadir, Aida Khairunnisaa Ab.

    2015-05-01

    Human Motion Analysis (HMA) systems have been one of the major interests among researchers in the fields of computer vision, artificial intelligence, and biomedical engineering and sciences. This is due to their wide and promising biomedical applications, namely bio-instrumentation for human-computer interfacing, surveillance systems for monitoring human behaviour, and biomedical signal and image processing for diagnosis and rehabilitation applications. This paper provides an extensive review of the camera systems used in HMA and their taxonomy, including camera types, camera calibration, and camera configuration. The review focuses on evaluating camera system considerations for HMA specifically in biomedical applications, and is important because it provides guidelines and recommendations for researchers and practitioners selecting a camera system for an HMA system in biomedical applications.
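
    Camera calibration, one of the taxonomy items the review covers, typically estimates the intrinsic parameters of the pinhole camera model; a minimal projection sketch, with illustrative (assumed) intrinsic values rather than any calibrated system's:

```python
def project(point, fx=800.0, fy=800.0, cx=320.0, cy=240.0):
    """Pinhole projection of a 3-D point (camera coordinates, metres)
    to pixel coordinates. fx, fy are focal lengths in pixels and
    (cx, cy) the principal point; all values here are illustrative."""
    x, y, z = point
    return (fx * x / z + cx, fy * y / z + cy)

# A point 2 m in front of the camera, slightly right and below centre.
u, v = project((0.1, -0.05, 2.0))  # -> (360.0, 220.0)
```

    Calibration procedures recover fx, fy, cx, cy (and distortion terms) by fitting this model to images of a known target.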

  13. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1991-01-01

    A program of research embracing teleoperator and automatic navigational control of freely flying satellite robots is presented. Current research goals include: (1) developing visual operator interfaces for improved vehicle teleoperation; (2) determining the effects of different visual interface system designs on operator performance; and (3) achieving autonomous vision-based vehicle navigation and control. This research program combines virtual-environment teleoperation studies and neutral-buoyancy experiments using a space-robot simulator vehicle currently under development. Visual-interface design options under investigation include monoscopic versus stereoscopic displays and cameras, helmet-mounted versus panel-mounted display monitors, head-tracking versus fixed or manually steerable remote cameras, and the provision of vehicle-fixed visual cues, or markers, in the remote scene for improved sensing of vehicle position, orientation, and motion.

  14. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
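
    The "slightly more than 136 years" rollover period quoted above is consistent with a 32-bit count of seconds; treating the time code that way is an assumption about the Geo-TimeCode format, not a detail stated in the record:

```python
seconds = 2 ** 32                        # capacity of a 32-bit seconds counter
years = seconds / (365.25 * 24 * 3600)   # convert to Julian years
# -> about 136.1 years before the code repeats
```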

  15. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input interface. The interface can replace the mouse, inferring a user's directional intention by the method of frame difference. We divide a frame from the Webcam into nine grids and make use of background registration to compute the moving object. In order to…
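
    The frame-difference scheme described (a frame divided into nine grids, with motion computed per grid) can be sketched in pure Python; the threshold value and grid-mapping details are illustrative assumptions:

```python
def active_grids(prev, curr, rows=3, cols=3, threshold=30):
    """Return the set of (row, col) grid cells in which frame
    differencing detects motion. Frames are 2-D lists of
    grayscale pixel values; the threshold is illustrative."""
    h, w = len(curr), len(curr[0])
    active = set()
    for y in range(h):
        for x in range(w):
            if abs(curr[y][x] - prev[y][x]) > threshold:
                active.add((y * rows // h, x * cols // w))
    return active

# Synthetic 6x6 frames: motion only in the top-left corner.
prev = [[0] * 6 for _ in range(6)]
curr = [[0] * 6 for _ in range(6)]
curr[0][0] = 255
cells = active_grids(prev, curr)  # -> {(0, 0)}
```

    Mapping the active cell to a direction (e.g. top-centre means "move forward") would then replace the mouse input.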

  16. Image Intensifier Modules For Use With Commercially Available Solid State Cameras

    NASA Astrophysics Data System (ADS)

    Murphy, Howard; Tyler, Al; Lake, Donald W.

    1989-04-01

    A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled from two or three electronic subassemblies at very low labor cost, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low-resistance paths to the camera case by metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in the design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent to a small, lightweight, and rugged image-sensing component.
Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber-optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens-input units. Fiber-optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low-light-level and/or short-exposure-time situations.

  17. A generic readout system for astrophysical detectors

    NASA Astrophysics Data System (ADS)

    Doumayrou, E.; Lortholary, M.

    2012-09-01

    We have developed a generic digital platform to meet the needs of new-detector development in astrophysics; it is used in the lab, in instruments for ground-based telescopes, and in prototype versions for space-instrument development. The system is based on an FPGA electronic board (called MISE) together with software on a PC (called BEAR). The MISE board generates the fast clocking that reads out the detectors, using a programmable digital sequencer, and performs data acquisition, buffering of digitized pixel outputs, and interfacing with other boards. The data are then sent to the PC via a SpaceWire or USB link. The BEAR software sets up the MISE board, performs data acquisition, and enables online visualization, processing, and storage of the data. These software tools are written in C++ and LabVIEW (NI) on a Linux OS. MISE and BEAR form a generic acquisition architecture onto which dedicated analog boards are plugged to accommodate the specifics of each detector: number of pixels, readout channels and frequency, and analog bias and clock interfaces. We have used this concept to build a camera for the P-ARTEMIS project, including a 256-pixel sub-millimeter bolometer detector read at 10 kpixel/s (SPIE 7741-12 (2010)). For the EUCLID project, a lab camera is now working for the testing of 4-Mpixel CCDs at 4×200 kpixel/s. Another is working for the testing of new near-infrared detectors (NIR LFSA for the ESA TRP program): 110 kpixels at 2×100 kpixel/s. Other projects are in progress for the space missions PLATO and SPICA.

  18. Development of a Control Interface for a Robotized Astronomical Observatory for Educational Purposes at the Facultad de Ciencias Exactas, Físicas y Naturales of the UNSJ

    NASA Astrophysics Data System (ADS)

    Pogrebinsky, L.; Francile, C.

    We report the development and construction of an Interface to Control a robotized Astronomical Observatory (ICOA), which allows control of the operation of an observatory based on a Meade LX200 telescope. The interface operates together with a computer to control and supervise all the local variables of the observatory, and can take control of it in risky situations. It serves as a link between the control computer and all the devices necessary for astronomical observation, such as the telescope, the dome, the weather station, the CCD camera, the calibration devices, and the security devices. The computer receives orders from an operator who may or may not be at the observation site. The goal of this robotized observatory is operation in a secure, autonomous, and unattended way, so that it can be used remotely by students of the Facultad de Ciencias Exactas, Físicas y Naturales of the UNSJ. FULL TEXT IN SPANISH

  19. 25 CFR 543.2 - What are the definitions for this part?

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ..., mechanical, or other technologic form, that function together to aid the play of one or more Class II games... a particular game, player interface, shift, or other period. Count room. A secured room where the... validated directly by a voucher system. Dedicated camera. A video camera that continuously records a...

  20. Lock-in thermography, penetrant inspection, and scanning electron microscopy for quantitative evaluation of open micro-cracks at the tooth-restoration interface

    NASA Astrophysics Data System (ADS)

    Streza, M.; Hodisan, I.; Prejmerean, C.; Boue, C.; Tessier, Gilles

    2015-03-01

    The evaluation of a dental restoration in a non-invasive way is of paramount importance in clinical practice. The aim of this study was to assess the minimum detectable open crack at the cavity-restorative material interface by the lock-in thermography technique, at laser intensities which are safe for living teeth. For the analysis of the interface, 18 box-type class V standardized cavities were prepared on the facial and oral surfaces of each tooth, with coronal margins in enamel and apical margins in dentine. The preparations were restored with the Giomer Beautifil (Shofu) in combination with three different adhesive systems. Three specimens were randomly selected from each experimental group, and each slice was analysed by visible-light, infrared (IR), and scanning electron microscopy (SEM). Lock-in thermography showed the most promising results in detecting both marginal and internal defects. The proposed procedure enables the diagnosis of micro-leakages with openings of 1 µm, which is close to the diffraction limit of the IR camera. Clinical use of a thermographic camera in assessing the marginal integrity of a restoration becomes possible. The method overcomes some drawbacks of standard SEM or dye penetration testing. The results support the use of an IR camera in dentistry for the diagnosis of micro-gaps at bio-interfaces.
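
    Lock-in thermography recovers the amplitude and phase of the periodic surface-temperature response at the laser modulation frequency; a minimal per-pixel demodulation sketch on synthetic data (all parameter values are illustrative, not the study's):

```python
import math

def lockin(signal, fs, f_mod):
    """Demodulate a pixel time series sampled at fs (Hz) against
    reference sinusoids at the modulation frequency f_mod (Hz),
    returning (amplitude, phase) of that frequency component."""
    n = len(signal)
    i = sum(s * math.cos(2 * math.pi * f_mod * k / fs) for k, s in enumerate(signal))
    q = sum(s * math.sin(2 * math.pi * f_mod * k / fs) for k, s in enumerate(signal))
    i, q = 2 * i / n, 2 * q / n
    return math.hypot(i, q), math.atan2(q, i)

# Synthetic pixel: 0.5 K oscillation at 1 Hz, sampled at 50 Hz for 2 s.
sig = [0.5 * math.cos(2 * math.pi * 1.0 * k / 50) for k in range(100)]
amp, phase = lockin(sig, fs=50, f_mod=1.0)  # amp ~ 0.5, phase ~ 0
```

    In practice the phase image is the more defect-sensitive output, since it is largely independent of surface emissivity.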

  1. Discriminating between intentional and unintentional gaze fixation using multimodal-based fuzzy logic algorithm for gaze tracking system with NIR camera sensor

    NASA Astrophysics Data System (ADS)

    Naqvi, Rizwan Ali; Park, Kang Ryoung

    2016-06-01

    Gaze tracking systems are widely used in human-computer interfaces, interfaces for the disabled, game interfaces, and for controlling home appliances. Most studies on gaze detection have focused on enhancing its accuracy, whereas few have considered the discrimination of intentional gaze fixation (looking at a target to activate or select it) from unintentional fixation while using gaze detection systems. Previous research methods based on the use of a keyboard or mouse button, eye blinking, and the dwell time of gaze position have various limitations. Therefore, we propose a method for discriminating between intentional and unintentional gaze fixation using a multimodal fuzzy logic algorithm applied to a gaze tracking system with a near-infrared camera sensor. Experimental results show that the proposed method outperforms the conventional method for determining gaze fixation.
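
    A fuzzy-logic discriminator of the kind proposed combines membership functions over gaze features such as dwell time and spatial dispersion; this sketch uses invented triangular memberships and a single min-rule, not the authors' actual rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b,
    falling to zero at c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def intentional_score(dwell_ms, dispersion_px):
    """Fuzzy score in [0, 1]: long dwell AND low dispersion suggests
    intentional fixation. Breakpoints are illustrative assumptions."""
    long_dwell = tri(dwell_ms, 300, 800, 2000)
    low_disp = tri(dispersion_px, -1, 0, 40)
    return min(long_dwell, low_disp)  # fuzzy AND via minimum

score = intentional_score(800, 5)    # long, tight fixation -> high score
noise = intentional_score(100, 60)   # brief, scattered gaze -> 0.0
```

    Thresholding the score then replaces the dwell-time-only criterion whose limitations the abstract notes.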

  2. The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storek, D.J.; Gentle, K.W.

    1996-03-01

    The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape, with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than commercial units which offer less capability. The system was developed for and is being used to measure heat fluxes to the plasma-facing components in a tokamak. © 1996 American Institute of Physics.

  3. Computational Virtual Reality (VR) as a human-computer interface in the operation of telerobotic systems

    NASA Technical Reports Server (NTRS)

    Bejczy, Antal K.

    1995-01-01

    This presentation focuses on the application of computer graphics or 'virtual reality' (VR) techniques as a human-computer interface tool in the operation of telerobotic systems. VR techniques offer very valuable task realization aids for planning, previewing and predicting robotic actions, operator training, and for visual perception of non-visible events like contact forces in robotic tasks. The utility of computer graphics in telerobotic operation can be significantly enhanced by high-fidelity calibration of virtual reality images to actual TV camera images. This calibration will even permit the creation of artificial (synthetic) views of task scenes for which no TV camera views are available.

  4. The PanCam Instrument for the ExoMars Rover

    PubMed Central

    Coates, A.J.; Jaumann, R.; Griffiths, A.D.; Leff, C.E.; Schmitz, N.; Josset, J.-L.; Paar, G.; Gunn, M.; Hauber, E.; Cousins, C.R.; Cross, R.E.; Grindrod, P.; Bridges, J.C.; Balme, M.; Gupta, S.; Crawford, I.A.; Irwin, P.; Stabbins, R.; Tirsch, D.; Vago, J.L.; Theodorou, T.; Caballo-Perucha, M.; Osinski, G.R.

    2017-01-01

    Abstract The scientific objectives of the ExoMars rover are designed to answer several key questions in the search for life on Mars. In particular, the unique subsurface drill will address some of these, such as the possible existence and stability of subsurface organics. PanCam will establish the surface geological and morphological context for the mission, working in collaboration with other context instruments. Here, we describe the PanCam scientific objectives in geology, atmospheric science, and 3-D vision. We discuss the design of PanCam, which includes a stereo pair of Wide Angle Cameras (WACs), each of which has an 11-position filter wheel, and a High Resolution Camera (HRC) for high-resolution investigations of rock texture at a distance. The cameras and electronics are housed in an optical bench that provides the mechanical interface to the rover mast and a planetary protection barrier. The electronic interface is via the PanCam Interface Unit (PIU), and power conditioning is via a DC-DC converter. PanCam also includes a calibration target mounted on the rover deck for radiometric calibration, fiducial markers for geometric calibration, and a rover inspection mirror. Key Words: Mars—ExoMars—Instrumentation—Geology—Atmosphere—Exobiology—Context. Astrobiology 17, 511–541.

  5. Measurement of resistance to solute transport across surfactant-laden interfaces using a Fluorescence Recovery After Photobleaching (FRAP) technique

    NASA Technical Reports Server (NTRS)

    Browne, Edward P.; Nivaggioli, Thierry; Hatton, T. Alan

    1994-01-01

    A noninvasive fluorescence recovery after photobleaching (FRAP) technique is under development to measure interfacial transport in two-phase systems without disturbing the interface. The concentration profiles of a probe solute are measured on both sides of the interface using an argon-ion laser, and the system relaxation is then monitored by a microscope-mounted CCD camera.

  6. High resolution bone mineral densitometry with a gamma camera

    NASA Technical Reports Server (NTRS)

    Leblanc, A.; Evans, H.; Jhingran, S.; Johnson, P.

    1983-01-01

    A technique by which the regional distribution of bone mineral can be determined in bone samples from small animals is described. The technique employs an Anger camera interfaced to a medical computer. High-resolution imaging is possible by producing magnified images of the bone samples. Regional densitometry of femurs from oophorectomised animals was used to assess bone mineral loss.

  7. Unattended real-time re-establishment of visibility in high dynamic range video and stills

    NASA Astrophysics Data System (ADS)

    Abidi, B.

    2014-05-01

    We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high-dynamic-range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to the creation of saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because all the information is not present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial Systems (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data-acquisition algorithm, which communicates to the camera the parameters that would maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene where information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night-vision, and infrared data; and is successfully applied to night-time and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive, easy-to-use graphical interfaces.
The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision-making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
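
    The fusion of smartly acquired exposures can be illustrated with a simple well-exposedness-weighted blend; the weighting function here is a common generic choice and an assumption, not the system's actual algorithm:

```python
def fuse(exposures):
    """Blend multiple exposures of one scene pixel-by-pixel, weighting
    values near mid-range (well exposed) more than saturated or dark
    ones. Each exposure is a 2-D list of grayscale values in [0, 255]."""
    h, w = len(exposures[0]), len(exposures[0][0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [e[y][x] for e in exposures]
            # Weight peaks at mid-gray; epsilon avoids division by zero.
            weights = [1.0 - abs(v - 127.5) / 127.5 + 1e-6 for v in vals]
            out[y][x] = sum(wt * v for wt, v in zip(weights, vals)) / sum(weights)
    return out

dark = [[10, 120]]            # underexposed frame (1x2 pixels)
bright = [[130, 255]]         # overexposed frame
fused = fuse([dark, bright])  # well-exposed values dominate each pixel
```

    Production systems typically blend per-pixel in a multi-scale (pyramid) fashion to avoid seams, but the weighting idea is the same.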

  8. A Laboratory Application of Microcomputer Graphics.

    ERIC Educational Resources Information Center

    Gehring, Kalle B.; Moore, John W.

    1983-01-01

    A PASCAL graphics and instrument-interface program for a Z80/S-100-based microcomputer was developed. The computer interfaces to a stopped-flow spectrophotometer, replacing a storage oscilloscope and Polaroid camera. Applications of this system are discussed, indicating that graphics and analog-to-digital boards have transformed the computer into…

  9. Liquid lens enabling real-time focus and tilt compensation for optical image stabilization in camera modules

    NASA Astrophysics Data System (ADS)

    Simon, Eric; Craen, Pierre; Gaton, Hilario; Jacques-Sermet, Olivier; Laune, Frédéric; Legrand, Julien; Maillard, Mathieu; Tallaron, Nicolas; Verplanck, Nicolas; Berge, Bruno

    2010-05-01

    A new generation of liquid lenses based on electrowetting has been developed, using a multi-electrode design that enables optical tilt and focus corrections to be induced in the same component. The basic principle is to rely on a conical shape for supporting the liquid interface, the cone ensuring a restoring force that brings the liquid-liquid interface back to the center position. The multi-electrode design allows an average tilt of the liquid-liquid interface to be induced when a bias voltage is applied across the different electrodes. This tilt is reversible, vanishing when the voltage bias is cancelled. A possible application of this new lens component is a miniature camera featuring auto-focus and optical image stabilization (OIS) without any moving mechanical parts. Experimental measurements of the actual performance of the liquid lens component are presented: focus and tilt amplitude, residual optical wavefront error, and response time.

  10. A sensor simulation framework for the testing and evaluation of external hazard monitors and integrated alerting and notification functions

    NASA Astrophysics Data System (ADS)

    Uijt de Haag, Maarten; Venable, Kyle; Bezawada, Rajesh; Adami, Tony; Vadlamani, Ananth K.

    2009-05-01

This paper discusses a sensor simulator/synthesizer framework that can be used to test and evaluate various sensor integration strategies for the implementation of an External Hazard Monitor (EHM) and Integrated Alerting and Notification (IAN) function as part of NASA's Integrated Intelligent Flight Deck (IIFD) project. The IIFD project, under NASA's Aviation Safety program, "pursues technologies related to the flight deck that ensure crew workload and situational awareness are both safely optimized and adapted to the future operational environment as envisioned by NextGen." Within the simulation framework, various inputs to the IIFD and its subsystems, the EHM and IAN, are simulated, synthesized from actual collected data, or played back from actual flight-test sensor data. Sensors and avionics included in this framework are TCAS, ADS-B, forward-looking infrared, vision cameras, GPS, inertial navigators, EGPWS, laser detection and ranging sensors, altimeters, communication links with ATC, and weather radar. The framework is implemented in Simulink, a modeling language developed by The MathWorks. This modeling language allows for test and evaluation of various sensor and communication link configurations as well as the inclusion of feedback from the pilot on the performance of the aircraft. Specifically, this paper addresses the architecture of the simulator, the sensor model interfaces, the timing and database (environment) aspects of the sensor models, the user interface of the modeling environment, and the various avionics implementations.
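The framework itself is built in Simulink, but the pattern it describes — sensor models that are either purely simulated or driven by recorded flight-test data, all stepped on a common clock — can be sketched in a few lines. The class and method names below are illustrative assumptions, not the framework's actual interfaces.

```python
# Illustrative sketch (not the paper's Simulink framework): one way to
# structure a sensor simulator that mixes modeled and played-back sources.
from abc import ABC, abstractmethod

class SensorModel(ABC):
    """A sensor that produces one sample per update interval."""
    def __init__(self, rate_hz):
        self.period = 1.0 / rate_hz

    @abstractmethod
    def output(self, t):
        ...

class SimulatedAltimeter(SensorModel):
    """Purely simulated input: altitude from a simple climb model."""
    def __init__(self, rate_hz, climb_rate_mps):
        super().__init__(rate_hz)
        self.climb_rate = climb_rate_mps

    def output(self, t):
        return 1000.0 + self.climb_rate * t  # metres

class PlaybackSensor(SensorModel):
    """Played-back input: samples recorded during an actual flight test."""
    def __init__(self, rate_hz, samples):
        super().__init__(rate_hz)
        self.samples = samples

    def output(self, t):
        idx = min(int(t / self.period), len(self.samples) - 1)
        return self.samples[idx]

def run(sensors, duration, dt):
    """Step all sensor models on a common clock, as a scheduler would."""
    log = []
    t = 0.0
    while t < duration:
        log.append({name: s.output(t) for name, s in sensors.items()})
        t += dt
    return log
```

A real framework would add per-sensor update rates, latency, and noise models; the point here is only the common interface shared by simulated and recorded sources.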

  11. Procurement specification color graphic camera system

    NASA Technical Reports Server (NTRS)

    Prow, G. E.

    1980-01-01

The performance and design requirements for a Color Graphic Camera System are presented. The system is a functional part of the Earth Observation Department Laboratory System (EODLS) and will be interfaced with Image Analysis Stations. It will convert the output of a raster-scan computer color terminal into permanent, high-resolution photographic prints and transparencies. The images displayed will typically be remotely sensed LANDSAT scenes.

  12. Integrated multi sensors and camera video sequence application for performance monitoring in archery

    NASA Astrophysics Data System (ADS)

    Taha, Zahari; Arif Mat-Jizat, Jessnor; Amirul Abdullah, Muhammad; Muazu Musa, Rabiu; Razali Abdullah, Mohamad; Fauzi Ibrahim, Mohamad; Hanafiah Shaharudin, Mohd Ali

    2018-03-01

This paper explains the development of comprehensive archery performance monitoring software consisting of three camera views and five body sensors. The five body sensors evaluate biomechanics-related variables: flexor and extensor muscle activity, heart rate, postural sway, and bow movement during archery performance. The three camera views and the five body sensors are integrated into a single computer application which enables the user to view all the data in a single user interface. The five body sensors' data are displayed in numerical and graphical form in real time. The information transmitted by the body sensors is processed by an embedded algorithm that automatically computes a summary of the athlete's biomechanical performance and displays it in the application interface. This performance is later compared with the pre-computed psycho-fitness performance derived from the data prefilled into the application. All the data (camera views, body sensors, and performance computations) are recorded for further analysis by a sports scientist. Our application serves as a powerful tool for assisting the coach and athletes to observe and identify any incorrect technique employed during training, which gives room for correction and re-evaluation to improve overall performance in the sport of archery.

  13. Development of an imaging method for quantifying a large digital PCR droplet

    NASA Astrophysics Data System (ADS)

    Huang, Jen-Yu; Lee, Shu-Sheng; Hsu, Yu-Hsiang

    2017-02-01

Portable devices have been recognized as the future link between end-users and lab-on-a-chip devices. They have a user-friendly interface and provide apps to interface with headphones, cameras, communication channels, etc. In particular, the cameras installed in smartphones and pads already offer high imaging resolution with a large number of pixels. This unique feature has prompted research into integrating optical fixtures with smartphones to provide microscopic imaging capabilities. In this paper, we report our study on developing a portable diagnostic tool based on the imaging system of a smartphone and a digital PCR biochip. A computational algorithm is developed to process optical images taken from a digital PCR biochip with a smartphone in a black box. Each reaction droplet is recorded in pixels and analyzed in the sRGB (red, green, blue) color space. A multistep filtering algorithm and an auto-threshold algorithm are adopted to minimize background noise from the CCD camera and to rule out false-positive droplets, respectively. Finally, a size-filtering method is applied to identify the number of positive droplets and quantify the target's concentration. Statistical analysis is then performed for diagnostic purposes. This process can be integrated into an app with a user-friendly interface that requires no professional training.
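The counting stage described above (threshold, noise rejection, size filter) can be illustrated with a minimal connected-component counter. This is a generic sketch of the technique — pure Python, a fixed green-channel threshold, 4-connectivity — not the paper's multistep filtering and auto-thresholding algorithm.

```python
# Sketch of the counting stage only: threshold -> connected components ->
# size filter. The fixed threshold stands in for the paper's auto-threshold.
def count_positive_droplets(image, threshold=128, min_size=4):
    """image: 2D list of (r, g, b) pixels. Counts bright-green blobs whose
    pixel area is at least min_size, discarding smaller (noise) blobs."""
    h, w = len(image), len(image[0])
    mask = [[image[y][x][1] >= threshold for x in range(w)] for y in range(h)]
    seen = [[False] * w for _ in range(h)]
    count = 0
    for y in range(h):
        for x in range(w):
            if mask[y][x] and not seen[y][x]:
                # flood fill one connected component (4-connectivity)
                stack, size = [(y, x)], 0
                seen[y][x] = True
                while stack:
                    cy, cx = stack.pop()
                    size += 1
                    for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                        if 0 <= ny < h and 0 <= nx < w and mask[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if size >= min_size:  # size filter: keep real droplets only
                    count += 1
    return count
```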

  14. User interface using a 3D model for video surveillance

    NASA Astrophysics Data System (ADS)

    Hata, Toshihiko; Boh, Satoru; Tsukada, Akihiro; Ozaki, Minoru

    1998-02-01

These days, industrial surveillance and monitoring applications such as plant control and building security must be handled by fewer people, who must carry out their tasks quickly and precisely. Utilizing multimedia technology is a good approach to meeting this need, and we previously developed Media Controller, which is designed for these applications and provides real-time recording and retrieval of digital video data in a distributed environment. In this paper, we propose a user interface for such a distributed video surveillance system in which 3D models of buildings and facilities are connected to the surveillance video. A novel method of synchronizing camera field data with each frame of a video stream is considered. This method records and reads the camera field data similarly to the video data and transmits it synchronously with the video stream. This enables the user interface to offer such useful functions as comprehending the camera field immediately and providing clues when visibility is poor, for not only live video but also playback video. We have also implemented and evaluated the display function which makes the surveillance video and the 3D model work together, using Media Controller with Java and the Virtual Reality Modeling Language employed for multi-purpose and intranet use of the 3D model.
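The synchronization method — recording the camera field data frame-by-frame, like the video itself — can be sketched as a time-indexed track that serves both live and playback lookups. The field names (pan, tilt, zoom) are assumptions for illustration, not the paper's actual record layout.

```python
# Illustrative sketch: camera-field records stored alongside the video so
# that any frame, live or played back, can be paired with the field of
# view that produced it.
from bisect import bisect_right

class CameraFieldTrack:
    def __init__(self):
        self._times = []   # capture timestamps, seconds (monotonic)
        self._fields = []  # (pan_deg, tilt_deg, zoom) per frame

    def record(self, t, pan, tilt, zoom):
        """Store one camera-field record, as each video frame is captured."""
        self._times.append(t)
        self._fields.append((pan, tilt, zoom))

    def field_at(self, t):
        """Return the camera field in effect at playback time t."""
        i = bisect_right(self._times, t) - 1
        if i < 0:
            raise ValueError("no field data recorded before t")
        return self._fields[i]
```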

  15. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    NASA Technical Reports Server (NTRS)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written in Microsoft Visual C++ with the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  16. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera

    PubMed Central

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-01-01

In recent years, there has been an increase in the number of mobile robots controlled by a smartphone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
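The "simple analytic geometry" step can be illustrated with a back-projection sketch: with calibrated intrinsics and a known camera height and downward pitch, a clicked pixel determines a ray whose intersection with the floor plane gives the 3D point. The flat-floor, pitch-only geometry and all parameter names are assumptions for illustration, not the paper's exact derivation.

```python
# Minimal pinhole back-projection onto a flat floor (illustrative geometry).
import math

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height, pitch_down):
    """Return (X, Y) on the floor plane, in metres, for image pixel (u, v).
    X is right of the camera, Y is forward along the ground.
    fx, fy, cx, cy: calibrated intrinsics; pitch_down: radians below horizon."""
    dx = (u - cx) / fx          # normalized ray direction, camera frame
    dy = (v - cy) / fy          # (x right, y down, z forward)
    denom = math.cos(pitch_down) * dy + math.sin(pitch_down)
    if denom <= 0:
        raise ValueError("ray does not hit the floor (points at/above horizon)")
    t = cam_height / denom      # distance along the ray to the floor
    X = t * dx
    Y = t * (math.cos(pitch_down) - math.sin(pitch_down) * dy)
    return X, Y
```

Sanity check: a camera 1 m high, pitched 45° down, clicking the principal point should land on the floor 1 m ahead.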

  17. Afocal viewport optics for underwater imaging

    NASA Astrophysics Data System (ADS)

    Slater, Dan

    2014-09-01

A conventional camera can be adapted for underwater use by enclosing it in a sealed waterproof pressure housing with a viewport. The viewport, as an optical interface between water and air, must accommodate both the camera and the water's optical characteristics while also providing a high-pressure water seal. Limited hydrospace visibility drives the need for wide-angle viewports. Practical optical interfaces between seawater and air vary from simple flat-plate windows to complex water-contact lenses. This paper first provides a brief overview of the physical and optical properties of the ocean environment along with suitable optical materials. This is followed by a discussion of the characteristics of various afocal underwater viewport types, including flat windows, domes, and the Ivanoff corrector lens, a derivative of a Galilean wide-angle camera adapter. Several new and interesting optical designs derived from the Ivanoff corrector lens are presented, including a pair of very compact afocal viewport lenses that are compatible with both in-water and in-air environments and an afocal underwater hyper-hemispherical fisheye lens.
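To see why limited visibility drives the need for wide-angle viewports, consider the simplest case, the flat window: Snell's law at the water-air interface compresses the in-water field of view by the refractive-index ratio. A quick numerical check of this standard optics result (not a calculation from the paper):

```python
# Effective field of view behind a flat port: n_water*sin(theta_w) = n_air*sin(theta_a),
# so the camera's in-air half-angle maps to a narrower in-water half-angle.
import math

N_WATER, N_AIR = 1.33, 1.0

def in_water_half_angle(in_air_half_angle_deg):
    """Half field-of-view in water for a camera behind a flat port, degrees."""
    s = (N_AIR / N_WATER) * math.sin(math.radians(in_air_half_angle_deg))
    return math.degrees(math.asin(s))
```

A lens with a 90° in-air field of view (45° half-angle) retains only about a 64° field of view behind a flat port — one reason domes and corrector lenses are attractive.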

  18. A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.

    PubMed

    Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo

    2016-03-25

In recent years, there has been an increase in the number of mobile robots controlled by a smartphone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.

  19. Alaskan Auroral All-Sky Images on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Stenbaek-Nielsen, H. C.

    1997-01-01

In response to a 1995 NASA SPDS announcement of support for preservation and distribution of important data sets online, the Geophysical Institute, University of Alaska Fairbanks, Alaska, proposed to provide World Wide Web access to the Poker Flat Auroral All-Sky Camera images in real time. The Poker auroral all-sky camera is located in the Davis Science Operation Center at Poker Flat Rocket Range, about 30 miles north-east of Fairbanks, Alaska, and is connected, through a microwave link, with the Geophysical Institute, where we maintain the database linked to the Web. To protect the low-light-level all-sky TV camera from damage due to excessive light, we operate only during the winter season when the moon is down. The camera and data acquisition are now fully computer controlled. Digital images are transmitted each minute to the Web-linked database, where the data are available in a number of different presentations: (1) individual JPEG-compressed images (1-minute resolution); (2) a time-lapse MPEG movie of the stored images; and (3) a meridional plot of the entire night's activity.

  20. Engineering study for pallet adapting the Apollo laser altimeter and photographic camera system for the Lidar Test Experiment on orbital flight tests 2 and 4

    NASA Technical Reports Server (NTRS)

    Kuebert, E. J.

    1977-01-01

A Laser Altimeter and Mapping Camera System was included in the Apollo Lunar Orbital Experiment Missions. The backup system, never used in the Apollo Program, is available for use in the Lidar Test Experiments on the STS Orbital Flight Tests 2 and 4. Studies were performed to assess the problems associated with installation and operation of the Mapping Camera System in the STS. The studies covered the photographic capabilities of the Mapping Camera System, its mechanical and electrical interfaces with the STS, documentation, operation and survivability in the expected environments, ground support equipment, and test and field support.

  1. A wideband wireless neural stimulation platform for high-density microelectrode arrays.

    PubMed

    Myers, Frank B; Simpson, Jim A; Ghovanloo, Maysam

    2006-01-01

    We describe a system that allows researchers to control an implantable neural microstimulator from a PC via a USB 2.0 interface and a novel dual-carrier wireless link, which provides separate data and power transmission. Our wireless stimulator, Interestim-2B (IS-2B), is a modular device capable of generating controlled-current stimulation pulse trains across 32 sites per module with support for a variety of stimulation schemes (biphasic/monophasic, bipolar/monopolar). We have developed software to generate multi-site stimulation commands for the IS-2B based on streaming data from artificial sensory devices such as cameras and microphones. For PC interfacing, we have developed a USB 2.0 microcontroller-based interface. Data is transmitted using frequency-shift keying (FSK) at 6/12 MHz to achieve a data rate of 3 Mb/s via a pair of rectangular coils. Power is generated using a class-E power amplifier operating at 1 MHz and transmitted via a separate pair of spiral planar coils which are oriented perpendicular to the data coils to minimize cross-coupling. We have successfully demonstrated the operation of the system by applying it as a visual prosthesis. Pulse-frequency modulated stimuli are generated in real-time based on a grayscale image from a webcam. These pulses are projected onto an 11x11 LED matrix that represents a 2D microelectrode array.
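The data side of the dual-carrier link can be sketched as a conventional binary FSK modulator: each bit keys the coil drive to one of the two carrier frequencies. The sample rate and the bit-to-frequency mapping below are assumptions for illustration, not the IS-2B's actual transmitter design.

```python
# Illustrative binary FSK encoder for the 6/12 MHz, 3 Mb/s data link.
import math

F0, F1 = 6e6, 12e6      # carrier frequencies, Hz (assumed: 0 -> 6 MHz, 1 -> 12 MHz)
BIT_RATE = 3e6          # bits per second
FS = 48e6               # simulation sample rate, Hz (assumption)

def fsk_modulate(bits):
    """Return waveform samples: each bit keys one of the two carriers."""
    samples_per_bit = int(FS / BIT_RATE)   # 16 samples per bit here
    out = []
    for i, bit in enumerate(bits):
        f = F1 if bit else F0
        for k in range(samples_per_bit):
            t = (i * samples_per_bit + k) / FS
            out.append(math.sin(2 * math.pi * f * t))
    return out
```

A receiver would recover the bits by comparing energy at the two carrier frequencies over each bit period.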

  2. PNIC - A near infrared camera for testing focal plane arrays

    NASA Astrophysics Data System (ADS)

    Hereld, Mark; Harper, D. A.; Pernic, R. J.; Rauscher, Bernard J.

    1990-07-01

    This paper describes the design and the performance of the Astrophysical Research Consortium prototype near-infrared camera (pNIC) designed to test focal plane arrays both on and off the telescope. Special attention is given to the detector in pNIC, the mechanical and optical designs, the electronics, and the instrument interface. Experiments performed to illustrate the most salient aspects of pNIC are described.

  3. Speech versus manual control of camera functions during a telerobotic task

    NASA Technical Reports Server (NTRS)

    Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.

    1989-01-01

Voice input for control of camera functions was investigated in this study. Objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.

  4. Local intelligent electronic device (IED) rendering templates over limited bandwidth communication link to manage remote IED

    DOEpatents

    Bradetich, Ryan; Dearien, Jason A; Grussling, Barry Jakob; Remaley, Gavin

    2013-11-05

    The present disclosure provides systems and methods for remote device management. According to various embodiments, a local intelligent electronic device (IED) may be in communication with a remote IED via a limited bandwidth communication link, such as a serial link. The limited bandwidth communication link may not support traditional remote management interfaces. According to one embodiment, a local IED may present an operator with a management interface for a remote IED by rendering locally stored templates. The local IED may render the locally stored templates using sparse data obtained from the remote IED. According to various embodiments, the management interface may be a web client interface and/or an HTML interface. The bandwidth required to present a remote management interface may be significantly reduced by rendering locally stored templates rather than requesting an entire management interface from the remote IED. According to various embodiments, an IED may comprise an encryption transceiver.
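The core idea — only sparse live values cross the limited-bandwidth link, while the bulky markup stays on the local device — can be sketched with a stored template. The template text and field names here are illustrative, not taken from the patent.

```python
# Illustrative sketch: the management page's markup lives on the local IED;
# only the small set of live values is read over the serial link.
from string import Template

# Stored locally on the managing IED (never sent over the link):
LOCAL_TEMPLATE = Template(
    "<html><body><h1>Relay $device_id</h1>"
    "<p>Breaker: $breaker_state</p>"
    "<p>Line current: $current_a A</p></body></html>"
)

def render_remote_status(sparse_data):
    """sparse_data: the few live values read from the remote IED."""
    return LOCAL_TEMPLATE.substitute(sparse_data)
```

Here the link carries only a handful of key-value pairs instead of the full HTML page, which is the bandwidth saving the disclosure describes.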

  5. User Interface Preferences in the Design of a Camera-Based Navigation and Wayfinding Aid

    ERIC Educational Resources Information Center

    Arditi, Aries; Tian, YingLi

    2013-01-01

    Introduction: Development of a sensing device that can provide a sufficient perceptual substrate for persons with visual impairments to orient themselves and travel confidently has been a persistent rehabilitation technology goal, with the user interface posing a significant challenge. In the study presented here, we enlist the advice and ideas of…

  6. SCC500: next-generation infrared imaging camera core products with highly flexible architecture for unique camera designs

    NASA Astrophysics Data System (ADS)

    Rumbaugh, Roy N.; Grealish, Kevin; Kacir, Tom; Arsenault, Barry; Murphy, Robert H.; Miller, Scott

    2003-09-01

    A new 4th generation MicroIR architecture is introduced as the latest in the highly successful Standard Camera Core (SCC) series by BAE SYSTEMS to offer an infrared imaging engine with greatly reduced size, weight, power, and cost. The advanced SCC500 architecture provides great flexibility in configuration to include multiple resolutions, an industry standard Real Time Operating System (RTOS) for customer specific software application plug-ins, and a highly modular construction for unique physical and interface options. These microbolometer based camera cores offer outstanding and reliable performance over an extended operating temperature range to meet the demanding requirements of real-world environments. A highly integrated lens and shutter is included in the new SCC500 product enabling easy, drop-in camera designs for quick time-to-market product introductions.

  7. Toolkit for testing scientific CCD cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, Janusz; Mankiewicz, Lech; Molak, Marcin; Wrochna, Grzegorz

    2006-03-01

The CCD Toolkit (1) is a software tool for testing CCD cameras which allows the user to measure important characteristics of a camera such as readout noise, total gain, dark current, 'hot' pixels, useful area, etc. The application performs a statistical analysis of images saved in files in the FITS format commonly used in astronomy. The graphical interface is based on the ROOT package, which offers high functionality and flexibility. The program was developed to ensure compatibility with different operating systems: Windows and Linux. The CCD Toolkit was created for the "Pi of the Sky" project collaboration (2).
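Two of the listed measurements, readout noise and total gain, are commonly derived by the photon-transfer method from pairs of bias and flat-field frames: differencing two frames cancels fixed-pattern structure, leaving only temporal noise. The sketch below shows that standard technique, not necessarily CCD Toolkit's exact implementation.

```python
# Photon-transfer basics on flattened pixel lists (illustrative sketch).
from statistics import mean, pstdev

def read_noise_adu(bias1, bias2):
    """Readout noise in ADU from two bias (zero-exposure) frames."""
    diff = [a - b for a, b in zip(bias1, bias2)]
    return pstdev(diff) / 2 ** 0.5       # var(diff) = 2 * per-frame variance

def gain_e_per_adu(flat1, flat2, bias_level):
    """Gain in electrons/ADU from two equal flat-field frames.
    For shot-noise-limited flats, gain = signal / variance."""
    diff = [a - b for a, b in zip(flat1, flat2)]
    var = pstdev(diff) ** 2 / 2.0        # per-frame temporal variance, ADU^2
    signal = mean(flat1 + flat2) - bias_level
    return signal / var
```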

  8. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

The large format camera (LFC), designed as a 30 cm focal length cartographic camera system that employs forward-motion compensation to achieve the full image resolution provided by its 80-degree field-angle lens, is described. The feasibility of deploying the current LFC design in the Orbiter program as the Orbiter Camera Payload System was assessed, and the changes necessary to meet such a requirement are discussed. The current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free-flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  9. Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images

    PubMed Central

    Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung

    2013-01-01

Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by a user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing this noise. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as the optimal features. The dimension reduction of the features and the feature selection are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
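The dimension-reduction step can be illustrated with a generic Fisher LDA sketch in NumPy: the combined EEG and camera-motion features are projected onto the single direction that best separates "head moved" from "still". This is the textbook technique, not the paper's implementation, and the paper's final SVM stage is omitted here.

```python
# Generic two-class Fisher LDA: project combined features to one dimension.
import numpy as np

def fit_lda_direction(X0, X1):
    """Return the unit projection direction w maximizing class separation.
    X0, X1: (n_samples, n_features) arrays for the two classes."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # within-class scatter matrix
    Sw = np.cov(X0, rowvar=False) * (len(X0) - 1) \
       + np.cov(X1, rowvar=False) * (len(X1) - 1)
    w = np.linalg.solve(Sw, m1 - m0)   # Fisher criterion solution
    return w / np.linalg.norm(w)

def classify(x, w, threshold):
    """1 = head movement, 0 = still, from the projected feature value."""
    return int(x @ w > threshold)
```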

  10. Development of a high-speed H-alpha camera system for the observation of rapid fluctuations in solar flares

    NASA Technical Reports Server (NTRS)

    Kiplinger, Alan L.; Dennis, Brian R.; Orwig, Larry E.; Chen, P. C.

    1988-01-01

A solid-state digital camera was developed for obtaining H alpha images of solar flares with 0.1 s time resolution. Beginning in the summer of 1988, this system will be operated in conjunction with SMM's hard X-ray burst spectrometer (HXRBS). Important electron time-of-flight effects that are crucial for determining the flare energy release processes should be detectable with these combined H alpha and hard X-ray observations. Charge-injection device (CID) cameras provide 128 x 128 pixel images simultaneously in the H alpha blue wing, line center, and red wing, or other wavelengths of interest. The data recording system employs a microprocessor-controlled electronic interface between each camera and a digital processor board that encodes the data into a serial bitstream for continuous recording by a standard video cassette recorder. Only a small fraction of the data will be permanently archived through utilization of a direct memory access interface onto a VAX-750 computer. In addition to correlations with hard X-ray data, observations from the high-speed H alpha camera will also be correlated with optical and microwave data and with data from future MAX 1991 campaigns. Whether the recorded optical flashes are simultaneous with X-ray peaks to within 0.1 s, are delayed by tenths of a second, or are even undetectable, the results will have implications for the validity of both thermal and nonthermal models of hard X-ray production.

  11. Partial camera automation in an unmanned air vehicle.

    PubMed

    Korteling, J E; van der Borg, W

    1997-03-01

The present study focused on an intelligent, semiautonomous interface for the camera operator of a simulated unmanned air vehicle (UAV). This interface used system "knowledge" concerning UAV motion in order to assist a camera operator in tracking an object moving through the landscape below. The semiautomated system compensated for the translations of the UAV relative to the earth. This compensation was accompanied by the appropriate joystick movements, ensuring tactile (haptic) feedback of these system interventions. The operator had to superimpose self-initiated joystick manipulations over these system-initiated joystick motions in order to track the motion of a target (a driving truck) relative to the terrain. Tracking data showed that subjects performed substantially better with the active system. Apparently, the subjects had no difficulty maintaining control, i.e., "following" the active stick while superimposing self-initiated control movements over the system interventions. Furthermore, tracking performance with the active interface was clearly superior to that with the passive system; the magnitude of this effect was equal to the effect of the update frequency (2-5 Hz) of the monitor image. The benefits of update-frequency enhancement and semiautomated tracking were greatest under difficult steering conditions. Mental workload scores indicated that, for the difficult tracking-dynamics condition, both semiautomation and an update-frequency increase resulted in less experienced mental effort. For the easier dynamics this effect was seen only for update frequency.
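The compensation principle reduces to a toy formula: the system contributes a camera-rate term that cancels the apparent ground drift caused by the UAV's own translation, and the operator's stick input is superimposed on top. The flat-terrain, target-near-nadir geometry below is an assumption for illustration, not the study's simulation model.

```python
# Toy model of the semiautomated mode: system compensation + operator input.
def camera_rate_command(operator_rate, uav_ground_speed, altitude):
    """Commanded camera angular rate (rad/s).
    The system term holds a ground point fixed against UAV translation;
    the operator's rate is superimposed on top of it."""
    system_rate = -uav_ground_speed / altitude
    return operator_rate + system_rate
```

With the operator's stick centered, the camera still counter-rotates to hold the terrain; the operator only steers relative to the ground, which is what made tracking easier in the study.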

  12. Compact Video Microscope Imaging System Implemented in Colloid Studies

    NASA Technical Reports Server (NTRS)

    McDowell, Mark

    2002-01-01

[Figure: fiber-optic light source; microscope and charge-coupled device (CCD) camera head connected to the camera body; CCD camera body feeding data to an image acquisition board in a PC; and a Cartesian robot controlled via a PC board.] The Compact Microscope Imaging System (CMIS) is a diagnostic tool with intelligent controls for use in space, industrial, medical, and security applications. CMIS can be used in situ with a minimum amount of user intervention. This system can scan, find areas of interest in, focus on, and acquire images automatically. Many multiple-cell experiments require microscopy for in situ observations; this is feasible only with compact microscope systems. CMIS is a miniature machine vision system that combines intelligent image processing with remote control. The software also has a user-friendly interface, which can be used independently of the hardware for further post-experiment analysis. CMIS has been successfully developed in the SML Laboratory at the NASA Glenn Research Center, adapted for use in colloid studies, and is available for telescience experiments. The main innovations this year are an improved interface, optimized algorithms, and the ability to control conventional full-sized microscopes in addition to compact microscopes. The CMIS software-hardware interface is being integrated into our SML Analysis package, which will be a robust general-purpose image-processing package that can handle over 100 space and industrial applications.
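The "focus on ... automatically" step can be illustrated with a generic sharpness-search sketch, not CMIS's actual algorithm: score each candidate focus position by image gradient energy and move the stage to the maximum.

```python
# Generic autofocus sketch: gradient-energy sharpness metric + best-of search.
def sharpness(image):
    """Sum of squared horizontal/vertical pixel differences; higher = sharper.
    image: 2D list of grayscale values."""
    h, w = len(image), len(image[0])
    s = 0
    for y in range(h):
        for x in range(w):
            if x + 1 < w:
                s += (image[y][x + 1] - image[y][x]) ** 2
            if y + 1 < h:
                s += (image[y + 1][x] - image[y][x]) ** 2
    return s

def best_focus(images_by_position):
    """images_by_position: {stage_position: image}. Returns sharpest position."""
    return max(images_by_position, key=lambda p: sharpness(images_by_position[p]))
```

In a real system the search would be a coarse-to-fine sweep of the focus stage rather than an exhaustive capture at every position.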

  13. Camera identity from minute film-edge mark characteristics.

    PubMed

    Kuppuswamy, R

    2003-07-08

    A case is presented in which a camera recovered from a site of bomb blast was linked to some incriminating film negatives by the characteristic markings existing along a small portion of the edge of the film negatives.

  14. Continuous Microfluidics (Ecology-on-a-Chip) Experiments for Long Term Observation of Bacteria at Liquid-Liquid Interfaces

    NASA Astrophysics Data System (ADS)

    Miranda, Michael; White, Andrew; Jalali, Maryam; Sheng, Jian

    2017-11-01

A microfluidic bioassay incorporating a peristaltic pump and chemostat capable of continuously culturing a bacterial suspension through a microchannel for an extended period of time relevant to ecological processes is presented. A single crude oil droplet is dispensed on-chip and subsequently pinned to the top and bottom surfaces of the microchannel to establish a vertical curved oil-water interface for observing bacteria without boundary interference. The accumulation of extracellular polymeric substances (EPS), microbial film formation, and aggregation is observed by DIC microscopy with an EMCCD camera at 30-second intervals. Cell-interface interactions such as translational and angular cell motilities, as well as encounters with, attachment to, and detachment from the interface, are captured by a high-speed camera at 1000 fps with a sampling interval of 10 min. Experiments on Pseudomonas sp. (P62) and isolated EPS suspensions from Sagittula stellata and Roseobacter show rapid formation of bacterial aggregates, including EPS streamers stretching tens of drop diameters in length. These results provide crucial insights into environmentally relevant processes such as the initiation of marine oil snow, an alternative mode of biodegradation to conventional bioconsumption. Funded by GoMRI, NSF, ARO.

  15. A novel device for head gesture measurement system in combination with eye-controlled human machine interface

    NASA Astrophysics Data System (ADS)

    Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun

    2006-06-01

This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface, detecting head movement from the changing positions and number of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system catches images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, a computer program locates the center point of each pupil in the images and records information on movement traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system catches images of the user's head with a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interface systems for virtual reality applications.
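The head-tracking chain — find the bright light sources in the CCD image, locate their center, and map it to screen coordinates for the cursor — can be sketched as follows. The threshold value and resolutions are illustrative assumptions, not the paper's parameters.

```python
# Illustrative sketch: bright-pixel centroid -> screen-coordinate cursor.
def light_source_centroid(image, threshold=200):
    """Centroid (x, y) of all pixels brighter than threshold.
    image: 2D list of grayscale values."""
    xs, ys, n = 0.0, 0.0, 0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            if v >= threshold:
                xs += x
                ys += y
                n += 1
    if n == 0:
        raise ValueError("no light source found")
    return xs / n, ys / n

def to_screen(pt, cam_size, screen_size):
    """Map a camera-image point to screen (cursor) coordinates."""
    (x, y), (cw, ch), (sw, sh) = pt, cam_size, screen_size
    return x * sw / cw, y * sh / ch
```

With two light sources on the eyeglass frame, the centroid of all bright pixels falls midway between them, which serves as the head's center point.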

  16. Advanced EVA Suit Camera System Development Project

    NASA Technical Reports Server (NTRS)

    Mock, Kyla

    2016-01-01

    The National Aeronautics and Space Administration (NASA) at the Johnson Space Center (JSC) is developing a new extra-vehicular activity (EVA) suit known as the Advanced EVA Z2 Suit. All of the improvements to the EVA suit provide the opportunity to update the technology of the video imagery. My summer internship project involved improving the video streaming capabilities of the cameras that will be used on the Z2 Suit for data acquisition. To accomplish this, I familiarized myself with the architecture of the camera currently being tested so that I could make improvements to the design. Because there is great benefit in saving space, power, and weight on the EVA suit, my job was to use Altium Designer to begin designing a much smaller, simplified interface board for the camera's microprocessor and external components. This involved checking datasheets of various components and verifying signal connections to ensure that this architecture could be used for both the Z2 Suit and potentially other future projects. The Orion spacecraft is one specific project that may benefit from this condensed camera interface design. The camera's physical placement on the suit also needed to be determined and tested so that image resolution can be maximized. Many of the camera placement options may be tested along with other future suit testing. Multiple teams work on different parts of the suit, so the camera's placement could directly affect their research or design. For this reason, a large part of my project was initiating contact with other branches and setting up meetings to learn the pros and cons of the potential camera placements we are analyzing. Collaboration with the multiple teams working on the Advanced EVA Z2 Suit is absolutely necessary, and these comparisons will be used as further progress is made on the overall suit design. 
This prototype will not be finished in time for the scheduled Z2 Suit testing, so my time was also spent creating a case for the original interface board that is already in use; this design was done in Creo 2. Due to time constraints, I may not be able to complete the 3-D printing portion of this design, but I was able to apply my knowledge of the interface board and Altium Designer to the task. As a side project, I assisted another intern in selecting and programming a microprocessor to control linear actuators, which will be used to move various increments of polyethylene for controlled radiation testing. For this, we began the software portion of the project using the Arduino coding environment to control an Arduino Due and H-bridge components. Along with learning computer programs such as Altium Designer and Creo 2, I also sharpened my skills in networking and collaborating with others, in multi-tasking across responsibilities on various projects, and in setting realistic goals in the workplace. Like many internship projects, this one will be continued and improved, so I also had the chance to improve my organization and communication skills as I documented all of my meetings and research. As a result of my internship at JSC, I want to continue a career with NASA, whether through another internship or possibly a co-op. I am excited to return to my university and continue my education in electrical engineering because of my experiences at JSC.

  17. USMC UGS technology advancements

    NASA Astrophysics Data System (ADS)

    Hartup, David C.; Barr, Michael E.; Hirz, Philip M.; Kipp, Jason; Fishburn, Thomas A.; Waller, Ezra S.; Marks, Brian A.

    2008-04-01

    Technology advancements for the USMC UGS system are described. Integration of the ARL Blue Radio/CSR into the System Controller and Radio Repeater permits the TRSS system to operate seamlessly within the Family of UGS concept. In addition to the Blue Radio/CSR, the TRSS system provides VHF and SATCOM radio links. The TRSS system is compatible with a wide range of imagers, including those with both analog and digital interfaces. The TRSS System Controller permits simultaneous monitoring of two camera inputs. To complement the enhanced compatibility and improved processing, the mechanical housing of the TRSS System Controller has been updated. The SDR-II, a system monitoring device, also incorporates four Blue Radio/CSRs along with other communication capabilities, making it an ideal choice for a monitoring station within the Family of UGS. Field testing of L-3 Nova's UGS system at YPG has shown flawless performance, capturing all 126 targets.

  18. KSC-07pd2197

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Commander Pamela A. Melroy gives a close inspection to space shuttle Discovery in Orbiter Processing Facility bay 3. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  19. KSC-07pd2196

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Giving a close inspection to space shuttle Discovery in Orbiter Processing Facility bay 3 are Mission Specialist Stephanie D. Wilson and Commander Pamela A. Melroy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  20. KSC-07pd2215

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Commander Pamela A. Melroy sits in the orbiter Discovery to inspect the cockpit windows. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  1. KSC-07pd2219

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Pilot George D. Zamka makes a close inspection of the cockpit window on the orbiter Discovery. Seated next to him is Commander Pamela A. Melroy. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  2. KSC-07pd2205

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. -In Orbiter Processing Facility bay 3, STS-120 Commander Pamela A. Melroy (center left) and Mission Specialist Stephanie D. Wilson (center right) are lowered in a bucket into Discovery's payload bay. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  3. KSC-07pd2218

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Commander Pamela A. Melroy makes a close inspection of the cockpit window on the orbiter Discovery. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  4. KSC-07pd2216

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Commander Pamela A. Melroy sits in the orbiter Discovery to inspect the cockpit windows. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  5. KSC-07pd2217

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Commander Pamela A. Melroy makes a close inspection of the cockpit window on the orbiter Discovery. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  6. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link interface to stream the data directly to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
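The bandwidth and memory figures quoted above imply a straightforward recording budget. The sketch below assumes 2 bytes per pixel, which the abstract does not state:

```python
def recording_budget(width, height, fps, bytes_per_pixel, memory_bytes):
    """Return (data rate in bytes/s, seconds of on-board recording)."""
    rate = width * height * fps * bytes_per_pixel
    return rate, memory_bytes / rate

# 640 x 512 pixels at 1700 fps, assumed 2 bytes/pixel, 16 GB on-board
rate, seconds = recording_budget(640, 512, 1700, 2, 16 * 1024**3)
# rate is roughly 1.1 GB/s, far beyond Gigabit Ethernet, which is why
# frames are buffered on-board (or streamed over Camera Link) first.
```

Under these assumptions the 16 GB buffer holds on the order of 15 seconds of full-frame recording; sub-windowing extends that proportionally.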

  7. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    Network cameras have been growing rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment, and there is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. The paper focuses on how to use both the system's website and its Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions, and allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.
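The workflow of adapting a single-frame program to many cameras might look like the following sketch. The `CameraClient` class and its methods are hypothetical stand-ins for illustration, not the system's actual API:

```python
from typing import Callable, Dict, List

class CameraClient:
    """Hypothetical client; a real one would fetch frames over HTTP."""
    def __init__(self, frames: Dict[str, bytes]):
        self._frames = frames            # stand-in for remote fetches

    def get_frame(self, camera_id: str) -> bytes:
        return self._frames[camera_id]

def analyze_cameras(client: CameraClient,
                    camera_ids: List[str],
                    analyze_frame: Callable[[bytes], int]) -> Dict[str, int]:
    """Run a user-supplied single-frame analysis over many cameras."""
    return {cid: analyze_frame(client.get_frame(cid)) for cid in camera_ids}

# The user's existing per-frame function (here, trivially, frame size)
client = CameraClient({"cam1": b"\x01\x02", "cam2": b"\x05"})
sizes = analyze_cameras(client, ["cam1", "cam2"], len)
```

In the real system the loop over cameras would be dispatched to cloud workers rather than run locally, which is what the EC2/Azure resource management provides.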

  8. Innovation in robotic surgery: the Indian scenario.

    PubMed

    Deshpande, Suresh V

    2015-01-01

    Robotics is a science: in scientific terms, a "robot" is an electromechanical arm device with a computer interface, a combination of electrical, mechanical, and computer engineering. It is a mechanical arm that performs tasks in industry, space exploration, and science. One such idea was to make an automated arm - a robot - for laparoscopy, to control the telescope-camera unit electromechanically and then, through a computer interface, by voice control. It took us 5 long years from 2004 to bring it to the level of obtaining a patent. That was the birth of the Swarup Robotic Arm (SWARM), the first and only Indian contribution in the field of robotics in laparoscopy: a fully voice-controlled camera-holding robotic arm developed without any support from industry or research institutes.

  9. Improving the color fidelity of cameras for advanced television systems

    NASA Astrophysics Data System (ADS)

    Kollarits, Richard V.; Gibbon, David C.

    1992-08-01

    In this paper we compare the accuracy of the color information obtained from television cameras using three and five wavelength bands. The comparison is based on real digital camera data. The cameras are treated as colorimeters whose characteristics are not linked to those of the display. The color matrices for both cameras were obtained by identical optimization procedures that minimized the color error. The color error for the five-band camera is 2.5 times smaller than that obtained from the three-band camera. Visual comparison of color matches on a characterized color monitor indicates that the five-band camera is capable of color measurements that produce no significant visual error on the display. Because the outputs from the five-band camera are reduced to the normal three channels conventionally used for display, there need be no increase in signal handling complexity outside the camera. Likewise, it is possible to construct a five-band camera using only three sensors, as in conventional cameras. The principal drawback of the five-band camera is the reduction in effective camera sensitivity by about 3/4 of an f-stop.
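The optimization step, treating the camera as a colorimeter, reduces to fitting a color matrix that maps band responses to tristimulus values. The sketch below uses synthetic data rather than the paper's measured patches, and least squares as an assumed stand-in for its optimization procedure:

```python
import numpy as np

# Find the 3 x N color matrix M that best maps N-band camera responses
# R (bands x patches) to target tristimulus values T (3 x patches).
rng = np.random.default_rng(0)
bands, patches = 5, 24
R = rng.random((bands, patches))           # synthetic camera responses
M_true = rng.random((3, bands))            # hidden "ideal" matrix
T = M_true @ R                             # target tristimulus values

# Least-squares solve for M in  M @ R ~= T  (via R.T @ M.T = T.T)
M = np.linalg.lstsq(R.T, T.T, rcond=None)[0].T
residual = np.abs(M @ R - T).max()         # worst-case color error
```

With five bands the matrix has more columns to work with than the three-band case, which is the mechanism behind the smaller color error reported above.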

  10. ORELA data acquisition system hardware. Vol. 5. SEL 810B/PDP-4/PDP-9 intercomputer link

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Reynolds, J.W.; Holladay, J.H.

    1977-01-01

    A report is given describing the IOT assignments and programming for the PDP-4 and PDP-9, the ground isolation wiring, the connector pin assignments, a simplified theory of operation for the SEL 810B link interface, a detailed theory of operation for the link interfaces at the PDP-4 and the PDP-9, and the use of the PDP-4 and PDP-9 link interfaces as an input to the SEL 810B Four-Channel Priority Multiplexer.

  11. A DirtI Application for LBT Commissioning Campaigns

    NASA Astrophysics Data System (ADS)

    Borelli, J. L.

    2009-09-01

    In order to characterize the Gregorian focal stations and test the performance achieved by the Large Binocular Telescope (LBT) adaptive optics system, two infrared test cameras were constructed within a joint project between INAF (Osservatorio Astronomico di Bologna, Italy) and the Max Planck Institute for Astronomy (Germany). This paper describes the functionality of, and the successful results obtained with, the Daemon for the Infrared Test Camera Interface (DirtI) during commissioning campaigns.

  12. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J.; Rowe, R. Wanda; Zubal, I. George

    1986-01-07

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.
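List-mode acquisition as described can be sketched as follows; the field names and the 140 keV photopeak window are illustrative, not taken from the patent. The point of list mode is that events are stored individually with full energy and state information, so gates can be applied retrospectively:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class ListModeEvent:
    timestamp_us: int
    camera: int          # which of the two scintillation cameras
    x: float             # detected position on the camera face
    y: float
    energy_kev: float    # full detected energy, not a pre-cut window
    gantry_deg: float    # camera/gantry position at detection time

def energy_window(events: List[ListModeEvent], lo: float, hi: float):
    """Retrospective energy gating, e.g. to isolate or study scatter."""
    return [e for e in events if lo <= e.energy_kev <= hi]

events = [ListModeEvent(0, 0, 1.0, 2.0, 140.0, 0.0),
          ListModeEvent(5, 1, 0.5, 1.5, 90.0, 0.0)]
photopeak = energy_window(events, 126.0, 154.0)   # keeps the 140 keV event
```

The same stored list can be re-gated against physiological data or re-binned by gantry angle, which is what enables the Compton-scatter and correlation studies mentioned above.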

  13. Nuclear medicine imaging system

    DOEpatents

    Bennett, Gerald W.; Brill, A. Bertrand; Bizais, Yves J. C.; Rowe, R. Wanda; Zubal, I. George

    1986-01-01

    A nuclear medicine imaging system having two large field of view scintillation cameras mounted on a rotatable gantry and being movable diametrically toward or away from each other is disclosed. In addition, each camera may be rotated about an axis perpendicular to the diameter of the gantry. The movement of the cameras allows the system to be used for a variety of studies, including positron annihilation, and conventional single photon emission, as well as static orthogonal dual multi-pinhole tomography. In orthogonal dual multi-pinhole tomography, each camera is fitted with a seven pinhole collimator to provide seven views from slightly different perspectives. By using two cameras at an angle to each other, improved sensitivity and depth resolution is achieved. The computer system and interface acquires and stores a broad range of information in list mode, including patient physiological data, energy data over the full range detected by the cameras, and the camera position. The list mode acquisition permits the study of attenuation as a result of Compton scatter, as well as studies involving the isolation and correlation of energy with a range of physiological conditions.

  14. A High Performance Micro Channel Interface for Real-Time Industrial Image Processing

    Treesearch

    Thomas H. Drayer; Joseph G. Tront; Richard W. Conners

    1995-01-01

    Data collection and transfer devices are critical to the performance of any machine vision system. The interface described in this paper collects image data from a color line scan camera and transfers the data obtained into the system memory of a Micro Channel-based host computer. A maximum data transfer rate of 20 Mbytes/sec can be achieved using the DMA capabilities...

  15. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data; digitizing that data, channeling it to a central computer, and processing it in real time is difficult with low-cost, commercially available components. In the newly developed system, cameras are located along a combined power and data wire, forming a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps keep cost low, and the low power requirements of each camera allow a single imaging system to comprise over 100 cameras. Each camera has built-in processing to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlated across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host, rather than high-bandwidth bitmap data constantly generated by the cameras. By using many small, low-cost cameras with overlapping fields of view, this approach offers greater flexibility than conventional systems without compromising performance: coverage increases significantly, and no surveillance area is ignored, as can happen when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications; security systems and environmental/vehicular monitoring systems are also potential applications.
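The low-bandwidth reporting idea can be sketched as follows: each camera sends only small event records, and the host merges reports from cameras with overlapping fields of view. Merging by averaging is an assumption for illustration; the source does not specify the correlation method:

```python
from statistics import mean

def merge_reports(reports):
    """reports: list of (camera_id, x, y) tuples for one detected event.

    Each tuple is a camera's local Cartesian estimate of the event
    position; the host fuses them into a single coordinate.
    """
    xs = [r[1] for r in reports]
    ys = [r[2] for r in reports]
    return mean(xs), mean(ys)

# Three overlapping cameras report slightly different positions:
reports = [("cam07", 10.2, 4.9), ("cam08", 9.8, 5.1), ("cam09", 10.0, 5.0)]
x, y = merge_reports(reports)
```

A few dozen bytes per event replaces a continuous bitmap stream per camera, which is what makes a 100-camera string feasible on a shared cable.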

  16. Serious Gaming Technologies Support Human Factors Investigations of Advanced Interfaces for Semi-Autonomous Vehicles

    DTIC Science & Technology

    2006-06-01

    conventional camera vs. thermal imager vs. night vision; camera field of view (narrow, wide, panoramic); keyboard + mouse vs. joystick control vs...motorised platform which could scan the immediate area, producing a 360o panorama of “stitched-together” digital pictures. The picture file, together with...VBS was used to automate the process of creating a QuickTime panorama (.mov or .qt), which includes the initial retrieval of the images, the

  17. 40 Gbps data acquisition system for NectarCAM

    NASA Astrophysics Data System (ADS)

    Hoffmann, Dirk; Houles, Julien; NectarCAM Team; CTA Consortium, the

    2017-10-01

    The Cherenkov Telescope Array (CTA) will be the next-generation ground-based gamma-ray observatory. It will be made up of approximately 100 telescopes of three different sizes, from 4 to 23 meters in diameter. The previously presented prototype of a high-speed data acquisition (DAQ) system for CTA (CHEP 2012, [6]) has become concrete within the NectarCAM project, one of the most challenging camera projects, with very demanding data-handling bandwidth needs. We designed a Linux-PC system able to concentrate and process, without packet loss, the 40 Gb/s average data rate coming from the 265 Front End Boards (FEBs) through Gigabit Ethernet links, and to reduce the data, by external trigger decisions as well as custom-tailored compression algorithms, to fit the two ten-Gigabit Ethernet downstream links. Within the given constraints, we implemented de-randomisation of the event fragments, which arrive as relatively small UDP packets emitted by the FEBs, using off-the-shelf equipment as required by the project and for an operation period of at least 30 years. We tested out-of-the-box interfaces and used original techniques to cope with these requirements, and set up a test bench with hundreds of synchronous Gigabit links in order to validate and tune the acquisition chain, including downstream data logging based on zeroMQ and Google Protocol Buffers [8].
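The de-randomisation step can be sketched as an event builder that groups fragments by event number, emitting an event once every board has contributed. The board count and field shapes below are illustrative, and the real system must also handle lost packets and timeouts:

```python
from collections import defaultdict

class EventBuilder:
    """Group out-of-order fragments (one per front-end board) by event."""
    def __init__(self, n_boards):
        self.n_boards = n_boards
        self.partial = defaultdict(dict)   # event_id -> {board: payload}

    def add_fragment(self, event_id, board, payload):
        """Return the complete event once every board has reported."""
        self.partial[event_id][board] = payload
        if len(self.partial[event_id]) == self.n_boards:
            return self.partial.pop(event_id)
        return None

builder = EventBuilder(n_boards=3)
assert builder.add_fragment(42, 0, b"a") is None   # still incomplete
assert builder.add_fragment(42, 2, b"c") is None   # fragments may be out of order
event = builder.add_fragment(42, 1, b"b")          # last fragment completes event 42
```

Complete events would then pass through trigger filtering and compression before being serialized (e.g. with Protocol Buffers) and logged over zeroMQ.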

  18. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected in a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links the transmission rate must be adapted to match the available number of fiber optic links so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.
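The stripe distributor/collector pair can be sketched at toy scale as round-robin byte striping; the real gateway additionally attaches error-correcting syndromes and adapts the stripe width to the number of currently available fiber links:

```python
def stripe(data: bytes, n_links: int):
    """Distribute bytes round-robin across n_links parallel lanes."""
    lanes = [bytearray() for _ in range(n_links)]
    for i, b in enumerate(data):
        lanes[i % n_links].append(b)
    return [bytes(lane) for lane in lanes]

def collect(lanes):
    """Reassemble the original byte order from the parallel lanes."""
    total = sum(len(lane) for lane in lanes)
    out = bytearray()
    for i in range(total):
        out.append(lanes[i % len(lanes)][i // len(lanes)])
    return bytes(out)

lanes = stripe(b"HIPPI-over-SONET", 4)   # 4 parallel fiber lanes assumed
restored = collect(lanes)                # byte-exact reconstruction
```

Because the HIPPI side is faster than any single fiber link, the sending and receiving gateways must agree on the lane count before striping, which is why link availability is monitored continuously.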

  19. Controller/Computer Interface with an Air-Ground Data Link

    DOT National Transportation Integrated Search

    1976-06-01

    This report describes the results of an experiment for evaluating the controller/computer interface in an ARTS III/M&S system modified for use with a simulated digital data link and a voice link utilizing a computer-generated voice system. A modified...

  20. Deployment of Shaped Charges by a Semi-Autonomous Ground Vehicle

    DTIC Science & Technology

    2007-06-01

    lives on a daily basis. BigFoot seeks to replace the local human component by deploying and remotely detonating shaped charges to destroy IEDs...robotic arm to deploy and remotely detonate shaped charges. BigFoot incorporates improved communication range over previous Autonomous Ground Vehicles...and an updated user interface that includes controls for the arm and camera by interfacing multiple microprocessors. BigFoot is capable of avoiding

  1. Human-Robot Interface: Issues in Operator Performance, Interface Design, and Technologies

    DTIC Science & Technology

    2006-07-01

    and the use of lightweight portable robotic sensor platforms. 5 robotics has reached a point where some generalities of HRI transcend specific...displays with control devices such as joysticks, wheels, and pedals (Kamsickas, 2003). Typical control stations include panels displaying (a) sensor ...tasks that do not involve mobility and usually involve camera control or data fusion from sensors Active search: Search tasks that involve mobility

  2. The human dopamine transporter forms a tetramer in the plasma membrane: cross-linking of a cysteine in the fourth transmembrane segment is sensitive to cocaine analogs.

    PubMed

    Hastrup, Hanne; Sen, Namita; Javitch, Jonathan A

    2003-11-14

    Using cysteine cross-linking, we demonstrated previously that the dopamine transporter (DAT) is at least a homodimer, with the extracellular end of transmembrane segment (TM) 6 at a symmetrical dimer interface. We have now explored the possibility that DAT exists as a higher order oligomer in the plasma membrane. Cysteine cross-linking of wild type DAT resulted in bands on SDS-PAGE consistent with dimer, trimer, and tetramer, suggesting that DAT forms a tetramer in the plasma membrane. A cysteine-depleted DAT (CD-DAT) into which only Cys243 or Cys306 was reintroduced was cross-linked to dimer, suggesting that these endogenous cysteines in TM4 and TM6, respectively, were cross-linked at a symmetrical dimer interface. Reintroduction of both Cys243 and Cys306 into CD-DAT led to a pattern of cross-linking indistinguishable from that of wild type, with dimer, trimer, and tetramer bands. This indicated that the TM4 interface and the TM6 interface are distinct and further suggested that DAT may exist in the plasma membrane as a dimer of dimers, with two symmetrical homodimer interfaces. The cocaine analog MFZ 2-12 and other DAT inhibitors, including benztropine and mazindol, protected Cys243 against cross-linking. In contrast, two substrates of DAT, dopamine and tyramine, did not significantly impact cross-linking. We propose that the impairment of cross-linking produced by the inhibitors results from a conformational change at the TM4 interface, further demonstrating that these compounds are not neutral blockers but by themselves have effects on the structure of the transporter.

  3. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances in human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed in two categories, generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress in human tracking techniques over camera networks.

  4. Design of microcontroller based system for automation of streak camera.

    PubMed

    Joshi, M J; Upadhyay, J; Deshpande, P P; Sharma, M L; Navathe, C P

    2010-08-01

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  5. Design of microcontroller based system for automation of streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Joshi, M. J.; Upadhyay, J.; Deshpande, P. P.

    2010-08-15

    A microcontroller based system has been developed for automation of the S-20 optical streak camera, which is used as a diagnostic tool to measure ultrafast light phenomena. An 8-bit MCS-family microcontroller is employed to generate all control signals for the streak camera. All biasing voltages required for the various electrodes of the tube are generated using dc-to-dc converters. A high-voltage ramp signal is generated through a step generator unit followed by an integrator circuit and is applied to the camera's deflecting plates. The slope of the ramp can be changed by varying the values of the capacitor and inductor. A programmable digital delay generator has been developed for synchronization of the ramp signal with the optical signal. An independent hardwired interlock circuit has been developed for machine safety. A LabVIEW based graphical user interface has been developed which enables the user to program the settings of the camera and capture the image. The image is displayed with intensity profiles along the horizontal and vertical axes. The streak camera was calibrated using nanosecond and femtosecond lasers.

  6. KSC-01pp1802

    NASA Image and Video Library

    2001-12-01

    KENNEDY SPACE CENTER, Fla. - STS-109 Mission Specialist Richard Linnehan (left) and Payload Commander John Grunsfeld get a feel for tools and equipment that will be used on the mission. The crew is at KSC to take part in Crew Equipment Interface Test activities that include familiarization with the orbiter and equipment. The goal of the mission is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002.

  7. STARS: a software application for the EBEX autonomous daytime star cameras

    NASA Astrophysics Data System (ADS)

    Chapman, Daniel; Didier, Joy; Hanany, Shaul; Hillbrand, Seth; Limon, Michele; Miller, Amber; Reichborn-Kjennerud, Britt; Tucker, Greg; Vinokurov, Yury

    2014-07-01

    The E and B Experiment (EBEX) is a balloon-borne telescope designed to probe polarization signals in the CMB resulting from primordial gravitational waves, gravitational lensing, and Galactic dust emission. EBEX completed an 11-day flight over Antarctica in January 2013, and data analysis is underway. EBEX employs two star cameras to achieve its real-time and post-flight pointing requirements. We wrote a software application called STARS to operate, command, and collect data from each of the star cameras, and to interface them with the main flight computer. We paid special attention to making the software robust against potential in-flight failures. We report on the implementation, testing, and successful in-flight performance of STARS.

  8. Performance and Calibration of H2RG Detectors and SIDECAR ASICs for the RATIR Camera

    NASA Technical Reports Server (NTRS)

    Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Klein, Christopher R.; Butler, Nathaniel R.; Bloom, Josh; de Diego, José A.; Simón Farah, Alejandro D.; Gehrels, Neil A.; Georgiev, Leonid; et al.

    2012-01-01

    The Reionization And Transient Infra-Red (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) follow-up and will provide simultaneous optical and infrared photometric capabilities. The infrared portion of this camera incorporates two Teledyne HgCdTe HAWAII-2RG detectors, controlled by Teledyne's SIDECAR ASICs. While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 interface card and IDE development environment. Together, this setup comprises Teledyne's Development Kit, a bundled solution that can be efficiently integrated into future ground-based systems. In this presentation, we characterize the system's read noise, dark current, and conversion gain.

  9. Liquid lens: advances in adaptive optics

    NASA Astrophysics Data System (ADS)

    Casey, Shawn Patrick

    2010-12-01

    'Liquid lens' technologies promise significant advancements in machine vision and optical communications systems. Adaptations for machine vision, human vision correction, and optical communications are used to exemplify the versatile nature of this technology. Utilization of liquid lens elements allows the cost-effective implementation of optical velocity measurement. The project consists of a custom image processor, camera, and interface. The images are passed into customized pattern recognition and optical character recognition algorithms. A single camera would be used for both speed detection and object recognition.

  10. Real-Time Acquisition and Display of Data and Video

    NASA Technical Reports Server (NTRS)

    Bachnak, Rafic; Chakinarapu, Ramya; Garcia, Mario; Kar, Dulal; Nguyen, Tien

    2007-01-01

    This paper describes the development of a prototype that takes in an analog National Television System Committee (NTSC) video signal generated by a video camera and data acquired by a microcontroller and displays them in real time on a digital panel. An 8051 microcontroller is used to acquire the power dissipation of the display panel, room temperature, and camera zoom level. The paper describes the major hardware components and shows how they are interfaced into a functional prototype. Test results are presented and discussed.
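
    The abstract does not give the sensor or scaling details behind the temperature reading. As an illustration only, converting a raw 8-bit ADC count to degrees Celsius might look like the sketch below, assuming a hypothetical LM35-style sensor (10 mV/°C) and a 5 V ADC reference:

```python
def adc_to_celsius(adc_count: int, vref: float = 5.0, bits: int = 8) -> float:
    """Convert a raw ADC count to degrees Celsius for an LM35-style sensor
    (10 mV per degree C). Sensor type and reference voltage are assumptions;
    the paper only states that the 8051 acquires room temperature."""
    volts = adc_count * vref / (2 ** bits - 1)  # count -> volts at the ADC pin
    return volts / 0.010                        # 10 mV per degree C
```

    For instance, a count of 51 corresponds to 1.0 V at the pin; the same two-line pattern (count to volts, volts to engineering units) applies to the power-dissipation and zoom channels with different scale factors.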

  11. A miniature low-cost LWIR camera with a 160×120 microbolometer FPA

    NASA Astrophysics Data System (ADS)

    Tepegoz, Murat; Kucukkomurler, Alper; Tankut, Firat; Eminoglu, Selim; Akin, Tayfun

    2014-06-01

    This paper presents the development of a miniature LWIR thermal camera, MSE070D, which targets value-performance infrared imaging applications and utilizes a 160×120 CMOS-based microbolometer FPA. MSE070D features a universal USB interface that can communicate with computers and some mobile devices on the market. In addition, it offers high flexibility and mobility: thanks to its low-power design, it is USB powered, eliminating the need for any external power source. MSE070D provides thermal imaging in a 1.65 inch³ volume using a vacuum-packaged CMOS-based microbolometer thermal sensor, MS1670A-VP, achieving moderate performance at a very low production cost. MSE070D delivers 30 fps thermal video at the 160×120 FPA size with an NETD lower than 350 mK with f/1 optics. Test electronics and software, miniature camera cores, complete Application Programming Interfaces (APIs), and relevant documentation are available with MSE070D, as MikroSens wants to help its customers evaluate its products and to ensure quick time-to-market for systems manufacturers.

  12. Science Activity Planner for the MER Mission

    NASA Technical Reports Server (NTRS)

    Norris, Jeffrey S.; Crockett, Thomas M.; Fox, Jason M.; Joswig, Joseph C.; Powell, Mark W.; Shams, Khawaja S.; Torres, Recaredo J.; Wallick, Michael N.; Mittman, David S.

    2008-01-01

    The Maestro Science Activity Planner is a computer program that assists human users in planning operations of the Mars Exploration Rover (MER) mission and in visualizing scientific data returned from the MER rovers. Relative to its predecessors, this program is more powerful and easier to use. This program is built on the Java Eclipse open-source platform around a Web-browser-based user-interface paradigm to provide an intuitive user interface to Mars rovers and landers. This program affords a combination of advanced display and simulation capabilities. For example, a map view of terrain can be generated from images acquired by the High Resolution Imaging Science Experiment (HiRISE) instrument aboard the Mars Reconnaissance Orbiter spacecraft and overlaid with images from a navigation camera (more precisely, a stereoscopic pair of cameras) aboard a rover, and an interactive, annotated rover traverse path can be incorporated into the overlay. It is also possible to construct an overhead perspective mosaic image of terrain from navigation-camera images. This program can be adapted to similar use on other outer-space missions and is potentially adaptable to numerous terrestrial applications involving analysis of data, operation of robots, and planning of such operations for acquisition of scientific data.

  13. A mobile phone user interface for image-based dietary assessment

    NASA Astrophysics Data System (ADS)

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.

    2014-02-01

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.

  14. A Mobile Phone User Interface for Image-Based Dietary Assessment

    PubMed Central

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A.; Boushey, Carol J.; Delp, Edward J.

    2016-01-01

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use. PMID:28572696

  15. A Mobile Phone User Interface for Image-Based Dietary Assessment.

    PubMed

    Ahmad, Ziad; Khanna, Nitin; Kerr, Deborah A; Boushey, Carol J; Delp, Edward J

    2014-02-02

    Many chronic diseases, including obesity and cancer, are related to diet. Such diseases may be prevented and/or successfully treated by accurately monitoring and assessing food and beverage intakes. Existing dietary assessment methods, such as the 24-hour dietary recall and the food frequency questionnaire, are burdensome and not generally accurate. In this paper, we present a user interface for a mobile telephone food record that relies on taking images, using the built-in camera, as the primary method of recording. We describe the design and implementation of this user interface while stressing the solutions we devised to meet the requirements imposed by the image analysis process, yet keeping the user interface easy to use.

  16. Hardware platform for multiple mobile robots

    NASA Astrophysics Data System (ADS)

    Parzhuber, Otto; Dolinsky, D.

    2004-12-01

    This work is concerned with software and communications architectures that might facilitate the operation of several mobile robots. The vehicles should be remotely piloted or tele-operated via a wireless link between the operator and the vehicles. The wireless link will carry control commands from the operator to the vehicle, telemetry data from the vehicle back to the operator, and frequently also a real-time video stream from an on-board camera. For autonomous driving, the link will carry commands and data between the vehicles. For this purpose we have developed a hardware platform that consists of a powerful microprocessor, various sensors, a stereo camera, and a Wireless Local Area Network (WLAN) interface for communication. The adoption of the IEEE 802.11 standard for the physical and access-layer protocols allows straightforward integration with the TCP/IP internet protocols. For inspection of the environment, the robots are equipped with a wide variety of sensors, such as ultrasonic and infrared proximity sensors and a small inertial measurement unit. Stereo cameras enable detection of obstacles, measurement of distance, and creation of a map of the room.
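
    The abstract does not specify the wire format carried over the WLAN link. Purely as an illustration, a fixed-size telemetry frame with a sync byte, vehicle id, three ultrasonic ranges, heading, and battery voltage could be packed and unpacked like this (every field choice here is hypothetical, not the platform's actual protocol):

```python
import struct

# Hypothetical little-endian frame: sync byte, vehicle id, three ultrasonic
# ranges (cm), heading (degrees * 100, signed), battery voltage (mV).
FMT = "<BBHHHhH"  # 12 bytes total

def pack_telemetry(vid, ranges, heading_deg, battery_mv):
    """Serialize one telemetry sample into a fixed-size frame."""
    return struct.pack(FMT, 0xA5, vid, *ranges,
                       int(heading_deg * 100), battery_mv)

def unpack_telemetry(frame):
    """Parse a frame back into (vehicle id, ranges, heading, battery)."""
    hdr, vid, r0, r1, r2, hd, bat = struct.unpack(FMT, frame)
    assert hdr == 0xA5, "bad sync byte"
    return vid, (r0, r1, r2), hd / 100.0, bat
```

    A fixed-size binary frame like this keeps per-sample overhead low on the wireless link; in practice a checksum field would be added before sending it over UDP or TCP.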

  17. RDS-SL VS Communication System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2012-09-12

    The RDS-SL VS Communication System is a component of the Radiation Detection System for Strategic, Low-Volume Seaports. Its purpose is to acquire real-time data from radiation portal monitors and cameras, record that data in a database, and make it available to system operators and administrators via a web interface. The software system contains two components: a standalone data acquisition and storage component and an ASP.NET web application that implements the web interface.

  18. High-performance parallel interface to synchronous optical network gateway

    DOEpatents

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system in which sending and receiving gateways interconnect high-speed data interfaces, e.g., HIPPI interfaces, through fiber-optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto the parallel fiber optics of the fiber-optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error-correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high-speed data interface operates faster than any of the fiber-optic links, the transmission rate must be adapted to match the available number of fiber-optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.
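
    A minimal Python sketch of the stripe-distributor/stripe-collector idea: bytes of a frame are dealt round-robin onto the parallel fiber lanes and re-interleaved at the receiving gateway. The lane count is arbitrary here, and the patent's framing, error-correcting syndrome, and credit mechanisms are omitted:

```python
def stripe(data: bytes, n_links: int) -> list[bytes]:
    """Deal successive bytes round-robin onto n_links parallel fibers."""
    lanes = [bytearray() for _ in range(n_links)]
    for i, b in enumerate(data):
        lanes[i % n_links].append(b)
    return [bytes(lane) for lane in lanes]

def collect(lanes: list[bytes]) -> bytes:
    """Re-interleave the per-fiber byte streams back into original order."""
    out = bytearray(sum(len(lane) for lane in lanes))
    for lane_idx, lane in enumerate(lanes):
        for j, b in enumerate(lane):
            out[lane_idx + j * len(lanes)] = b
    return bytes(out)
```

    Because `stripe` uses whatever lane count it is given, the sending side can re-stripe onto fewer lanes when links drop out, which mirrors the patent's point about adapting throughput to the number of available fibers.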

  19. Commander Truly on aft flight deck holding communication kit assembly (ASSY)

    NASA Technical Reports Server (NTRS)

    1983-01-01

    On aft flight deck, Commander Truly holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the onorbit station. HASSELBLAD camera is positioned on overhead window W8.

  20. Technology transfer of operator-in-the-loop simulation

    NASA Technical Reports Server (NTRS)

    Yae, K. H.; Lin, H. C.; Lin, T. C.; Frisch, H. P.

    1994-01-01

    The technology developed for operator-in-the-loop simulation in space teleoperation has been applied to Caterpillar's backhoe, wheel loader, and off-highway truck. On an SGI workstation, the simulation integrates computer modeling of kinematics and dynamics, real-time computation and visualization, and an interface with the operator through the operator's console. The console is interfaced with the workstation through an IBM-PC in which the operator's commands were digitized and sent through an RS-232 serial port. The simulation gave visual feedback adequate for the operator in the loop, with the camera's field of view projected on a large screen in multiple view windows. The view control can emulate either stationary or moving cameras. This simulator created an innovative engineering design environment by integrating computer software and hardware with the human operator's interactions. The backhoe simulation has been adopted by Caterpillar in building a virtual reality tool for backhoe design.

  1. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart-camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data-link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
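
    The paper's exact signature construction is not reproduced here; as a rough illustration of the idea, simple row/column projection profiles (a crude stand-in for the Radon-transform-like profiles) can be compared with normalized correlation:

```python
from math import sqrt

def projection_signature(img):
    """Row-sum and column-sum projection profiles of a grayscale image,
    concatenated into one signature vector."""
    rows = [sum(r) for r in img]          # horizontal projection
    cols = [sum(c) for c in zip(*img)]    # vertical projection
    return rows + cols

def match_score(sig_a, sig_b):
    """Normalized correlation between two signatures (1.0 = identical shape)."""
    ma = sum(sig_a) / len(sig_a)
    mb = sum(sig_b) / len(sig_b)
    num = sum((a - ma) * (b - mb) for a, b in zip(sig_a, sig_b))
    den = sqrt(sum((a - ma) ** 2 for a in sig_a) *
               sum((b - mb) ** 2 for b in sig_b))
    return num / den if den else 0.0
```

    For a W×H vehicle crop, the signature is only W+H numbers instead of W×H pixels, which is the data reduction the abstract highlights; normalized correlation also makes the score insensitive to global brightness scaling between the two cameras.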

  2. Generic Bluetooth Data Module

    DTIC Science & Technology

    2002-09-01

    RS232.java: serial communication port class (to the Bluetooth module); HCI.java: Host Controller Interface class; L2CAP.java: Logical Link Control and Adaptation class ... a standard protocol for transporting IP datagrams over a point-to-point link, designed to run over RFCOMM to accomplish point-to-point connections ... Figure 2. Bluetooth layers: Radio; Baseband/Link Controller; Link Manager; Host Controller Interface; Logical Link Control and Adaptation (From Ref. [3].)

  3. Optical measurement of interface movements of liquid metal excited by a pneumatic shaker

    NASA Astrophysics Data System (ADS)

    Men, Shouqiang; Zhou, Jun; Xu, Jingwen

    2015-02-01

    A model experiment was designed in which Faraday instabilities were generated in a plexiglass cylinder excited by a pneumatic shaker. A contacting distance meter and a single-point fiber-optic vibrometer were applied to measure the displacement/velocity of the shaker, and the two results are in good agreement with each other. In addition, the fiber-optic laser vibrometer was exploited to measure the velocity of the interface between potassium hydroxide aqueous solution and Galinstan. The results show that the fiber-optic vibrometer can be applied to measure the interface movements in the absence of Faraday instabilities, whereas with instabilities there is strong scatter and the interface displacement can only be obtained qualitatively. In this case, a scanning vibrometer or a high-speed CCD camera should be used to record the interface movements.

  4. UCam: universal camera controller and data acquisition system

    NASA Astrophysics Data System (ADS)

    McLay, S. A.; Bezawada, N. N.; Atkinson, D. C.; Ives, D. J.

    2010-07-01

    This paper describes the software architecture and design concepts used in the UKATC's generic camera control and data acquisition software system (UCam), which was originally developed for use with the ARC controller hardware. The ARC detector control electronics are developed by Astronomical Research Cameras (ARC) of San Diego, USA. UCam provides an alternative software solution, programmed in C/C++ and Python, that runs on a real-time Linux operating system to achieve the speed performance critical for high-time-resolution instrumentation. UCam is a server-based application that can be accessed remotely and easily integrated as part of a larger instrument control system. It comes with a user-friendly client application interface that has several features, including a FITS header editor and support for interfacing with network devices. Support is also provided for writing automated scripts in Python or as text files. UCam has an application-centric design in which custom applications for different types of detectors and readout modes can be developed, downloaded, and executed on the ARC controller. The built-in de-multiplexer can be easily reconfigured to read out any number of channels for almost any type of detector. It also provides support for numerous sampling modes such as CDS, Fowler, NDR, and threshold-limited NDR. UCam has been developed over several years for use on many instruments, such as the Wide Field Infrared Camera (WFCAM) at UKIRT in Hawaii and the mid-IR imager/spectrometer UIST, and is also used on instruments at Subaru, Gemini, and Palomar.

  5. Motion Imagery and Robotics Application (MIRA)

    NASA Technical Reports Server (NTRS)

    Martinez, Lindolfo; Rich, Thomas

    2011-01-01

    Objectives include: I. Prototype a camera service leveraging the CCSDS integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (new); b) Spacecraft Monitor and Control (SM&C); c) Asynchronous Messaging Service (AMS); d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA objectives: a) Demonstrate camera control through the ISS using the CCSDS protocol stack (Berlin, May 2011); b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments; c) Test interoperability of various CCSDS protocol standards; d) Identify overlaps in the design and implementations of the CCSDS protocol standards; e) Identify software incompatibilities in the CCSDS stack interfaces; f) Provide redlines to the SM&C, AMS, and DTN working groups; g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding; h) Assist in the long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.

  6. Characterization and optimization for detector systems of IGRINS

    NASA Astrophysics Data System (ADS)

    Jeong, Ueejeong; Chun, Moo-Young; Oh, Jae Sok; Park, Chan; Yuk, In-Soo; Oh, Heeyoung; Kim, Kang-Min; Ko, Kyeong Yeon; Pavel, Michael D.; Yu, Young Sam; Jaffe, Daniel T.

    2014-07-01

    IGRINS (Immersion GRating INfrared Spectrometer) is a high-resolution wide-band infrared spectrograph developed by the Korea Astronomy and Space Science Institute (KASI) and the University of Texas at Austin (UT). The spectrograph has H-band and K-band science cameras and a slit-viewing camera, all three of which use Teledyne's λc~2.5 μm 2k×2k HgCdTe HAWAII-2RG CMOS detectors. The two spectrograph cameras employ science-grade detectors, while the slit-viewing camera includes an engineering-grade detector. Teledyne's cryogenic SIDECAR ASIC boards and JADE2 USB interface cards were installed to control these detectors. We performed experiments to characterize and optimize the detector systems in the IGRINS cryostat. We present measurements and optimization of noise, dark current, and reference-level stability obtained under dark conditions. We also discuss well depth, linearity, and conversion gain measurements obtained using an external light source.
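
    One common way to measure conversion gain (not necessarily the exact procedure used for IGRINS) is the photon-transfer method on a pair of identical flat-field exposures: gain (e⁻/ADU) ≈ mean signal / shot-noise variance, with the frame difference used to cancel fixed-pattern noise. A toy sketch:

```python
def conversion_gain(frame_a, frame_b, dark_level=0.0):
    """Estimate conversion gain (e-/ADU) from two identical flat-field
    exposures via the photon-transfer relation gain = signal / variance.
    Differencing the frames removes fixed-pattern noise; the variance of
    the difference is twice the per-frame temporal variance."""
    pix_a = [p for row in frame_a for p in row]
    pix_b = [p for row in frame_b for p in row]
    n = len(pix_a)
    mean_sig = sum(pix_a + pix_b) / (2 * n) - dark_level
    diff = [a - b for a, b in zip(pix_a, pix_b)]
    mean_diff = sum(diff) / n
    var = sum((d - mean_diff) ** 2 for d in diff) / (2 * (n - 1))
    return mean_sig / var
```

    In a real characterization this is repeated over a range of illumination levels and the gain taken from the slope of the mean-variance line, rather than from a single exposure pair as in this sketch.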

  7. Who Goes There? Linking Remote Cameras and Schoolyard Science to Empower Action

    ERIC Educational Resources Information Center

    Tanner, Dawn; Ernst, Julie

    2013-01-01

    Taking Action Opportunities (TAO) is a curriculum that combines guided reflection, a focus on the local environment, and innovative use of wildlife technology to empower student action toward improving the environment. TAO is experientially based and uses remote cameras as a tool for schoolyard exploration. Through TAO, students engage in research…

  8. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host-computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor-transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard Peripheral Component Interconnect (PCI) bus.
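
    The readout-time saving from per-ROI windowing can be illustrated with a back-of-the-envelope model; the pixel clock and per-ROI overhead below are invented numbers for illustration, not this camera's specifications:

```python
def frame_rate(rois, pixel_clock_hz=20e6, per_roi_overhead_s=50e-6):
    """Rough attainable frame rate when only the listed ROIs are read out.
    rois: list of (width, height) windows. Frame time is modeled as the
    pixels actually read, divided by the pixel clock, plus a fixed setup
    overhead per ROI (both figures are illustrative assumptions)."""
    pixels = sum(w * h for w, h in rois)
    t = pixels / pixel_clock_hz + per_roi_overhead_s * len(rois)
    return 1.0 / t
```

    Under these assumed numbers, reading two 64×64 target windows instead of a full 1024×1024 frame raises the frame rate by roughly two orders of magnitude, which is the motivation for per-ROI readout when tracking small moving targets.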

  9. New Web Services for Broader Access to National Deep Submergence Facility Data Resources Through the Interdisciplinary Earth Data Alliance

    NASA Astrophysics Data System (ADS)

    Ferrini, V. L.; Grange, B.; Morton, J. J.; Soule, S. A.; Carbotte, S. M.; Lehnert, K.

    2016-12-01

    The National Deep Submergence Facility (NDSF) operates the Human Occupied Vehicle (HOV) Alvin, the Remotely Operated Vehicle (ROV) Jason, and the Autonomous Underwater Vehicle (AUV) Sentry. These vehicles are deployed throughout the global oceans to acquire sensor data and physical samples for a variety of interdisciplinary science programs. As part of the EarthCube Integrative Activity Alliance Testbed Project (ATP), new web services were developed to improve access to existing online NDSF data and metadata resources. These services make use of tools and infrastructure developed by the Interdisciplinary Earth Data Alliance (IEDA) and enable programmatic access to metadata and data resources as well as the development of new service-driven user interfaces. The Alvin Frame Grabber and Jason Virtual Van enable the exploration of frame-grabbed images derived from video cameras on NDSF dives. Metadata available for each image include time and vehicle position, data from environmental sensors, and scientist-generated annotations; data are organized and accessible by cruise and/or dive. A new FrameGrabber web service and service-driven user interface were deployed to offer integrated access to these data resources through a single API, allowing users to search across content curated in both systems. In addition, a new NDSF Dive Metadata web service and service-driven user interface were deployed to provide consolidated access to basic information about each NDSF dive (e.g., vehicle name, dive ID, location, etc.), which is important for linking distributed data resources curated in different data systems.

  10. Standard interface: Twin-coaxial converter

    NASA Technical Reports Server (NTRS)

    Lushbaugh, W. A.

    1976-01-01

    The network operations control center standard interface has been adopted as a standard computer interface for all future minicomputer based subsystem development for the Deep Space Network. Discussed is an intercomputer communications link using a pair of coaxial cables. This unit is capable of transmitting and receiving digital information at distances up to 600 m with complete ground isolation between the communicating devices. A converter is described that allows a computer equipped with the standard interface to use the twin coaxial link.

  11. Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  12. Advanced Spacesuit Informatics Software Design for Power, Avionics and Software Version 2.0

    NASA Technical Reports Server (NTRS)

    Wright, Theodore W.

    2016-01-01

    A description of the software design for the 2016 edition of the Informatics computer assembly of NASA's Advanced Extravehicular Mobility Unit (AEMU), also called the Advanced Spacesuit. The Informatics system is an optional part of the spacesuit assembly. It adds a graphical interface for displaying suit status, timelines, procedures, and warning information. It also provides an interface to the suit-mounted camera for recording still images, video, and audio field notes.

  13. T-LECS: The Control Software System for MOIRCS

    NASA Astrophysics Data System (ADS)

    Yoshikawa, T.; Omata, K.; Konishi, M.; Ichikawa, T.; Suzuki, R.; Tokoku, C.; Katsuno, Y.; Nishimura, T.

    2006-07-01

    MOIRCS (Multi-Object Infrared Camera and Spectrograph) is a new instrument for the Subaru Telescope. We present the system design of the control software for MOIRCS, named T-LECS (Tohoku University - Layered Electronic Control System). T-LECS is a PC-Linux based network distributed system. Two PCs equipped with the focal plane array system operate the two HAWAII2 detectors, respectively, and another PC is used for user interfaces and a database server. These PCs also control various devices for observations distributed on a TCP/IP network. T-LECS has three interfaces: an interface to the devices and two user interfaces. One user interface connects to the integrated observation control system (the Subaru Observation Software System) for observers, and the other provides system developers direct access to the devices of MOIRCS. To mediate the communication between these interfaces, we employ an SQL database system.
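The role of the SQL database as the shared channel between interfaces can be sketched with a toy example. The schema, table, and command strings below are invented for illustration (using SQLite); the abstract does not specify T-LECS's actual database layout.

```python
import sqlite3

# Hypothetical sketch: a shared SQL table mediating messages between a user
# interface and the device-control layer. Names are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE commands (id INTEGER PRIMARY KEY, "
             "source TEXT, target TEXT, command TEXT, status TEXT)")

def post_command(source, target, command):
    """An interface posts a command for a device; the DB is the channel."""
    cur = conn.execute(
        "INSERT INTO commands (source, target, command, status) "
        "VALUES (?, ?, ?, 'pending')", (source, target, command))
    return cur.lastrowid

def poll_pending(target):
    """The device-control layer polls for commands addressed to it and
    marks them consumed."""
    rows = conn.execute(
        "SELECT id, command FROM commands "
        "WHERE target = ? AND status = 'pending'", (target,)).fetchall()
    for cmd_id, _ in rows:
        conn.execute("UPDATE commands SET status = 'done' WHERE id = ?",
                     (cmd_id,))
    return [c for _, c in rows]

post_command("obs_interface", "hawaii2_a", "EXPOSE 120")
assert poll_pending("hawaii2_a") == ["EXPOSE 120"]
assert poll_pending("hawaii2_a") == []   # already consumed
```

Because every exchange goes through one database, both user interfaces and the device layer see a consistent, queryable record of all commands.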

  14. A portable high-definition electronic endoscope based on embedded system

    NASA Astrophysics Data System (ADS)

    Xu, Guang; Wang, Liqiang; Xu, Jin

    2012-11-01

    This paper presents a low-power, portable high-definition (HD) electronic endoscope based on a Cortex-A8 embedded system. A 1/6 inch CMOS image sensor is used to acquire HD images with 1280×800 pixels. The camera interface of the A8 is designed to support images of various sizes and multiple video input formats such as the ITU-R BT.601/656 standard. Image rotation (90 degrees clockwise) and image processing functions are achieved by the CAMIF. The decode engine of the processor plays back or records HD video at 30 frames per second, and a built-in HDMI interface transmits high-definition images to an external display. Image processing procedures such as demosaicking, color correction and auto white balance are realized on the A8 platform. Other functions are selected through OSD settings. An LCD panel displays the real-time images. Snapshot pictures or compressed videos are saved to an SD card or transmitted to a computer through a USB interface. The size of the camera head is 4×4.8×15 mm with more than 3 meters of working distance. The whole endoscope system can be powered by a lithium battery, with the advantages of miniaturization, low cost and portability.
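One step of the processing pipeline named above, auto white balance, can be illustrated with the textbook gray-world method. This is a generic technique chosen for illustration; the paper does not disclose the endoscope's actual algorithm.

```python
# Gray-world automatic white balance: assume the average scene color is
# neutral gray, so each channel is scaled toward the overall mean intensity.

def gray_world_awb(pixels):
    """pixels: list of (r, g, b) tuples; returns white-balanced pixels."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3.0
    gains = [gray / m if m else 1.0 for m in means]
    # Scale each channel by its gain, clipping to the 8-bit range.
    return [tuple(min(255.0, p[c] * gains[c]) for c in range(3))
            for p in pixels]

# A reddish cast: the red channel runs hot relative to green and blue.
balanced = gray_world_awb([(200, 100, 100), (180, 90, 90)])
# After balancing, the per-channel means are equal.
```

A real pipeline would apply this after demosaicking and before color correction, typically on a subsampled frame to keep the cost low.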

  15. The software architecture of the camera for the ASTRI SST-2M prototype for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Sangiorgi, Pierluca; Capalbi, Milvia; Gimenes, Renato; La Rosa, Giovanni; Russo, Francesco; Segreto, Alberto; Sottile, Giuseppe; Catalano, Osvaldo

    2016-07-01

    The purpose of this contribution is to present the current status of the software architecture of the ASTRI SST-2M Cherenkov Camera. The ASTRI SST-2M telescope is an end-to-end prototype for the Small Size Telescope of the Cherenkov Telescope Array. The ASTRI camera is an innovative instrument based on SiPM detectors and has several internal hardware components. In this contribution we will give a brief description of the hardware components of the camera of the ASTRI SST-2M prototype and of their interconnections. Then we will present the outcome of the software architectural design process that we carried out in order to identify the main structural components of the camera software system and the relationships among them. We will analyze the architectural model that describes how the camera software is organized as a set of communicating blocks. Finally, we will show where these blocks are deployed in the hardware components and how they interact. We will describe in some detail the management of the physical communication ports and external ancillary devices, the high-precision time-tag management, the fast data collection and fast data exchange between different camera subsystems, and the interfacing with the external systems.

  16. KSC-07pd2193

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 is Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  17. KSC-07pd2192

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 is Mission Specialist Scott E. Parazynski, the lead spacewalker on the mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  18. KSC-07pd2208

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 Mission Specialists Scott E. Parazynski, Douglas H. Wheelock and Paolo A. Nespoli inspect tools they will use during the mission. Nespoli is a European Space Agency astronaut from Italy. With them is Allison Bolinger, an EVA technician with NASA. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  19. KSC-07pd2198

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Receiving a briefing on the thermal protection system, or TPS, tiles on space shuttle Discovery in Orbiter Processing Facility bay 3 are Commander Pamela A. Melroy and Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  20. KSC-07pd2210

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 crew members practice handling tools they will use during the mission. From left are Mission Specialist Stephanie D. Wilson, Pilot George D. Zamka and Commander Pamela A. Melroy. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  1. KSC-07pd2204

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - Dressed in clean-room suits are STS-120 Commander Pamela A. Melroy (left) and Mission Specialist Stephanie D. Wilson (center), getting ready to get into the bucket that will lower them into Discovery's payload bay in bay 3 of the Orbiter Processing Facility. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  2. KSC-07pd2206

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 Mission Specialists Scott E. Parazynski, Douglas H. Wheelock and Paolo A. Nespoli inspect tools they will use during the mission. Nespoli is a European Space Agency astronaut from Italy. Behind them is Allison Bolinger, an EVA technician with NASA. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  3. KSC-07pd2220

    NASA Image and Video Library

    2007-08-04

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility bay 3, STS-120 Pilot George D. Zamka makes a close inspection of the cockpit window on the orbiter Discovery. Seated next to him is Commander Pamela A. Melroy. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  4. KSC-07pd2212

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Discovery's payload bay in Orbiter Processing Facility bay 3, STS-120 crew members are getting hands-on experience with a winch that is used to manually close the payload bay doors in the event that becomes necessary. At right is Expedition 16 Flight Engineer Daniel M. Tani. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  5. KSC-07pd2187

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Standing under space shuttle Discovery in Orbiter Processing Facility bay 3, from left, are Expedition 16 Flight Engineer Daniel M. Tani, Pilot George D. Zamka and Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  6. Remote media vision-based computer input device

    NASA Astrophysics Data System (ADS)

    Arabnia, Hamid R.; Chen, Ching-Yi

    1991-11-01

    In this paper, we introduce a vision-based computer input device which has been built at the University of Georgia. The user of this system gives commands to the computer without touching any physical device. The system receives input through a CCD camera; it is PC-based and is built on top of the DOS operating system. The major components of the input device are: a monitor, an image capturing board, a CCD camera, and some software (developed by us). These are interfaced with a standard PC running under the DOS operating system.

  7. Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras

    NASA Astrophysics Data System (ADS)

    Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich

    2018-01-01

    A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets is set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of beam path measurement by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera, and in a next step two cameras arranged at 90 degrees were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions are verified experimentally, confirming previous experimental results. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA is investigated successfully.
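The reason two cameras at 90 degrees suffice can be sketched in a few lines: each camera projects the residual-gas glow of the beam onto a plane, and the two orthogonal projections together fix the beam centroid in 3D. The coordinate convention below is an assumption for illustration, not the actual F8SR geometry.

```python
# Hypothetical geometry: one camera looks along the y-axis and reports the
# beam centroid as (x, z); the other looks along the x-axis and reports
# (y, z). Combining them yields the full 3D centroid.

def beam_centroid_3d(cam_horizontal, cam_vertical):
    """cam_horizontal: (x, z) from the camera looking along y.
    cam_vertical:   (y, z) from the camera looking along x.
    Returns (x, y, z), averaging the redundantly measured z."""
    x, z1 = cam_horizontal
    y, z2 = cam_vertical
    return (x, y, (z1 + z2) / 2.0)

assert beam_centroid_3d((1.5, 0.2), (-0.4, 0.2)) == (1.5, -0.4, 0.2)
```

The redundancy in z also gives a consistency check: if the two cameras disagree on z beyond measurement error, the calibration is off.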

  8. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
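The time-of-travel idea behind the FPGA stage can be sketched in software: each moving-edge event arrives as a (pixel address, timestamp) pair, and the velocity between adjacent pixels is the pixel pitch divided by the time difference. The pitch value and event format below are assumptions for illustration, not the paper's actual implementation.

```python
PIXEL_PITCH_UM = 30.0  # assumed pixel spacing in micrometers

def time_of_travel_velocities(events):
    """events: list of (pixel_index, timestamp_us) for one moving edge,
    sorted by pixel index. Returns the velocity (um/us) between each pair
    of adjacent pixels the edge crossed."""
    velocities = []
    for (p0, t0), (p1, t1) in zip(events, events[1:]):
        dt = t1 - t0
        if dt > 0 and p1 - p0 == 1:   # adjacent pixels only
            velocities.append(PIXEL_PITCH_UM / dt)
    return velocities

# An edge crossing three adjacent pixels at a steady 3 um/us:
vels = time_of_travel_velocities([(10, 0.0), (11, 10.0), (12, 20.0)])
assert vels == [3.0, 3.0]
```

In the actual system this division would be done per event pair in FPGA logic, with the event-address protocol supplying the pixel index and the temporal reference supplying the timestamp.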

  9. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images

    PubMed Central

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-01-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces. PMID:23250787

  10. Hand-gesture-based sterile interface for the operating room using contextual cues for the navigation of radiological images.

    PubMed

    Jacob, Mithun George; Wachs, Juan Pablo; Packer, Rebecca A

    2013-06-01

    This paper presents a method to improve the navigation and manipulation of radiological images through a sterile hand gesture recognition interface based on attentional contextual cues. Computer vision algorithms were developed to extract intention and attention cues from the surgeon's behavior and combine them with sensory data from a commodity depth camera. The developed interface was tested in a usability experiment to assess the effectiveness of the new interface. An image navigation and manipulation task was performed, and the gesture recognition accuracy, false positives and task completion times were computed to evaluate system performance. Experimental results show that gesture interaction and surgeon behavior analysis can be used to accurately navigate, manipulate and access MRI images, and therefore this modality could replace the use of keyboard and mice-based interfaces.
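The evaluation metrics named in this abstract (recognition accuracy and false positives) can be made concrete with a small sketch. The labels and trial structure below are invented for illustration; the paper's actual protocol is not reproduced here.

```python
def gesture_metrics(predicted, actual, no_gesture="none"):
    """predicted/actual: parallel lists of gesture labels per trial.
    Returns (accuracy, false_positives), where a false positive is a
    gesture reported when the surgeon made none."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    false_pos = sum(1 for p, a in zip(predicted, actual)
                    if a == no_gesture and p != no_gesture)
    return correct / len(actual), false_pos

acc, fp = gesture_metrics(
    predicted=["zoom", "pan", "none", "pan"],
    actual=["zoom", "pan", "none", "none"])
assert acc == 0.75 and fp == 1
```

In a sterile setting false positives are the costly error, since a spurious gesture changes the displayed image mid-procedure, which is why the paper reports them separately from accuracy.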

  11. LabVIEW Interface for PCI-SpaceWire Interface Card

    NASA Technical Reports Server (NTRS)

    Lux, James; Loya, Frank; Bachmann, Alex

    2005-01-01

    This software provides a LabVIEW interface to the NT drivers for the PCI-SpaceWire card, which is a peripheral component interface (PCI) bus interface that conforms to the IEEE-1355/SpaceWire standard. As SpaceWire grows in popularity, the ability to use SpaceWire links within LabVIEW will be important to electronic ground support equipment vendors. In addition, there is a need for a high-level LabVIEW interface to the low-level device-driver software supplied with the card. The LabVIEW virtual instrument (VI) provides graphical interfaces to support all (1) SpaceWire link functions, including message handling and routing; (2) monitoring as a passive tap using specialized hardware; and (3) low-level access to satellite mission-control subsystem functions. The software is supplied in a zip file that contains LabVIEW VI files, which provide various functions of the PCI-SpaceWire card, as well as higher-link-level functions. The VIs are suitably named according to the matching function names in the driver manual. A number of test programs also are provided to exercise various functions.

  12. Multiple Target Tracking in a Wide-Field-of-View Camera System

    DTIC Science & Technology

    1990-01-01

    assembly is mounted on a Contraves alt-az axis table with a pointing accuracy of < 2 µrad. (* Work performed under the auspices of the U.S. Department of...) [OCR residue of a system block diagram listing: Contraves table, SUN 3 workstations, CCD camera, DR11W and VME interfaces, Ethernet, RS-170 video, video amplifier, WWV clock, VCR, Datacube image processor, monitors.] ...displaying processed images with overlay from the Datacube. We control the Contraves table using a GPIB interface on the SUN. GPIB also interfaces a

  13. Writing instrument interfaces with xf/tktcl

    NASA Technical Reports Server (NTRS)

    Henden, A. A.

    1992-01-01

    Tcl is an embedded control language written in C, running primarily under Unix and with an interpreted C look-and-feel. Tk is an X11 toolkit based on tcl. Xf is an application builder for tk. The entire package is public domain and available from sprite.berkeley.edu. This paper discusses the use of tk to develop a user interface for OSIRIS, an infrared camera/spectrograph now operational on the OSU Perkins 1.8m telescope. The good and bad features of the development process are described.

  14. Hubble Space Telescope photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  15. Electronic Still Camera image of Astronaut Claude Nicollier working with RMS

    NASA Image and Video Library

    1993-12-05

    S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  16. The Common Data Acquisition Platform in the Helmholtz Association

    NASA Astrophysics Data System (ADS)

    Kaever, P.; Balzer, M.; Kopmann, A.; Zimmer, M.; Rongen, H.

    2017-04-01

    Various centres of the German Helmholtz Association (HGF) started in 2012 to develop a modular data acquisition (DAQ) platform, covering the entire range from detector readout to data transfer into parallel computing environments. This platform integrates generic hardware components like the multi-purpose HGF-Advanced Mezzanine Card or a smart scientific camera framework, adding user value with Linux drivers and board support packages. Technically the scope comprises the DAQ chain from FPGA modules to computing servers, notably frontend-electronics interfaces, microcontrollers and GPUs with their software, plus high-performance data transmission links. The core idea is a generic and component-based approach, enabling the implementation of specific experiment requirements with low effort. This so-called DTS platform will support standards like MTCA.4 in hardware and software to ensure compatibility with commercial components. Its capability to deploy on other crate standards or FPGA boards with PCI Express or Ethernet interfaces remains an essential feature. Competences of the participating centres are coordinated in order to provide a solid technological basis for both research topics in the Helmholtz Programme "Matter and Technology": "Detector Technology and Systems" and "Accelerator Research and Development". The DTS platform aims at reducing costs and development time and will ensure access to latest technologies for the collaboration. Due to its flexible approach, it has the potential to be applied in other scientific programs.

  17. High spatial resolution infrared camera as ISS external experiment

    NASA Astrophysics Data System (ADS)

    Eckehard, Lorenz; Frerker, Hap; Fitch, Robert Alan

    A high spatial resolution infrared camera, proposed as an ISS external experiment for monitoring global climate changes, uses ISS internal and external resources (e.g., data storage). The optical experiment will consist of an infrared camera for monitoring global climate changes from the ISS. This technology was evaluated by the German small satellite mission BIRD and further developed in different ESA projects. Compared to BIRD, the presented instrument uses proven advanced sensor technologies (ISS external) and ISS on-board processing and storage capabilities (internal). The instrument will be equipped with a serial interface for TM/TC and several relay commands for the power supply. For data processing and storage, a mass memory is required. Access to current attitude data is highly desired in order to produce geo-referenced maps, if possible by on-board processing.

  18. Image acquisition in the Pi-of-the-Sky project

    NASA Astrophysics Data System (ADS)

    Jegier, M.; Nawrocki, K.; Poźniak, K.; Sokołowski, M.

    2006-10-01

    Modern astronomical image acquisition systems dedicated to sky surveys provide a large amount of data in a single measurement session. During one session that lasts a few hours it is possible to collect as much as 100 GB of data, which needs to be transferred from the camera and processed. This paper presents some aspects of image acquisition in a sky survey image acquisition system. It describes a dedicated USB Linux driver for the first version of the "Pi of the Sky" CCD camera (later versions also have an Ethernet interface) and the test program for the camera, together with a driver-wrapper providing core device functionality. Finally, the paper contains a description of an algorithm for matching several images based on image features, i.e. star positions and their brightness.
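Feature-based matching of the kind described, pairing stars between two exposures by position and brightness, can be sketched with a simple nearest-neighbour pass. This is a generic approach for illustration, not the project's actual algorithm.

```python
def match_stars(frame_a, frame_b, tol=2.0):
    """frame_a/frame_b: lists of (x, y, brightness) star detections.
    Returns (i, j) index pairs of stars whose positions agree within
    tol pixels, trying the brightest stars of frame_a first."""
    order = sorted(range(len(frame_a)), key=lambda i: -frame_a[i][2])
    pairs, used = [], set()
    for i in order:
        ax, ay, _ = frame_a[i]
        for j, (bx, by, _) in enumerate(frame_b):
            if j not in used and abs(ax - bx) <= tol and abs(ay - by) <= tol:
                pairs.append((i, j))
                used.add(j)
                break
    return pairs

a = [(10.0, 10.0, 500), (40.0, 5.0, 900)]
b = [(40.5, 5.2, 880), (10.3, 9.8, 510)]
assert match_stars(a, b) == [(1, 0), (0, 1)]   # brightest matched first
```

Matching bright stars first makes the pairing robust to faint spurious detections; once enough pairs are found, the frame-to-frame offset follows from the paired coordinates.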

  19. A general UNIX interface for biocomputing and network information retrieval software.

    PubMed

    Kiong, B K; Tan, T W

    1993-10-01

    We describe a UNIX program, HYBROW, which can integrate without modification a wide range of UNIX biocomputing and network information retrieval software. HYBROW works in conjunction with a separate set of ASCII files containing embedded hypertext-like links. The program operates like a hypertext browser featuring five basic links: file link, execute-only link, execute-display link, directory-browse link and field-filling link. Useful features of the interface may be developed using combinations of these links with simple shell scripts; examples of these are briefly described. The system manager who supports biocomputing users should find the program easy to maintain, and useful in assisting new and infrequent users; it is also simple to incorporate new programs. Moreover, the individual user can customize the interface, create dynamic menus, hypertext a document, and invoke shell scripts and new programs with only a basic understanding of the UNIX operating system and any text editor. This program was written in C and uses the UNIX curses and termcap libraries. It is freely available as a compressed tar file (by anonymous FTP from nuscc.nus.sg).
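The embedded-link mechanism can be illustrated with a toy parser. The `[[type:target]]` syntax below is invented for this sketch; the abstract does not give HYBROW's actual markup, only the five link types it supports.

```python
import re

# Hypothetical link markup: [[file:...]], [[exec:...]], [[dir:...]].
LINK_RE = re.compile(r"\[\[(file|exec|dir):([^\]]+)\]\]")

def parse_links(text):
    """Return (link_type, target) pairs found in an ASCII menu page.
    A browser would dispatch on the type: display a file, run a program,
    or browse a directory."""
    return LINK_RE.findall(text)

page = """Sequence analysis menu
  Align two sequences   [[exec:align.sh]]
  Browse databases      [[dir:/data/db]]
  Read the manual       [[file:manual.txt]]"""

assert parse_links(page) == [
    ("exec", "align.sh"), ("dir", "/data/db"), ("file", "manual.txt")]
```

The appeal of the design is that the menu pages are plain editable text, so a user can rewire the interface with nothing but a text editor, as the abstract notes.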

  20. Feasibility study of transmission of OTV camera control information in the video vertical blanking interval

    NASA Technical Reports Server (NTRS)

    White, Preston A., III

    1994-01-01

    The Operational Television system at Kennedy Space Center operates hundreds of video cameras, many remotely controllable, in support of the operations at the center. This study was undertaken to determine if commercial NABTS (North American Basic Teletext System) teletext transmission in the vertical blanking interval of the genlock signals distributed to the cameras could be used to send remote control commands to the cameras and the associated pan and tilt platforms. Wavelength division multiplexed fiberoptic links are being installed in the OTV system to obtain RS-250 short-haul quality. It was demonstrated that the NABTS transmission could be sent over the fiberoptic cable plant without excessive video quality degradation and that video cameras could be controlled using NABTS transmissions over multimode fiberoptic paths as long as 1.2 km.
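The addressing idea, one genlock signal feeding hundreds of cameras, each picking out only its own commands from the VBI data, can be sketched with a toy packet format. The field layout below is invented for illustration and is not the actual NABTS packet structure.

```python
# Hypothetical framing: 1-byte camera address, 1-byte payload length,
# then an ASCII command, carried in one VBI teletext line.

def frame_command(camera_id, command):
    """Pack a command string for one camera into a small byte packet."""
    payload = command.encode("ascii")
    return bytes([camera_id, len(payload)]) + payload

def accept(packet, my_id):
    """A camera decodes the packet and acts only on its own address."""
    if packet[0] != my_id:
        return None
    length = packet[1]
    return packet[2:2 + length].decode("ascii")

pkt = frame_command(17, "PAN +05")
assert accept(pkt, 17) == "PAN +05"
assert accept(pkt, 3) is None   # other cameras ignore the packet
```

Since the commands ride in the blanking interval of the same signal every camera already receives for synchronization, no separate control wiring is needed, which is the point of the feasibility study.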

  1. Temperature and melt solid interface control during crystal growth

    NASA Technical Reports Server (NTRS)

    Batur, Celal

    1990-01-01

    Findings on the adaptive control of a transparent Bridgman crystal growth furnace are summarized. The task of the process controller is to establish a user-specified axial temperature profile by controlling the temperatures in eight heating zones. The furnace controller is built around a computer. Adaptive PID (Proportional Integral Derivative) and Pole Placement control algorithms are applied. The need for an adaptive controller stems from the fact that the zone dynamics change over time. The controller was tested extensively on lead bromide crystal growth. Several different temperature profiles and ampoule translation rates were tried. The feasibility of solid-liquid interface quantification by image processing was determined. The interface is observed by a color video camera and the image data file is processed to determine if the interface is flat, convex or concave.
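The final classification step can be sketched simply: given the extracted interface as (x, y) points, the curvature sign of a least-squares quadratic fit distinguishes flat, convex, and concave. The flatness threshold and orientation convention below are assumptions for illustration, not the paper's.

```python
def classify_interface(points, flat_tol=1e-3):
    """points: (x, y) samples along the melt-solid interface.
    Fits y = a*x^2 + b*x + c by least squares; the sign of a gives the
    shape (convention assumed: a > 0 curves upward, called concave here)."""
    n = len(points)
    # Power sums for the quadratic normal equations.
    sx = [sum(x**k for x, _ in points) for k in range(5)]
    sy = [sum(y * x**k for x, y in points) for k in range(3)]
    m = [[sx[4], sx[3], sx[2]],
         [sx[3], sx[2], sx[1]],
         [sx[2], sx[1], n]]
    rhs = [sy[2], sy[1], sy[0]]
    def det3(a):
        return (a[0][0] * (a[1][1]*a[2][2] - a[1][2]*a[2][1])
              - a[0][1] * (a[1][0]*a[2][2] - a[1][2]*a[2][0])
              + a[0][2] * (a[1][0]*a[2][1] - a[1][1]*a[2][0]))
    # Cramer's rule for the leading coefficient a only.
    ma = [[rhs[0], m[0][1], m[0][2]],
          [rhs[1], m[1][1], m[1][2]],
          [rhs[2], m[2][1], m[2][2]]]
    a = det3(ma) / det3(m)
    if abs(a) < flat_tol:
        return "flat"
    return "concave" if a > 0 else "convex"

assert classify_interface([(-1, 1.0), (0, 0.0), (1, 1.0)]) == "concave"
assert classify_interface([(-1, 0.0), (0, 0.0), (1, 0.0)]) == "flat"
```

In practice the interface points would come from edge detection on the camera frames; the fit then reduces the whole curve to a single curvature number for the controller or the operator.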

  2. Technology as the Crayon Box.

    ERIC Educational Resources Information Center

    Garcia, Lilia

    2000-01-01

    While arts facilities should be equipped with computers, color scanners, MIDI (Musical Instrument Digital Interface) labs, connective video cameras, and appropriate software, music rooms still need pianos and visual art rooms need traditional art supplies. Dade County (Florida) Schools' pilot teacher assistance projects and arts-centered schools…

  3. STS-5 Columbia, OV-102, middeck documentation

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Items stowed temporarily on forward middeck lockers include (left to right) field sequential (FS) crew cabin camera, procedural notebook, communications kit assembly (assy) headset (HDST) interface unit (HIU), personal hygiene kit, personal hygiene mirror assy, meal tray assemblies, towels, and Vestibular Study Experiment headset and antenna.

  4. 3-dimensional telepresence system for a robotic environment

    DOEpatents

    Anderson, Matthew O.; McKay, Mark D.

    2000-01-01

    A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three-dimensional viewing, and the control module affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones, each zone corresponding to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed; other parameters include panning, tilting, sliding, raising, or lowering the cameras. Other user interface devices are provided to improve the three-dimensional control capabilities of an operator in a local operating environment: a pair of visual display glasses, a microphone, and a remote actuator. The visual display glasses facilitate three-dimensional viewing and hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
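
    The software map of operation zones can be illustrated with a simple lookup table mapping head offset to camera pan speed. The breakpoints and speeds below are invented for illustration; the patent describes the map only abstractly.

```python
# Hypothetical zone table: (lower bound, upper bound) of head offset
# magnitude in degrees from the initial position, and the pan speed
# in deg/s assigned to that zone.
ZONES = [
    (0.0, 5.0, 0.0),    # dead zone: no camera motion
    (5.0, 15.0, 2.0),   # slow constant pan
    (15.0, 30.0, 8.0),  # fast constant pan
]

def pan_speed(head_offset_deg):
    """Look up the camera pan speed for a given head offset."""
    magnitude = abs(head_offset_deg)
    for lo, hi, speed in ZONES:
        if lo <= magnitude < hi:
            # preserve the direction of head motion
            return speed if head_offset_deg >= 0 else -speed
    return 0.0  # beyond the mapped range: stop the camera

print(pan_speed(10.0))   # 2.0
print(pan_speed(-20.0))  # -8.0
```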

  5. Stereo camera based virtual cane system with identifiable distance tactile feedback for the blind.

    PubMed

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-06-13

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind.
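
    The real-time distance estimate from a rectified stereo pair follows the standard pinhole-stereo relation Z = f·B/d. A minimal sketch with illustrative calibration values, not VIDA's actual parameters:

```python
def stereo_depth(disparity_px, focal_px, baseline_m):
    """Depth from disparity for a rectified stereo pair
    (pinhole model): Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("object at infinity or invalid match")
    return focal_px * baseline_m / disparity_px

# Illustrative values: 700 px focal length, 6 cm baseline,
# 35 px disparity -> 1.2 m to the pointed-at object.
print(stereo_depth(35.0, 700.0, 0.06))
```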

  6. Stereo Camera Based Virtual Cane System with Identifiable Distance Tactile Feedback for the Blind

    PubMed Central

    Kim, Donghun; Kim, Kwangtaek; Lee, Sangyoun

    2014-01-01

    In this paper, we propose a new haptic-assisted virtual cane system operated by a simple finger-pointing gesture. The system is developed in two stages: development of a visual information delivery assistant (VIDA) with a stereo camera, and addition of a tactile feedback interface with dual actuators for guidance and distance feedback. In the first stage, the user's pointing finger is automatically detected using color and disparity data from stereo images, and a 3D pointing direction of the finger is estimated from its geometric and textural features. Finally, any object within the estimated pointing trajectory in 3D space is detected and its distance is estimated in real time. In the second stage, identifiable tactile signals are designed through a series of identification experiments, and an identifiable tactile feedback interface is developed and integrated into the VIDA system. Our approach differs in that navigation guidance is provided by a simple finger-pointing gesture and the tactile distance feedback is perfectly identifiable to the blind. PMID:24932864

  7. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

    Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, serve as a game interface, and play a pivotal role in the human-computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user's gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used eye blinking for this purpose, as well as dwell-time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of the experiments performed, together with usability tests and on-screen keyboard use, show that the proposed method outperforms previous methods. PMID:28420114

  8. A novel graphical user interface for ultrasound-guided shoulder arthroscopic surgery

    NASA Astrophysics Data System (ADS)

    Tyryshkin, K.; Mousavi, P.; Beek, M.; Pichora, D.; Abolmaesumi, P.

    2007-03-01

    This paper presents a novel graphical user interface developed for a navigation system for ultrasound-guided computer-assisted shoulder arthroscopic surgery. The envisioned purpose of the interface is to assist the surgeon in determining the position and orientation of the arthroscopic camera and other surgical tools within the anatomy of the patient. The user interface features real time position tracking of the arthroscopic instruments with an optical tracking system, and visualization of their graphical representations relative to a three-dimensional shoulder surface model of the patient, created from computed tomography images. In addition, the developed graphical interface facilitates fast and user-friendly intra-operative calibration of the arthroscope and the arthroscopic burr, capture and segmentation of ultrasound images, and intra-operative registration. A pilot study simulating the computer-aided shoulder arthroscopic procedure on a shoulder phantom demonstrated the speed, efficiency and ease-of-use of the system.

  9. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range, designed and manufactured by Thermo Electron for scientific and medical applications, is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4-megapixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparency pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout (NDRO) of the photon-generated charge. Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running an embedded Linux operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR (TM) algorithm, designed to extend the effective dynamic range of the camera by several orders of magnitude, up to 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC connected to the camera via Gigabit Ethernet.

  10. Note: Design and implementation of a home-built imaging system with low jitter for cold atom experiments

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hachtel, A. J.; Gillette, M. C.; Clements, E. R.

    A novel home-built system for imaging cold atom samples is presented, using a readily available astronomy camera which has the requisite sensitivity but no timing control. We integrate the camera with LabVIEW, achieving fast, low-jitter imaging with a convenient user-defined interface. We show that our system takes precisely timed millisecond exposures and offers significant improvements in system jitter and readout time over previously reported home-built systems. Our system rivals current commercial “black box” systems in performance and user-friendliness.

  11. Visual Simulation The Old Way

    NASA Astrophysics Data System (ADS)

    Gomes, Gary G.

    1986-05-01

    A cost-effective and supportable color visual system has been developed to provide the necessary visual cues to United States Air Force B-52 bomber pilots training to become proficient at the task of in-flight refueling. This camera-model visual system approach is not suitable for all simulation applications, but provides a cost-effective alternative to digital image generation systems when high fidelity of a single movable object is required. The system consists of a three-axis gimballed KC-135 tanker model, a range-carriage-mounted color-augmented monochrome television camera, interface electronics, a color light valve projector, and an infinity optics display system.

  12. Sensor fusion and augmented reality with the SAFIRE system

    NASA Astrophysics Data System (ADS)

    Saponaro, Philip; Treible, Wayne; Phelan, Brian; Sherbondy, Kelly; Kambhamettu, Chandra

    2018-04-01

    The Spectrally Agile Frequency-Incrementing Reconfigurable (SAFIRE) mobile radar system was developed and exercised at an arid U.S. test site. The system can detect hidden targets using synthetic aperture radar (SAR), a global positioning system (GPS), dual stereo color cameras, and dual stereo thermal cameras. An Augmented Reality (AR) software interface allows the user to see a single fused video stream containing the SAR, color, and thermal imagery. The stereo sensors allow the AR system to display both fused 2D imagery and 3D metric reconstructions, in which the user can "fly" around the 3D model and switch between the modalities.

  13. Film annotation system for a space experiment

    NASA Technical Reports Server (NTRS)

    Browne, W. R.; Johnson, S. S.

    1989-01-01

    This microprocessor system was designed to control and annotate a Nikon 35 mm camera for the purpose of obtaining photographs and data at predefined time intervals. The single STD Bus interface card was designed in such a way as to allow it to be used either in a stand-alone application with minimum features or installed in an STD Bus computer, allowing for maximum features. This control system also allows the exposure of twenty-eight alphanumeric characters across the bottom of each photograph. The data contain such information as camera identification, frame count, user-defined text, and time to 0.01 second.

  14. Shock Interaction with a Finite Thickness Two-Gas Interface

    NASA Astrophysics Data System (ADS)

    Labenski, John; Kim, Yong

    2006-03-01

    A dual-driver shock tube was used to investigate the growth rate of a finite-thickness two-gas interface after shock forcing. One driver was used to create an argon-refrigerant interface as the contact surface behind a weak shock wave. The other driver, at the opposite end of the driven section, generates a stronger shock of Mach 1.1 to 1.3 to force the interface back in front of the detector station. Two schlieren systems record the density fluctuations while light-scattering detectors record the density of the refrigerant as a function of position over the interface during both its initial passage and return. A pair of digital cameras takes stereo images of the interface, as mapped out by the tracer particles under illumination by a Q-switched ruby laser. The amount of time that the interface is allowed to travel up the driven section controls the interaction time. Comparisons made between the schlieren signals, light-scattering detector outputs, and the images quantify the fingered characteristics of the interface and its growth due to shock forcing. The results show that the interface has a distribution of thicknesses and that interaction with a shock further broadens the interface.

  15. Suitability of digital camcorders for virtual reality image data capture

    NASA Astrophysics Data System (ADS)

    D'Apuzzo, Nicola; Maas, Hans-Gerd

    1998-12-01

    Today's consumer-market digital camcorders offer features which make them appear to be quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine-vision-type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides a discussion of the cameras' technical features, this includes a detailed accuracy test to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the lack of a way to synchronize multiple devices, which limits their suitability for 3-D motion data capture. Moreover, the standard video format uses interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. A further disadvantage is computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine-vision-like equipment), this functionality could probably be included by the manufacturers at almost zero cost.

  16. HST High Gain Antennae photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-021 (7 Dec 1993) --- This close-up view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members have been working in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  17. Hubble Space Telescope photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-001 (4 Dec 1993) --- This medium close-up view of the top portion of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  18. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-07

    S61-E-020 (7 Dec 1993) --- This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993, in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  19. The Lancashire telemedicine ambulance.

    PubMed

    Curry, G R; Harrop, N

    1998-01-01

    An emergency ambulance was equipped with three video-cameras and a system for transmitting slow-scan video-pictures through a cellular telephone link to a hospital accident and emergency department. Video-pictures were transmitted at a resolution of 320 x 240 pixels and a frame rate of 15 pictures/min. In addition, a helmet-mounted camera was used with a wireless transmission link to the ambulance and thence the hospital. Speech was transmitted by a second hand-held cellular telephone. The equipment was installed in 1996-7 and video-recordings of actual ambulance journeys were made in July 1997. The technical feasibility of the telemedicine ambulance has been demonstrated and further clinical assessment is now in progress.

  20. Interface fluctuations during rapid drainage

    NASA Astrophysics Data System (ADS)

    Ayaz, Monem; Toussaint, Renaud; Schäfer, Gerhard; Jørgen Måløy, Knut; Moura, Marcel

    2017-04-01

    We experimentally study the interface dynamics of an immiscible fluid as it invades a monolayer of saturated porous medium through rapid drainage. The seemingly stable and continuous motion of the interface at the macroscale involves numerous abrupt pore-scale jumps and local reconfigurations of the interface. By computing the velocity fluctuations along the invasion front from sequences of images captured at a high frame rate, we are able to study both the local and global behavior. The latter displays an intermittent behavior with power-law-distributed avalanches in size and duration. As the system is drained, potential surface energy is stored at the interface up to a given threshold in pressure. The energy released generates elastic waves at the confining plate, which we detect using piezoelectric-type acoustic sensors. By detecting pore-scale events emanating from the depinning of the interface, we look to develop techniques for localizing the displacement front. To assess the quality of these techniques, optical monitoring is done in parallel using a high-speed camera.

  1. Self-aligning LED-based optical link

    NASA Astrophysics Data System (ADS)

    Shen, Thomas C.; Drost, Robert J.; Rzasa, John R.; Sadler, Brian M.; Davis, Christopher C.

    2016-09-01

    The steady advances in light-emitting diode (LED) technology have motivated the use of LEDs in optical wireless communication (OWC) applications such as indoor local area networks (LANs) and communication between mobile platforms (e.g., robots, vehicles). In contrast to traditional radio frequency (RF) wireless communication, OWC utilizes electromagnetic spectrum that is largely unregulated and unrestricted. OWC may be especially useful in RF-denied environments, in which RF communication is prohibited or undesirable. However, OWC does present some challenges, including the need to maintain alignment between potentially moving nodes. We describe a novel system for link alignment that is composed of a hyperboloidal mirror, a camera, and a gimbal. The experimental system is able to use the mirror and camera to detect the LED beacon of a neighboring node and estimate its bearing (azimuth and elevation), point the gimbal towards the beacon, and establish an optical link.
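
    Bearing estimation from the mirror image can be sketched as follows: azimuth follows directly from the pixel angle about the image center, while elevation comes from the radial distance through a mirror-specific calibration. The linear radius-to-elevation map below is a hypothetical stand-in, not the authors' calibration.

```python
import math

def beacon_bearing(px, py, cx, cy, radius_to_elev):
    """Estimate (azimuth, elevation) in degrees of an LED beacon
    seen via an omnidirectional mirror. (cx, cy) is the image
    center; radius_to_elev converts radial pixel distance to an
    elevation angle and must come from mirror calibration."""
    dx, dy = px - cx, py - cy
    azimuth = math.degrees(math.atan2(dy, dx))
    elevation = radius_to_elev(math.hypot(dx, dy))
    return azimuth, elevation

# Toy linear calibration: 0 px -> 90 deg (zenith), 500 px -> 0 deg.
az, el = beacon_bearing(800, 500, 500, 500,
                        lambda r: 90.0 * (1 - r / 500.0))
print(round(az), round(el))  # 0 36
```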

  2. Data transmission protocol for Pi-of-the-Sky cameras

    NASA Astrophysics Data System (ADS)

    Uzycki, J.; Kasprowicz, G.; Mankiewicz, M.; Nawrocki, K.; Sitek, P.; Sokolowski, M.; Sulej, R.; Tlaczala, W.

    2006-10-01

    The large amount of data collected by automatic astronomical cameras has to be transferred to fast computers in a reliable way. The method chosen should ensure data streaming in both directions, but in a nonsymmetrical way. The Ethernet interface is a very good choice because of its popularity and proven performance. However, it requires a TCP/IP stack implementation in devices such as cameras for full compliance with existing networks and operating systems. This paper describes the NUDP protocol, which was designed as a supplement to the standard UDP protocol and can be used as a simple network protocol. NUDP does not need a TCP protocol implementation and makes it possible to run the Ethernet network with simple devices based on microcontroller and/or FPGA chips. The data transmission scheme was created especially for the "Pi of the Sky" project.
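
    The appeal of a UDP-level protocol for FPGA-class devices is that framing can be a fixed, trivially parseable header. The abstract does not give the actual NUDP header layout; the 8-byte header below (sequence number plus payload length, network byte order) is an assumption for illustration only.

```python
import struct

# Hypothetical NUDP-style framing: each datagram carries a sequence
# number and an explicit payload length so the receiver can detect
# loss and truncation. Packets built this way would be sent over
# plain UDP with socket.sendto().
HEADER = struct.Struct("!II")  # (sequence, payload_length)

def make_packet(seq, payload):
    """Prefix a payload with its sequence number and length."""
    return HEADER.pack(seq, len(payload)) + payload

def parse_packet(data):
    """Recover (sequence, payload); the length field lets the
    receiver detect truncated datagrams and request a resend."""
    seq, length = HEADER.unpack_from(data)
    payload = data[HEADER.size:HEADER.size + length]
    if len(payload) != length:
        raise ValueError("truncated datagram %d" % seq)
    return seq, payload

seq, payload = parse_packet(make_packet(7, b"frame-data"))
print(seq, payload)  # 7 b'frame-data'
```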

  3. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    NASA Astrophysics Data System (ADS)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.; Rahman, N. A. A.; Zin, M. R. M.

    2014-02-01

    Typically, a system consists of controller hardware and software installed on a personal computer (PC). In nuclear detection, the hardware comprises the detection setup and its electronics, while the software provides analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports; however, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC that exploits the USB link of the FPGA board. The USB link is based on an FTDI chip, which allows direct input and output access to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) that supplies a 24 MHz clock to the USB link. The implementation and results of using the USB link of the FPGA board as the data interface are discussed.

  4. Implementation of data acquisition interface using on-board field-programmable gate array (FPGA) universal serial bus (USB) link

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yussup, N.; Ibrahim, M. M.; Lombigit, L.

    Typically, a system consists of controller hardware and software installed on a personal computer (PC). In nuclear detection, the hardware comprises the detection setup and its electronics, while the software provides analysis tools and a graphical display on the PC. A data acquisition interface is necessary to enable communication between the controller hardware and the PC. Nowadays, the Universal Serial Bus (USB) has become a standard connection method for computer peripherals and has replaced many varieties of serial and parallel ports; however, implementing USB is complex. This paper describes the implementation of a data acquisition interface between a field-programmable gate array (FPGA) board and a PC that exploits the USB link of the FPGA board. The USB link is based on an FTDI chip, which allows direct input and output access to the Joint Test Action Group (JTAG) signals from a USB host, and a complex programmable logic device (CPLD) that supplies a 24 MHz clock to the USB link. The implementation and results of using the USB link of the FPGA board as the data interface are discussed.

  5. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typically T ≈ -40 °C, p ≈ 0.1 atm), and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial-grade components and various system optimizations: a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and a pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  6. Microwave interferometry technique for obtaining gas interface velocity measurements in an expansion tube facility

    NASA Technical Reports Server (NTRS)

    Laney, C. C., Jr.

    1974-01-01

    A microwave interferometer technique to determine the front interface velocity of a high-enthalpy gas flow is described. The system is designed to excite a standing wave in an expansion tube and to measure the shift in this standing wave as it is moved by the test gas front. Data, in the form of a varying sinusoidal signal, are recorded on a high-speed drum camera-oscilloscope combination. Measurements of average and incremental velocities in excess of 6,000 meters per second were made.

  7. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest.

    PubMed

    Yang, Hualei; Yang, Xi; Heskel, Mary; Sun, Shucun; Tang, Jianwu

    2017-04-28

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared-camera-based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). We found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near-surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
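
    The camera-NDVI itself is the standard two-band ratio, NDVI = (NIR − Red)/(NIR + Red), bounded in [−1, 1]. A minimal per-pixel sketch with illustrative reflectance values:

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index for one pixel:
    NDVI = (NIR - Red) / (NIR + Red), in [-1, 1]."""
    if nir + red == 0:
        return 0.0  # avoid division by zero on dark pixels
    return (nir - red) / (nir + red)

# A healthy canopy reflects strongly in NIR and absorbs red:
print(round(ndvi(0.45, 0.05), 2))  # 0.8
```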

  8. Experimental metrology to obtain thermal phonon transmission coefficients at solid interfaces

    NASA Astrophysics Data System (ADS)

    Hua, Chengyun; Chen, Xiangwen; Ravichandran, Navaneetha K.; Minnich, Austin J.

    2017-05-01

    Interfaces play an essential role in phonon-mediated heat conduction in solids, impacting applications ranging from thermoelectric waste heat recovery to heat dissipation in electronics. From the microscopic perspective, interfacial phonon transport is described by transmission coefficients that link vibrational modes in the materials composing the interface. However, direct experimental determination of these coefficients is challenging because most experiments provide a mode-averaged interface conductance that obscures the microscopic detail. Here, we report a metrology to extract thermal phonon transmission coefficients at solid interfaces using ab initio phonon transport modeling and a thermal characterization technique, time-domain thermoreflectance. In combination with transmission electron microscopy characterization of the interface, our approach allows us to link the atomic structure of an interface to the spectral content of the heat crossing it. Our work provides a useful perspective on the microscopic processes governing interfacial heat conduction.

  9. Experimental metrology to obtain thermal phonon transmission coefficients at solid interfaces

    DOE PAGES

    Hua, Chengyun; Chen, Xiangwen; Ravichandran, Navaneetha K.; ...

    2017-05-17

    Interfaces play an essential role in phonon-mediated heat conduction in solids, impacting applications ranging from thermoelectric waste heat recovery to heat dissipation in electronics. From the microscopic perspective, interfacial phonon transport is described by transmission coefficients that link vibrational modes in the materials composing the interface. But direct experimental determination of these coefficients is challenging because most experiments provide a mode-averaged interface conductance that obscures the microscopic detail. Here, we report a metrology to extract thermal phonon transmission coefficients at solid interfaces using ab initio phonon transport modeling and a thermal characterization technique, time-domain thermoreflectance. In combination with transmission electron microscopy characterization of the interface, our approach allows us to link the atomic structure of an interface to the spectral content of the heat crossing it. This work provides a useful perspective on the microscopic processes governing interfacial heat conduction.

  10. Near infrared observations of S 155. Evidence of induced star formation?

    NASA Astrophysics Data System (ADS)

    Hunt, L. K.; Lisi, F.; Felli, M.; Tofani, G.

    In order to investigate the possible existence of embedded objects of recent formation in the area of the Cepheus B - Sh2-155 interface, the authors have observed the region of the compact radio continuum source with the new near-infrared camera ARNICA at the TIRGO telescope.

  11. Low-Cost Accelerometers for Physics Experiments

    ERIC Educational Resources Information Center

    Vannoni, Maurizio; Straulino, Samuele

    2007-01-01

    The implementation of a modern game-console controller as a data acquisition interface for physics experiments is discussed. The investigated controller is equipped with three perpendicular accelerometers and a built-in infrared camera to evaluate its own relative position. A pendulum experiment is realized as a demonstration of the proposed…

  12. Mobile app for chemical detection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klunder, Gregory; Cooper, Chadway R.; Satcher, Jr., Joe H.

    The present invention uses the camera of a mobile device (phone, iPad, etc.) to capture an image from a chemical test kit and processes the image to provide chemical information. A simple user interface enables automatic evaluation of the image, data entry, and GPS information, and maintains records from previous analyses.

  13. Commander Brand shaves in front of forward middeck lockers

    NASA Technical Reports Server (NTRS)

    1982-01-01

    Commander Brand, wearing shorts, shaves in front of forward middeck lockers using personal hygiene mirror assembly (assy). Open modular locker single tray assy, Field Sequential (FS) crew cabin camera, communications kit assy mini headset (HDST) and HDST interface unit (HIU), personal hygiene kit, and meal tray assemblies appear in view.

  14. Commander Truly on aft flight deck holding communication kit assembly (ASSY)

    NASA Image and Video Library

    1983-09-05

    STS008-04-106 (30 Aug-5 Sept 1983) --- On aft flight deck, Richard M. Truly, STS-8 commander, holds communication kit assembly (ASSY) headset (HDST) interface unit (HIU) and mini-HDST in front of the on orbit station. Hasselblad camera is positioned on overhead window W8.

  15. Single-Fiber Optical Link For Video And Control

    NASA Technical Reports Server (NTRS)

    Galloway, F. Houston

    1993-01-01

    Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode- or single-mode fiber types. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.

  16. FPGA Implementation of Stereo Disparity with High Throughput for Mobility Applications

    NASA Technical Reports Server (NTRS)

    Villalpando, Carlos Y.; Morfopolous, Arin; Matthies, Larry; Goldberg, Steven

    2011-01-01

    High speed stereo vision can allow unmanned robotic systems to navigate safely in unstructured terrain, but the computational cost can exceed the capacity of typical embedded CPUs. In this paper, we describe an end-to-end stereo computation co-processing system optimized for fast throughput that has been implemented on a single Virtex 4 LX160 FPGA. This system is capable of operating on images from a 1024 x 768 3CCD (true RGB) camera pair at 15 Hz. Data enters the FPGA directly from the cameras via Camera Link and is rectified, pre-filtered and converted into a disparity image all within the FPGA, incurring no CPU load. Once complete, a rectified image and the final disparity image are read out over the PCI bus, for a bandwidth cost of 68 MB/sec. Within the FPGA there are 4 distinct algorithms: Camera Link capture, bilinear rectification, bilateral subtraction pre-filtering, and Sum of Absolute Differences (SAD) disparity. Each module will be described in brief along with the data flow and control logic for the system. The system has been successfully fielded on Carnegie Mellon University's National Robotics Engineering Center (NREC) Crusher system during extensive field trials in 2007 and 2008 and is being implemented for other surface mobility systems at JPL.
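
    The SAD disparity stage can be illustrated in a few lines of software. The sketch below is a brute-force NumPy block matcher, not the pipelined FPGA implementation the paper describes; the window size and disparity range are illustrative values.

```python
import numpy as np

def box_sum(img, win):
    """Sliding-window sum over a win x win box, same-size output (edge padded)."""
    pad = win // 2
    p = np.pad(img, pad, mode="edge")
    c = p.cumsum(0).cumsum(1)
    c = np.pad(c, ((1, 0), (1, 0)))
    return c[win:, win:] - c[:-win, win:] - c[win:, :-win] + c[:-win, :-win]

def sad_disparity(left, right, max_disp=64, win=7):
    """For each left-image pixel, pick the disparity d minimizing the
    Sum of Absolute Differences over a win x win window."""
    h, w = left.shape
    best = np.full((h, w), np.inf)
    disp = np.zeros((h, w), dtype=np.int32)
    for d in range(max_disp):
        # cost of matching left column x against right column x - d
        diff = np.abs(left[:, d:].astype(np.float64)
                      - right[:, : w - d].astype(np.float64))
        cost = box_sum(diff, win)
        better = cost < best[:, d:]
        best[:, d:][better] = cost[better]
        disp[:, d:][better] = d
    return disp
```

    An FPGA version computes the same cost volume, but with all disparities evaluated in parallel pipelines rather than in a Python loop.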

  17. VERDEX: A virtual environment demonstrator for remote driving applications

    NASA Technical Reports Server (NTRS)

    Stone, Robert J.

    1991-01-01

    One of the key areas of the National Advanced Robotics Centre's enabling technologies research program is that of the human system interface, phase 1 of which started in July 1989 and is currently addressing the potential of virtual environments to permit intuitive and natural interactions between a human operator and a remote robotic vehicle. The aim of the first 12 months of this program (to September, 1990) is to develop a virtual human-interface demonstrator for use later as a test bed for human factors experimentation. This presentation will describe the current state of development of the test bed, and will outline some human factors issues and problems for more general discussion. In brief, the virtual telepresence system for remote driving has been designed to take the following form. The human operator will be provided with a helmet-mounted stereo display assembly, facilities for speech recognition and synthesis (using the Marconi Macrospeak system), and a VPL DataGlove Model 2 unit. The vehicle to be used for the purposes of remote driving is a Cybermotion Navmaster K2A system, which will be equipped with a stereo camera and microphone pair, mounted on a motorized high-speed pan-and-tilt head incorporating a closed-loop laser ranging sensor for camera convergence control (currently under contractual development). It will be possible to relay information to and from the vehicle and sensory system via an umbilical or RF link. The aim is to develop an interactive audio-visual display system capable of presenting combined stereo TV pictures and virtual graphics windows, the latter featuring control representations appropriate for vehicle driving and interaction using a graphical 'hand,' slaved to the flex and tracking sensors of the DataGlove and an additional helmet-mounted Polhemus IsoTrack sensor. 
Developments planned for the virtual environment test bed include transfer of operator control between remote driving and remote manipulation, dexterous end effector integration, virtual force and tactile sensing (also the focus of a current ARRL contract, initially employing a 14-pneumatic bladder glove attachment), and sensor-driven world modeling for total virtual environment generation and operator-assistance in remote scene interrogation.

  18. Astronauts Thornton & Akers on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-012 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (top) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  19. Latch of HST aft shroud photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-010 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  20. Astronauts Thornton & Akers on HST photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-05

    S61-E-014 (5 Dec 1993) --- This view of astronauts Kathryn C. Thornton (bottom) and Thomas D. Akers working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is teaming with Akers to install the +V2 Solar Array Panel as a replacement for the original one removed earlier. Akers uses tethers and a foot restraint to remain in position for the task. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  1. Latch of HST aft shroud photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-005 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  2. Latch of HST aft shroud photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-004 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  3. BacNet and Analog/Digital Interfaces of the Building Controls Virtual Testbed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nouidui, Thierry Stephane; Wetter, Michael; Li, Zhengwei

    2011-11-01

    This paper gives an overview of recent developments in the Building Controls Virtual Test Bed (BCVTB), a framework for co-simulation and hardware-in-the-loop. First, a general overview of the BCVTB is presented. Second, we describe the BACnet interface, a link which has been implemented to couple BACnet devices to the BCVTB. We present a case study where the interface was used to couple a whole building simulation program to a building control system to assess in real-time the performance of a real building. Third, we present the ADInterfaceMCC, an analog/digital interface that allows a USB-based analog/digital converter to be linked to the BCVTB. In a case study, we show how the link was used to couple the analog/digital converter to a building simulation model for local loop control.

  4. The Advanced Linked Extended Reconnaissance & Targeting Technology Demonstration project

    NASA Astrophysics Data System (ADS)

    Edwards, Mark

    2008-04-01

    The Advanced Linked Extended Reconnaissance & Targeting (ALERT) Technology Demonstration (TD) project is addressing many operational needs of the future Canadian Army's Surveillance and Reconnaissance forces. Using the surveillance system of the Coyote reconnaissance vehicle as an experimental platform, the ALERT TD project aims to significantly enhance situational awareness by fusing multi-sensor and tactical data, developing automated processes, and integrating beyond line-of-sight sensing. The project is exploiting important advances made in computer processing capability, displays technology, digital communications, and sensor technology since the design of the original surveillance system. As the major research area within the project, concepts are discussed for displaying and fusing multi-sensor and tactical data within an Enhanced Operator Control Station (EOCS). The sensor data can originate from the Coyote's own visible-band and IR cameras, laser rangefinder, and ground-surveillance radar, as well as from beyond line-of-sight systems such as mini-UAVs and unattended ground sensors. Video-rate image processing has been developed to assist the operator to detect poorly visible targets. As a second major area of research, automatic target cueing capabilities have been added to the system. These include scene change detection, automatic target detection and aided target recognition algorithms processing both IR and visible-band images to draw the operator's attention to possible targets. The merits of incorporating scene change detection algorithms are also discussed. In the area of multi-sensor data fusion, up to Joint Defence Labs level 2 has been demonstrated. The human factors engineering aspects of the user interface in this complex environment are presented, drawing upon multiple user group sessions with military surveillance system operators. The paper concludes with Lessons Learned from the project. 
The ALERT system has been used in a number of C4ISR field trials, most recently at Exercise Empire Challenge at China Lake, California, and at Trial Quest in Norway. Those exercises provided further opportunities to investigate operator interactions. It closes with recommendations for future work in operator interface design.
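
    At its simplest, scene change detection of the kind used for operator cueing reduces to robust frame differencing. The sketch below is a generic illustration, not the ALERT algorithms; the threshold factor and the MAD-based noise estimate are assumptions.

```python
import numpy as np

def change_mask(ref_frame, new_frame, k=5.0):
    """Flag pixels whose absolute difference from a reference frame exceeds
    k times a robust noise estimate (median absolute deviation)."""
    diff = np.abs(new_frame.astype(np.float64) - ref_frame.astype(np.float64))
    mad = np.median(np.abs(diff - np.median(diff)))
    sigma = 1.4826 * mad + 1e-6          # MAD -> Gaussian-equivalent sigma
    return diff > k * sigma
```

    A fielded system would add temporal filtering and morphological cleanup before drawing the operator's attention to a region.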

  5. Software solution for autonomous observations with H2RG detectors and SIDECAR ASICs for the RATIR camera

    NASA Astrophysics Data System (ADS)

    Klein, Christopher R.; Kubánek, Petr; Butler, Nathaniel R.; Fox, Ori D.; Kutyrev, Alexander S.; Rapchun, David A.; Bloom, Joshua S.; Farah, Alejandro; Gehrels, Neil; Georgiev, Leonid; González, J. Jesús; Lee, William H.; Lotkin, Gennadiy N.; Moseley, Samuel H.; Prochaska, J. Xavier; Ramirez-Ruiz, Enrico; Richer, Michael G.; Robinson, Frederick D.; Román-Zúñiga, Carlos; Samuel, Mathew V.; Sparr, Leroy M.; Tucker, Corey; Watson, Alan M.

    2012-07-01

    The Reionization And Transients InfraRed (RATIR) camera has been built for rapid Gamma-Ray Burst (GRB) followup and will provide quasi-simultaneous imaging in ugriZY JH. The optical component uses two 2048 × 2048 pixel Finger Lakes Imaging ProLine detectors, one optimized for the SDSS u, g, and r bands and one optimized for the SDSS i band. The infrared portion incorporates two 2048 × 2048 pixel Teledyne HgCdTe HAWAII-2RG detectors, one with a 1.7-micron cutoff and one with a 2.5-micron cutoff. The infrared detectors are controlled by Teledyne's SIDECAR (System for Image Digitization Enhancement Control And Retrieval) ASICs (Application Specific Integrated Circuits). While other ground-based systems have used the SIDECAR before, this system also utilizes Teledyne's JADE2 (JWST ASIC Drive Electronics) interface card and IDE (Integrated Development Environment). Here we present a summary of the software developed to interface the RATIR detectors with Remote Telescope System, 2nd Version (RTS2) software. RTS2 is an integrated open source package for remote observatory control under the Linux operating system and will autonomously coordinate observatory dome, telescope pointing, detector, filter wheel, focus stage, and dewar vacuum compressor operations. Where necessary we have developed custom interfaces between RTS2 and RATIR hardware, most notably for cryogenic focus stage motor drivers and temperature controllers. All detector and hardware interface software developed for RATIR is freely available and open source as part of the RTS2 distribution.

  6. Human-telerobot interactions - Information, control, and mental models

    NASA Technical Reports Server (NTRS)

    Smith, Randy L.; Gillan, Douglas J.

    1987-01-01

    A part of the NASA's Space Station will be a teleoperated robot (telerobot) with arms for grasping and manipulation, feet for holding onto objects, and television cameras for visual feedback. The objective of the work described in this paper is to develop the requirements and specifications for the user-telerobot interface and to determine through research and testing that the interface results in efficient system operation. The focus of the development of the user-telerobot interface is on the information required by the user, the user inputs, and the design of the control workstation. Closely related to both the information required by the user and the user's control of the telerobot is the user's mental model of the relationship between the control inputs and the telerobot's actions.

  7. Experiments in teleoperator and autonomous control of space robotic vehicles

    NASA Technical Reports Server (NTRS)

    Alexander, Harold L.

    1990-01-01

    A research program and strategy are described which include fundamental teleoperation issues and autonomous-control issues of sensing and navigation for satellite robots. The program consists of developing interfaces for visual operation and studying the consequences of interface designs as well as developing navigation and control technologies based on visual interaction. A space-robot-vehicle simulator is under development for use in virtual-environment teleoperation experiments and neutral-buoyancy investigations. These technologies can be utilized in a study of visual interfaces to address tradeoffs between head-tracking and manual remote cameras, panel-mounted and helmet-mounted displays, and stereoscopic and monoscopic display systems. The present program can provide significant data for the development of control experiments for autonomously controlled satellite robots.

  8. An off-the-shelf guider for the Palomar 200-inch telescope: interfacing amateur astronomy software with professional telescopes for an easy life

    NASA Astrophysics Data System (ADS)

    Clarke, Fraser; Lynn, James; Thatte, Niranjan; Tecza, Matthias

    2014-08-01

    We have developed a simple but effective guider for use with the Oxford-SWIFT integral field spectrograph on the Palomar 200-inch telescope. The guider uses mainly off-the-shelf components, including commercial amateur astronomy software to interface with the CCD camera, calculate guiding corrections, and send guide commands to the telescope. The only custom piece of software is a driver to provide an interface between the Palomar telescope control system and the industry standard 'ASCOM' system. Using existing commercial software provided a very cheap guider (<$5000) with minimal (<15 minutes) commissioning time. The final system provides sub-arcsecond guiding, and could easily be adapted to any other professional telescope.
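
    The guiding calculation itself is straightforward: measure the guide star's centroid, compare it to a reference position, and scale the offset into a telescope move. A minimal sketch, assuming a hypothetical plate scale, camera rotation angle, and loop gain (the actual system delegates this to commercial software and ASCOM):

```python
import numpy as np

def centroid(img):
    """Intensity-weighted centroid (x, y) of a background-subtracted star image."""
    img = np.clip(img - np.median(img), 0, None)   # crude background removal
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (xs * img).sum() / total, (ys * img).sum() / total

def guide_correction(img, ref_xy, plate_scale=0.25, angle_deg=0.0, gain=0.8):
    """Offset of the guide star from its reference position, converted to a
    telescope move in arcseconds (east, north).

    plate_scale : arcsec per pixel (hypothetical value)
    angle_deg   : camera rotation relative to the sky
    gain        : proportional loop gain, to avoid over-correcting
    """
    x, y = centroid(img)
    dx, dy = ref_xy[0] - x, ref_xy[1] - y          # pixels to move the star back
    a = np.radians(angle_deg)
    east = gain * plate_scale * (dx * np.cos(a) - dy * np.sin(a))
    north = gain * plate_scale * (dx * np.sin(a) + dy * np.cos(a))
    return east, north
```

    The gain below unity damps the loop so that seeing noise in the centroid does not get amplified into telescope jitter.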

  9. Interaction of rippled shock wave with flat fast-slow interface

    NASA Astrophysics Data System (ADS)

    Zhai, Zhigang; Liang, Yu; Liu, Lili; Ding, Juchun; Luo, Xisheng; Zou, Liyong

    2018-04-01

    The evolution of a flat air/sulfur-hexafluoride interface subjected to a rippled shock wave is investigated. Experimentally, the rippled shock wave is produced by diffracting a planar shock wave around solid cylinder(s), and the effects of the cylinder number and the spacing between cylinders on the interface evolution are considered. The flat interface is created by a soap film technique. The postshock flow and the evolution of the shocked interface are captured by a schlieren technique combined with a high-speed video camera. Numerical simulations are performed to provide more details of flows. The wave patterns of a planar shock wave diffracting around one cylinder or two cylinders are studied. The shock stability problem is analytically discussed, and the effects of the spacing between cylinders on shock stability are highlighted. The relationship between the amplitudes of the rippled shock wave and the shocked interface is determined in the single cylinder case. Subsequently, the interface morphologies and growth rates under different cases are obtained. The results show that the shock-shock interactions caused by multiple cylinders have significant influence on the interface evolution. Finally, a modified impulsive theory is proposed to predict the perturbation growth when multiple solid cylinders are present.
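
    For orientation, the classic (unmodified) Richtmyer impulsive model predicts a single-mode perturbation growth rate da/dt = k a0+ A+ ΔV from the wavenumber k, the post-shock amplitude a0+, the post-shock Atwood number A+, and the interface velocity jump ΔV. The values below are illustrative only, not the paper's data, and the paper's modified theory for multiple cylinders differs from this baseline.

```python
import math

def richtmyer_growth_rate(k, a0_post, atwood_post, delta_v):
    """Classic Richtmyer impulsive-model growth rate for a single-mode
    perturbation: da/dt = k * a0+ * A+ * dV (SI units)."""
    return k * a0_post * atwood_post * delta_v

# Illustrative air/SF6 numbers (assumed, not from the paper):
# wavelength 40 mm, post-shock amplitude 2 mm, A+ = 0.67, dV = 100 m/s
k = 2 * math.pi / 0.040
rate = richtmyer_growth_rate(k, 0.002, 0.67, 100.0)   # m/s
```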

  10. Contactless sub-millimeter displacement measurements

    NASA Astrophysics Data System (ADS)

    Sliepen, Guus; Jägers, Aswin P. L.; Bettonvil, Felix C. M.; Hammerschlag, Robert H.

    2008-07-01

    Weather effects on foldable domes, as used at the DOT and GREGOR, are investigated, in particular the correlation between the wind field and the stresses caused to both the metal framework and the tent clothing. Camera systems measure, without contact, the displacement of several dome points; the stresses follow from the measured deformation pattern. The cameras, placed near the dome floor, do not disturb telescope operations. In the set-ups of the DOT and GREGOR, these cameras are up to 8 meters away from the measured points and must be able to detect displacements of less than 0.1 mm. The cameras have a FireWire (IEEE 1394) interface to eliminate the need for frame grabbers. Each camera captures 15 images of 640 × 480 pixels per second. All data are processed on-site in real time. To get the best estimate of the displacement within the constraints of available processing power, all image processing is done in Fourier space, with all convolution operations pre-computed once. A sub-pixel estimate of the peak of the correlation function is made. This allows the images of four cameras to be processed on a single commodity PC with a dual-core processor, achieving an effective sensitivity of up to 0.01 mm. The deformation measurements are well correlated with the simultaneous wind measurements. The results are of high interest for upscaling the dome design (ELTs and solar telescopes).
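
    A common way to estimate such displacements in Fourier space is cross-correlation via the FFT, followed by a sub-pixel fit around the correlation peak. The sketch below uses a parabolic peak fit; the paper's implementation differs in detail (pre-computed convolutions, its own peak estimator), so treat this as a generic illustration of the technique.

```python
import numpy as np

def fourier_shift_estimate(ref, img):
    """Estimate the (dy, dx) translation of `img` relative to `ref` by
    cross-correlation in Fourier space, refined to sub-pixel precision
    with a parabolic fit through the three samples around the peak."""
    F = np.fft.fft2(ref)
    G = np.fft.fft2(img)
    corr = np.fft.ifft2(np.conj(F) * G).real      # peak lands at the shift
    py, px = np.unravel_index(np.argmax(corr), corr.shape)
    shifts = []
    for p, n, line in ((py, corr.shape[0], corr[:, px]),
                       (px, corr.shape[1], corr[py, :])):
        c_m, c0, c_p = line[(p - 1) % n], line[p], line[(p + 1) % n]
        denom = c_m - 2.0 * c0 + c_p
        frac = 0.5 * (c_m - c_p) / denom if denom != 0 else 0.0
        d = p + frac
        if d > n / 2:                             # wrap to a signed shift
            d -= n
        shifts.append(d)
    return tuple(shifts)                          # (dy, dx)
```

    Because the FFTs of the reference can be computed once, per-frame cost reduces to one forward FFT, one multiply, and one inverse FFT, which is what makes four cameras on one commodity PC feasible.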

  11. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites.

    PubMed

    Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin

    2015-12-05

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the bubble box, which overcomes these limitations by observing bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information.
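
    Once each calibrated view yields a silhouette ellipse, the two orthogonal projections can be combined into ellipsoid semi-axes and a volume. This is a deliberately simplified geometric sketch under the assumption of an axis-aligned ellipsoid, not the paper's full 3D fitting procedure:

```python
import math

def ellipsoid_from_views(view_xz, view_yz):
    """Combine silhouette ellipses from two orthogonal camera views into
    ellipsoid semi-axes (a, b, c) along x, y, z.

    view_xz : (semi_x, semi_z) from a camera looking along y
    view_yz : (semi_y, semi_z) from a camera looking along x
    The two independent z estimates are averaged. Lengths in mm.
    """
    a = view_xz[0]
    b = view_yz[0]
    c = 0.5 * (view_xz[1] + view_yz[1])
    return a, b, c

def bubble_volume(a, b, c):
    """Volume of an ellipsoid with semi-axes a, b, c (mm^3)."""
    return 4.0 / 3.0 * math.pi * a * b * c

def equivalent_radius(volume):
    """Radius of the sphere with the same volume (for size distributions)."""
    return (3.0 * volume / (4.0 * math.pi)) ** (1.0 / 3.0)
```

    The equivalent radius is a convenient single number for aggregating single-bubble measurements into size distributions.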

  12. The Bubble Box: Towards an Automated Visual Sensor for 3D Analysis and Characterization of Marine Gas Release Sites

    PubMed Central

    Jordt, Anne; Zelenka, Claudius; Schneider von Deimling, Jens; Koch, Reinhard; Köser, Kevin

    2015-01-01

    Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide-baseline stereo-camera deep-sea sensor, the bubble box, which overcomes these limitations by observing bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the automated processing steps (deblurring, detection, tracking, and 3D fitting) that are crucial to arrive at a 3D ellipsoidal shape and rise speed for each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large-scale acoustic surveys. We demonstrate and evaluate the wide-baseline stereo measurement model using a controlled test setup with ground truth information. PMID:26690168

  13. The Zwicky Transient Facility Camera

    NASA Astrophysics Data System (ADS)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

    The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  14. Flash LIDAR Emulator for HIL Simulation

    NASA Technical Reports Server (NTRS)

    Brewster, Paul F.

    2010-01-01

    NASA's Autonomous Landing and Hazard Avoidance Technology (ALHAT) project is building a system for detecting hazards and automatically landing controlled vehicles safely anywhere on the Moon. The Flash Light Detection And Ranging (LIDAR) sensor is used to create, on the fly, a 3D map of the unknown terrain for hazard detection. As part of the ALHAT project, a hardware-in-the-loop (HIL) simulation testbed was developed to test the data processing, guidance, and navigation algorithms in real-time to prove their feasibility for flight. Replacing the Flash LIDAR camera with an emulator in the testbed provided a cheaper, safer, more feasible way to test the algorithms in a controlled environment. This emulator must have the same hardware interfaces as the LIDAR camera, have the same performance characteristics, and produce images similar in quality to the camera. This presentation describes the issues involved and the techniques used to create a real-time flash LIDAR emulator to support HIL simulation.
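
    At its core, a flash-LIDAR emulator must turn a terrain model and a sensor pose into a per-pixel range image. The sketch below uses a crude flat-terrain ray approximation for a nadir-pointing sensor; the field of view, resolution, and noise level are illustrative assumptions, and a real emulator would march each ray against the terrain.

```python
import numpy as np

def emulate_flash_lidar(heightmap, cell_size, altitude,
                        fov_deg=20.0, n=128, noise_sigma=0.05, seed=0):
    """Synthesize one flash-LIDAR range image for a nadir-pointing sensor
    hovering at `altitude` above the center of a terrain heightmap.

    Approximation: each pixel's ray is intersected with the z = 0 plane to
    find its horizontal footprint, then range = (altitude - local height)
    / cos(off-axis angle). Valid for small FOV and gentle terrain.
    """
    rng = np.random.default_rng(seed)
    half = np.tan(np.radians(fov_deg / 2.0))
    t = np.linspace(-half, half, n)               # per-pixel ray tangents
    tx, ty = np.meshgrid(t, t)
    gx, gy = altitude * tx, altitude * ty         # footprint on z = 0 plane
    rows, cols = heightmap.shape
    i = np.clip((gy / cell_size + rows // 2).astype(int), 0, rows - 1)
    j = np.clip((gx / cell_size + cols // 2).astype(int), 0, cols - 1)
    h = heightmap[i, j]                           # nearest-cell terrain lookup
    cos_theta = 1.0 / np.sqrt(1.0 + tx**2 + ty**2)
    ranges = (altitude - h) / cos_theta
    return ranges + rng.normal(0.0, noise_sigma, ranges.shape)
```

    The HIL constraint is that frames like this must be produced at the camera's native rate and delivered over the camera's hardware interface, which is where most of the engineering effort goes.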

  15. A video processing method for convenient mobile reading of printed barcodes with camera phones

    NASA Astrophysics Data System (ADS)

    Bäckström, Christer; Södergård, Caj; Udd, Sture

    2006-01-01

    Efficient communication requires an appropriate choice and combination of media. The print media has succeeded in attracting audiences even in our electronic age because of its high usability. However, the limitations of print are self-evident. By finding ways of combining printed and electronic information into so-called hybrid media, the strengths of both media can be obtained. In hybrid media, paper functions as an interface to the web, integrating printed products into the connected digital world. This is a "reinvention" of printed matter, making it a more communicative technology. Hybrid media means that printed products can be updated in real time. Multimedia clips, personalization and e-shopping can be added as part of the interactive medium. The concept of enhancing print with interactive features has been around for years. However, the technology has so far been too restricting - people don't want to be tied to their PCs to read newspapers. Our solution is communicative and totally mobile. A code on paper or electronic media constitutes the link to mobility.

  16. KSC-07pd2214

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 crew members get a close look at hardware in Discovery's payload bay. The crew includes Commander Pamela A. Melroy, Pilot George D. Zamka and Mission Specialists Scott E. Parazynski, Douglas H. Wheelock, Stephanie D. Wilson and Paolo A. Nespoli, who is a European Space Agency astronaut from Italy. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  17. KSC-07pd2207

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 Mission Specialists Scott E. Parazynski and Paolo A. Nespoli (foreground) inspect tools they will use during the mission. Nespoli is a European Space Agency astronaut from Italy. Behind them are Mission Specialist Douglas H. Wheelock and Allison Bolinger, an EVA technician with NASA. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  18. KSC-07pd2211

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Discovery's payload bay in Orbiter Processing Facility bay 3, STS-120 crew members are getting hands-on experience with a winch that is used to manually close the payload bay doors in the event that becomes necessary. At center is Pilot George D. Zamka and at right is Expedition 16 Flight Engineer Daniel M. Tani. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  19. Mosad and Stream Vision For A Telerobotic, Flying Camera System

    NASA Technical Reports Server (NTRS)

    Mandl, William

    2002-01-01

    Two full custom camera systems using the Multiplexed OverSample Analog to Digital (MOSAD) conversion technology for visible-light sensing were built and demonstrated: a photo-gate sensor and a photo-diode sensor. Each system includes the camera assembly, a driver interface assembly, a frame grabber board with an integrated decimator, and Windows 2000-compatible software for real-time image display. An array size of 320×240 with 16-micron pixel pitch was developed for compatibility with 0.3-inch CCTV optics. With 1.2-micron technology, a 73% fill factor was achieved. Noise measurements indicated 9 to 11 bits in operation, with 13.7 bits in the best case. Power measured under 10 milliwatts at 400 samples per second. Nonuniformity variation was below the noise floor. Pictures were taken with different cameras during the characterization study to demonstrate the operable range. The successful conclusion of this program demonstrates the utility of MOSAD for NASA missions, providing superior performance over CMOS and lower cost and power consumption than CCD. The MOSAD approach also provides a path to radiation hardening for space-based applications.

  20. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high-resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 FireWire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light-emitting diodes, which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software's framework and provide details to guide users with development of this and similar software. PMID:21258475
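
    The frame-synchronized illumination described above implies a simple demultiplexing step on the host side. The function below is an illustrative sketch, not part of the SPLASSH package: assuming the LEDs strobe in a fixed repeating order, frames acquired under that sequence can be split into per-wavelength stacks.

```python
import numpy as np

def demultiplex(frames, n_channels):
    """Split an interleaved multispectral acquisition into per-wavelength
    stacks, assuming frame i was illuminated by LED channel i % n_channels."""
    frames = np.asarray(frames)
    return [frames[c::n_channels] for c in range(n_channels)]
```

    With three LEDs, frames 0, 3, 6, ... belong to channel 0, frames 1, 4, 7, ... to channel 1, and so on.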

  1. Remote gaze tracking system on a large display.

    PubMed

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-10-07

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
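
    The abstract does not specify how the NVC focus score is computed; a common choice for contrast-based autofocus is the variance of the image Laplacian, which drops as the image blurs. A minimal sketch under that assumption:

```python
import numpy as np

def focus_score(img):
    """Variance-of-Laplacian focus measure (a common stand-in; the paper's
    exact focus score is not given in the abstract)."""
    img = np.asarray(img, dtype=float)
    # 4-neighbour discrete Laplacian over the image interior
    lap = (-4 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()
```

    An autofocus loop would step the NVC lens and keep the position that maximizes this score on the detected eye region.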

  2. Remote Gaze Tracking System on a Large Display

    PubMed Central

    Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun

    2013-01-01

    We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°∼±0.775° and a speed of 5∼10 frames/s. PMID:24105351

  3. Performance Test Data Analysis of Scintillation Cameras

    NASA Astrophysics Data System (ADS)

    Demirkaya, Omer; Mazrou, Refaat Al

    2007-10-01

    In this paper, we present a set of image analysis tools to calculate the performance parameters of gamma camera systems from test data acquired according to the National Electrical Manufacturers Association NU 1-2001 guidelines. The calculation methods are either completely automated or require minimal user interaction, minimizing potential human errors. The developed methods are robust with respect to the varying conditions under which these tests may be performed. The core algorithms have been validated for accuracy and extensively tested on images acquired by gamma cameras from different vendors. All the algorithms are incorporated into a graphical user interface that provides a convenient way to process the data and report the results. The entire application has been developed in the MATLAB programming environment and is compiled to run as a stand-alone program. The developed image analysis tools provide an automated, convenient and accurate means to calculate the performance parameters of gamma cameras and SPECT systems. The developed application is available upon request for personal or non-commercial use. The results of this study have been partially presented at the Society of Nuclear Medicine Annual Meeting as an InfoSNM presentation.

  4. Experimental investigation of the displacement dynamics during biphasic flow in porous media

    NASA Astrophysics Data System (ADS)

    Ayaz, Monem; Toussaint, Renaud; Måløy, Knut-Jørgen; Schafer, Gerhard

    2016-04-01

    We experimentally study the interface dynamics of an immiscible fluid as it displaces a fully saturated porous medium. The system is confined by a vertically oriented Hele-Shaw cell with piezoelectric acoustic sensors mounted along the centerline. During drainage, potential surface energy is stored at the interface up to a given pressure threshold, at which an instability occurs: as new pores are invaded and the radius of curvature of the interface increases locally, the energy is released, and part of it is detectable as acoustic emission. By detecting pore-scale events emanating from the interface at various points, we look to develop techniques for localizing the displacement front. To assess the quality, optical monitoring is done using a high-speed camera. In our study we also aim to gain further insight into the interface dynamics by varying parameters such as the effective gravity and the invasion speed, and by using other methods of probing the system such as active tomography. We present here the preliminary results of this study.

  5. Statistical characterization of fluctuations of a laser beam transmitted through a random air-water interface: new results from a laboratory experiment

    NASA Astrophysics Data System (ADS)

    Majumdar, Arun K.; Land, Phillip; Siegenthaler, John

    2014-10-01

    New results for characterizing the intensity fluctuation statistics of a laser beam transmitted through a random air-water interface, relevant to underwater communications, are presented. A laboratory water-tank experiment is described to investigate the beam-wandering effects of the transmitted beam. Preliminary results from the experiment provide histograms of the probability density functions of intensity fluctuations for different wind speeds, measured by a CMOS camera for the transmitted beam. Angular displacements of the centroids of the fluctuating laser beam generate the beam-wander effects. This research develops a probabilistic model for optical propagation at the random air-water interface for a transmission case under different wind-speed conditions. Preliminary results for bit-error-rate (BER) estimates as a function of fade margin are presented for an on-off keying (OOK) optical communication system in which the random air-water interface is part of the communication channel.
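
    For context, the textbook static-channel baseline for OOK detection with additive Gaussian noise relates BER to the link quality factor Q; the paper's channel adds random fading at the air-water interface on top of this, so the sketch below is only the no-fading reference point, not the authors' model.

```python
import math

def ook_ber(q_factor):
    """Ideal OOK bit-error rate with Gaussian noise and a midpoint decision
    threshold: BER = 0.5 * erfc(Q / sqrt(2)). A fade margin effectively
    shifts the operating Q downward during a fade."""
    return 0.5 * math.erfc(q_factor / math.sqrt(2))
```

    For example, a link budget that preserves Q ≈ 6 through the worst-case fade keeps the BER near the conventional 1e-9 target.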

  6. Comparison and Evaluation of End-User Interfaces for Online Public Access Catalogs.

    ERIC Educational Resources Information Center

    Zumer, Maja

    End-user interfaces for the online public access catalogs (OPACs) of OhioLINK, a system linking major university and research libraries in Ohio, and its 16 member libraries, accessible through the Internet, are compared and evaluated from the user-oriented perspective. A common, systematic framework was used for the scientific observation of the…

  7. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest

    DOE PAGES

    Yang, Hualei; Yang, Xi; Heskel, Mary; ...

    2017-04-28

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.
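
    The camera-NDVI used above is the standard normalized difference of the NIR and visible bands. As a sketch (the band names are generic inputs, not the authors' calibration pipeline):

```python
import numpy as np

def camera_ndvi(nir, vis, eps=1e-12):
    """NDVI = (NIR - VIS) / (NIR + VIS), computed per pixel; eps guards
    against division by zero in dark pixels."""
    nir = np.asarray(nir, dtype=float)
    vis = np.asarray(vis, dtype=float)
    return (nir - vis) / (nir + vis + eps)
```

    Healthy vegetation reflects strongly in the NIR and absorbs in the visible, so leaf expansion drives the index upward through the season.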

  8. Seasonal variations of leaf and canopy properties tracked by ground-based NDVI imagery in a temperate forest

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yang, Hualei; Yang, Xi; Heskel, Mary

    Changes in plant phenology affect the carbon flux of terrestrial forest ecosystems due to the link between the growing season length and vegetation productivity. Digital camera imagery, which can be acquired frequently, has been used to monitor seasonal and annual changes in forest canopy phenology and track critical phenological events. However, quantitative assessment of the structural and biochemical controls of the phenological patterns in camera images has rarely been done. In this study, we used an NDVI (Normalized Difference Vegetation Index) camera to monitor daily variations of vegetation reflectance at visible and near-infrared (NIR) bands with high spatial and temporal resolutions, and found that the infrared camera based NDVI (camera-NDVI) agreed well with the leaf expansion process that was measured by independent manual observations at Harvard Forest, Massachusetts, USA. We also measured the seasonality of canopy structural (leaf area index, LAI) and biochemical properties (leaf chlorophyll and nitrogen content). Here we found significant linear relationships between camera-NDVI and leaf chlorophyll concentration, and between camera-NDVI and leaf nitrogen content, though weaker relationships between camera-NDVI and LAI. Therefore, we recommend ground-based camera-NDVI as a powerful tool for long-term, near surface observations to monitor canopy development and to estimate leaf chlorophyll, nitrogen status, and LAI.

  9. Interfacial Properties of Thin Films of Poly(vinyl ether)s with Architectural Design in Water

    NASA Astrophysics Data System (ADS)

    Oda, Yukari; Itagaki, Nozomi; Sugimoto, Sin; Kawaguchi, Daisuke; Matsuno, Hisao; Tanaka, Keiji

    Precise design of the primary structure and architecture of polymers leads to well-defined structure, unique physical properties, and excellent functions, not only in the bulk but also at interfaces. We here constructed functional polymer interfaces in water based on the architectural design of poly(vinyl ether)s with oxyethylene side-chains (POEVE). A branched polymer with POEVE parts was preferentially segregated at the air interface in a poly(methyl methacrylate) matrix. As an alternative way to prepare the POEVE surface, cross-linked hydrogel thin films were prepared. The moduli of the hydrogel films near the water interface, examined by force-distance curve measurements using atomic force microscopy, were highly sensitive to the cross-linking density of the polymers. Diffuse interfaces of POEVE chains at the water interface make it possible to prevent platelet adhesion on the films.

  10. Development of a Mobile User Interface for Image-based Dietary Assessment.

    PubMed

    Kim, Sungye; Schap, Tusarebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J; Ebert, David S; Boushey, Carol J

    2010-12-31

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records.

  11. The HST/WFC3 Quicklook Project: A User Interface to Hubble Space Telescope Wide Field Camera 3 Data

    NASA Astrophysics Data System (ADS)

    Bourque, Matthew; Bajaj, Varun; Bowers, Ariel; Dulude, Michael; Durbin, Meredith; Gosmeyer, Catherine; Gunning, Heather; Khandrika, Harish; Martlin, Catherine; Sunnquist, Ben; Viana, Alex

    2017-06-01

    The Hubble Space Telescope's Wide Field Camera 3 (WFC3) instrument, comprised of two detectors, UVIS (Ultraviolet-Visible) and IR (Infrared), has been acquiring ~ 50-100 images daily since its installation in 2009. The WFC3 Quicklook project provides a means for instrument analysts to store, calibrate, monitor, and interact with these data through the various Quicklook systems: (1) a ~ 175 TB filesystem, which stores the entire WFC3 archive on disk, (2) a MySQL database, which stores image header data, (3) a Python-based automation platform, which currently executes 22 unique calibration/monitoring scripts, (4) a Python-based code library, which provides system functionality such as logging, downloading tools, database connection objects, and filesystem management, and (5) a Python/Flask-based web interface to the Quicklook system. The Quicklook project has enabled large-scale WFC3 analyses and calibrations, such as the monitoring of the health and stability of the WFC3 instrument, the measurement of ~ 20 million WFC3/UVIS Point Spread Functions (PSFs), the creation of WFC3/IR persistence calibration products, and many others.

  12. Wireless multipoint communication for optical sensors in the industrial environment using the new Bluetooth standard

    NASA Astrophysics Data System (ADS)

    Hussmann, Stephan; Lau, Wing Y.; Chu, Terry; Grothof, Markus

    2003-07-01

    Traditionally, the measuring or monitoring systems of manufacturing industries use sensors, computers and screens for quality control (Q.C.). The acquired information is fed back to the control room by wires, which - for obvious reasons - are not suitable in many environments. This paper describes a method to solve this problem by employing the new Bluetooth technology to set up a completely new system in which a total wireless solution is made feasible. This new Q.C. system allows several line scan cameras to be connected at once to a graphical user interface (GUI) that can monitor the production process. There are many Bluetooth devices available on the market, such as cell phones, headsets, printers, PDAs, etc. However, the detailed application is a novel implementation in the industrial Q.C. area. This paper gives more details about the Bluetooth standard and why it is used (network topologies, host controller interface, data rates, etc.), the Bluetooth implementation in the microcontroller of the line scan camera, and the GUI and its features.

  13. Using a depth-sensing infrared camera system to access and manipulate medical imaging from within the sterile operating field.

    PubMed

    Strickland, Matt; Tremaine, Jamie; Brigley, Greg; Law, Calvin

    2013-06-01

    As surgical procedures become increasingly dependent on equipment and imaging, the need for sterile members of the surgical team to have unimpeded access to the nonsterile technology in their operating room (OR) is of growing importance. To our knowledge, our team is the first to use an inexpensive infrared depth-sensing camera (a component of the Microsoft Kinect) and software developed in-house to give surgeons a touchless, gestural interface with which to navigate their picture archiving and communication systems intraoperatively. The system was designed and developed with feedback from surgeons and OR personnel and with the principles of aseptic technique and gestural controls in mind. Simulation was used for basic validation before trialing in a pilot series of 6 hepatobiliary-pancreatic surgeries. The interface was used extensively in 2 laparoscopic and 4 open procedures. Surgeons primarily used the system for anatomic correlation, real-time comparison of intraoperative ultrasound with preoperative computed tomography and magnetic resonance imaging scans, and for teaching residents and fellows. The system worked well in a wide range of lighting conditions and procedures. It led to a perceived increase in the use of intraoperative image consultation. Further research should focus on investigating the usefulness of touchless gestural interfaces in different types of surgical procedures and their effects on operative time.

  14. HST High Gain Antennae photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-009 (4 Dec 1993) --- This view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC). The scene was downlinked to ground controllers soon after the Space Shuttle Endeavour caught up to the orbiting telescope 320 miles above Earth. Shown here before grapple, the HST was captured on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven STS-61 crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  15. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-002 (4 Dec 1993) --- This view, backdropped against the blackness of space, shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed from inside Endeavour's cabin with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view features the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  16. HST Solar Arrays photographed by Electronic Still Camera

    NASA Image and Video Library

    1993-12-04

    S61-E-003 (4 Dec 1993) --- This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  17. Generating Fast and Accurate Compliance Reports for Various Data Rates

    NASA Astrophysics Data System (ADS)

    Penugonda, Srinath

    As industry data rates have increased, there is a need for interoperable interfaces that function flawlessly. Adding to this complexity, the number of I/O data lines is also increasing, making interfaces more time-consuming to design and test. This in general leads to the creation of compliance standards to which interfaces must adhere. The goal of this thesis is to aid signal integrity engineers with a better and faster way of rendering a full picture of the interface compliance parameters. Three different interfaces at various data rates were chosen: 25 Gbps Very Short Reach (VSR), based on the Optical Internetworking Forum (OIF); Mobile Industry Processor Interface (MIPI), particularly for cameras, based on the MIPI Alliance, at up to 1.5 Gbps; and a passive Universal Serial Bus (USB) Type-C cable based on the USB organization, particularly generation I with a data rate of 10 Gbps. After a full understanding of each interface, complete end-to-end reports for each were developed with an easy-to-use user interface. A standard one-to-one comparison is done with commercially available software tools for the above-mentioned interfaces. The tools were developed in MATLAB and Python. Data was usually obtained by probing at the interconnect, from either an oscilloscope or a vector network analyzer.

  18. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 arcmin×4 arcmin on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extragalactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, and K astronomical bands, with very promising results.

  19. Skylab program. Earth resources experiment package. Sensor performance report. Volume 7 (S190B): SL2, SL3 and SL4 evaluations

    NASA Technical Reports Server (NTRS)

    Kenney, G. P.

    1974-01-01

    The S190B Earth Terrain Camera (ETC) operated acceptably for all of its scheduled EREP passes throughout the SL2 mission. The crew reported no problems in unstowing the camera, changing filters, installing the ETC window in the SAL, or installing the camera onto the window. The ETC was operated for a total of seven times with no failures. The clock on the ETC was checked on DOY 170 (June 19, 1973) and was found to be 30 min. and 58 sec. slower than GMT. The change in time was expected since a similar circumstance was experienced during ETC qualification testing for launch vibration. A leak existed in the seal of the spare magazine to the camera vacuum interface. For EREP passes 08 and 10, black-and-white film EK 3414 (roll no. 82) was installed in this spare magazine. Since there was an audible hiss, the vacuum hose was not connected to the camera. This caused the vacuum platen to be inoperable, resulting in some degradation in resolution for this roll of film. The vegetation of the South American jungle areas proved to be much darker than vegetation found in the United States, and was consequently about 1/2 stop underexposed in all cases.

  20. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
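
    The core of the downstream processing alluded to above is estimating, for each interrogation window, the particle displacement between the two frames of a straddled pair via cross-correlation. The following is a hedged sketch of that standard step (integer-pixel accuracy only, not the PIVPROC implementation, which would add subpixel peak fitting):

```python
import numpy as np

def displacement(window_a, window_b):
    """Estimate the integer-pixel shift between two interrogation windows
    using FFT-based circular cross-correlation."""
    a = np.asarray(window_a, dtype=float)
    b = np.asarray(window_b, dtype=float)
    a = a - a.mean()
    b = b - b.mean()
    # Cross-correlation theorem: corr = IFFT(conj(FFT(a)) * FFT(b))
    corr = np.fft.irfft2(np.conj(np.fft.rfft2(a)) * np.fft.rfft2(b), s=a.shape)
    dy, dx = np.unravel_index(int(np.argmax(corr)), corr.shape)
    ny, nx = a.shape
    # Wrap shifts into the signed range [-n/2, n/2)
    if dy > ny // 2:
        dy -= ny
    if dx > nx // 2:
        dx -= nx
    return int(dy), int(dx)
```

    Dividing the displacement by the laser pulse separation and the image magnification yields the local velocity vector.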

  1. Photon collider: a four-channel autoguider solution

    NASA Astrophysics Data System (ADS)

    Hygelund, John C.; Haynes, Rachel; Burleson, Ben; Fulton, Benjamin J.

    2010-07-01

    The "Photon Collider" uses a compact array of four off-axis autoguider cameras positioned with independent filtering and focus. The photon collider is two-way symmetric and robustly mounted, with the off-axis light crossing the science field, which allows the compact single-frame construction to have extremely small relative deflections between guide and science CCDs. The photon collider provides four independent guiding signals with a total of 15 square arcminutes of sky coverage. These signals allow for simultaneous altitude, azimuth, field-rotation and focus guiding. Guide cameras read out without exposure overhead, increasing the tracking cadence. The independent focus allows the photon collider to maintain in-focus guide stars when the main science camera is taking defocused exposures, as well as to track telescope focus changes. Independent filters allow autoguiding in the science camera's wavelength bandpass. The four cameras are controlled through a custom web-services interface from a single Linux-based industrial PC, and the autoguider mechanism and telemetry are built around a uCLinux-based Analog Devices Blackfin embedded microprocessor. Off-axis light is corrected with a custom meniscus correcting lens. Guide CCDs are cooled with ethylene glycol, with an advanced leak-detection system. The photon collider was built for use on Las Cumbres Observatory's 2-meter Faulkes telescopes and is currently used to guide the alt-az mount.

  2. Portable, stand-off spectral imaging camera for detection of effluents and residues

    NASA Astrophysics Data System (ADS)

    Goldstein, Neil; St. Peter, Benjamin; Grot, Jonathan; Kogan, Michael; Fox, Marsha; Vujkovic-Cvijin, Pajo; Penny, Ryan; Cline, Jason

    2015-06-01

    A new, compact and portable spectral imaging camera, employing a MEMS-based encoded imaging approach, has been built and demonstrated for detection of hazardous contaminants including gaseous effluents and solid-liquid residues on surfaces. The camera is called the Thermal infrared Reconfigurable Analysis Camera for Effluents and Residues (TRACER). TRACER operates in the long-wave infrared and has the potential to detect a wide variety of materials with characteristic spectral signatures in that region. The 30 lb. camera is tripod mounted and battery powered. A touch-screen control panel provides a simple user interface for most operations. The MEMS spatial light modulator is a Texas Instruments Digital Micromirror Device with custom electronics and firmware control. Simultaneous 1D-spatial and 1D-spectral dimensions are collected, with the second spatial dimension obtained by scanning the internal spectrometer slit. The sensor can be configured to collect data in several modes including full hyperspectral imagery using Hadamard multiplexing, panchromatic thermal imagery, and chemical-specific contrast imagery, switched with simple user commands. Matched filters and other analog filters can be generated internally on-the-fly and applied in hardware, substantially reducing detection time and improving SNR over HSI software processing, while reducing storage requirements. Results of preliminary instrument evaluation and measurements of flame exhaust are presented.
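    The Hadamard-multiplexing mode mentioned above has a well-known structure: each encoded measurement sums roughly half of the scene elements through a binary S-matrix mask, and the scene is recovered exactly by a linear decode. A minimal sketch under stated assumptions (a 7-element scene and the Sylvester-derived S-matrix are illustrative; TRACER's actual DMD mask patterns are not specified in the abstract):

```python
def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def s_matrix(m):
    """S-matrix of order m = n - 1: the core of the normalized Hadamard
    matrix of order n, remapped +1 -> 0, -1 -> 1 (rows are binary masks)."""
    H = hadamard(m + 1)
    return [[(1 - H[i][j]) // 2 for j in range(1, m + 1)] for i in range(1, m + 1)]

m = 7
S = s_matrix(m)                       # each 7-element mask opens 4 "slits"
x = [3, 1, 4, 1, 5, 9, 2]             # hypothetical scene intensities
# Encoded measurements: y = S x (each read sums the unmasked elements)
y = [sum(S[i][j] * x[j] for j in range(m)) for i in range(m)]
# Exact decode: x = (2 / (m + 1)) * (2 * S^T - J) * y
decoded = [2 * sum((2 * S[j][i] - 1) * y[j] for j in range(m)) / (m + 1)
           for i in range(m)]
```

For detector-noise-limited instruments this multiplexed measurement improves SNR over scanning a single mask element by roughly a factor of sqrt(m)/2, which is the advantage the abstract attributes to the hardware encoding.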

  3. Cloud cameras at the Pierre Auger Observatory

    NASA Astrophysics Data System (ADS)

    Winnick, Michael G.

    2010-06-01

    This thesis presents the results of measurements made by infrared cloud cameras installed at the Pierre Auger Observatory in Argentina. These cameras were used to record cloud conditions during operation of the observatory's fluorescence detectors. As cloud may affect the measurement of fluorescence from cosmic ray extensive air showers, the cloud cameras provide a record of which measurements have been interfered with by cloud. Several image processing algorithms were developed, along with a methodology for the detection of cloud within infrared images taken by the cloud cameras. A graphical user interface (GUI) was developed to expedite this, as a large number of images need to be checked for cloud. A cross-check between images recorded by three of the observatory's cloud cameras is presented, along with a comparison with independent cloud measurements made by LIDAR. Despite the cloud cameras and LIDAR observing different areas of the sky, a good agreement is observed in the measured cloud fraction between the two instruments, particularly on very clear and overcast nights. Cloud information recorded by the cloud cameras, with cloud height information measured by the LIDAR, was used to identify those extensive air showers that were obscured by cloud. These events were used to study the effectiveness of standard quality cuts at removing cloud afflicted events. Of all of the standard quality cuts studied in this thesis, the LIDAR cloud fraction cut was the most effective at preferentially removing cloud obscured events. A 'cloudy pixel' veto is also presented, whereby cloud obscured measurements are excluded during the standard hybrid analysis, and new extensive air shower reconstructed parameters determined. The application of such a veto would provide a slight increase to the number of events available for higher level analysis.

  4. CATE 2016 Indonesia: Camera, Software, and User Interface

    NASA Astrophysics Data System (ADS)

    Kovac, S. A.; Jensen, L.; Hare, H. S.; Mitchell, A. M.; McKay, M. A.; Bosh, R.; Watson, Z.; Penn, M.

    2016-12-01

    The Citizen Continental-America Telescopic Eclipse (Citizen CATE) Experiment will use a fleet of 60 identical telescopes across the United States to image the inner solar corona during the 2017 total solar eclipse. For a proof of concept, five sites were hosted along the path of totality during the 2016 total solar eclipse in Indonesia. Tanjung Pandan, Belitung, Indonesia was the first site to experience totality. This site had the best seeing conditions and focus, resulting in the highest-quality images, and proved that the equipment to be used is capable of recording high-quality images of the solar corona. Because 60 sites will be funded, each setup needs to be cost-effective. This requires us to use an inexpensive camera, which consequently has a small dynamic range. To compensate for the corona's intensity drop-off by a factor of 1,000, images are taken at seven frames per second, at exposures of 0.4 ms, 1.3 ms, 4.0 ms, 13 ms, 40 ms, 130 ms, and 400 ms. Using MATLAB software, we are able to capture a high dynamic range with an Arduino that controls the 2448 x 2048 CMOS camera. A major component of this project is to train average citizens to use the software, meaning it needs to be as user-friendly as possible. The CATE team is currently working with MathWorks to create a graphical user interface (GUI) that will make data collection run smoothly. This interface will include tabs for alignment, focus, calibration data, drift data, GPS, totality, and a quick-look function. This work was made possible through the National Solar Observatory Research Experiences for Undergraduates (REU) Program, which is funded by the National Science Foundation (NSF). The NSO Training for 2017 Citizen CATE Experiment, funded by NASA (NASA NNX16AB92A), also provided support for this project. The National Solar Observatory is operated by the Association of Universities for Research in Astronomy, Inc. (AURA) under cooperative agreement with the NSF.
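    The exposure-bracketing scheme described above can be illustrated with a simple merge: each frame is scaled by its exposure time so that all frames share a common radiance unit, and only unsaturated pixels contribute. A minimal sketch, assuming a 12-bit full scale and an unweighted average (both assumptions; the actual CATE MATLAB pipeline is not reproduced here):

```python
EXPOSURES_MS = [0.4, 1.3, 4.0, 13.0, 40.0, 130.0, 400.0]  # bracket from the text
SATURATION = 4095   # assumed 12-bit sensor full scale -- illustrative only

def merge_hdr(frames, exposures_ms, saturation=SATURATION):
    """Merge bracketed frames (one flat pixel list per exposure) into
    relative radiance: average counts-per-ms over unsaturated samples."""
    n_pix = len(frames[0])
    radiance = []
    for p in range(n_pix):
        num, den = 0.0, 0
        for frame, t in zip(frames, exposures_ms):
            v = frame[p]
            if v < saturation:          # discard clipped samples
                num += v / t            # scale each sample to counts/ms
                den += 1
        radiance.append(num / den if den else saturation / exposures_ms[0])
    return radiance
```

Discarding clipped samples is what lets the short exposures recover the bright inner corona while the long exposures pull the faint outer corona above the noise floor.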

  5. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. 
Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  6. Stereoscopic visualization and haptic technology used to create a virtual environment for remote surgery - biomed 2011.

    PubMed

    Bornhoft, J M; Strabala, K W; Wortman, T D; Lehman, A C; Oleynikov, D; Farritor, S M

    2011-01-01

    The objective of this research is to study the effectiveness of using a stereoscopic visualization system for performing remote surgery. The use of stereoscopic vision has become common with the advent of the da Vinci® system (Intuitive, Sunnyvale, CA). This system creates a virtual environment that consists of a 3-D display for visual feedback and haptic tactile feedback, together providing an intuitive environment for remote surgical applications. This study will use simple in vivo robotic surgical devices and compare the performance of surgeons using the stereoscopic interfacing system to the performance of surgeons using two-dimensional monitors. The stereoscopic viewing system consists of two cameras, two monitors, and four mirrors. The cameras are mounted to a multi-functional miniature in vivo robot and mimic the depth perception of the actual human eyes. This is done by placing the cameras at a calculated angle and distance apart. Live video streams from the left and right cameras are displayed on the left and right monitors, respectively. A system of angled mirrors allows the left and right eyes to see the video stream from the left and right monitor, respectively, creating the illusion of depth. The haptic interface consists of two PHANTOM Omni® (SensAble, Woburn, MA) controllers. These controllers measure the position and orientation of a pen-like end effector with three degrees of freedom. As the surgeon uses this interface, they see a 3-D image and feel force feedback for collisions and workspace limits. The stereoscopic viewing system has been used in several surgical training tests and shows a potential improvement in depth perception and 3-D vision. The haptic system accurately gives force feedback that aids in surgery. Both have been used in non-survival animal surgeries, and have successfully been used in suturing and gallbladder removal. Bench top experiments using the interfacing system have also been conducted. 
A group of participants completed two different surgical training tasks using both a two-dimensional visual system and the stereoscopic visual system. Results suggest that the stereoscopic visual system decreased the amount of time taken to complete the tasks. All participants also reported that the stereoscopic system was easier to use than the two-dimensional system. Haptic controllers combined with stereoscopic vision provide a more intuitive virtual environment. This system provides the surgeon with 3-D vision, depth perception, and the ability to receive feedback through forces applied in the haptic controller while performing surgery. These capabilities potentially enable the performance of more complex surgeries with a higher level of precision.

  7. Trained neurons-based motion detection in optical camera communications

    NASA Astrophysics Data System (ADS)

    Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho

    2018-04-01

    A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement, in the form of a centroid, through the OCC link via the camera. Unlike conventional trained-neurons approaches, the proposed TNMD is trained not with the motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performance at transmission distances of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC combined with the proposed TNMD can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart-home environment.

  8. Motmot, an open-source toolkit for realtime video acquisition and analysis.

    PubMed

    Straw, Andrew D; Dickinson, Michael H

    2009-07-22

    Video cameras sense passively from a distance, offer a rich information stream, and provide intuitively meaningful raw data. Camera-based imaging has thus proven critical for many advances in neuroscience and biology, with applications ranging from cellular imaging of fluorescent dyes to tracking of whole-animal behavior at ecologically relevant spatial scales. Here we present 'Motmot': an open-source software suite for acquiring, displaying, saving, and analyzing digital video in real-time. At the highest level, Motmot is written in the Python computer language. The large amounts of data produced by digital cameras are handled by low-level, optimized functions, usually written in C. This high-level/low-level partitioning and use of select external libraries allow Motmot, with only modest complexity, to perform well as a core technology for many high-performance imaging tasks. In its current form, Motmot allows for: (1) image acquisition from a variety of camera interfaces (package motmot.cam_iface), (2) the display of these images with minimal latency and computer resources using wxPython and OpenGL (package motmot.wxglvideo), (3) saving images with no compression in a single-pass, low-CPU-use format (package motmot.FlyMovieFormat), (4) a pluggable framework for custom analysis of images in realtime and (5) firmware for an inexpensive USB device to synchronize image acquisition across multiple cameras, with analog input, or with other hardware devices (package motmot.fview_ext_trig). These capabilities are brought together in a graphical user interface, called 'FView', allowing an end user to easily view and save digital video without writing any code. One plugin for FView, 'FlyTrax', which tracks the movement of fruit flies in real-time, is included with Motmot, and is described to illustrate the capabilities of FView. Motmot enables realtime image processing and display using the Python computer language. 
In addition to the provided complete applications, the architecture allows the user to write relatively simple plugins, which can accomplish a variety of computer vision tasks and be integrated within larger software systems. The software is available at http://code.astraw.com/projects/motmot.

  9. LinkWinds: An Approach to Visual Data Analysis

    NASA Technical Reports Server (NTRS)

    Jacobson, Allan S.

    1992-01-01

    The Linked Windows Interactive Data System (LinkWinds) is a prototype visual data exploration and analysis system resulting from a NASA/JPL program of research into graphical methods for rapidly accessing, displaying and analyzing large multivariate multidisciplinary datasets. It is an integrated multi-application execution environment allowing the dynamic interconnection of multiple windows containing visual displays and/or controls through a data-linking paradigm. This paradigm, which results in a system much like a graphical spreadsheet, is not only a powerful method for organizing large amounts of data for analysis, but provides a highly intuitive, easy to learn user interface on top of the traditional graphical user interface.

  10. A PIC microcontroller-based system for real-life interfacing of external peripherals with a mobile robot

    NASA Astrophysics Data System (ADS)

    Singh, N. Nirmal; Chatterjee, Amitava; Rakshit, Anjan

    2010-02-01

    The present article describes the development of a peripheral interface controller (PIC) microcontroller-based system for interfacing external add-on peripherals with a real mobile robot, for real life applications. This system serves as an important building block of a complete integrated vision-based mobile robot system, integrated indigenously in our laboratory. The system is composed of the KOALA mobile robot in conjunction with a personal computer (PC) and a two-camera-based vision system where the PIC microcontroller is used to drive servo motors, in interrupt-driven mode, to control additional degrees of freedom of the vision system. The performance of the developed system is tested by checking it under the control of several user-specified commands, issued from the PC end.

  11. DataHawk Flocks: Self-Contained sUAS Modules for High-Resolution Atmospheric Measurements

    DTIC Science & Technology

    2015-08-25

    …processor board, including 3 SPI, 3 I2C, 1 CAN, 6 UART, 8 analog, and 1 digital camera interface. 2.2 Flexibility in changing peripherals: …

  12. Medical imaging systems

    DOEpatents

    Frangioni, John V

    2013-06-25

    A medical imaging system provides simultaneous rendering of visible light and diagnostic or functional images. The system may be portable, and may include adapters for connecting various light sources and cameras in open surgical environments or laparoscopic or endoscopic environments. A user interface provides control over the functionality of the integrated imaging system. In one embodiment, the system provides a tool for surgical pathology.

  13. NAS (Host/ARTS) IIIA to VME Modem Interface ATC Interface Hardware Manual

    DOT National Transportation Integrated Search

    1990-10-01

    This document is reference material for personnel using the National Airspace : System (NAS) (HOST or ARTS IIIA) Air Traffic Control (ATC) Interface Subsystem. : It was originally developed to be part of the Data Link Test and Analysis System : (DATA...

  14. Uav Cameras: Overview and Geometric Calibration Benchmark

    NASA Astrophysics Data System (ADS)

    Cramer, M.; Przybilla, H.-J.; Zurhorst, A.

    2017-08-01

    Different UAV platforms and sensors are already used in mapping, many of them equipped with (sometimes modified) cameras as known from the consumer market. Even though these systems normally fulfil their requested mapping accuracy, the question arises: which system performs best? This calls for a benchmark to check selected UAV-based camera systems in well-defined, reproducible environments, and such a benchmark is attempted in this work. Nine different cameras used on UAV platforms, representing typical camera classes, are considered. The focus here is on geometry, which is tightly linked to the geometrical calibration of the system. In most applications the calibration is performed in-situ, i.e. calibration parameters are obtained as part of the project data itself. This is often motivated by the fact that consumer cameras do not keep constant geometry and thus cannot be considered metric cameras. Still, some of the commercial systems are quite stable over time, as has been proven by repeated (terrestrial) calibration runs. Already (pre-)calibrated systems may offer advantages, especially when the block geometry of the project does not allow for a stable and sufficient in-situ calibration; for such scenarios, close-to-metric UAV cameras may have advantages. Empirical airborne test flights in a calibration field have shown how block geometry influences the estimated calibration parameters and how consistently the parameters from lab calibration can be reproduced.

  15. Spectroscopic interpretation and velocimetry analysis of fluctuations in a cylindrical plasma recorded by a fast camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oldenbuerger, S.; Brandt, C.; Brochard, F.

    2010-06-15

    Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.

  16. Spectroscopic interpretation and velocimetry analysis of fluctuations in a cylindrical plasma recorded by a fast camera

    NASA Astrophysics Data System (ADS)

    Oldenbürger, S.; Brandt, C.; Brochard, F.; Lemoine, N.; Bonhomme, G.

    2010-06-01

    Fast visible imaging is used on a cylindrical magnetized argon plasma produced by thermionic discharge in the Mirabelle device. To link the information collected with the camera to a physical quantity, fast camera movies of plasma structures are compared to Langmuir probe measurements. High correlation is found between light fluctuations and plasma density fluctuations. Contributions from neutral argon and ionized argon to the overall light intensity are separated by using interference filters and a light intensifier. Light emitting transitions are shown to involve a metastable neutral argon state that can be excited by thermal plasma electrons, thus explaining the good correlation between light and density fluctuations. The propagation velocity of plasma structures is calculated by adapting velocimetry methods to the fast camera movies. The resulting estimates of instantaneous propagation velocity are in agreement with former experiments. The computation of mean velocities is discussed.

  17. Mach-zehnder based optical marker/comb generator for streak camera calibration

    DOEpatents

    Miller, Edward Kirk

    2015-03-03

    This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High speed recording devices are configured to record image or other data defining a high speed event. To calibrate and establish time reference, the markers or combs are indicia which serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) to be imaged on a streak camera for accurate time based calibration and time reference. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator. The M-Z modulator modulates the reference signal to a higher frequency optical signal which is output through a fiber coupled link to the streak camera.
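    The comb generation described above rests on the standard Mach-Zehnder intensity transfer function: biased at its null and driven with an RF sine, the modulator emits a constant-frequency train of narrow optical pulses. A minimal sketch (the half-wave voltage, drive frequency, and full-swing null-bias operating point are illustrative assumptions, not values from the patent):

```python
import math

V_PI = 5.0      # assumed modulator half-wave voltage (volts)
F_RF = 1.0e9    # assumed 1 GHz sinusoidal drive, which sets the comb rate

def mz_output(t, v_bias=V_PI, v_rf=V_PI):
    """Normalized M-Z intensity transfer I/I0 = cos^2(pi * V(t) / (2 * V_pi)),
    with V(t) = bias + RF sine. At the null bias (V_pi) and full drive swing,
    the output is a train of narrow pulses at 2 * F_RF -- the constant-frequency
    optical comb imaged on the streak camera for time calibration."""
    v = v_bias + v_rf * math.sin(2 * math.pi * F_RF * t)
    return math.cos(math.pi * v / (2 * V_PI)) ** 2
```

Null-bias operation also doubles the pulse rate relative to the drive frequency, since the transfer function is extinguished twice per RF cycle.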

  18. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night-vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  19. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    High-precision acquisition, tracking, and pointing (ATP) is one of the key techniques of laser communication. The spot-detecting camera is used to detect the direction of the beacon in the laser communication link, so that it can obtain the position information of the communication terminal for the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in satellite-to-earth laser communication ATP systems requires high precision in target detection: the positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the intensity of the beacon is moderate, the centroid calculation is precise. But the intensity of the beacon changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The output signal of the detector will be insufficient when the camera underexposes the beacon at low light intensity; conversely, the output will saturate when the camera overexposes the beacon at high light intensity. The accuracy of the centroid calculation degrades when the spot-detecting camera underexposes or overexposes, and the positioning accuracy of the camera is then markedly reduced. To maintain accuracy, space-based cameras should adjust exposure time in real time according to light intensity. An adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera in a space-based laser communication system is described, which utilizes the adaptive-exposure algorithm to adjust exposure time. 
Test results from the imaging experiment system verify the design, proving that it can prevent the loss of positioning accuracy caused by changes in light intensity; the camera thus maintains stable, high positioning accuracy during communication.
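    The centroid computation at the heart of this positioning scheme is the intensity-weighted mean of the spot's pixel coordinates; under- or over-exposure degrades it because pixel values sink into noise or clip at full scale, which is why the adaptive exposure loop matters. A minimal sketch (the thresholding and image layout are illustrative, not the flight camera's implementation):

```python
def spot_centroid(img, threshold=0):
    """Intensity-weighted centroid of a 2-D image given as a list of rows.
    Pixels at or below the threshold are ignored to suppress background."""
    m = mx = my = 0.0
    for y, row in enumerate(img):
        for x, v in enumerate(row):
            if v > threshold:
                m += v
                mx += v * x
                my += v * y
    if m == 0:
        raise ValueError("no signal above threshold")
    return mx / m, my / m

# Toy 3x3 frame: a spot spanning two pixels on the middle row.
img = [[0, 0, 0],
       [0, 4, 4],
       [0, 0, 0]]
cx, cy = spot_centroid(img)   # lands midway between the two bright pixels
```

Because the estimate is a ratio of weighted sums, clipped (saturated) pixels flatten the weighting and bias the result, while an underexposed spot leaves the numerator dominated by noise — both failure modes the adaptive-exposure control is designed to avoid.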

  20. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral-density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method, workable for a Class II visible-wavelength laser, is not acceptable to even consider attempting for a Class IV laser, and is not applicable to an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera includes power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. An additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.

  1. Stray light lessons learned from the Mars reconnaissance orbiter's optical navigation camera

    NASA Astrophysics Data System (ADS)

    Lowman, Andrew E.; Stauder, John L.

    2004-10-01

    The Optical Navigation Camera (ONC) is a technical demonstration slated to fly on NASA's Mars Reconnaissance Orbiter in 2005. Conventional navigation methods have reduced accuracy in the days immediately preceding Mars orbit insertion. The resulting uncertainty in spacecraft location limits rover landing sites to relatively safe areas, away from interesting features that may harbor clues to past life on the planet. The ONC will provide accurate navigation on approach for future missions by measuring the locations of the satellites of Mars relative to background stars. Because Mars will be a bright extended object just outside the camera's field of view, stray light control at small angles is essential. The ONC optomechanical design was analyzed by stray light experts and appropriate baffles were implemented. However, stray light testing revealed significantly higher levels of light than expected at the most critical angles. The primary error source proved to be the interface between ground glass surfaces (and the paint that had been applied to them) and the polished surfaces of the lenses. This paper will describe troubleshooting and correction of the problem, as well as other lessons learned that affected stray light performance.

  2. An affordable wearable video system for emergency response training

    NASA Astrophysics Data System (ADS)

    King-Smith, Deen; Mikkilineni, Aravind; Ebert, David; Collins, Timothy; Delp, Edward J.

    2009-02-01

    Many emergency response units are currently faced with restrictive budgets that prohibit their use of advanced technology-based training solutions. Our work focuses on creating an affordable, mobile, state-of-the-art emergency response training solution through the integration of low-cost, commercially available products. The system we have developed consists of tracking, audio, and video capability, coupled with other sensors that can all be viewed through a unified visualization system. In this paper we focus on the video sub-system which helps provide real time tracking and video feeds from the training environment through a system of wearable and stationary cameras. These two camera systems interface with a management system that handles storage and indexing of the video during and after training exercises. The wearable systems enable the command center to have live video and tracking information for each trainee in the exercise. The stationary camera systems provide a fixed point of reference for viewing action during the exercise and consist of a small Linux based portable computer and mountable camera. The video management system consists of a server and database which work in tandem with a visualization application to provide real-time and after action review capability to the training system.

  3. Dynamic graph system for a semantic database

    DOEpatents

    Mizell, David

    2016-04-12

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.
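    The mapping above — each entry's first, second, and third fields becoming a row index, a column index, and an element value — can be sketched as a compressed-sparse-row (CSR) build. This is an illustrative reconstruction, not the patented implementation; the function and variable names are invented.

```python
# Sketch: exposing (source, target, value) triples as a CSR-compressed
# sparse adjacency matrix. Row = source node, column = target node,
# element = link value, as in the abstract above.

def build_csr(triples, nodes):
    """Build CSR arrays (indptr, indices, data) from link triples."""
    index = {n: i for i, n in enumerate(nodes)}
    ordered = sorted(triples, key=lambda t: (index[t[0]], index[t[1]]))
    indptr = [0] * (len(nodes) + 1)
    indices, data = [], []
    for src, dst, val in ordered:
        indptr[index[src] + 1] += 1    # count entries per row
        indices.append(index[dst])
        data.append(val)
    for i in range(1, len(indptr)):    # prefix-sum counts into pointers
        indptr[i] += indptr[i - 1]
    return indptr, indices, data

triples = [("alice", "bob", "knows"), ("bob", "carol", "likes"),
           ("alice", "carol", "knows")]
nodes = ["alice", "bob", "carol"]
indptr, indices, data = build_csr(triples, nodes)
print(indptr)   # row pointers
print(indices)  # column index of each stored element
print(data)     # link values
```

    An application program can then walk row `i`'s links as `indices[indptr[i]:indptr[i+1]]`, which is what makes the compressed form practical for graph traversal.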

  4. Dynamic graph system for a semantic database

    DOEpatents

    Mizell, David

    2015-01-27

    A method and system in a computer system for dynamically providing a graphical representation of a data store of entries via a matrix interface is disclosed. A dynamic graph system provides a matrix interface that exposes to an application program a graphical representation of data stored in a data store such as a semantic database storing triples. To the application program, the matrix interface represents the graph as a sparse adjacency matrix that is stored in compressed form. Each entry of the data store is considered to represent a link between nodes of the graph. Each entry has a first field and a second field identifying the nodes connected by the link and a third field with a value for the link that connects the identified nodes. The first, second, and third fields represent the rows, columns, and elements of the adjacency matrix.

  5. Laser differential image-motion monitor for characterization of turbulence during free-space optical communication tests.

    PubMed

    Brown, David M; Juarez, Juan C; Brown, Andrea M

    2013-12-01

    A laser differential image-motion monitor (DIMM) system was designed and constructed as part of a turbulence characterization suite during the DARPA free-space optical experimental network experiment (FOENEX) program. The developed link measurement system measures the atmospheric coherence length (r0), atmospheric scintillation, and power in the bucket for the 1550 nm band. DIMM measurements are made with two separate apertures coupled to a single InGaAs camera. The angle of arrival (AoA) for the wavefront at each aperture can be calculated based on focal spot movements imaged by the camera. By utilizing a single camera for the simultaneous measurement of the focal spots, the correlation of the variance in the AoA allows a straightforward computation of r0 as in traditional DIMM systems. Standard measurements of scintillation and power in the bucket are made with the same apertures by redirecting a percentage of the incoming signals to InGaAs detectors integrated with logarithmic amplifiers for high sensitivity and high dynamic range. By leveraging two small apertures, the instrument forms a small size and weight configuration for mounting to actively tracking laser communication terminals for characterizing link performance.
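    The r0 computation from differential AoA variance can be sketched with the standard DIMM expression from the general literature (Sarazin-Roddier longitudinal form). The coefficients are textbook values, and the aperture geometry and variance below are invented example numbers, not FOENEX instrument parameters.

```python
# Hedged sketch of the classic DIMM estimate of the Fried parameter r0
# from the variance of longitudinal differential angle-of-arrival motion.
import math

def r0_longitudinal(var_l, wavelength, D, d):
    """r0 [m] from longitudinal differential AoA variance var_l [rad^2],
    aperture diameter D [m], and aperture separation d [m]."""
    k = 2.0 * wavelength**2 * (0.179 * D**(-1/3) - 0.0968 * d**(-1/3))
    return (k / var_l) ** (3.0 / 5.0)

wavelength = 1.55e-6      # 1550 nm band, as in the paper
D, d = 0.05, 0.15         # assumed 5 cm apertures spaced 15 cm apart
var_l = 1.0e-11           # example measured AoA variance [rad^2]
print(f"r0 = {r0_longitudinal(var_l, wavelength, D, d) * 100:.1f} cm")
```

    As expected from the physics, a larger measured variance (stronger turbulence) yields a smaller r0.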

  6. Statistical characterization of the optical interaction at a supercavitating interface

    NASA Astrophysics Data System (ADS)

    Walters, Gage; Kane, Tim; Jefferies, Rhett; Antonelli, Lynn

    2016-05-01

    The optical characteristics of an air/water interface have been widely studied for natural interface formations. However, the creation and management of artificial cavities creates a complicated interaction of gas and liquid that makes optical sensing and communication through the interface challenging. A ventilated cavity can reduce friction in underwater vehicles, but the resulting bubble drastically impedes optical and acoustic communication propagation. The complicated interaction at the air/water boundary yields surface waves and turbulence that make modeling and compensating of the optical properties difficult. Our experimental approach uses a narrow laser beam to probe the surface of the interface and measure the beam deflection and lensing effects. Using a vehicle model with a cavitator in a water tunnel, a laser beam is propagated outward from the model through the boundary and projected onto a target grid. The beam projection is captured using a high-speed camera, allowing us to measure and analyze beam shape and deflection. This approach has enabled us to quantify the temporal and spatial periodic variations in the beam propagation through the cavity boundary and fluid.
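    The deflection measurement described above can be summarized numerically: spot offsets on the target grid convert to deflection angles through the grid stand-off distance, and the temporal variation is then characterized statistically. The geometry and displacement samples below are invented for illustration, not values from the water-tunnel experiment.

```python
# Illustrative sketch: converting per-frame beam-spot offsets on a
# target grid into deflection angles, then summarizing them over time.
import math
import statistics

def deflection_angles(displacements_mm, distance_mm):
    """Beam deflection [mrad] for each frame's spot offset [mm],
    given the target-grid stand-off distance [mm]."""
    return [1000.0 * math.atan(d / distance_mm) for d in displacements_mm]

spots = [0.0, 0.4, -0.3, 0.8, -0.5, 0.2]   # spot offsets per frame [mm]
angles = deflection_angles(spots, distance_mm=500.0)
print(f"mean = {statistics.mean(angles):.3f} mrad, "
      f"std = {statistics.stdev(angles):.3f} mrad")
```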

  7. Shock tube Multiphase Experiments

    NASA Astrophysics Data System (ADS)

    Middlebrooks, John; Allen, Roy; Paudel, Manoj; Young, Calvin; Musick, Ben; McFarland, Jacob

    2017-11-01

    Shock driven multiphase instabilities (SDMI) are unique physical phenomena that have far-reaching practical applications in engineering and science. The instability is present in high energy explosions, scramjet combustors, and supernovae events. The SDMI arises when a multiphase interface is impulsively accelerated by the passage of a shockwave. It is similar in development to the Richtmyer-Meshkov (RM) instability; however, particle-to-gas coupling is the driving mechanism of the SDMI. As particle effects such as lag and phase change become more prominent, the SDMI's development begins to significantly deviate from the RM instability. We have developed an experiment for studying the SDMI in our shock tube facility. In our experiments, a multiphase interface is created using a laminar jet and flowed into the shock tube, where it is accelerated by the passage of a planar shockwave. The interface development is captured using CCD cameras synchronized with planar laser illumination. This talk will give an overview of new experiments conducted to examine the development of a shocked cylindrical multiphase interface. The effects of Atwood number, particle size, and a second acceleration (reshock) of the interface will be discussed.

  8. Colors of active regions on comet 67P

    NASA Astrophysics Data System (ADS)

    Oklay, N.; Vincent, J.-B.; Sierks, H.; Besse, S.; Fornasier, S.; Barucci, M. A.; Lara, L.; Scholten, F.; Preusker, F.; Lazzarin, M.; Pajola, M.; La Forgia, F.

    2015-10-01

    The OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) scientific imager (Keller et al. 2007) has been successfully delivering images of comet 67P/Churyumov-Gerasimenko from both its wide angle camera (WAC) and narrow angle camera (NAC) since ESA's Rosetta spacecraft arrived at the comet. Both cameras are equipped with filters covering the wavelength range of about 200 nm to 1000 nm. The comet nucleus is mapped with different combinations of the filters at resolutions up to 15 cm/px. Besides the determination of the surface morphology in great detail (Thomas et al. 2015), such high resolution images provided us a means to unambiguously link some activity in the coma to a series of pits on the nucleus surface (Vincent et al. 2015).

  9. Time Lapse Photography From Arctic Buoys

    NASA Astrophysics Data System (ADS)

    Valentic, T. A.; Matrai, P.; Woods, J. E.

    2013-12-01

    We have equipped a number of buoys with cameras that have been deployed throughout the Arctic. These systems need to be simple, reliable and low power. The images are transmitted over an Iridium satellite link and assembled into long-running movies. We have captured a number of interesting events, observed the ice dynamics through the year, and recorded visits by local wildlife. Each of the systems has been deployed for periods of up to a year, with images every hour. The cameras have proved to be a great outreach tool and are routinely watched by a number of people on our websites. This talk will present the techniques used in developing these camera systems, the methods used for reliably transmitting the images, and the process for generating the movies.

  10. High-Speed Edge-Detecting Line Scan Smart Camera

    NASA Technical Reports Server (NTRS)

    Prokop, Norman F.

    2012-01-01

    A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in a NASA Glenn Research Center developed inlet shock detection system. The inlet shock is detected by projecting a laser sheet through the airflow. The shock within the airflow is the densest part and refracts the laser sheet the most in its vicinity, leaving a dark spot or shadowgraph. These spots show up as a dip or negative peak within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time, outputting the shock location. Previously, a high-speed camera and personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock or negative edge location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface to include serial data such as RS-232/485, USB, Ethernet, or CAN BUS; parallel digital data; or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
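    The dip-finding step — locating the negative peak within the pixel intensity profile — can be sketched as follows. This is an illustrative software reconstruction, not the camera's actual analog/digital circuit, and the sub-pixel parabolic refinement is an assumption on my part.

```python
# Sketch: locate the shock shadowgraph as the deepest dip in a 1-D
# laser-sheet intensity profile, with parabolic sub-pixel refinement.

def shock_location(profile):
    """Return the sub-pixel index of the deepest dip in `profile`."""
    i = min(range(len(profile)), key=profile.__getitem__)
    if 0 < i < len(profile) - 1:
        a, b, c = profile[i - 1], profile[i], profile[i + 1]
        denom = a - 2 * b + c          # curvature of the 3-point parabola
        if denom != 0:
            return i + 0.5 * (a - c) / denom   # vertex offset from i
    return float(i)

# Synthetic laser-sheet profile: bright line with a dark dip at x = 12
profile = [200] * 30
profile[11], profile[12], profile[13] = 120, 60, 120
print(shock_location(profile))  # -> 12.0
```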

  11. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in the position of components, and a theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter that attaches a CCD camera with lens to the eyepiece of a Leica Wild T3000 Theodolite, enabling viewing on a connected monitor; it can thus be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  12. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    NASA Astrophysics Data System (ADS)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage of testing uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare accuracy in tracking, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.

  13. ePix: a class of architectures for second generation LCLS cameras

    DOE PAGES

    Dragone, A.; Caragiulo, P.; Markovic, B.; ...

    2014-03-31

    ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules, meeting different requirements, can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integration of detector modules with different size, shape or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra low noise applications, and ePix10k, for high dynamic range applications.

  14. A New Remote Sensing Filter Radiometer Employing a Fabry-Perot Etalon and a CCD Camera for Column Measurements of Methane in the Earth Atmosphere

    NASA Technical Reports Server (NTRS)

    Georgieva, E. M.; Huang, W.; Heaps, W. S.

    2012-01-01

    A portable remote sensing system for precision column measurements of methane has been developed, built and tested at NASA GSFC. The sensor covers the spectral range from 1.636 micrometers to 1.646 micrometers, employs an air-gapped Fabry-Perot filter and a CCD camera, and has the potential to operate from a variety of platforms. The detector is an XS-1.7-320 camera unit from Xenics Infrared Solutions with an uncooled InGaAs detector array working up to 1.7 micrometers. Custom software was developed in addition to the basic graphical user interface (X-Control) provided by the company to help save and process the data. The technique and setup can be used to measure other trace gases in the atmosphere with minimal changes to the etalon and the prefilter. In this paper we describe the calibration of the system using several different approaches.

  15. Flexcam Image Capture Viewing and Spot Tracking

    NASA Technical Reports Server (NTRS)

    Rao, Shanti

    2008-01-01

    Flexcam software was designed to allow continuous monitoring of the mechanical deformation of the telescope structure at Palomar Observatory. Flexcam allows the user to watch the motion of a star with a low-cost astronomical camera, to measure the motion of the star on the image plane, and to feed this data back into the telescope's control system. This automatic interaction between the camera and a user interface facilitates integration and testing. Flexcam is a CCD image capture and analysis tool for the ST-402 camera from Santa Barbara Instruments Group (SBIG). This program will automatically take a dark exposure and then continuously display corrected images. The image size, bit depth, magnification, exposure time, resolution, and filter are always displayed on the title bar. Flexcam locates the brightest pixel and then computes the centroid position of the pixels falling in a box around that pixel. This tool continuously writes the centroid position to a network file that can be used by other instruments.
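    The centroiding step described above — find the brightest pixel, then compute an intensity-weighted centroid of the pixels in a box around it — can be sketched as follows. The box size and the toy image are illustrative assumptions, not Flexcam's actual parameters.

```python
# Sketch of brightest-pixel + box-centroid spot tracking.

def centroid(image, box=3):
    """Intensity-weighted centroid (row, col) around the brightest pixel."""
    rows, cols = len(image), len(image[0])
    br, bc = max(((r, c) for r in range(rows) for c in range(cols)),
                 key=lambda rc: image[rc[0]][rc[1]])
    total = sx = sy = 0.0
    for r in range(max(0, br - box), min(rows, br + box + 1)):
        for c in range(max(0, bc - box), min(cols, bc + box + 1)):
            v = image[r][c]
            total += v
            sy += v * r      # weight row index by intensity
            sx += v * c      # weight column index by intensity
    return sy / total, sx / total

img = [[0] * 7 for _ in range(7)]
img[3][3], img[3][4], img[2][3] = 100, 50, 50   # star spread over pixels
print(centroid(img))  # -> (2.75, 3.25)
```

    Writing the returned (row, col) pair to a shared file each frame would reproduce the feedback path into the telescope control system that the abstract describes.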

  16. SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) on the South African Astronomical Observatory's 74-inch telescope

    NASA Astrophysics Data System (ADS)

    Crause, Lisa A.; Carter, Dave; Daniels, Alroy; Evans, Geoff; Fourie, Piet; Gilbank, David; Hendricks, Malcolm; Koorts, Willie; Lategan, Deon; Loubser, Egan; Mouries, Sharon; O'Connor, James E.; O'Donoghue, Darragh E.; Potter, Stephen; Sass, Craig; Sickafoose, Amanda A.; Stoffels, John; Swanevelder, Pieter; Titus, Keegan; van Gend, Carel; Visser, Martin; Worters, Hannah L.

    2016-08-01

    SpUpNIC (Spectrograph Upgrade: Newly Improved Cassegrain) is the extensively upgraded Cassegrain Spectrograph on the South African Astronomical Observatory's 74-inch (1.9-m) telescope. The inverse-Cassegrain collimator mirrors and woefully inefficient Maksutov-Cassegrain camera optics have been replaced, along with the CCD and SDSU controller. All moving mechanisms are now governed by a programmable logic controller, allowing remote configuration of the instrument via an intuitive new graphical user interface. The new collimator produces a larger beam to match the optically faster Folded-Schmidt camera design and nine surface-relief diffraction gratings offer various wavelength ranges and resolutions across the optical domain. The new camera optics (a fused silica Schmidt plate, a slotted fold flat and a spherically figured primary mirror, both Zerodur, and a fused silica field-flattener lens forming the cryostat window) reduce the camera's central obscuration to increase the instrument throughput. The physically larger and more sensitive CCD extends the available wavelength range; weak arc lines are now detectable down to 325 nm and the red end extends beyond one micron. A rear-of-slit viewing camera has streamlined the observing process by enabling accurate target placement on the slit and facilitating telescope focus optimisation. An interactive quick-look data reduction tool further enhances the user-friendliness of SpUpNIC.

  17. Bruce Grossan's Home Page

    Science.gov Websites

    Page fragments (website record): a Next Generation Rapid-Optical Response GRB Mission paper (ADS/arXiv links); a first Spitzer paper; a 2.1 micron (K' band) infrared image taken at the Wyoming IR Observatory (WIRO) on 4/16/94; and the largest, most sensitive map of a low-dust region made with the Spitzer 160 µm Camera.

  18. Improving land vehicle situational awareness using a distributed aperture system

    NASA Astrophysics Data System (ADS)

    Fortin, Jean; Bias, Jason; Wells, Ashley; Riddle, Larry; van der Wal, Gooitzen; Piacentino, Mike; Mandelbaum, Robert

    2005-05-01

    U.S. Army Research, Development, and Engineering Command (RDECOM) Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) has performed early work to develop a Distributed Aperture System (DAS). The DAS aims at improving the situational awareness of armored fighting vehicle crews under closed-hatch conditions. The concept is based on a plurality of sensors configured to create a day and night dome of surveillance coupled with heads-up displays slaved to the operator's head to give a "glass turret" feel. State-of-the-art image processing is used to produce multiple seamless hemispherical views simultaneously available to the vehicle commander, crew members and dismounting infantry. On-the-move automatic cueing of multiple moving/pop-up low silhouette threats is also performed, with the possibility to save, revisit, and share past events. As a first step in this development program, a contract was awarded to United Defense to further develop the Eagle Vision(TM) system. The second-generation prototype features two camera heads, each comprising four high-resolution (2048x1536) color sensors, and each covering a field of view of 270°hx150°v. High-bandwidth digital links interface the camera heads with a field programmable gate array (FPGA) based custom processor developed by Sarnoff Corporation. The processor computes the hemispherical stitch and warp functions required for real-time, low latency, immersive viewing (360°hx120°v, 30° down) and generates up to six simultaneous extended graphics array (XGA) video outputs for independent display either on a helmet-mounted display (with associated head tracking device) or a flat panel display (and joystick). The prototype is currently in its last stage of development and will be integrated on a vehicle for user evaluation and testing. Near-term improvements include the replacement of the color camera heads with a pixel-level fused combination of uncooled long wave infrared (LWIR) and low light level intensified imagery. It is believed that the DAS will significantly increase situational awareness by providing the users with a day and night, wide area coverage, immersive visualization capability.

  19. Smart mobility solution with multiple input output interface.

    PubMed

    Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya

    2017-07-01

    Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for giving input, lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as an input interface. An Emotiv EPOC+ headset is also used for motor imagery based input control, synced with facial expressions, in cases of extreme disability. Apart from traction, additional functions like home security and automation are provided using Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment, and that decent accuracy is obtained for the overall system.

  20. Graphical user interface for image acquisition and processing

    DOEpatents

    Goldberg, Kenneth A.

    2002-01-01

    An event-driven GUI-based image acquisition interface for the IDL programming environment, designed for CCD camera control and image acquisition directly into the IDL environment where image manipulation and data analysis can be performed, together with a toolbox of real-time analysis applications. Running the image acquisition hardware directly from IDL removes the necessity of first saving images in one program and then importing the data into IDL for analysis in a second step. Bringing the data directly into IDL creates an opportunity for the implementation of IDL image processing and display functions in real time. The program allows control over the available charge coupled device (CCD) detector parameters, data acquisition, file saving and loading, and image manipulation and processing, all from within IDL. The program is built using IDL's widget libraries to control the on-screen display and user interface.

  1. MS Massimino on aft flight deck during EVA 5

    NASA Image and Video Library

    2002-03-09

    STS109-E-5761 (9 March 2002) --- Astronaut Michael J. Massimino, STS-109 mission specialist, looks through an overhead window on the aft flight deck of the Space Shuttle Columbia during the crew’s final interface with the Hubble Space Telescope (HST). The telescope was released at 4:04 a.m. (CST). The image was recorded with a digital still camera.

  2. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few frames per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full; subsequent information is then lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  3. A Fluorescence Recovery After Photobleaching (FRAP) Technique for the Measurement of Solute Transport Across Surfactant-Laden Interfaces

    NASA Technical Reports Server (NTRS)

    Browne, Edward P.; Hatton, T. Alan

    1996-01-01

    The technique of Fluorescence Recovery After Photobleaching (FRAP) has been applied to the measurement of interfacial transport in two-phase systems. FRAP exploits the loss of fluorescence exhibited by certain fluorophores when over-stimulated (photobleached): a two-phase system, originally at equilibrium, can be perturbed by strong light from an argon-ion laser without disturbing the interface, and its recovery monitored by a microscope-mounted CCD camera as it relaxes to a new equilibrium. During this relaxation, the concentration profiles of the probe solute are measured on both sides of the interface as a function of time, yielding information about the transport characteristics of the system. To minimize the size of the meniscus between the two phases, a photolithography technique is used to selectively treat the glass walls of the cell in which the phases are contained. This allows concentration measurements to be made very close to the interface and increases the sensitivity of the FRAP technique.
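    The relaxation to a new equilibrium is typically reduced to a time constant. Below is a minimal sketch of fitting an exponential recovery, assuming a single-exponential model with a known plateau; this is a generic FRAP-style analysis, not necessarily the authors' method.

```python
# Sketch: estimate a recovery time constant tau from a FRAP-style curve
#   I(t) = I_inf - (I_inf - I0) * exp(-t / tau)
# by linear regression on ln(I_inf - I(t)), with I_inf the plateau.
import math

def fit_tau(times, intensities, i_inf):
    """Least-squares slope of ln(I_inf - I) vs t gives -1/tau."""
    ys = [math.log(i_inf - i) for i in intensities]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(times, ys))
             / sum((x - mx) ** 2 for x in times))
    return -1.0 / slope

# Synthetic recovery with tau = 2.0 s, bleached from 0.2 back to 1.0
tau, i0, i_inf = 2.0, 0.2, 1.0
times = [0.25 * k for k in range(20)]
data = [i_inf - (i_inf - i0) * math.exp(-t / tau) for t in times]
print(f"fitted tau = {fit_tau(times, data, i_inf):.3f} s")
```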

  4. Development of a Mobile User Interface for Image-based Dietary Assessment

    PubMed Central

    Kim, SungYe; Schap, TusaRebecca; Bosch, Marc; Maciejewski, Ross; Delp, Edward J.; Ebert, David S.; Boushey, Carol J.

    2011-01-01

    In this paper, we present a mobile user interface for image-based dietary assessment. The mobile user interface provides a front end to a client-server image recognition and portion estimation software. In the client-server configuration, the user interactively records a series of food images using a built-in camera on the mobile device. Images are sent from the mobile device to the server, and the calorie content of the meal is estimated. In this paper, we describe and discuss the design and development of our mobile user interface features. We discuss the design concepts, through initial ideas and implementations. For each concept, we discuss qualitative user feedback from participants using the mobile client application. We then discuss future designs, including work on design considerations for the mobile application to allow the user to interactively correct errors in the automatic processing while reducing the user burden associated with classical pen-and-paper dietary records. PMID:24455755

  5. Effect of Temperature Change on Interfacial Behavior of an Acoustically Levitated Droplet

    NASA Astrophysics Data System (ADS)

    Kawakami, Masanori; Abe, Yutaka; Kaneko, Akiko; Yamamoto, Yuji; Hasegawa, Koji

    2010-04-01

    Under a microgravity environment, new, high-quality materials with a homogeneous crystal structure are expected to be manufactured by undercooling solidification, since material manufacturing under microgravity is more static than under normal gravity. However, temperature changes at the interface of the material in space can affect the material processing. The purpose of the present study is to investigate the effect of temperature change on the interface of a large levitated droplet. A water droplet levitated by an acoustic standing wave is heated by a YAG laser. In order to achieve high absorbance of the laser, rhodamine 6G is dissolved in the droplet. The droplet diameter is from 4 to 5.5 mm. The deformation of the droplet interface is observed by a high-speed video camera. The temperature of the droplet is measured by a radiation thermometer. It is noticed that larger droplets under higher sound pressure tend to oscillate remarkably under laser heating.

  6. SOAR Optical Imager (SOI) | SOAR

    Science.gov Websites

    Website fragments: SPARTAN Near-IR Camera; Ohio State Infrared Imager/Spectrograph (OSIRIS), no longer available; ADS link to the SOI instrument SPIE paper. Last update: C. Briceño, Aug 23, 2017.

  7. 35. DETAIL VIEW OF GUNITEENCASED CONCRETE PILINGS AT BENT 6, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    35. DETAIL VIEW OF GUNITE-ENCASED CONCRETE PILINGS AT BENT 6, LOOKING SOUTHWEST (CAMERA AGAINST CHAIN-LINK FENCE) - Huntington Beach Municipal Pier, Pacific Coast Highway at Main Street, Huntington Beach, Orange County, CA

  8. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  9. Nekton Interaction Monitoring System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2017-03-15

    The software provides a real-time processing system for sonar to detect and track animals, and to extract water column biomass statistics in order to facilitate continuous monitoring of an underwater environment. The Nekton Interaction Monitoring System (NIMS) extracts and archives tracking and backscatter statistics data from a real-time stream of data from a sonar device. NIMS also sends real-time tracking messages over the network that can be used by other systems to generate other metrics or to trigger instruments such as an optical video camera. A web-based user interface provides remote monitoring and control. NIMS currently supports three popular sonar devices: M3 multi-beam sonar (Kongsberg), EK60 split-beam echo-sounder (Simrad) and BlueView acoustic camera (Teledyne).

  10. A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras

    NASA Astrophysics Data System (ADS)

    Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.

    2006-05-01

    A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial feature detection (eyes, nasal root, nose and mouth) is first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest-neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
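
    The classification step lends itself to a compact sketch. The toy 4-D "Gabor feature" prototypes below are illustrative stand-ins, not the module's actual vectors:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means identical direction
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def nearest_neighbor(query, prototypes):
    """Return the label of the prototype vector closest in cosine distance."""
    return min(prototypes, key=lambda label: cosine_distance(query, prototypes[label]))

# Toy 4-D "Gabor feature" prototypes for two expression classes (illustrative only).
protos = {
    "neutral": np.array([1.0, 0.0, 1.0, 0.0]),
    "happy":   np.array([0.0, 1.0, 0.0, 1.0]),
}
q = np.array([0.9, 0.1, 1.1, 0.0])
print(nearest_neighbor(q, protos))  # -> neutral
```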

  11. User interface for ground-water modeling: Arcview extension

    USGS Publications Warehouse

    Tsou, Ming‐shu; Whittemore, Donald O.

    2001-01-01

    Numerical simulation for ground-water modeling often involves handling large input and output data sets. A geographic information system (GIS) provides an integrated platform to manage, analyze, and display disparate data and can greatly facilitate modeling efforts in data compilation, model calibration, and display of model parameters and results. Furthermore, GIS can be used to generate information for decision making through spatial overlay and processing of model results. Arc View is the most widely used Windows-based GIS software that provides a robust user-friendly interface to facilitate data handling and display. An extension is an add-on program to Arc View that provides additional specialized functions. An Arc View interface for the ground-water flow and transport models MODFLOW and MT3D was built as an extension for facilitating modeling. The extension includes preprocessing of spatially distributed (point, line, and polygon) data for model input and postprocessing of model output. An object database is used for linking user dialogs and model input files. The Arc View interface utilizes the capabilities of the 3D Analyst extension. Models can be automatically calibrated through the Arc View interface by external linking to such programs as PEST. The efficient pre- and postprocessing capabilities and calibration link were demonstrated for ground-water modeling in southwest Kansas.

  12. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera; and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.
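
    The vignetting correction can be illustrated with a generic flat-field division, a minimal sketch rather than the exact correction used in the study:

```python
import numpy as np

def vignetting_correction(image, flat_field):
    """Divide out the lens falloff captured in a flat-field (uniform target) frame.

    The flat field is normalized so the image centre keeps its original level.
    This is a generic flat-field sketch, not the study's exact correction.
    """
    gain = flat_field / flat_field.max()
    return image / gain

# Synthetic example: a radial falloff applied to a uniform scene.
h = w = 64
y, x = np.mgrid[0:h, 0:w]
r = np.hypot(y - h / 2, x - w / 2) / (w / 2)
falloff = np.clip(1.0 - 0.5 * r**2, 0.1, 1.0)
scene = np.full((h, w), 100.0)
observed = scene * falloff

corrected = vignetting_correction(observed, flat_field=falloff)
print(np.allclose(corrected, scene))  # -> True
```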

  13. Stereoscopic 3D reconstruction using motorized zoom lenses within an embedded system

    NASA Astrophysics Data System (ADS)

    Liu, Pengcheng; Willis, Andrew; Sui, Yunfeng

    2009-02-01

    This paper describes a novel embedded system capable of estimating 3D positions of surfaces viewed by a stereoscopic rig consisting of a pair of calibrated cameras. Novel theoretical and technical aspects of the system stem from two design choices that deviate from typical stereoscopic reconstruction systems: (1) incorporation of a 10x zoom lens (Rainbow H10x8.5) and (2) implementation on an embedded system. The system components include a DSP running μClinux, an embedded version of the Linux operating system, and an FPGA. The DSP orchestrates data flow within the system and performs complex computational tasks, while the FPGA provides an interface to the system devices, which consist of a CMOS camera pair and a pair of servo motors that rotate (pan) each camera. Calibration of the camera pair is accomplished using a collection of stereo images that view a common chess board calibration pattern for a set of pre-defined zoom positions. Calibration settings for an arbitrary zoom setting are estimated by interpolation of the camera parameters. A low-computational-cost method for dense stereo matching is used to compute depth disparities for the stereo image pairs. Surface reconstruction is accomplished by classical triangulation of the matched points from the depth disparities. This article includes our methods and results for the following problems: (1) automatic computation of the focus and exposure settings for the lens and camera sensor, (2) calibration of the system for various zoom settings, and (3) stereo reconstruction results for several free-form objects.
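
    For a rectified stereo pair, the final triangulation step reduces to the classical disparity-to-depth relation. The numbers below are illustrative, not the system's calibration values:

```python
import numpy as np

def disparity_to_depth(disparity_px, focal_px, baseline_m):
    """Classical triangulation for a rectified stereo pair: Z = f * B / d.

    A zoom-dependent calibration, as in the paper, would supply focal_px
    for the current zoom setting; here it is simply assumed fixed.
    """
    d = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / d

# A 10 px disparity with an 800 px focal length and a 0.12 m baseline:
print(disparity_to_depth(10.0, focal_px=800.0, baseline_m=0.12))  # -> 9.6
```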

  14. Publishing Data on Physical Samples Using the GeoLink Ontology and Linked Data Platforms

    NASA Astrophysics Data System (ADS)

    Ji, P.; Arko, R. A.; Lehnert, K. A.; Song, L.; Carter, M. R.; Hsu, L.

    2015-12-01

    Interdisciplinary Earth Data Alliance (IEDA), one of the partners in the EarthCube GeoLink project, seeks to explore the extent to which the use of GeoLink's reusable Ontology Design Patterns (ODPs) and linked data platforms in the IEDA data infrastructure can make research data more easily accessible and valuable. Linked data for the System for Earth Sample Registration (SESAR) is IEDA's first effort to show how linked data enhance the presentation of the IEDA data system architecture. SESAR Linked Data maps each table and column in the SESAR database to an RDF class and property based on the GeoLink view, which builds on top of the GeoLink ODPs. D2RQ is then used to dump the contents of the SESAR database into RDF triples on the basis of the mapping results, and the dumped triples are loaded into GraphDB, an RDF graph database, as permanent data in the form of atomic facts expressed as subjects, predicates, and objects, which provide support for semantic interoperability between IEDA and other GeoLink partners. Finally, an integrated browsing and searching interface built on Callimachus, a highly scalable platform for publishing linked data, is introduced to make sense of the data stored in the triplestore. Drill-down and drill-through features are built into the interface to help users locate content efficiently. The drill-down feature enables users to explore beyond the summary information in the instance list of a specific class and into the detail on the specific instance page. The drill-through feature enables users to jump from one instance to another by clicking a link to the latter nested in the former's page. Additionally, an OpenLayers map is embedded in the interface to enhance the presentation of instances that have geospatial information. Furthermore, by linking instances in the SESAR datasets to matching or corresponding instances in external datasets, the presentation has been enriched with additional information about related classes such as person, cruise, etc.
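
    The table-and-column-to-RDF mapping can be sketched as follows. The column names and predicate URIs are hypothetical stand-ins, not the actual SESAR schema or GeoLink ODP terms:

```python
# Hypothetical SESAR-style sample row; the column names and the
# predicate URIs below are illustrative, not the project's actual mapping.
row = {"igsn": "IEATC0001", "sample_type": "Core",
       "latitude": 31.5, "longitude": -67.2}

BASE = "http://example.org/sample/"          # assumed subject namespace
GL = "http://example.org/geolink-pattern/"   # stand-in for GeoLink ODP terms

def row_to_triples(row):
    """D2RQ-style relational-to-RDF mapping: one subject per row,
    one (subject, predicate, object) triple per mapped column."""
    subject = BASE + row["igsn"]
    mapping = {
        "sample_type": GL + "hasSampleType",
        "latitude": GL + "hasLatitude",
        "longitude": GL + "hasLongitude",
    }
    return [(subject, pred, row[col]) for col, pred in mapping.items()]

triples = row_to_triples(row)
print(len(triples))  # -> 3
```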

  15. Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.

    2014-07-01

    The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system, for the medium sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high-voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost-efficient, but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the full-scale, 1764-pixel camera mechanics and a cooling system are ongoing. The paper describes the status of the project.

  16. KSC-07pd2199

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew takes a break from activities during their crew equipment interface test, or CEIT, to pose for a portrait in front of one of space shuttle Discovery's main engines. From left are Mission Specialist Scott E. Parazynski, Expedition 16 Flight Engineer Daniel M. Tani, Mission Specialist Stephanie D. Wilson, Commander Pamela A. Melroy, Mission Specialist Douglas H. Wheelock, Pilot George D. Zamka and Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  17. KSC-07pd2194

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Receiving instruction from Allison Bolinger, an EVA technician with NASA, under space shuttle Discovery in Orbiter Processing Facility bay 3 are, from left in blue flight suits, Mission Specialist Douglas H. Wheelock; Commander Pamela A. Melroy; Expedition 16 Flight Engineer Daniel M. Tani; Pilot George D. Zamka; and Mission Specialists Stephanie D. Wilson, Scott E. Parazynski and Paolo A. Nespoli, a European Space Agency astronaut from Italy. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  18. The effects of time delays on a telepathology user interface.

    PubMed Central

    Carr, D.; Hasegawa, H.; Lemmon, D.; Plaisant, C.

    1992-01-01

    Telepathology enables a pathologist to examine physically distant tissue samples by microscope operation over a communication link. Communication links can impose time delays which cause difficulties in controlling the remote device. Such difficulties were found in a microscope teleoperation system. Since the user interface is critical to pathologists' acceptance of telepathology, we redesigned the user interface for this system and built two different versions: a keypad interface, whose movement commands operated by specifying a start command followed by a stop command, and a trackball interface, whose movement commands were incremental and directly proportional to the rotation of the trackball. We then conducted a pilot study to determine the effect of time delays on the new user interfaces. In our experiment, the keypad was the faster interface when the time delay was short. There was no evidence to favor either the keypad or the trackball when the time delay was longer. Inexperienced participants benefitted from being able to move long distances over the microscope slide by dragging the field-of-view indicator on the touchscreen control panel. The experiment suggests that changes could be made to the trackball interface which would improve its performance. PMID:1482878

  19. A novel dual-camera calibration method for 3D optical measurement

    NASA Astrophysics Data System (ADS)

    Gai, Shaoyan; Da, Feipeng; Dai, Xianqiang

    2018-05-01

    A novel dual-camera calibration method is presented. In the classic methods, the camera parameters are usually calculated and optimized from the reprojection error. However, for a system designed for 3D optical measurement, this error does not reflect the quality of the 3D reconstruction. The presented method uses a planar calibration plate. First, images of the calibration plate are captured from several orientations within the measurement range, and the initial parameters of the two cameras are obtained from these images. Then, the rotation and translation matrices that link the frames of the two cameras are calculated using the Centroid Distance Increment Matrix method, which reduces the degree of coupling between the parameters. Next, the 3D coordinates of the calibration points are reconstructed by the space intersection method. Finally, the reconstruction error is calculated and minimized to optimize the calibration parameters. This error directly indicates the quality of the 3D reconstruction, and is thus more suitable for assessing dual-camera calibration. The experiments show that the proposed method is convenient and accurate: there is no strict requirement on the calibration plate position during calibration, and the accuracy is improved significantly.
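
    The space intersection and reconstruction-error steps can be sketched with a generic linear (DLT) triangulation, not the paper's Centroid Distance Increment Matrix method. The synthetic two-camera rig below is purely illustrative:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) space intersection of one point seen by two cameras.

    P1, P2: 3x4 projection matrices; x1, x2: pixel coordinates (u, v).
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A = homogeneous 3D point
    return X[:3] / X[3]

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Simple synthetic rig: two identity-intrinsic cameras 0.2 m apart.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-0.2], [0.0], [0.0]])])
X_true = np.array([0.3, 0.1, 2.0])

X_hat = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
reconstruction_error = np.linalg.norm(X_hat - X_true)
print(reconstruction_error < 1e-9)  # -> True
```

    The paper's key idea is to minimize exactly this kind of 3D reconstruction error, rather than the 2D reprojection error, when optimizing the calibration parameters.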

  20. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for the horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed around the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present information regarding the construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
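
    The depth-encoding idea can be sketched with a crude gradient-energy measure standing in for the paper's wavelet-coefficient ratios: with an astigmatic projector, horizontal and vertical pattern features blur at different depths, so the ratio varies with object distance.

```python
import numpy as np

def hv_detail_ratio(patch):
    """Ratio of vertical-edge to horizontal-edge detail energy in a patch.

    A crude stand-in for the paper's wavelet-coefficient ratios.
    """
    dx = np.diff(patch, axis=1)  # horizontal gradient: responds to vertical edges
    dy = np.diff(patch, axis=0)  # vertical gradient: responds to horizontal edges
    return np.sum(dx**2) / np.sum(dy**2)

# Synthetic patch: sharp vertical stripes plus blurred horizontal stripes,
# mimicking a pattern whose two orientations are focused at different depths.
x = np.linspace(0, 4 * np.pi, 32)
vertical = np.sign(np.sin(x))[None, :] * np.ones((32, 1))   # sharp square wave
horizontal = np.sin(x)[:, None] * np.ones((1, 32))          # smooth sinusoid
patch = vertical + horizontal

print(hv_detail_ratio(patch) > 1.0)  # -> True
```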

  1. The reliable multicast protocol application programming interface

    NASA Technical Reports Server (NTRS)

    Montgomery , Todd; Whetten, Brian

    1995-01-01

    The Application Programming Interface for the Berkeley/WVU implementation of the Reliable Multicast Protocol is described. This transport layer protocol is implemented as a user library that applications and software buses link against.

  2. ARNICA: the Arcetri Observatory NICMOS3 imaging camera

    NASA Astrophysics Data System (ADS)

    Lisi, Franco; Baffa, Carlo; Hunt, Leslie K.

    1993-10-01

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near infrared bands between 1.0 and 2.5 micrometers that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1 arcsec per pixel, with sky coverage of more than 4' x 4' on the NICMOS 3 (256 x 256 pixels, 40-micrometer pixel side) detector array. The optical path is compact enough to be enclosed in a 25.4 cm diameter dewar; the working temperature is 76 K. The camera is remotely controlled by a 486 PC, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the 486 PC, acquires and stores the frames, and controls the timing of the array. We give an estimate of performance, in terms of sensitivity with an assigned observing time, along with some details on the main parameters of the NICMOS 3 detector.

  3. Link Analysis in the Mission Planning Lab

    NASA Technical Reports Server (NTRS)

    McCarthy, Jessica A.; Cervantes, Benjamin W.; Daugherty, Sarah C.; Arroyo, Felipe; Mago, Divyang

    2011-01-01

    The legacy communications link analysis software currently used at Wallops Flight Facility involves processes that are different for command destruct, radar, and telemetry. There is a clear advantage to developing an easy-to-use tool that combines all the processes in one application. Link Analysis in the Mission Planning Lab (MPL) uses custom software and algorithms integrated with Analytical Graphics Inc. Satellite Toolkit (AGI STK). The MPL link analysis tool uses pre/post-mission data to conduct a dynamic link analysis between ground assets and the launch vehicle. Just as the legacy methods do, the MPL link analysis tool calculates signal strength and signal-to-noise ratio according to the accepted processes for command destruct, radar, and telemetry assets. Graphs and other custom data are generated rapidly in formats for reports and presentations. STK is used for analysis as well as to depict plume angles and antenna gain patterns in 3D. The MPL has developed two interfaces with the STK software (see figure). The first interface is an HTML utility, which was developed in Visual Basic to enhance analysis for plume modeling and to offer a more user-friendly, flexible tool. A graphical user interface (GUI) written in MATLAB (see figure upper right-hand corner) is also used to quickly depict link budget information for multiple ground assets. This new method yields a dramatic decrease in the time it takes to provide launch managers with the required link budgets to make critical pre-mission decisions. The software code used for these two custom utilities is a product of NASA's MPL.
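
    In its simplest free-space form, the signal-strength calculation at the core of such a tool is a Friis-style link budget. The asset parameters below are illustrative, not Wallops values:

```python
import math

def received_power_dbm(eirp_dbm, gr_dbi, freq_hz, range_m, losses_db=0.0):
    """Free-space link budget: Pr = EIRP + Gr - FSPL - losses (all in dB).

    FSPL(dB) = 20 log10(d) + 20 log10(f) + 20 log10(4*pi/c) ~ ... - 147.55.
    A textbook sketch of the kind of calculation a telemetry link analysis
    performs; the parameters used below are illustrative only.
    """
    fspl_db = 20 * math.log10(range_m) + 20 * math.log10(freq_hz) - 147.55
    return eirp_dbm + gr_dbi - fspl_db - losses_db

def snr_db(pr_dbm, noise_floor_dbm):
    return pr_dbm - noise_floor_dbm

# 10 W (40 dBm) EIRP S-band telemetry at 100 km into a 30 dBi ground antenna:
pr = received_power_dbm(eirp_dbm=40.0, gr_dbi=30.0, freq_hz=2.2e9, range_m=100e3)
print(round(snr_db(pr, noise_floor_dbm=-120.0), 1))  # -> 50.7
```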

  4. Deciding to Change OpenURL Link Resolvers

    ERIC Educational Resources Information Center

    Johnson, Megan; Leonard, Andrea; Wiswell, John

    2015-01-01

    This article will be of interest to librarians, particularly those in consortia that are evaluating OpenURL link resolvers. This case study contrasts WebBridge (an Innovative Interfaces product) and LinkSource (EBSCO's product). This study assisted us in the decision-making process of choosing an OpenURL link resolver that was sustainable to…

  5. Building a SuAVE browse interface to R2R's Linked Data

    NASA Astrophysics Data System (ADS)

    Clark, D.; Stocks, K. I.; Arko, R. A.; Zaslavsky, I.; Whitenack, T.

    2017-12-01

    The Rolling Deck to Repository program (R2R) is creating and evaluating a new browse portal based on the SuAVE platform and the R2R linked data graph. R2R manages the underway sensor data collected by the fleet of US academic research vessels, and provides a discovery and access point to those data at its website, www.rvdata.us. R2R has a database-driven search interface, but seeks a more capable and extensible browse interface that could be built on the substantial R2R linked data resources. R2R's Linked Data graph organizes its data holdings around key concepts (e.g. cruise, vessel, device type, operator, award, organization, publication), anchored by persistent identifiers where feasible. The "Survey Analysis via Visual Exploration" or SuAVE platform (suave.sdsc.edu) is a system for online publication, sharing, and analysis of images and metadata. It has been implemented as an interface to diverse data collections, but has not previously been driven by linked data. SuAVE supports several features of interest to R2R, including faceted searching, collaborative annotations, efficient subsetting, Google Maps-like navigation over an image gallery, and several types of data analysis. Our initial SuAVE-based implementation was through a CSV export from the R2R PostGIS-enabled PostgreSQL database. This served to demonstrate the utility of SuAVE but was static and required reloading as R2R data holdings grew. We are now working to implement a SPARQL-based ("RDF Query Language") service that directly leverages the R2R Linked Data graph and offers the ability to subset and/or customize output. We will show examples of SuAVE faceted searches on R2R linked data concepts, and discuss our experience to date with this work in progress.

  6. KA-102 Film/EO Standoff System

    NASA Astrophysics Data System (ADS)

    Turpin, Richard T.

    1984-12-01

    The KA-102 is an in-flight selectable film or electro-optic (EO) visible reconnaissance camera with a real-time data link. The lens is a 66-in., f/4 refractor with a 4° field of view. The focal plane is a continuous line array of 10,240 CCD elements that operates in the pushbroom mode. In the film mode, the camera uses standard 5-in.-wide 3414 or 3412 film. The EO imagery is transmitted up to 500 n.mi. to the ground station over a 75-Mbit/sec X-band data link via a relay aircraft (see Figure 1). The camera may be controlled from the ground station via an uplink or from the cockpit control panel. The 8-ft-diameter ground tracking antenna is located on high ground and linked to the ground station via a 1-mile-long, two-way fiber-optic system. In the ground station the imagery is calibrated and displayed in real time on three CRTs. Selected imagery may be stored on disk and enhanced, analyzed, and annotated in near real time. The imagery may be enhanced and magnified in real time. Hardcopy frames may be made on 8 x 10-in. Polaroid, 35-mm film, or dry silver paper. All the received image and engineering data are recorded on a high-density tape recorder. The aircraft track is recorded on a map plotter. Ground support equipment (GSE), manuals, spares, and training are included in the system. Falcon 20 aircraft were modified on a subcontract to Dynelectron-Ft. Worth.

  7. Human-computer interface glove using flexible piezoelectric sensors

    NASA Astrophysics Data System (ADS)

    Cha, Youngsu; Seo, Jeonggyu; Kim, Jun-Sik; Park, Jung-Min

    2017-05-01

    In this note, we propose a human-computer interface glove based on flexible piezoelectric sensors. We select polyvinylidene fluoride as the piezoelectric material for the sensors because of advantages such as a steady piezoelectric characteristic and good flexibility. The sensors are installed in a fabric glove by means of pockets and Velcro bands. We detect changes in the angles of the finger joints from the outputs of the sensors, and use them for controlling a virtual hand that is utilized in virtual object manipulation. To assess the sensing ability of the piezoelectric sensors, we compare the processed angles from the sensor outputs with the real angles from a camera recording. With good agreement between the processed and real angles, we successfully demonstrate the user interaction system with the virtual hand and interface glove based on the flexible piezoelectric sensors, for four hand motions: fist clenching, pinching, touching, and grasping.
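
    One simple model for recovering joint angles, assumed here for illustration and not taken from the paper, treats the PVDF output as proportional to angular rate (piezoelectric film responds to strain rate) and integrates it over time:

```python
import numpy as np

def angle_from_piezo(voltage, dt, gain_deg_per_vs):
    """Recover joint-angle change by integrating a rate-type sensor output.

    Assumes (for illustration only) that sensor voltage is proportional
    to angular velocity, with a known calibration gain.
    """
    v = np.asarray(voltage, dtype=float)
    return gain_deg_per_vs * np.cumsum(v) * dt

# A constant 0.5 V output for 1 s at 100 Hz with a gain of 90 deg/(V*s):
dt = 0.01
v = np.full(100, 0.5)
angles = angle_from_piezo(v, dt, gain_deg_per_vs=90.0)
print(round(angles[-1], 1))  # -> 45.0
```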

  8. Growth of benzil crystals by vertical dynamic gradient freeze technique in a transparent furnace

    NASA Astrophysics Data System (ADS)

    Lan, C. W.; Song, C. R.

    1997-09-01

    The vertical dynamic gradient freeze technique using a transparent furnace was applied to the growth of benzil single crystals. A flat-bottom ampoule with a <0001> seed was used for growth. During crystal growth, dynamic heating profiles were controlled through a computer, and the growth interface was recorded by a CCD camera. Computer simulation was also conducted, and the calculated convex interface and dynamic growth rate were consistent with the observed ones for various growth conditions. Conditions for growing single crystals were also determined, and they were mainly limited by constitutional supercooling. As the grown crystals were clear in appearance, their optical absorption spectra were insensitive to growth conditions and post-annealing.

  9. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high-speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852
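
    The bandwidth split between the low-speed full frame and the high-speed ROI amounts to simple pixel-rate accounting. The link budget and frame sizes below are illustrative, not the published system's specifications:

```python
def max_roi_fps(link_pixels_per_s, full_w, full_h, full_fps, roi_w, roi_h):
    """Split a fixed pixel-rate budget between a slow full-frame stream
    and a fast ROI stream, mirroring the camera's leftover-bandwidth scheme.

    All numbers are illustrative assumptions, not the system's specs.
    """
    leftover = link_pixels_per_s - full_w * full_h * full_fps
    if leftover <= 0:
        raise ValueError("full-frame stream alone exceeds the link budget")
    return leftover / (roi_w * roi_h)

# 40 Mpixel/s link, 640x480 full frames at 25 fps, 128x128 ROI:
print(int(max_roi_fps(40e6, 640, 480, 25, 128, 128)))  # -> 1972
```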

  10. Investigating Coulomb's Law.

    ERIC Educational Resources Information Center

    Noll, Ellis; Koehlinger, Mervin; Kowalski, Ludwik; Swackhamer, Gregg

    1998-01-01

    Describes the use of a computer-linked camera to demonstrate Coulomb's law. Suggests a way of reducing the difficulties in presenting Coulomb's law by teaching the inverse square law of gravity and the inverse square law of electricity in the same unit. (AIM)

  11. Refraction corrected calibration for aquatic locomotion research: application of Snell's law improves spatial accuracy.

    PubMed

    Henrion, Sebastian; Spoor, Cees W; Pieters, Remco P M; Müller, Ulrike K; van Leeuwen, Johan L

    2015-07-07

    Images of underwater objects are distorted by refraction at the water-glass-air interfaces and these distortions can lead to substantial errors when reconstructing the objects' position and shape. So far, aquatic locomotion studies have minimized refraction in their experimental setups and used the direct linear transform algorithm (DLT) to reconstruct position information, which does not model refraction explicitly. Here we present a refraction corrected ray-tracing algorithm (RCRT) that reconstructs position information using Snell's law. We validated this reconstruction by calculating 3D reconstruction error, the difference between the actual and reconstructed position of a marker. We found that reconstruction error is small (typically less than 1%). Compared with the DLT algorithm, the RCRT has overall lower reconstruction errors, especially outside the calibration volume, and errors are essentially insensitive to camera position and orientation and the number and position of the calibration points. To demonstrate the effectiveness of the RCRT, we tracked an anatomical marker on a seahorse recorded with four cameras to reconstruct the swimming trajectory for six different camera configurations. The RCRT algorithm is accurate and robust and it allows cameras to be oriented at large angles of incidence and facilitates the development of accurate tracking algorithms to quantify aquatic manoeuvres.
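
    The refraction correction rests on Snell's law; the standard vector-form construction (a textbook sketch, not the RCRT implementation itself) is:

```python
import math
import numpy as np

def refract(d, n, n1, n2):
    """Vector form of Snell's law: refract unit direction d at a surface
    with unit normal n (pointing toward the incident medium).

    Returns the refracted unit direction, or None for total internal reflection.
    """
    eta = n1 / n2
    cos_i = -np.dot(d, n)
    k = 1.0 - eta**2 * (1.0 - cos_i**2)
    if k < 0.0:
        return None  # total internal reflection
    return eta * d + (eta * cos_i - math.sqrt(k)) * n

# A ray hitting a horizontal air-water interface at 30 degrees from the normal:
theta_i = math.radians(30.0)
d = np.array([math.sin(theta_i), -math.cos(theta_i), 0.0])  # travelling downward
n = np.array([0.0, 1.0, 0.0])                               # up, toward the air
t = refract(d, n, n1=1.0, n2=1.33)
theta_t = math.degrees(math.asin(t[0]))
print(round(theta_t, 1))  # -> 22.1
```

    A refraction-corrected calibration traces each camera ray through such interface refractions before intersecting rays from multiple cameras, instead of assuming straight-line projection as the DLT does.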

  12. Effects of light refraction on the accuracy of camera calibration and reconstruction in underwater motion analysis.

    PubMed

    Kwon, Young-Hoo; Casebolt, Jeffrey B

    2006-01-01

    One of the most serious obstacles to accurate quantification of the underwater motion of a swimmer's body is image deformation caused by refraction. Refraction occurs at the water-air interface plane (glass) owing to the density difference. Camera calibration-reconstruction algorithms commonly used in aquatic research do not have the capability to correct this refraction-induced nonlinear image deformation and produce large reconstruction errors. The aim of this paper is to provide a thorough review of: the nature of the refraction-induced image deformation and its behaviour in underwater object-space plane reconstruction; the intrinsic shortcomings of the Direct Linear Transformation (DLT) method in underwater motion analysis; experimental conditions that interact with refraction; and alternative algorithms and strategies that can be used to improve the calibration-reconstruction accuracy. Although it is impossible to remove the refraction error completely in conventional camera calibration-reconstruction methods, it is possible to improve the accuracy to some extent by manipulating experimental conditions or calibration frame characteristics. Alternative algorithms, such as the localized DLT and the double-plane method are also available for error reduction. The ultimate solution for the refraction problem is to develop underwater camera calibration and reconstruction algorithms that have the capability to correct refraction.

  14. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.

    PubMed

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H

    2015-02-01

    Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a Wi-Fi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.

  15. CBM First-level Event Selector Input Interface Demonstrator

    NASA Astrophysics Data System (ADS)

    Hutter, Dirk; de Cuveland, Jan; Lindenstruth, Volker

    2017-10-01

    CBM is a heavy-ion experiment at the future FAIR facility in Darmstadt, Germany. The experiment features self-triggered front-end electronics and free-streaming read-out; event selection will be done exclusively by the First Level Event Selector (FLES). Designed as an HPC cluster with several hundred nodes, its task is the online analysis and selection of the physics data at a total input data rate exceeding 1 TByte/s. To allow efficient event selection, the FLES performs timeslice building, which combines the data from all given input links into self-contained, potentially overlapping processing intervals and distributes them to compute nodes. Partitioning the input data streams into specialized containers allows performing this task very efficiently. The FLES Input Interface defines the linkage between the FEE and the FLES data transport framework. A custom FPGA PCIe board, the FLES Interface Board (FLIB), is used to receive data via optical links and transfer them via DMA to the host's memory. The current prototype of the FLIB features a Kintex-7 FPGA and provides up to eight 10 GBit/s optical links. A custom FPGA design has been developed for this board. DMA transfers and data structures are optimized for subsequent timeslice building. Index tables generated by the FPGA enable fast random access to the written data containers. In addition, the DMA target buffers can directly serve as InfiniBand RDMA source buffers without copying the data. The usage of POSIX shared memory for these buffers allows data access from multiple processes. An accompanying HDL module has been developed to integrate the FLES link into the front-end FPGA designs. It implements the front-end logic interface as well as the link protocol. Prototypes of all Input Interface components have been implemented and integrated into the FLES test framework. This allows the implementation and evaluation of the foreseen CBM read-out chain.
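    The timeslice-building idea, grouping timestamped input into self-contained, overlapping intervals, can be sketched in a few lines. This is an illustrative simplification, not the FLES code; the parameter names and index layout are assumptions:

```python
def build_timeslices(stream, core=100, overlap=10):
    """Assign (timestamp, data) items to overlapping timeslices.

    Timeslice k covers [k*core, (k+1)*core + overlap): items falling in
    the first `overlap` ticks of an interval are duplicated into the
    previous slice, so every slice can be processed without needing
    data from its neighbours."""
    slices = {}
    for t, data in stream:
        k = t // core
        slices.setdefault(k, []).append((t, data))
        if k > 0 and t < k * core + overlap:
            slices.setdefault(k - 1, []).append((t, data))
    return slices
```

    The duplication at the boundaries is what makes each interval self-contained, at the cost of a small, tunable data overhead proportional to overlap/core.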

  16. Traffic monitoring with distributed smart cameras

    NASA Astrophysics Data System (ADS)

    Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert

    2012-01-01

    The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world co-ordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for all the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
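    For a planar road surface, the geometric calibration step that links detections to real-world coordinates amounts to applying a 3x3 image-to-ground homography, from which object speeds can then be estimated in a common reference frame. A sketch under that planarity assumption (the homography values and function names are illustrative, not the deployed system's calibration):

```python
import numpy as np

def pixel_to_world(H, u, v):
    """Map an image point (u, v) to ground-plane coordinates using a
    3x3 homography H (obtained offline from camera calibration)."""
    x, y, w = H @ np.array([u, v, 1.0])
    return x / w, y / w

def track_speed(H, track, dt):
    """Mean speed of a tracked object from consecutive image positions,
    assuming H maps pixels to metres and frames are dt seconds apart."""
    pts = np.array([pixel_to_world(H, u, v) for u, v in track])
    steps = np.linalg.norm(np.diff(pts, axis=0), axis=1)
    return steps.mean() / dt
```

    Because each camera's homography targets the same ground-plane coordinate system, detections from all three cameras land in one common reference frame.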

  17. Characterization of Vegetation using the UC Davis Remote Sensing Testbed

    NASA Astrophysics Data System (ADS)

    Falk, M.; Hart, Q. J.; Bowen, K. S.; Ustin, S. L.

    2006-12-01

    Remote sensing provides information about the dynamics of the terrestrial biosphere with continuous spatial and temporal coverage on many different scales. We present the design and construction of a suite of instrument modules and network infrastructure with size, weight and power constraints suitable for small-scale vehicles, anticipating vigorous growth in unmanned aerial vehicles (UAVs) and other mobile platforms. Our approach provides the rapid deployment and low-cost acquisition of aerial imagery for applications requiring high spatial resolution and frequent revisits. The testbed supports a wide range of applications, encourages remote sensing solutions in new disciplines and demonstrates the complete range of engineering knowledge required for the successful deployment of remote sensing instruments. The initial testbed is deployed on a Sig Kadet Senior remote-controlled plane. It includes an onboard computer with wireless radio, GPS, inertial measurement unit, 3-axis electronic compass and digital cameras. The onboard camera is either a RGB digital camera or a modified digital camera with red and NIR channels. Cameras were calibrated using selective light sources, an integrating sphere and a spectrometer, allowing for the computation of vegetation indices such as the NDVI. Field tests to date have investigated technical challenges in wireless communication bandwidth limits, automated image geolocation, and user interfaces; as well as image applications such as environmental landscape mapping focusing on Sudden Oak Death and invasive species detection, studies on the impact of bird colonies on tree canopies, and precision agriculture.
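    The NDVI mentioned above is a per-pixel combination of the red and NIR channels. A minimal sketch, assuming radiometrically calibrated, co-registered band arrays (array names are illustrative):

```python
import numpy as np

def ndvi(nir, red, eps=1e-9):
    """Normalized Difference Vegetation Index, (NIR - Red) / (NIR + Red),
    computed per pixel from co-registered float arrays; eps guards
    against division by zero over dark pixels."""
    nir = np.asarray(nir, float)
    red = np.asarray(red, float)
    return (nir - red) / (nir + red + eps)
```

    Healthy vegetation reflects strongly in the NIR and absorbs red light, so denser canopies push the index toward 1 while bare soil and water sit near or below 0.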

  18. QWIP technology for both military and civilian applications

    NASA Astrophysics Data System (ADS)

    Gunapala, Sarath D.; Kukkonen, Carl A.; Sirangelo, Mark N.; McQuiston, Barbara K.; Chehayeb, Riad; Kaufmann, M.

    2001-10-01

    Advanced thermal imaging infrared cameras have been a cost-effective and reliable method to obtain the temperature of objects. Quantum Well Infrared Photodetector (QWIP) based thermal imaging systems have advanced the state-of-the-art and are the most sensitive commercially available thermal systems. QWIP Technologies LLC, under exclusive agreement with Caltech, is currently manufacturing the QWIP-Chip(TM), a 320 x 256 element, bound-to-quasibound QWIP FPA. The camera performance falls within the long-wave IR band, spectrally peaked at 8.5 μm. The camera is equipped with a 32-bit floating-point digital signal processor combined with multi-tasking software, delivering a digital acquisition resolution of 12 bits at a nominal power consumption of less than 50 Watts. With a variety of video interface options, remote control capability via an RS-232 connection, and an integrated control driver circuit to support motorized zoom and focus-compatible lenses, this camera design has excellent application in both the military and commercial sectors. In the area of remote sensing, high-performance QWIP systems can be used for high-resolution target recognition as part of a new system of airborne platforms (including UAVs). Such systems also have direct application in law enforcement, surveillance, industrial monitoring and road hazard detection systems. This presentation will cover the current performance of the commercial QWIP cameras, conceptual platform systems and advanced image processing for use in both military remote sensing and civilian applications currently being developed in road hazard monitoring.

  19. Intelligent viewing control for robotic and automation systems

    NASA Astrophysics Data System (ADS)

    Schenker, Paul S.; Peters, Stephen F.; Paljug, Eric D.; Kim, Won S.

    1994-10-01

    We present a new system for supervisory automated control of multiple remote cameras. Our primary purpose in developing this system has been to provide capability for knowledge-based, 'hands-off' viewing during execution of teleoperation/telerobotic tasks. The reported technology has broader applicability to remote surveillance, telescience observation, automated manufacturing workcells, etc. We refer to this new capability as 'Intelligent Viewing Control (IVC),' distinguishing it from simple programmed camera motion control. In the IVC system, camera viewing assignment, sequencing, positioning, panning, and parameter adjustment (zoom, focus, aperture, etc.) are invoked and interactively executed in real-time by a knowledge-based controller, drawing on a priori known task models and constraints, including operator preferences. This multi-camera control is integrated with a real-time, high-fidelity 3D graphics simulation, which is correctly calibrated in perspective to the actual cameras and their platform kinematics (translation/pan-tilt). Such merged graphics-with-video design allows the system user to preview and modify the planned ('choreographed') viewing sequences. Further, during actual task execution, the system operator has available both the resulting optimized video sequence, as well as supplementary graphics views from arbitrary perspectives. IVC, including operator-interactive designation of robot task actions, is presented to the user as a well-integrated video-graphic single-screen user interface allowing easy access to all relevant telerobot communication/command/control resources. We describe and show pictorial results of a preliminary IVC system implementation for telerobotic servicing of a satellite.

  20. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

    The first live High Definition Television (HDTV) from a spacecraft was in November, 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovations of high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper will include: An update on radiation tolerance and performance of various camera types and sensors and ramifications on the future applicability of these types of cameras for space operations; Practical experience with downlinking very large imagery files with breaks in link coverage; Ramifications of larger camera resolutions like Ultra-High Definition, 6,000 [pixels] and 8,000 [pixels] in space applications; Enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, Optical Communications and Bayer Pattern Sensors and other similar innovations; Likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.

  1. Spatial capture–recapture with partial identity: An application to camera traps

    USGS Publications Warehouse

    Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.

    2018-01-01

    Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision, and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities—one of which is resolved to near certainty. The SPIM opens the door for the investigation of trapping designs that deviate from the standard two camera design, the combination of other data types between which identities cannot be deterministically linked, and can be extended to the problem of partial genotypes.

  2. Utilization of Open Source Technology to Create Cost-Effective Microscope Camera Systems for Teaching.

    PubMed

    Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur

    2018-01-01

    Open source technologies and mobile innovations have radically changed the way people interact with technology. These innovations and advancements have been used across various disciplines and already have a significant impact. Microscopy, with focus on visually appealing contrasting colors for better appreciation of morphology, forms the core of the disciplines such as Pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and in organizing interactive sessions for students or faculty of other departments. The cost of the original equipment manufacturer (OEM) camera systems in bringing this useful technology at all the locations is a limiting factor. To avoid this, we have used the low-cost technologies like Raspberry Pi, Mobile high definition link and 3D printing for adapters to create portable camera systems. Adopting these open source technologies enabled us to convert any binocular or trinocular microscope be connected to a projector or HD television at a fraction of the cost of the OEM camera systems with comparable quality. These systems, in addition to being cost-effective, have also provided the added advantage of portability, thus providing the much-needed flexibility at various teaching locations.

  3. Hyper Suprime-Cam: Camera dewar design

    NASA Astrophysics Data System (ADS)

    Komiyama, Yutaka; Obuchi, Yoshiyuki; Nakaya, Hidehiko; Kamata, Yukiko; Kawanomoto, Satoshi; Utsumi, Yousuke; Miyazaki, Satoshi; Uraguchi, Fumihiro; Furusawa, Hisanori; Morokuma, Tomoki; Uchida, Tomohisa; Miyatake, Hironao; Mineo, Sogo; Fujimori, Hiroki; Aihara, Hiroaki; Karoji, Hiroshi; Gunn, James E.; Wang, Shiang-Yu

    2018-01-01

    This paper describes the detailed design of the CCD dewar and the camera system which is a part of the wide-field imager Hyper Suprime-Cam (HSC) on the 8.2 m Subaru Telescope. On the 1.°5 diameter focal plane (497 mm in physical size), 116 four-side buttable 2 k × 4 k fully depleted CCDs are tiled with 0.3 mm gaps between adjacent chips, which are cooled down to -100°C by two pulse tube coolers with a capability to exhaust 100 W heat at -100°C. The design of the dewar is basically a natural extension of Suprime-Cam, incorporating some improvements such as (1) a detailed CCD positioning strategy to avoid any collision between CCDs while maximizing the filling factor of the focal plane, (2) a spherical washers mechanism adopted for the interface points to avoid any deformation caused by the tilt of the interface surface to be transferred to the focal plane, (3) the employment of a truncated-cone-shaped window, made of synthetic silica, to save the back focal space, and (4) a passive heat transfer mechanism to exhaust efficiently the heat generated from the CCD readout electronics which are accommodated inside the dewar. Extensive simulations using a finite-element analysis (FEA) method are carried out to verify that the design of the dewar is sufficient to satisfy the assigned errors. We also perform verification tests using the actually assembled CCD dewar to supplement the FEA and demonstrate that the design is adequate to ensure an excellent image quality which is key to the HSC. The details of the camera system, including the control computer system, are described as well as the assembling process of the dewar and the process of installation on the telescope.

  4. A comparison study of visually stimulated brain-computer and eye-tracking interfaces

    NASA Astrophysics Data System (ADS)

    Suefusa, Kaori; Tanaka, Toshihisa

    2017-06-01

    Objective. Brain-computer interfacing (BCI) based on visual stimuli detects the target on a screen on which a user is focusing. The detection of the gazing target can be achieved by tracking gaze positions with a video camera, which is called eye-tracking or eye-tracking interfaces (ETIs). The two types of interface have been developed in different communities. Thus, little work on a comprehensive comparison between these two types of interface has been reported. This paper quantitatively compares the performance of these two interfaces on the same experimental platform. Specifically, our study is focused on two major paradigms of BCI and ETI: steady-state visual evoked potential-based BCIs and dwelling-based ETIs. Approach. Recognition accuracy and the information transfer rate were measured by giving subjects the task of selecting one of four targets by gazing at it. The targets were displayed in three different sizes (with sides 20, 40 and 60 mm long) to evaluate performance with respect to the target size. Main results. The experimental results showed that the BCI was comparable to the ETI in terms of accuracy and the information transfer rate. In particular, when the size of a target was relatively small, the BCI had significantly better performance than the ETI. Significance. The results on which of the two interfaces works better in different situations would not only enable us to improve the design of the interfaces but would also allow for the appropriate choice of interface based on the situation. Specifically, one can choose an interface based on the size of the screen that displays the targets.
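    The information transfer rate reported in such BCI/ETI comparisons is usually computed with the Wolpaw formula; the abstract does not spell out the exact variant used, so the sketch below shows the common definition for N equally probable targets:

```python
import math

def itr_bits_per_selection(n_targets, accuracy):
    """Wolpaw information transfer rate per selection, in bits:
    log2(N) + P*log2(P) + (1-P)*log2((1-P)/(N-1)).
    Returns 0 at or below chance level."""
    n, p = n_targets, accuracy
    if p <= 1.0 / n:
        return 0.0
    b = math.log2(n) + p * math.log2(p)
    if p < 1.0:
        b += (1 - p) * math.log2((1 - p) / (n - 1))
    return b

def itr_bits_per_min(n_targets, accuracy, seconds_per_selection):
    """Scale the per-selection rate by the selection pace."""
    return itr_bits_per_selection(n_targets, accuracy) * 60.0 / seconds_per_selection
```

    With 4 targets, perfect accuracy yields 2 bits per selection, so a 2 s selection time corresponds to 60 bits/min; lower accuracy or slower selections reduce the rate for either interface type.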

  5. A serial digital data communications device. [for real time flight simulation

    NASA Technical Reports Server (NTRS)

    Fetter, J. L.

    1977-01-01

    A general-purpose computer peripheral device is reported that provides a full-duplex, serial, digital data transmission link between a Xerox Sigma computer and a wide variety of external equipment, including computers, terminals, and special-purpose devices. The interface has an extensive set of user-defined options to assist the user in establishing the necessary data links. This report describes those options and other features of the serial communications interface and its performance by discussing its application to a particular problem.

  6. Assessment of cockpit interface concepts for data link retrofit

    NASA Technical Reports Server (NTRS)

    Mccauley, Hugh W.; Miles, William L.; Dwyer, John P.; Erickson, Jeffery B.

    1992-01-01

    The problem is examined of retrofitting older generation aircraft with data link capability. The approach taken analyzes requirements for the cockpit interface, based on review of prior research and opinions obtained from subject matter experts. With this background, essential functions and constraints for a retrofit installation are defined. After an assessment of the technology available to meet the functions and constraints, candidate design concepts are developed. The most promising design concept is described in detail. Finally, needs for further research and development are identified.

  7. Semi-automated sorting using holographic optical tweezers remotely controlled by eye/hand tracking camera

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Keša, Peter; Nikorovič, Matej; Kaňka, Jan; Zemánek, Pavel

    2016-12-01

    We propose improved control software for holographic optical tweezers (HOT) suitable for simple semi-automated sorting. The controller receives data from both the human-interface sensors and the HOT microscope camera and processes them. As a result, the new positions of active laser traps are calculated, packed into the network format and sent to the remote HOT. Using the photo-polymerization technique, we created a sorting container consisting of two parallel horizontal walls, where one wall contains "gates" representing the place where a trapped particle enters the container. The positions of particles and gates are obtained by image analysis, which can be exploited to achieve a higher level of automation. Sorting is demonstrated both in a computer-game simulation and in a real experiment.

  8. Intelligent robotic tracker

    NASA Technical Reports Server (NTRS)

    Otaguro, W. S.; Kesler, L. O.; Land, K. C.; Rhoades, D. E.

    1987-01-01

    An intelligent tracker capable of robotic applications requiring guidance and control of platforms, robotic arms, and end effectors has been developed. This packaged system capable of supervised autonomous robotic functions is partitioned into a multiple processor/parallel processing configuration. The system currently interfaces to cameras but has the capability to also use three-dimensional inputs from scanning laser rangers. The inputs are fed into an image processing and tracking section where the camera inputs are conditioned for the multiple tracker algorithms. An executive section monitors the image processing and tracker outputs and performs all the control and decision processes. The present architecture of the system is presented with discussion of its evolutionary growth for space applications. An autonomous rendezvous demonstration of this system was performed last year. More realistic demonstrations in planning are discussed.

  9. Using computer graphics to design Space Station Freedom viewing

    NASA Technical Reports Server (NTRS)

    Goldsberry, B. S.; Lippert, B. O.; Mckee, S. D.; Lewis, J. L., Jr.; Mount, F. E.

    1989-01-01

    An important aspect of planning for Space Station Freedom at the United States National Aeronautics and Space Administration (NASA) is the placement of the viewing windows and cameras for optimum crewmember use. Researchers and analysts are evaluating the placement options using a three-dimensional graphics program called PLAID. This program, developed at the NASA Johnson Space Center (JSC), is being used to determine the extent to which the viewing requirements for assembly and operations are being met. A variety of window placement options in specific modules are assessed for accessibility. In addition, window and camera placements are analyzed to ensure that viewing areas are not obstructed by the truss assemblies, externally-mounted payloads, or any other station element. Other factors being examined include anthropometric design considerations, workstation interfaces, structural issues, and mechanical elements.

  10. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
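    Vignetting correction of the kind described is typically a flat-field division: each image is divided by a normalised reference frame of a uniformly lit target, removing the radial lens falloff. A minimal sketch (the study's exact correction model is not reproduced here):

```python
import numpy as np

def correct_vignetting(image, flat):
    """Divide out lens falloff using a flat-field frame (an image of a
    uniform target), normalised to its brightest pixel so the correction
    only brightens attenuated regions."""
    gain = np.asarray(flat, float)
    gain = gain / gain.max()
    return np.asarray(image, float) / gain
```

    After this per-pixel gain correction, band ratios such as vegetation indices become comparable across the frame, which matters most for the modified camera where the falloff was strongest.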

  11. Media independent interface. Interface control document

    NASA Technical Reports Server (NTRS)

    1987-01-01

    A Media Independent Interface (MII) is specified, using current standards in the industry. The MII is described in hierarchical fashion. At the base are IEEE/International Standards Organization (ISO) documents (standards) which describe the functionality of the software modules or layers and their interconnection. These documents describe primitives which are to traverse the MII. The intent of the MII is to provide a universal interface to one or more Media Access Controls (MACs) for the Logical Link Controller and Station Manager. This interface includes both a standardized electrical and mechanical interface and a standardized functional specification which defines the services expected from the MAC.

  12. The ExoMars PanCam Instrument

    NASA Astrophysics Data System (ADS)

    Griffiths, Andrew; Coates, Andrew; Muller, Jan-Peter; Jaumann, Ralf; Josset, Jean-Luc; Paar, Gerhard; Barnes, David

    2010-05-01

    The ExoMars mission has evolved into a joint European-US mission to deliver a trace gas orbiter and a pair of rovers to Mars in 2016 and 2018 respectively. The European rover will carry the Pasteur exobiology payload including the 1.56 kg Panoramic Camera (PanCam). PanCam will provide multispectral stereo images from Wide-Angle Cameras (WACs) with a 34 deg horizontal field-of-view (580 microrad/pixel), and colour monoscopic "zoom" images from a High Resolution Camera (HRC) with a 5 deg horizontal field-of-view (83 microrad/pixel). The stereo WACs are based on Beagle 2 Stereo Camera System heritage [1]. Integrated with the WACs and HRC into the PanCam optical bench (which helps the instrument meet its planetary protection requirements) is the PanCam Interface Unit (PIU), which provides image storage, a SpaceWire interface to the rover and DC-DC power conversion. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission [2] as well as providing multispectral geological imaging, colour and stereo panoramic images, and solar images for water vapour abundance and dust optical depth measurements. The HRC can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls. Additionally, the HRC will be used to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. In short, PanCam provides the overview and context for the ExoMars experiment locations, required to enable the exobiology aims of the mission. In addition to these baseline capabilities, further enhancements to PanCam are possible to increase its effectiveness for astrobiology and planetary exploration: 1. Rover Inspection Mirror (RIM); 2. Organics Detection by Fluorescence Excitation (ODFE) LEDs [3-6]; 3. UVIS broadband UV Flux and Opacity Determination (UVFOD) photodiode. This paper will discuss the scientific objectives and resource impacts of these enhancements. References: 1. Griffiths, A.D., Coates, A.J., Josset, J.-L., Paar, G., Hofmann, B., Pullan, D., Ruffer, P., Sims, M.R., Pillinger, C.T., The Beagle 2 stereo camera system, Planet. Space Sci. 53, 1466-1488, 2005. 2. Paar, G., Oberst, J., Barnes, D.P., Griffiths, A.D., Jaumann, R., Coates, A.J., Muller, J.P., Gao, Y., Li, R., 2007, Requirements and Solutions for ExoMars Rover Panoramic Camera 3d Vision Processing, abstract submitted to EGU meeting, Vienna, 2007. 3. Storrie-Lombardi, M.C., Hug, W.F., McDonald, G.D., Tsapin, A.I., and Nealson, K.H. 2001. Hollow cathode ion lasers for deep ultraviolet Raman spectroscopy and fluorescence imaging. Rev. Sci. Ins., 72 (12), 4452-4459. 4. Nealson, K.H., Tsapin, A., and Storrie-Lombardi, M. 2002. Searching for life in the universe: unconventional methods for an unconventional problem. International Microbiology, 5, 223-230. 5. Mormile, M.R. and Storrie-Lombardi, M.C. 2005. The use of ultraviolet excitation of native fluorescence for identifying biomarkers in halite crystals. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 246-253. 6. Storrie-Lombardi, M.C. 2005. Post-Bayesian strategies to optimize astrobiology instrument suites: lessons from Antarctica and the Pilbara. Astrobiology and Planetary Missions (R. B. Hoover, G. V. Levin and A. Y. Rozanov, Eds.), Proc. SPIE, 5906, 288-301.

  13. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 µm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which gives only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality and chief ray angle can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
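    The MTF measurement at the heart of this approach can be illustrated in miniature. The sketch below (a generic edge-based calculation, not the authors' implementation) differentiates an edge spread function to obtain the line spread function, then Fourier-transforms it:

```python
import numpy as np

def mtf_from_esf(esf):
    """MTF from an edge spread function: the line spread function is the
    derivative of the ESF; the MTF is the normalized magnitude of its FFT."""
    lsf = np.gradient(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]  # normalize so MTF(0) = 1

# Synthetic edges sampled over 64 pixels: one sharp, one defocused
x = np.linspace(-8, 8, 64)
m_sharp = mtf_from_esf(0.5 * (1 + np.tanh(x / 0.5)))
m_blur = mtf_from_esf(0.5 * (1 + np.tanh(x / 2.0)))
# the defocused edge transfers less contrast at nonzero frequencies
```

    In production testing the same idea is applied to measured edge or slit targets at several field positions, which is what yields the on- and off-axis MTF values mentioned above.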

  14. Innovative uses of GigaPan Technology for Onsite and Distance Education

    NASA Astrophysics Data System (ADS)

    Bentley, C.; Schott, R. C.; Piatek, J. L.; Richards, B.

    2013-12-01

    GigaPans are gigapixel panoramic images that can be viewed at a wide range of magnifications, allowing users to explore them in various degrees of detail from the smallest scale to the full image extent. In addition to panoramic images captured with the GigaPan camera mount (for example, 'Dry Falls', http://www.gigapan.com/gigapans/89093), users can also upload annotated images (for example, 'Massanutten sandstone slab with trace fossils (annotated)', http://www.gigapan.com/gigapans/124295) and satellite images (for example, 'Geology vs. Topography - State of Connecticut', http://www.gigapan.com/gigapans/111265). Panoramas with similar topics have been gathered together on the site in galleries, both user-generated and site-curated (for example, http://www.gigapan.com/galleries?categories=geology&page=1). Further innovations in display technology have also led to the development of improved viewers; for example, the annotations in the image linked above can be explored via paired viewers at http://coursecontent.nic.edu/bdrichards/gigapixelimages/callanview. GigaPan panoramas can be created through use of the GigaPan robotic camera mount and a digital camera (different models of the camera mount are available and work with a wide range of cameras). The camera mount can be used to create high-resolution pans ranging in scale from hand sample to outcrop up to landscape via the stitching software included with the robotic mount. The software can also be used to generate GigaPan images from other sources, such as thin section or satellite images, so these images can also be viewed with the online viewer. GigaPan images are typically viewed via a web-based interface that allows the user to interact with the image from the limits of the image detail up to the full panorama. After uploading, information can be added to panoramas with both text captions and geo-referencing (geo-located panoramas can then be viewed in Google Earth). 
Users can record specific locations and zoom levels in these images via "snapshots": these snapshots can direct others to the same location in the image as well as generate conversations with attached text comments. Users can also group related GigaPans by creating "galleries" of thematically related images (similar to photo albums). Gigapixel images can also be formatted for processing and viewing in an increasing number of platforms/modes as software vendors and internet browsers begin to provide 'add-in' support. This opens up opportunities for innovative adaptations for geoscience education (for example, http://coursecontent.nic.edu/bdrichards/gigapixelimages/dryfalls). Specific applications of these images for geoscience education include classroom activities and independent exercises that encourage students to take an active, inquiry-based approach to understanding geoscience concepts at multiple skill levels. GigaPans in field research serve as both records of field locations and additional datasets for detailed analyses, such as observing color changes or variations in grain size. Related GigaPans can also be presented together when embedded in webpages, useful for generating exercises for education purposes or for analyses of outcrops from the macro (landscape, outcrop) down to the micro scale (hand sample, thin section).
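    Because each panorama is stitched from overlapping frames, the shot count follows directly from the lens field of view and the desired overlap. A back-of-envelope planner (my own illustration; the GigaPan mount performs this calculation internally in its own way):

```python
import math

def frames_needed(range_deg, fov_deg, overlap=0.25):
    """Frames required to cover an angular range, given the per-frame
    field of view and the fractional overlap between neighbouring frames."""
    step = fov_deg * (1 - overlap)  # new angle covered by each extra frame
    return math.ceil((range_deg - fov_deg) / step) + 1

cols = frames_needed(180, 6.0)  # 180 deg pan with a 6 deg horizontal FOV
rows = frames_needed(60, 4.0)   # 60 deg tilt with a 4 deg vertical FOV
total = cols * rows             # grid of shots to stitch
```

    With these illustrative numbers the panorama needs a 40 x 20 grid of frames, which is why a robotic mount and automated stitching are essential at gigapixel scales.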

  15. Microwave Scanning System Correlations

    DTIC Science & Technology

    2010-08-11

    The following equipment is needed for each of the individual scanning systems: Handheld Scanner Equipment list 1. Dell Netbook (with the...proper software installed by Evisive) 2. Bluetooth USB port transmitter 3. Handheld Probe 4. USB to mini-USB Converter (links camera to netbook

  16. Orbiter CIU/IUS communications hardware evaluation

    NASA Technical Reports Server (NTRS)

    Huth, G. K.

    1979-01-01

    The DOD and NASA inertial upper stage (IUS) communication system design, hardware specifications and interfaces were analyzed to determine their compatibility with the Orbiter payload communications equipment (Payload Interrogator, Payload Signal Processors, Communications Interface Unit) and the Orbiter operational communications equipment (the S-band and Ku-band systems). Topics covered include: (1) IUS/shuttle Orbiter communications interface definition; (2) Orbiter avionics equipment serving the IUS; (3) IUS communication equipment; (4) IUS/shuttle Orbiter RF links; (5) STDN/TDRS S-band related activities; and (6) Communications Interface Unit/Orbiter interface issues. A test requirement plan overview is included.

  17. Controlled impact demonstration on-board (interior) photographic system

    NASA Technical Reports Server (NTRS)

    May, C. J.

    1986-01-01

    Langley Research Center (LaRC) was responsible for the design, manufacture, and integration of all hardware required for the photographic system used to film the interior of the controlled impact demonstration (CID) B-720 aircraft during actual crash conditions. Four independent power supplies were constructed to operate the ten high-speed 16 mm cameras and twenty-four floodlights. An up-link command system, furnished by Ames Dryden Flight Research Facility (ADFRF), was necessary to activate the power supplies and start the cameras. These events were accomplished by initiation of relays located on each of the photo power pallets. The photographic system performed beyond expectations. All four power distribution pallets with their 20 year old Minuteman batteries performed flawlessly. All 24 lamps worked. All ten on-board high speed (400 fps) 16 mm cameras containing good resolution film data were recovered.

  18. Training Spatial Knowledge Acquisition Using Virtual Environments

    DTIC Science & Technology

    2000-04-03

    bands of textures and tiles them together to form long, continuous swaths of texture. This paper summarizes these tools and their function, and...which allows it to control these devices. It also includes a texture- tiling application to precisely line up frames from the camera to create wall...that textures exist a priori and has no support for cropping and tiling textures within the program, much less an interface to hardware specifically

  19. BRESEX: On board supervision, basic architecture and preliminary aspects for payload and space shuttle interface

    NASA Technical Reports Server (NTRS)

    Bergamini, E. W.; Depaula, A. R., Jr.; Martins, R. C. D. O.

    1984-01-01

    Data on the on-board supervision subsystem are presented, as considered in a conference between INPE and NASA personnel held with the purpose of initiating a joint effort leading to the implementation of the Brazilian remote sensing experiment (BRESEX). The BRESEX should consist, basically, of a multispectral camera for Earth observation, to be tested on a future space shuttle flight.

  20. Stereo Vision Inside Tire

    DTIC Science & Technology

    2015-08-21

    using the Open Computer Vision ( OpenCV ) libraries [6] for computer vision and the Qt library [7] for the user interface. The software has the...depth. The software application calibrates the cameras using the plane based calibration model from the OpenCV calib3D module and allows the...6] OpenCV . 2015. OpenCV Open Source Computer Vision. [Online]. Available at: opencv.org [Accessed]: 09/01/2015. [7] Qt. 2015. Qt Project home
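    The depth computation behind such a calibrated stereo rig reduces to the standard pinhole relation Z = f·B/d. A minimal numpy sketch (the focal length and baseline below are illustrative, not taken from the report):

```python
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo depth: Z = f * B / d, with disparity and focal
    length in pixels and the baseline in metres."""
    return focal_px * baseline_m / np.asarray(disparity_px, dtype=float)

# Illustrative rig: 700 px focal length, 60 mm baseline
z = depth_from_disparity([70.0, 35.0], focal_px=700.0, baseline_m=0.060)
# halving the disparity doubles the recovered depth
```

    The OpenCV calib3d routines mentioned in the abstract supply the focal length, baseline and rectification needed before this relation can be applied per pixel.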

  1. Double Star Measurements at the Internationale Amateur Sternwarte (IAS) in Namibia in 2008 and 2009

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2010-04-01

    A 40-cm Cassegrain telescope in Namibia was used for observing double and multiple systems in the southern sky. Digital images were recorded with a CCD camera at high frame rates via a FireWire interface directly to a computer. Measurements of 34 double and multiple systems are presented and compared with literature data. Some noteworthy objects are discussed in more detail.

  2. Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design

    DTIC Science & Technology

    2016-10-01

    study of the resulting videos led to a new prosthetics-use taxonomy that is generalizable to various levels of amputation and terminal devices. The...taxonomy was applied to classification of the recorded videos via custom tagging software with midi controller interface. The software creates...a motion capture studio and video cameras to record accurate and detailed upper body motion during a series of standardized tasks. These tasks are

  3. Natural user interface as a supplement of the holographic Raman tweezers

    NASA Astrophysics Data System (ADS)

    Tomori, Zoltan; Kanka, Jan; Kesa, Peter; Jakl, Petr; Sery, Mojmir; Bernatova, Silvie; Antalik, Marian; Zemánek, Pavel

    2014-09-01

    Holographic Raman tweezers (HRT) manipulate microobjects by controlling the positions of multiple optical traps via the mouse or joystick. Several attempts have appeared recently to exploit touch tablets, 2D cameras or the Kinect game console instead. We propose a multimodal "Natural User Interface" (NUI) approach integrating hand tracking, gesture recognition, eye tracking and speech recognition. For this purpose we exploited the low-cost "Leap Motion" and "MyGaze" sensors and a simple speech recognition program, "Tazti". We developed our own NUI software which processes signals from the sensors and sends control commands to the HRT, which in turn controls the positions of the trapping beams, the micropositioning stage and the Raman spectra acquisition system. The system allows various modes of operation suited to specific tasks. Virtual tools (called "pin" and "tweezers") for manipulating particles are displayed on a transparent "overlay" window above the live camera image. The eye tracker identifies the position of the observed particle and uses it for autofocus. Laser trap manipulation navigated by the dominant hand can be combined with gesture recognition of the secondary hand. Speech command recognition is useful when both hands are busy. The proposed methods make manual control of HRT more efficient, and they are also a good platform for its future semi-automated and fully automated operation.

  4. Study of intermolecular contacts in proteins and oligomer interfaces and preliminary investigations into the design and production of nanomaterials from proteins

    NASA Astrophysics Data System (ADS)

    Iyer, Ganesh Hariharan

    The first part of this research involved a study of the nature and extent of nonbonded interactions at crystal and oligomer interfaces. A survey was compiled of several characteristics of intersubunit contacts in 58 different oligomeric proteins, and of the intermolecular contacts in 223 protein crystal structures. Routines written in the "S" language were utilized for the generation of the observed and expected contacts. The information in the Protein Data Bank (PDB) was extracted using the database management system Protein Knowledge Base (PKB). Potentials of mean force for atom-atom contacts and residue-residue contacts were derived by comparison of the number of observed interactions with the number expected by mass action. Preference association matrices and log-linear analyses were applied to determine the different factors that could contribute to the overall interactions at the interfaces of oligomers and crystals. Surface patches at oligomer and crystal interfaces were also studied to further investigate the origin of the differences in their stabilities. The total number of atoms in contact and the secondary structure elements involved are similar in the two types of interfaces. Crystal contacts result from more numerous interactions by polar residues, compared with a tendency for nonpolar amino acids to be prominent in oligomer interfaces. Contact potentials indicate that hydrophobic interactions at oligomer interfaces favor aromatic amino acids and methionine over aliphatic amino acids, and that crystal contacts form in such a way as to avoid inclusion of hydrophobic interactions. The second part involved the development of a new class of biomaterials from two-dimensional arrays of ordered proteins. Point mutations were planned to introduce cysteine residues at appropriate locations to enable cross-linking at the molecular interface within given crystallographic planes. 
Crystallization and subsequent cross-linking of the modified protein would lead to the formation of arrays on subsequent dissociation of the crystal. Novel protein architectures can be generated from these cross-linked nanostructures. Experiments with the model protein maltose-binding protein (MBP) were performed to develop purification, cross-linking and crystallization techniques. The long-term goal of this project is to apply the experience gained with MBP to the fabrication of nanomaterials from other, application-specific proteins for ultrafiltration and microelectronic devices.

  5. Steered Molecular Dynamics Simulations Predict Conformational Stability of Glutamate Receptors.

    PubMed

    Musgaard, Maria; Biggin, Philip C

    2016-09-26

    The stability of protein-protein interfaces can be essential for protein function. For ionotropic glutamate receptors, a family of ligand-gated ion channels vital for normal function of the central nervous system, such an interface exists between the extracellular ligand binding domains (LBDs). In the full-length protein, the LBDs are arranged as a dimer of dimers. Agonist binding to the LBDs opens the ion channel, and briefly after activation the receptor desensitizes. Several residues at the LBD dimer interface are known to modulate desensitization, and conformational changes around these residues are believed to be involved in the state transition. The general hypothesis is that the interface is disrupted upon desensitization, and structural evidence suggests that the disruption might be substantial. However, when cross-linking the central part of this interface, functional data suggest that the receptor can still undergo desensitization, contradicting the hypothesis of major interface disruption. Here, we illustrate how opening the dimer interface using steered molecular dynamics (SMD) simulations, and analyzing the work values required, provides a quantitative measure for interface stability. For one subtype of glutamate receptors, which is regulated by ion binding to the dimer interface, we show that opening the interface without ions bound requires less work than with ions present, suggesting that ion binding indeed stabilizes the interface. Likewise, for interface mutants with longer-lived active states, the interface is more stable, while the work required to open the interface is reduced for less active mutants. Moreover, a cross-linked mutant can still undergo initial interface opening motions similar to the native receptor and at similar energetic cost. Thus, our results support that interface opening is involved in desensitization. 
Furthermore, they provide reconciliation of apparently opposing data and demonstrate that SMD simulations can give relevant biological insight into longer time scale processes without the need for expensive calculations.
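    The work values used as the stability measure are accumulated along the pulling coordinate, W = ∫F dx. A schematic numpy version with synthetic force profiles (illustrative curves, not the authors' simulation data):

```python
import numpy as np

def pulling_work(x, f):
    """Work along a steered-MD pulling coordinate: trapezoidal-rule
    approximation of the integral of force over distance."""
    x, f = np.asarray(x, float), np.asarray(f, float)
    return float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(x)))

x = np.linspace(0.0, 1.0, 101)     # pulling distance (nm)
f_apo = 50.0 * np.exp(-3 * x)      # decaying force, no ions bound
f_ion = 80.0 * np.exp(-3 * x)      # stiffer response with ions bound
# more work is required to open the ion-stabilized interface
```

    Comparing such integrated work values between conditions (ions bound vs. absent, mutant vs. native) is the quantitative comparison the abstract describes.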

  6. ACR/NEMA Digital Image Interface Standard (An Illustrated Protocol Overview)

    NASA Astrophysics Data System (ADS)

    Lawrence, G. Robert

    1985-09-01

    The American College of Radiologists (ACR) and the National Electrical Manufacturers Association (NEMA) have sponsored a joint standards committee mandated to develop a universal interface standard for the transfer of radiology images among a variety of PACS imaging devices. The resulting standard interface conforms to the ISO/OSI standard reference model for network protocol layering. The standard interface specifies the lower layers of the reference model (Physical, Data Link, Transport and Session) and implies a requirement for the Network layer should a network be required. The message content has been considered and a flexible message and image format specified. The following imaging equipment modalities are supported by the standard interface: CT (Computed Tomography), DS (Digital Subtraction), NM (Nuclear Medicine), US (Ultrasound), MR (Magnetic Resonance) and DR (Digital Radiology). The following data types are standardized over the transmission interface media: image data, digitized voice, header data, raw data, text reports, graphics and others. This paper consists of text supporting the illustrated protocol data flow. Each layer will be individually treated. Particular emphasis will be given to the Data Link layer (frames) and the Transport layer (packets). The discussion utilizes a finite state sequential machine model for the protocol layers.
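    The finite state sequential machine view of a protocol layer can be sketched generically as a transition table. The states and events below are purely illustrative, not the ACR/NEMA standard's actual state tables:

```python
# Hypothetical data-link layer machine: connect, transfer a frame, release
TRANSITIONS = {
    ("IDLE", "connect"): "CONNECTED",
    ("CONNECTED", "send_frame"): "TRANSFER",
    ("TRANSFER", "ack"): "CONNECTED",
    ("CONNECTED", "disconnect"): "IDLE",
}

def step(state, event):
    """Advance the machine; events with no defined transition are ignored."""
    return TRANSITIONS.get((state, event), state)

state = "IDLE"
for event in ["connect", "send_frame", "ack", "disconnect"]:
    state = step(state, event)
# a complete connect/transfer/release cycle returns the layer to IDLE
```

    Modeling each layer this way makes the frame- and packet-level exchanges testable independently of the layers above and below.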

  7. Sensory Interactive Teleoperator Robotic Grasping

    NASA Technical Reports Server (NTRS)

    Alark, Keli; Lumia, Ron

    1997-01-01

    As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
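    The location and orientation step can be sketched with image moments, a standard machine-vision technique (a minimal numpy stand-in, not the original vision package):

```python
import numpy as np

def centroid_and_angle(mask):
    """Centroid and principal-axis angle of a binary object, computed
    from first- and second-order image moments."""
    ys, xs = np.nonzero(mask)
    cx, cy = xs.mean(), ys.mean()
    mu20 = ((xs - cx) ** 2).mean()
    mu02 = ((ys - cy) ** 2).mean()
    mu11 = ((xs - cx) * (ys - cy)).mean()
    return (cx, cy), 0.5 * np.arctan2(2 * mu11, mu20 - mu02)

mask = np.zeros((20, 20), dtype=bool)
mask[8:12, 2:18] = True  # a horizontal bar as the 'grasped object'
(cx, cy), angle = centroid_and_angle(mask)
# centroid at (9.5, 9.5); angle 0 for a horizontal principal axis
```

    The centroid and angle are exactly the pose quantities a controller needs to align a gripper with the segmented object.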

  8. Mapping Sequence performed during the STS-117 R-Bar Pitch Maneuver

    NASA Image and Video Library

    2007-06-10

    ISS015-E-11320 (10 June 2007) --- This is one of a series of images, photographed with a digital still camera using an 800mm focal length, featuring the different areas of the Space Shuttle Atlantis as it approached the International Space Station and performed a back-flip to accommodate close scrutiny by eyeballs and cameras. This image shows part of Atlantis' cabin and its docking system, which a short time later was involved in linking up with the orbital outpost. The distance between the station and the shuttle at this time was approximately 600 feet.

  9. iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones

    NASA Astrophysics Data System (ADS)

    Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il

    2013-02-01

    The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through a built-in camera. The virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements, recognized by hand shape recognition, is achieved. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.

  10. Virtual Keyboard for Hands-Free Operations

    NASA Technical Reports Server (NTRS)

    Abou-Ali, Abdel-Latief; Porter, William A.

    1996-01-01

    The measurement of direction of gaze (d.o.g.) has been used for clinical purposes to detect illnesses such as nystagmus, unusual fixation movements and many others. It is also used to determine points of interest in objects. In this study we employ a measurement of d.o.g. as a computer interface. The interface provides a full keyboard as well as a mouse function. Such an interface is important to computer users with paralysis or in environments where a hands-free machine interface is required. The study utilizes the commercially available ISCAN Model RK426TC headset, which consists of an infrared (IR) source and an IR camera to sense deflection of the illuminating beam. It also incorporates an image processing package that provides the position of the pupil as well as the pupil size. The study shows the ability to implement a full keyboard, together with some control functions, imaged on a head-mounted monitor screen. This document is composed of four sections: (1) The Nature of the Equipment; (2) The Calibration Process; (3) Running Process; and (4) Conclusions.
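    The calibration process that maps measured pupil positions to screen coordinates is commonly solved as a least-squares fit over a few fixation targets. A sketch of the general idea with an affine model (an illustration, not ISCAN's algorithm):

```python
import numpy as np

def fit_affine(pupil_xy, screen_xy):
    """Least-squares affine map: screen = [px, py, 1] @ coeffs."""
    design = np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))])
    coeffs, *_ = np.linalg.lstsq(design, screen_xy, rcond=None)
    return coeffs  # shape (3, 2)

def apply_affine(coeffs, pupil_xy):
    return np.hstack([pupil_xy, np.ones((len(pupil_xy), 1))]) @ coeffs

# Synthetic calibration: five fixation points, known scale and offset
pupil = np.array([[0, 0], [1, 0], [0, 1], [1, 1], [0.5, 0.5]], float)
screen = pupil * [800, 600] + [100, 50]
A = fit_affine(pupil, screen)
# apply_affine(A, pupil) now reproduces the screen targets
```

    Once fitted, applying the map to live pupil measurements yields the gaze point used to select keys on the virtual keyboard.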

  11. Flight 1 technical report for experiment 74-37 contained polycrystalline solidification in low-G

    NASA Technical Reports Server (NTRS)

    Papaziak, J. M.; Kattamis, T. Z.

    1976-01-01

    A 0.005 M solution of fluorescein in cyclohexanol was directionally solidified in a standard 10 x 10 x 45 mm UV silica cuvette, using a bottom thermoelectric chilling device. Progress of the experiment was monitored by time-lapse photography. During flight (SPAR I) the camera malfunctioned and only one quarter of the expected data were collected. Comparison of flight and ground specimens indicated that: (1) the dark green layer observed ahead of the solid-liquid interface, which is most likely the solute-enriched zone, appears to be wider in the flight specimen; (2) parasitic nucleation ahead of the solid-liquid interface in the flight sample led to an irregularly shaped interface, smaller grain size, equiaxed grain morphology and a larger average macroscopic growth rate; (3) the formation of equiaxed grains ahead of the solid-liquid interface in the flight specimen may be attributed to ordered islands within the liquid, which survived remelting because of the low degree of superheating (approximately 1.5 C), did not settle because of reduced gravity, and acted as nuclei during cooling.

  12. A flight test design for studying airborne applications of air to ground duplex data link communications

    NASA Technical Reports Server (NTRS)

    Scanlon, Charles H.

    1988-01-01

    The Automatic En Route Air Traffic Control (AERA) and the Advanced Automated System (AAS) of the NAS plan call for utilization of data links for such items as computer-generated flight clearances, en route minimum safe altitude warnings, sector probes, out-of-conformance checks, automated flight services, and flow management advisories. A major technical challenge remaining is the integration, flight testing, and validation of data link equipment and procedures in the aircraft cockpit. The flight test organization was designed to have the airplane side of the data link experiments implemented in the NASA Langley Research Center (LaRC) experimental Boeing 737 airplane. This design would enable investigations into implementation of data link equipment and pilot interface, operations, and procedures. The illustrated ground system consists of a workstation with links to a national weather database and a data link transceiver system. The data link transceiver system could be a Mode-S transponder, ACARS, AVSAT, or another type of radio system such as a military-type HF data link. The airborne system was designed so that a data link transceiver, workstation, and touch panel could be interfaced through an input/output processor to the aircraft system bus and thus have communications access to other digital airplane systems.

  13. Interpretation and mapping of geological features using mobile devices for 3D outcrop modelling

    NASA Astrophysics Data System (ADS)

    Buckley, Simon J.; Kehl, Christian; Mullins, James R.; Howell, John A.

    2016-04-01

    Advances in 3D digital geometric characterisation have resulted in widespread adoption in recent years, with photorealistic models utilised for interpretation, quantitative and qualitative analysis, as well as education, in an increasingly diverse range of geoscience applications. Topographic models created using lidar and photogrammetry, optionally combined with imagery from sensors such as hyperspectral and thermal cameras, are now becoming commonplace in geoscientific research. Mobile devices (tablets and smartphones) are maturing rapidly to become powerful field computers capable of displaying and interpreting 3D models directly in the field. With increasingly high-quality digital image capture, combined with on-board sensor pose estimation, mobile devices are, in addition, a source of primary data, which can be employed to enhance existing geological models. Adding supplementary image textures and 2D annotations to photorealistic models is therefore a desirable next step to complement conventional field geoscience. This contribution reports on research into field-based interpretation and conceptual sketching on images and photorealistic models on mobile devices, motivated by the desire to utilise digital outcrop models to generate high quality training images (TIs) for multipoint statistics (MPS) property modelling. Representative training images define sedimentological concepts and spatial relationships between elements in the system, which are subsequently modelled using artificial learning to populate geocellular models. Photorealistic outcrop models are underused sources of quantitative and qualitative information for generating TIs, explored further in this research by linking field and office workflows through the mobile device. Existing textured models are loaded to the mobile device, allowing rendering in a 3D environment. 
Because interpretation in 2D is more familiar and comfortable for users, the developed application allows new images to be captured with the device's digital camera, and an interface is available for annotating (interpreting) the image using lines and polygons. Image-to-geometry registration is then performed using a developed algorithm, initialised using the coarse pose from the on-board orientation and positioning sensors. The annotations made on the captured images are then available in the 3D model coordinate system for overlay and export. This workflow allows geologists to make interpretations and conceptual models in the field, which can then be linked to and refined in office workflows for later MPS property modelling.
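    The image-to-geometry registration step rests on projecting 3D model points through a pinhole camera at the estimated pose, so that 2D annotations can be transferred into the model coordinate system. A minimal projection sketch with an illustrative camera (not the authors' registration algorithm):

```python
import numpy as np

def project(points_world, K, R, t):
    """Project 3D points to pixels: x ~ K (R X + t), then divide by depth."""
    cam = points_world @ R.T + t   # world frame -> camera frame
    uvw = cam @ K.T                # apply intrinsic matrix
    return uvw[:, :2] / uvw[:, 2:3]

# Illustrative intrinsics: 1000 px focal length, principal point (320, 240)
K = np.array([[1000.0, 0.0, 320.0], [0.0, 1000.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
px = project(np.array([[0.0, 0.0, 2.0]]), K, R, t)
# a point on the optical axis projects to the principal point
```

    Refining R and t so that projected model geometry lines up with the captured image is what the on-board coarse pose initializes.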

  14. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, Josef; Magnera, Thomas F.; David, Donald E.; Harrison, Robin M.

    1999-01-01

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures.

  15. Method of assembly of molecular-sized nets and scaffolding

    DOEpatents

    Michl, J.; Magnera, T.F.; David, D.E.; Harrison, R.M.

    1999-03-02

    The present invention relates to methods and starting materials for forming molecular-sized grids or nets, or other structures based on such grids and nets, by creating molecular links between elementary molecular modules constrained to move in only two directions on an interface or surface by adhesion or bonding to that interface or surface. In the methods of this invention, monomers are employed as the building blocks of grids and more complex structures. Monomers are introduced onto and allowed to adhere or bond to an interface. The connector groups of adjacent adhered monomers are then polymerized with each other to form a regular grid in two dimensions above the interface. Modules that are not bound or adhered to the interface are removed prior to reaction of the connector groups to avoid undesired three-dimensional cross-linking and the formation of non-grid structures. Grids formed by the methods of this invention are useful in a variety of applications, including among others, for separations technology, as masks for forming regular surface structures (i.e., metal deposition) and as templates for three-dimensional molecular-sized structures. 9 figs.

  16. New Airborne Sensors and Platforms for Solving Specific Tasks in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Kemper, G.

    2012-07-01

    A huge number of small and medium-sized sensors have entered the market. Today's mid-format sensors reach 80 MPix and allow projects of medium size to be flown, comparable with the first large-format digital cameras about six years ago. New high-quality lenses and new developments in integration have prepared the market for photogrammetric work. Companies such as Phase One or Hasselblad and producers or integrators such as Trimble, Optec, and others have utilized these cameras for professional image production. In combination with small camera stabilizers they can also be used in small aircraft, making the equipment compact and easily transportable, e.g. for rapid-assessment purposes. The combination of different camera sensors enables multi- or hyperspectral installations, useful e.g. for agricultural or environmental projects. Arrays of oblique-viewing cameras are on the market as well; in many cases these are small and medium format sensors combined as rotating or shifting devices, or simply as a fixed setup. Besides proper camera installation and integration, the software that controls the hardware and guides the pilot has to solve many more tasks than a normal FMS did in the past. Small and relatively cheap laser scanners (e.g. Riegl) are on the market, and their proper combination with MS cameras and integrated planning and navigation is a challenge that has been solved by different software packages. Turnkey solutions are available, e.g. for monitoring power-line corridors, where taking images is just part of the job. Integration of thermal camera systems with laser scanning and video capture must be combined with specific information about the objects, stored in a database and linked when approaching the navigation point.
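    For flight planning with such sensors, the basic sizing quantity is the ground sample distance (GSD): the pixel pitch scaled by flying height over focal length. A sketch with illustrative mid-format numbers (not a specific product's specification):

```python
def ground_sample_distance(pixel_pitch_um, focal_mm, altitude_m):
    """GSD in metres per pixel: (pixel pitch) * (altitude / focal length)."""
    return (pixel_pitch_um * 1e-6) * altitude_m / (focal_mm * 1e-3)

# e.g. 5.2 um pixels, 50 mm lens, 1000 m above ground -> ~10.4 cm/pixel
gsd = ground_sample_distance(5.2, 50.0, 1000.0)
```

    The flight-management software mentioned above uses this relation, together with frame size and overlap, to lay out flight lines for a required resolution.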

  17. P-glycoprotein ATPase activity requires lipids to activate a switch at the first transmission interface.

    PubMed

    Loo, Tip W; Clarke, David M

    2016-04-01

P-glycoprotein (P-gp) is an ABC (ATP-Binding Cassette) drug pump. A common feature of ABC proteins is that they are organized into two wings. Each wing contains a transmembrane domain (TMD) and a nucleotide-binding domain (NBD). Drug substrates bind at the interface between the TMDs, and ATP binds at the NBDs. Drug transport involves ATP-dependent conformational changes between inward-facing (open, NBDs far apart) and outward-facing (closed, NBDs close together) conformations. P-gps crystallized in the presence of detergent show an open structure. Human P-gp is inactive in detergent, but basal ATPase activity is restored upon addition of lipids. The lipids might cause closure of the wings to bring the NBDs close together to allow ATP hydrolysis. We show, however, that cross-linking the wings together did not activate ATPase activity when lipids were absent, suggesting that lipids may induce other structural changes required for ATPase activity. We then tested the effect of lipids on disulfide cross-linking of mutants at the first transmission interface between intracellular loop 4 (TMD2) and NBD1. Mutants L443C/S909C and L443C/R905C but not G471C/S909C and V472C/S909C were cross-linked with oxidant when in membranes. The mutants were then purified and cross-linked with or without lipids. Mutants G471C/S909C and V472C/S909C cross-linked only in the absence of lipids, whereas mutants L443C/S909C and L443C/R905C cross-linked only in the presence of lipids. The results suggest that lipids activate a switch at the first transmission interface and that the structure of P-gp is different in detergents and lipids. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.

  18. Visual design for the user interface, Part 1: Design fundamentals.

    PubMed

    Lynch, P J

    1994-01-01

    Digital audiovisual media and computer-based documents will be the dominant forms of professional communication in both clinical medicine and the biomedical sciences. The design of highly interactive multimedia systems will shortly become a major activity for biocommunications professionals. The problems of human-computer interface design are intimately linked with graphic design for multimedia presentations and on-line document systems. This article outlines the history of graphic interface design and the theories that have influenced the development of today's major graphic user interfaces.

  19. Indoor integrated navigation and synchronous data acquisition method for Android smartphone

    NASA Astrophysics Data System (ADS)

    Hu, Chunsheng; Wei, Wenjian; Qin, Shiqiao; Wang, Xingshu; Habib, Ayman; Wang, Ruisheng

    2015-08-01

Smartphones are widely used at present. Most smartphones have a camera and various sensors, such as a gyroscope, an accelerometer and a magnetometer. Indoor navigation based on the smartphone is therefore important and valuable. According to the features of the smartphone and of indoor navigation, a new indoor integrated navigation method is proposed, which uses the MEMS (Micro-Electro-Mechanical Systems) IMU (Inertial Measurement Unit), camera and magnetometer of the smartphone. The proposed navigation method mainly involves data acquisition, camera calibration, image measurement, IMU calibration, initial alignment, strapdown integration, zero-velocity updates and integrated navigation. Synchronous data acquisition from the sensors (gyroscope, accelerometer and magnetometer) and the camera is the basis of indoor navigation on the smartphone. A camera data acquisition method is introduced, which uses the Camera class of Android to record images and the time of the smartphone camera. Two sensor data acquisition methods are introduced and compared. The first method records sensor data and time with the SensorManager of Android. The second method implements the open, close, data receiving and saving functions in C, and calls the sensor functions from Java through the JNI interface. Data acquisition software was developed with the JDK (Java Development Kit), Android ADT (Android Development Tools) and the NDK (Native Development Kit). The software can record camera data, sensor data and time simultaneously. Data acquisition experiments were carried out with the developed software and a Samsung Note 2 smartphone. The experimental results show that the first method of sensor data acquisition is convenient but sometimes loses sensor data, while the second method has much better real-time performance and far less data loss. A checkerboard image was recorded, and the corner points of the checkerboard were detected with the Harris method.
The sensor data of the gyroscope, accelerometer and magnetometer were recorded for about 30 minutes, and the bias stability and noise characteristics of the sensors were analyzed. Beyond indoor integrated navigation, the integrated navigation and synchronous data acquisition method can also be applied to outdoor navigation.
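A key step described above is pairing each camera frame with the sensor samples recorded around it. A minimal sketch of such nearest-neighbour timestamp matching (function name and timestamps are illustrative, not the paper's code):

```python
import bisect

def pair_nearest(frame_times, sensor_times):
    """For each frame timestamp, find the index of the closest sensor sample.

    Both lists are assumed sorted ascending (e.g. ms since boot).
    """
    pairs = []
    for t in frame_times:
        i = bisect.bisect_left(sensor_times, t)
        # Candidate neighbours: the sample at i and the one just before it.
        candidates = [j for j in (i - 1, i) if 0 <= j < len(sensor_times)]
        best = min(candidates, key=lambda j: abs(sensor_times[j] - t))
        pairs.append(best)
    return pairs

# Camera at ~30 fps, IMU at ~100 Hz (illustrative numbers only).
frames = [0, 33, 66, 100]
imu = [0, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
matched = pair_nearest(frames, imu)   # nearest IMU sample index per frame
```

The same pairing works for the magnetometer and accelerometer streams, provided all timestamps come from the same clock.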

  20. Prediction of optical communication link availability: real-time observation of cloud patterns using a ground-based thermal infrared camera

    NASA Astrophysics Data System (ADS)

    Bertin, Clément; Cros, Sylvain; Saint-Antonin, Laurent; Schmutz, Nicolas

    2015-10-01

The growing demand for high-speed broadband communications with low-orbit or geostationary satellites is a major challenge. Using an optical link at 1.55 μm is an advantageous solution which can potentially increase the satellite throughput by a factor of 10. Nevertheless, cloud cover is an obstacle at this optical frequency. Such communication requires an innovative management system to optimize the optical link availability between a satellite and several Optical Ground Stations (OGS). The Saint-Exupery Technological Research Institute (France) leads the project ALBS (French acronym for BroadBand Satellite Access). This initiative, involving small and medium enterprises, industrial groups and research institutions specialized in the aeronautics and space industries, is currently developing various solutions to increase telecommunication satellite bandwidth. This paper presents the development of a preliminary prediction system preventing cloud blockage of an optical link between a satellite and a given OGS. An infrared thermal camera continuously observes (night and day) the sky vault. Cloud patterns are observed and classified several times a minute. The impact of the detected clouds on the optical beam (obstruction or not) is determined by retrieving the cloud optical depth at the wavelength of communication. This retrieval is based on realistic cloud modelling with libRadtran. Then, using subsequent images, cloud speed and trajectory are estimated. Cloud blockage over an OGS can then be forecast up to 30 minutes ahead. With this information, a new link between the satellite and another OGS under clear sky can be prepared before the current link breaks due to cloud blockage.
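The forecast step described above — estimate cloud motion from subsequent images, then extrapolate — can be sketched as a persistence-advection check over a binary cloud mask. Everything below (grid, motion vector, function name) is a hypothetical toy, not the actual ALBS system:

```python
def forecast_blockage(mask, motion, ogs_pixel, steps):
    """Advect a binary cloud mask by a constant motion vector (pixels/step)
    and report the first future step at which the OGS line of sight is
    blocked, or None if the sky over the station stays clear.
    """
    dy, dx = motion
    y0, x0 = ogs_pixel
    rows, cols = len(mask), len(mask[0])
    for k in range(1, steps + 1):
        # The pixel that will sit over the OGS at step k currently lies
        # k * motion upstream of the station.
        y, x = y0 - k * dy, x0 - k * dx
        if 0 <= y < rows and 0 <= x < cols and mask[y][x]:
            return k
    return None

sky = [[0] * 10 for _ in range(10)]
sky[5][2] = 1                      # one optically thick cloud pixel
step = forecast_blockage(sky, motion=(0, 1), ogs_pixel=(5, 6), steps=8)
```

In a real system the motion vector would itself be estimated from consecutive thermal images (e.g. by correlation), and the mask would come from the optical-depth retrieval.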

  1. Probe interface design consideration. [for interplanetary spacecraft missions

    NASA Technical Reports Server (NTRS)

    Casani, E. K.

    1974-01-01

    Interface design between a probe and a spacecraft requires not only technical considerations but also management planning and mission analysis interactions. Two further aspects of importance are the flyby versus the probe trade-off, and the relay link design and data handling optimization.

  2. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases.

    PubMed

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-07-01

As global cloud frameworks for bioinformatics research databases become huge and heterogeneous, solutions face diametric challenges of cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN had published 192 mammalian, plant and protein life-science databases containing 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data covered by this database-integration framework is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools such as SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life-science data securely under the control of programming languages popular among bioinformaticians, such as Perl and Ruby. Researchers have successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents such as ontology and LOD tree viewers. The Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org.

  3. A simple optical tweezers for trapping polystyrene particles

    NASA Astrophysics Data System (ADS)

    Shiddiq, Minarni; Nasir, Zulfa; Yogasari, Dwiyana

    2013-09-01

Optical tweezers are an optical trap. For decades, they have been an optical tool that can trap and manipulate particles ranging from the very small, like DNA, to larger ones, like bacteria. The trapping force comes from the radiation pressure of laser light focused onto a group of particles. Optical tweezers have been used in many research areas such as atomic physics, medical physics, biophysics, and chemistry. Here, a simple optical tweezers setup has been constructed using a modified Leybold laboratory optical microscope. The ocular lens of the microscope was removed for laser light and digital camera access. Laser light from a Coherent diode laser with wavelength λ = 830 nm and power 50 mW is sent through an oil-immersion objective lens with magnification 100× and NA 1.25 into a cell made from microscope slides containing polystyrene particles. Polystyrene particles with sizes of 3 μm and 10 μm are used. A CMOS Thorlabs camera, type DCC1545M with USB interface, and a Thorlabs 35 mm camera lens are connected to a desktop and used to monitor the trapping and measure the stiffness of the trap. The camera comes with software that enables the user to capture and save images. The images are analyzed using ImageJ and a Scion macro. The polystyrene particles were trapped successfully. The stiffness of the trap depends on the size of the particles and the power of the laser: it increases linearly with power and decreases as the particle size increases.
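The abstract does not state how the stiffness was computed; one common calibration for optical traps is the equipartition method, k = k_B·T / ⟨x²⟩, applied to the variance of the tracked bead position. A sketch under that assumption, with synthetic position data:

```python
import random

KB = 1.380649e-23   # Boltzmann constant, J/K

def stiffness_equipartition(positions_m, temperature_k=295.0):
    """Equipartition estimate of trap stiffness: k = kB * T / var(x).

    positions_m: tracked bead positions along one axis, in metres.
    """
    n = len(positions_m)
    mean = sum(positions_m) / n
    var = sum((x - mean) ** 2 for x in positions_m) / n
    return KB * temperature_k / var

# Synthetic bead track with ~30 nm rms excursions (illustrative only).
random.seed(0)
xs = [random.gauss(0.0, 30e-9) for _ in range(20000)]
k_trap = stiffness_equipartition(xs)   # N/m; larger excursions => softer trap
```

A stiffer trap (higher laser power) confines the bead more tightly, so the variance drops and the estimate rises, consistent with the linear power dependence reported above.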

  4. An algorithm of a real time image tracking system using a camera with pan/tilt motors on an embedded system

    NASA Astrophysics Data System (ADS)

    Kim, Hie-Sik; Nam, Chul; Ha, Kwan-Yong; Ayurzana, Odgeral; Kwon, Jong-Won

    2005-12-01

Embedded systems have been applied in many fields, including households and industrial sites, and user interfaces with a simple on-screen display are implemented more and more often. User demands are increasing and the systems have ever more fields of application due to the high penetration rate of the Internet; the demand for embedded systems therefore tends to rise. An embedded system for image tracking was implemented. The system uses a fixed IP address for reliable server operation on TCP/IP networks. Using a USB camera on the embedded Linux system, real-time broadcasting of video images on the Internet was developed. The digital camera is connected to the USB host port of the embedded board. All input images from the video camera are continuously stored as compressed JPEG files in a directory on the Linux web server, and each frame from the web camera is compared with the previous one to measure the displacement vector, using a block matching algorithm and an edge detection algorithm for fast processing. The displacement vector is then used for pan/tilt motor control through an RS232 serial cable. The embedded board uses the S3C2410 MPU, based on the ARM920T core from Samsung. An embedded Linux kernel was ported as the operating system and a root file system was mounted. The stored images are sent to the client PC through a web browser, using the network functions of Linux and a program developed on the TCP/IP protocol.
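The block matching step can be sketched as an exhaustive sum-of-absolute-differences (SAD) search over a small window; the code below is an illustrative toy, not the embedded implementation:

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equal-sized blocks."""
    return sum(abs(a - b) for row_a, row_b in zip(block_a, block_b)
               for a, b in zip(row_a, row_b))

def best_displacement(prev, curr, top, left, size, search):
    """Find the (dy, dx) within +/-search pixels that best matches the
    block of `prev` at (top, left) inside `curr`, by minimising the SAD."""
    ref = [row[left:left + size] for row in prev[top:top + size]]
    best = None
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            if y < 0 or x < 0 or y + size > len(curr) or x + size > len(curr[0]):
                continue
            cand = [row[x:x + size] for row in curr[y:y + size]]
            cost = sad(ref, cand)
            if best is None or cost < best[0]:
                best = (cost, dy, dx)
    return best[1], best[2]

# A bright 2x2 patch moves one pixel right and one down between frames.
prev = [[0] * 8 for _ in range(8)]
curr = [[0] * 8 for _ in range(8)]
for y in (2, 3):
    for x in (2, 3):
        prev[y][x] = 255
        curr[y + 1][x + 1] = 255
vec = best_displacement(prev, curr, top=2, left=2, size=2, search=2)
```

The resulting displacement vector is what would be converted into pan/tilt commands over the serial link.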

  5. DAQ: Software Architecture for Data Acquisition in Sounding Rockets

    NASA Technical Reports Server (NTRS)

    Ahmad, Mohammad; Tran, Thanh; Nichols, Heidi; Bowles-Martinez, Jessica N.

    2011-01-01

A multithreaded software application was developed by the Jet Propulsion Laboratory (JPL) to collect a set of correlated imagery, Inertial Measurement Unit (IMU) and GPS data for a Wallops Flight Facility (WFF) sounding rocket flight. The data set will be used to advance Terrain Relative Navigation (TRN) technology algorithms being researched at JPL. This paper describes the software architecture and the tests used to meet the timing and data rate requirements for the software used to collect the dataset. Also discussed are the challenges of using commercial off-the-shelf (COTS) flight hardware and open-source software, including multiple Camera Link (C-link) based cameras, a Pentium-M based computer, and the Linux Fedora 11 operating system. Additionally, the paper covers the history of the software architecture's usage in other JPL projects and its applicability to future missions, such as cubesats, UAVs, and research planes/balloons, as well as the human aspects of the project, especially JPL's Phaeton program, and the results of the launch.
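The collection pattern described — several sensor threads producing timestamped records into a shared store — can be sketched with Python threads and a queue (a generic illustration, not JPL's flight code):

```python
import queue
import threading
import time

def producer(name, n, out_q):
    """Simulated sensor thread: emit n timestamped records."""
    for i in range(n):
        out_q.put((name, i, time.monotonic()))

def collect(n_per_sensor):
    """Run camera/IMU/GPS producer threads and drain their correlated
    records from a single thread-safe queue."""
    q = queue.Queue()
    threads = [threading.Thread(target=producer, args=(s, n_per_sensor, q))
               for s in ("camera", "imu", "gps")]
    for t in threads:
        t.start()
    for t in threads:
        t.join()          # all producers finished; the queue is now stable
    records = []
    while not q.empty():
        records.append(q.get())
    return records

records = collect(5)
```

The shared monotonic clock is what makes the streams correlatable afterwards; a real system would also bound the queue and write records to disk as they arrive.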

  6. Connecting quantum dots and bionanoparticles in hybrid nanoscale ultra-thin films

    NASA Astrophysics Data System (ADS)

    Tangirala, Ravisubhash; Hu, Yunxia; Zhang, Qingling; He, Jinbo; Russell, Thomas; Emrick, Todd

    2008-03-01

    Aldehyde-functionalized CdSe quantum dots and nanorods, and horse spleen ferritin bionanoparticles, were co-assembled at an oil-water interface. Reaction of the aldehydes with the surface-available amines on the ferritin particles enabled cross-linking at the interface, converting the assembled nanoparticles into robust ultra-thin films. The cross-linked capsules and sheets thus made by aldehyde-amine conjugation could be disrupted by addition of acid. Reductive amination chemistry could be performed to convert these degradable capsules and sheets into structures with irreversible cross-linking. Fluorescence confocal microscopy, scanning force microscopy and pendant drop tensiometry were used to characterize these hybrid nanoparticle-based materials, and transmission electron microscopy (TEM) confirmed the presence of both the synthetic and naturally derived nanoparticles.

  7. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2017-09-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.
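The model-based workflow — a formal module description, consistency checks, then generated interface code — can be illustrated with a toy generator. The model format, type names and emitted C++ are hypothetical, not ATON's:

```python
# Hypothetical module model: names and types are illustrative only.
MODEL = {
    "module": "FeatureTracker",
    "inputs":  [("image", "ImageMsg"), ("pose_prior", "PoseMsg")],
    "outputs": [("features", "FeatureSetMsg")],
}
KNOWN_TYPES = {"ImageMsg", "PoseMsg", "FeatureSetMsg"}

def check(model):
    """Consistency check on the formal model: every port type must be known."""
    for _, t in model["inputs"] + model["outputs"]:
        if t not in KNOWN_TYPES:
            raise ValueError(f"unknown message type: {t}")

def emit_header(model):
    """Generate a C++ interface declaration from the checked model."""
    check(model)
    lines = [f"class {model['module']}Interface {{", "public:"]
    for name, t in model["inputs"]:
        lines.append(f"  virtual void on_{name}(const {t}& msg) = 0;")
    for name, t in model["outputs"]:
        lines.append(f"  virtual void publish_{name}(const {t}& msg) = 0;")
    lines.append("};")
    return "\n".join(lines)

header = emit_header(MODEL)
```

Adding a second camera then amounts to adding one more input port to the model and regenerating, which is the "trivial change" property the abstract describes.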

  8. Density Driven Removal of Sediment from a Buoyant Muddy Plume

    NASA Astrophysics Data System (ADS)

    Rouhnia, M.; Strom, K.

    2014-12-01

Experiments were conducted to study the effect of settling-driven instabilities on sediment removal from hypopycnal plumes. Traditional approaches scale removal rates with the particle settling velocity; however, it has been suggested that removal from buoyant suspensions happens at higher rates. The enhancement of removal is likely due to gravitational instabilities, such as fingering, at the two-fluid interface. Previous studies have all sought to suppress flocculation, and no simple model exists to predict the removal rates under the effect of such instabilities. This study examines whether or not flocculation hampers instability formation and presents a simple removal rate model accounting for gravitational instabilities. A buoyant suspension of flocculated kaolinite overlying a base of clear saltwater was investigated in a laboratory tank. Concentration was continuously measured in both layers with a pair of OBS sensors, and the interface was monitored with digital cameras. Snapshots from the video were used to measure finger velocity. Samples of flocculated particles at the interface were extracted to retrieve floc size data using a floc camera. Flocculation did not stop the creation of settling-driven fingers. A simple cylinder-based force balance model was capable of predicting finger velocity. The analogy of the fingering process in fine-grained suspensions to thermal plume formation, together with the concept of the Grashof number, enabled us to model finger spacing as a function of initial concentration. Finally, from geometry, the effective cross-sectional area was correlated to finger spacing. The outward flux expression was reformulated by substituting the finger velocity for the particle settling velocity and the finger area for the total area. A box model with the proposed outward flux was used to predict the SSC in the buoyant layer. The model quantifies the removal flux based on the initial SSC and is in good agreement with the experimental data.
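A box model with an outward flux driven by the finger velocity and the effective finger area fraction implies exponential decay of the buoyant-layer concentration. A sketch with purely illustrative parameter values (the paper's actual coefficients are not reproduced here):

```python
import math

def buoyant_layer_concentration(c0, w_f, area_frac, depth, t):
    """Box-model suspended sediment concentration of the buoyant layer.

    dC/dt = -(w_f * area_frac / depth) * C  =>  C(t) = C0 * exp(-t / tau),
    with tau = depth / (w_f * area_frac).

    w_f: finger velocity (m/s); area_frac: effective finger area fraction
    A_f / A; depth: buoyant-layer thickness (m). Values below are
    illustrative only.
    """
    tau = depth / (w_f * area_frac)
    return c0 * math.exp(-t / tau)

c = buoyant_layer_concentration(c0=100.0, w_f=1e-3, area_frac=0.2,
                                depth=0.1, t=500.0)
```

Note the contrast with the traditional scaling: replacing w_f by the (much smaller) particle settling velocity would give a far longer decay time, which is the discrepancy the study sets out to explain.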

  9. Model-based software engineering for an optical navigation system for spacecraft

    NASA Astrophysics Data System (ADS)

    Franz, T.; Lüdtke, D.; Maibaum, O.; Gerndt, A.

    2018-06-01

    The project Autonomous Terrain-based Optical Navigation (ATON) at the German Aerospace Center (DLR) is developing an optical navigation system for future landing missions on celestial bodies such as the moon or asteroids. Image data obtained by optical sensors can be used for autonomous determination of the spacecraft's position and attitude. Camera-in-the-loop experiments in the Testbed for Robotic Optical Navigation (TRON) laboratory and flight campaigns with unmanned aerial vehicle (UAV) are performed to gather flight data for further development and to test the system in a closed-loop scenario. The software modules are executed in the C++ Tasking Framework that provides the means to concurrently run the modules in separated tasks, send messages between tasks, and schedule task execution based on events. Since the project is developed in collaboration with several institutes in different domains at DLR, clearly defined and well-documented interfaces are necessary. Preventing misconceptions caused by differences between various development philosophies and standards turned out to be challenging. After the first development cycles with manual Interface Control Documents (ICD) and manual implementation of the complex interactions between modules, we switched to a model-based approach. The ATON model covers a graphical description of the modules, their parameters and communication patterns. Type and consistency checks on this formal level help to reduce errors in the system. The model enables the generation of interfaces and unified data types as well as their documentation. Furthermore, the C++ code for the exchange of data between the modules and the scheduling of the software tasks is created automatically. With this approach, changing the data flow in the system or adding additional components (e.g., a second camera) have become trivial.

  10. Pre-Clinical and Clinical Evaluation of High Resolution, Mobile Gamma Camera and Positron Imaging Devices

    DTIC Science & Technology

    2007-11-01

accuracy. FPGA ADC data acquisition is controlled by distributed Java-based software. A Java-based server application sits on each of the acquisition...JNI (Java Native Interface) is used to allow Java indirect control of the USB driver. Fig. 5. Photograph of mobile electronics rack...supplies with the monitor and keyboard. The server application on each of these machines is controlled by a remote client Java-based application

  11. Development of a CCD array as an imaging detector for advanced X-ray astrophysics facilities

    NASA Technical Reports Server (NTRS)

    Schwartz, D. A.

    1981-01-01

    The development of a charge coupled device (CCD) X-ray imager for a large aperture, high angular resolution X-ray telescope is discussed. Existing CCDs were surveyed and three candidate concepts were identified. An electronic camera control and computer interface, including software to drive a Fairchild 211 CCD, is described. In addition a vacuum mounting and cooling system is discussed. Performance data for the various components are given.

  12. Light-Field Imaging Toolkit

    NASA Astrophysics Data System (ADS)

    Bolan, Jeffrey; Hall, Elise; Clifford, Chris; Thurow, Brian

    The Light-Field Imaging Toolkit (LFIT) is a collection of MATLAB functions designed to facilitate the rapid processing of raw light field images captured by a plenoptic camera. An included graphical user interface streamlines the necessary post-processing steps associated with plenoptic images. The generation of perspective shifted views and computationally refocused images is supported, in both single image and animated formats. LFIT performs necessary calibration, interpolation, and structuring steps to enable future applications of this technology.
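Computational refocusing of plenoptic data is commonly done by shifting each sub-aperture view in proportion to its aperture offset and averaging ("shift-and-add"). A toy integer-shift sketch of that idea, not LFIT's MATLAB implementation:

```python
def shift(img, dy, dx):
    """Return img translated by (dy, dx); vacated pixels become 0."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if 0 <= y + dy < h and 0 <= x + dx < w:
                out[y + dy][x + dx] = img[y][x]
    return out

def refocus(views, alpha):
    """Shift-and-add computational refocus of sub-aperture views.

    views: {(u, v): 2D image}, where (u, v) is the aperture offset of the
    view. Each view is shifted by alpha * (v, u) pixels and then all views
    are averaged; alpha selects the synthetic focal plane. Integer shifts
    are used for simplicity.
    """
    imgs = [shift(img, round(alpha * v), round(alpha * u))
            for (u, v), img in views.items()]
    h, w = len(imgs[0]), len(imgs[0][0])
    return [[sum(img[y][x] for img in imgs) / len(imgs) for x in range(w)]
            for y in range(h)]

# Two sub-aperture views of a point source lying off the focal plane:
# its image position moves with the aperture offset u (parallax).
left = [[0.0] * 6 for _ in range(6)]
right = [[0.0] * 6 for _ in range(6)]
left[2][2] = 1.0    # view at u = -1
right[2][4] = 1.0   # view at u = +1
views = {(-1, 0): left, (1, 0): right}
sharp = refocus(views, alpha=-1.0)   # shifts bring the two images into register
```

With the right alpha the two copies of the point add coherently into one sharp pixel; with alpha = 0 they stay apart, i.e. the point appears defocused.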

  13. Dichotomous Results Using Polarized Illumination with Single Chip Color Cameras

    DTIC Science & Technology

    2013-01-01

response is both strain and chemically induced at an interior laminate layer interface. The size and location of the pattern are crucial and not the...the ideal for making photoelastic stress measurements, which were not required for this sample. Figure 8. A single laminate as seen... Figure 9. The observed response was isolated to a single layer of the laminate structure. The analyzer is in front of the base

  14. Applied Meteorology Unit (AMU) Quarterly Report Fourth Quarter FY-04

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2004-01-01

This report summarizes the Applied Meteorology Unit (AMU) activities for the fourth quarter of Fiscal Year 2004 (July-September 2004). Tasks covered are: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Shuttle Ascent Camera Cloud Obstruction Forecast, (5) Advanced Regional Prediction System (ARPS) Optimization and Training Extension and (6) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  15. A new 3D viewer as an interface between the ASDEX Upgrade CAD models and data from plasma modelling and experiment

    NASA Astrophysics Data System (ADS)

    Lunt, T.; Fuchs, J. C.; Mank, K.; Feng, Y.; Brochard, F.; Herrmann, A.; Rohde, V.; Endstrasser, N.; ASDEX Upgrade Team

    2010-11-01

    A generally available and easy-to-use viewer for the simultaneous visualisation of the ASDEX Upgrade vacuum vessel computer aided design models, diagnostics and magnetic geometry, solutions of 3D plasma simulation codes and 2D camera images was developed. Here we report on the working principle of this software and give several examples of its technical and scientific application.

  16. The Aeronautical Data Link: Decision Framework for Architecture Analysis

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry; Goode, Plesent W.

    2003-01-01

    A decision analytic approach that develops optimal data link architecture configuration and behavior to meet multiple conflicting objectives of concurrent and different airspace operations functions has previously been developed. The approach, premised on a formal taxonomic classification that correlates data link performance with operations requirements, information requirements, and implementing technologies, provides a coherent methodology for data link architectural analysis from top-down and bottom-up perspectives. This paper follows the previous research by providing more specific approaches for mapping and transitioning between the lower levels of the decision framework. The goal of the architectural analysis methodology is to assess the impact of specific architecture configurations and behaviors on the efficiency, capacity, and safety of operations. This necessarily involves understanding the various capabilities, system level performance issues and performance and interface concepts related to the conceptual purpose of the architecture and to the underlying data link technologies. Efficient and goal-directed data link architectural network configuration is conditioned on quantifying the risks and uncertainties associated with complex structural interface decisions. Deterministic and stochastic optimal design approaches will be discussed that maximize the effectiveness of architectural designs.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Antonelli, Perry Edward

A low-level model-to-model interface is presented that will enable independent models to be linked into an integrated system of models. The interface is based on a standard set of functions that contain appropriate export and import schemas that enable models to be linked with no changes to the models themselves. These ideas are presented in the context of a specific multiscale material problem that couples atomistic-based molecular dynamics calculations to continuum calculations of fluid flow. These simulations will be used to examine the influence of interactions of the fluid with an adjacent solid on the fluid flow. The interface will also be examined by adding it to an already existing modeling code, the Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS), and comparing it with our own molecular dynamics code.
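The export/import idea can be sketched as follows: each model exposes standard functions with an agreed schema, and a coupler exchanges state without touching model internals. All class and field names below are hypothetical:

```python
# Hypothetical coupling schema: each model exports a dict of named fields
# and imports the same shape; the coupler never looks inside the models.
SCHEMA = {"wall_velocity", "wall_stress"}

class AtomisticModel:
    """Stand-in for an MD code: produces a wall stress, consumes a velocity."""
    def __init__(self):
        self.wall_velocity = 0.0
    def export_state(self):
        return {"wall_velocity": self.wall_velocity, "wall_stress": 1.2}
    def import_state(self, state):
        self.wall_velocity = state["wall_velocity"]

class ContinuumModel:
    """Stand-in for a fluid-flow code: produces a velocity, consumes a stress."""
    def __init__(self):
        self.wall_stress = 0.0
    def export_state(self):
        return {"wall_velocity": 0.5, "wall_stress": self.wall_stress}
    def import_state(self, state):
        self.wall_stress = state["wall_stress"]

def couple(a, b):
    """One exchange step: validate both exports against the schema,
    then cross-import so each model sees the other's state."""
    sa, sb = a.export_state(), b.export_state()
    assert set(sa) == SCHEMA and set(sb) == SCHEMA
    a.import_state(sb)
    b.import_state(sa)

md, cfd = AtomisticModel(), ContinuumModel()
couple(md, cfd)
```

Because both sides only see the schema, either model can be swapped (e.g. for LAMMPS) without changing the other, which is the point of the interface.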

  18. Design of CMOS imaging system based on FPGA

    NASA Astrophysics Data System (ADS)

    Hu, Bo; Chen, Xiaolai

    2017-10-01

In order to meet the needs of engineering applications for a high-dynamic-range CMOS camera operating in rolling-shutter mode, a complete imaging system was designed based on the CMOS imaging sensor NSC1105. The paper adopts CMOS + ADC + FPGA + Camera Link as the processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. The ISE 14.6 simulator ISim is used to simulate the signals. The imaging experimental results show that the system delivers a 1280×1024 pixel resolution, a frame rate of 25 fps and a dynamic range of more than 120 dB. The imaging quality of the system satisfies the specified performance requirements.
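A back-of-the-envelope check that the quoted format fits a base-configuration Camera Link (assuming, for illustration, 16-bit pixels after the ADC; the paper does not state the pixel depth on the link):

```python
width, height, fps = 1280, 1024, 25
bytes_per_pixel = 2          # assumption: >8-bit ADC samples packed in 16 bits

pixels_per_s = width * height * fps
rate_mb_s = pixels_per_s * bytes_per_pixel / 1e6   # payload data rate, MB/s

# Base Camera Link carries 24 data bits per clock at up to 85 MHz,
# i.e. roughly 255 MB/s.
base_camera_link_mb_s = 255.0
headroom = base_camera_link_mb_s / rate_mb_s
```

At about 66 MB/s the sensor stream uses well under half of a base-configuration link, so a single cable and a single-channel frame grabber suffice.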

  19. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1983-10-18

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.
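With two pins a known distance apart, the average front velocity follows directly from the two flash arrival times on the streak record; a sketch with made-up numbers:

```python
def average_velocity(distance_m, t_first_s, t_second_s):
    """Average shock or projectile velocity between two optical pins.

    distance_m: pin separation along the direction of travel (m).
    t_*: flash arrival times read off the streak-camera record (s).
    """
    dt = t_second_s - t_first_s
    if dt <= 0:
        raise ValueError("second pin must flash after the first")
    return distance_m / dt

# Pins 10 mm apart, flashes 2.5 microseconds apart (illustrative values).
v = average_velocity(10e-3, 0.0, 2.5e-6)   # a few km/s, typical shock speeds
```

With 100+ pin/cable channels on one streak camera, the same arithmetic applied pairwise maps out the front's arrival across the whole array.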

  20. CICADA -- Configurable Instrument Control and Data Acquisition

    NASA Astrophysics Data System (ADS)

    Young, Peter J.; Roberts, William H.; Sebo, Kim M.

CICADA (Young et al. 1997) is a multi-process, distributed application for the control of astronomical data acquisition systems. It comprises elements that control the operation of, and data flow from, CCD camera systems, and the operation of telescope instrument control systems. CICADA can be used to dynamically configure support for astronomical instruments that can be made up of multiple cameras and multiple instrument controllers. Each camera is described by a hierarchy of parts that are each individually configured and linked together. Most of CICADA is written in C++ and much of the configurability of CICADA comes from the use of inheritance and polymorphism. An example of a multiple-part instrument configuration -- a wide field imager (WFI) -- is described here. WFI, presently under construction, is made up of eight 2k x 4k CCDs with dual SDSU II controllers and will be used at Siding Spring's ANU 40in and AAO 3.9m telescopes.
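The hierarchy-of-parts configurability can be illustrated with inheritance and polymorphism; the sketch below is Python with hypothetical class names (CICADA itself is C++):

```python
class Part:
    """Common interface for configurable instrument parts."""
    def __init__(self, name):
        self.name = name
    def describe(self):
        raise NotImplementedError

class CCD(Part):
    def __init__(self, name, width, height):
        super().__init__(name)
        self.width, self.height = width, height
    def describe(self):
        return f"{self.name}: CCD {self.width}x{self.height}"

class Controller(Part):
    def __init__(self, name, kind):
        super().__init__(name)
        self.kind = kind
    def describe(self):
        return f"{self.name}: controller {self.kind}"

class Instrument:
    """An instrument is a configured collection of linked parts."""
    def __init__(self, name, parts):
        self.name, self.parts = name, parts
    def summary(self):
        # Polymorphism: each part knows how to describe itself.
        return [p.describe() for p in self.parts]

# A WFI-like configuration: eight 2k x 4k CCDs and two SDSU II controllers.
parts = [CCD(f"ccd{i}", 2048, 4096) for i in range(8)]
parts += [Controller(f"sdsu{i}", "SDSU II") for i in range(2)]
wfi = Instrument("WFI", parts)
```

Supporting a new instrument then means writing new `Part` subclasses and a new configuration list, not changing the framework.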

  1. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, Robert F.

    1987-01-01

    An apparatus for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously.

  2. Optical pin apparatus for measuring the arrival time and velocity of shock waves and particles

    DOEpatents

    Benjamin, R.F.

    1987-03-10

    An apparatus is disclosed for the detection of the arrival and for the determination of the velocity of disturbances such as shock-wave fronts and/or projectiles. Optical pins using fluid-filled microballoons as the light source and an optical fiber as a link to a photodetector have been used to investigate shock-waves and projectiles. A microballoon filled with a noble gas is affixed to one end of a fiber-optic cable, and the other end of the cable is attached to a high-speed streak camera. As the shock-front or projectile compresses the microballoon, the gas inside is heated and compressed producing a bright flash of light. The flash of light is transmitted via the optic cable to the streak camera where it is recorded. One image-converter streak camera is capable of recording information from more than 100 microballoon-cable combinations simultaneously. 3 figs.

  3. Gesture-Controlled Interfaces for Self-Service Machines

    NASA Technical Reports Server (NTRS)

    Cohen, Charles J.; Beach, Glenn

    2006-01-01

    Gesture-controlled interfaces are software-driven systems that facilitate device control by translating visual hand and body signals into commands. Such interfaces could be especially attractive for controlling self-service machines (SSMs): for example, public information kiosks, ticket dispensers, gasoline pumps, and automated teller machines (see figure). A gesture-controlled interface would include a vision subsystem comprising one or more charge-coupled-device video cameras (at least two would be needed to acquire three-dimensional images of gestures). The output of the vision system would be processed by a pure-software gesture-recognition subsystem. A translator subsystem would then convert a sequence of recognized gestures into commands for the SSM to be controlled; these could include, for example, a command to display requested information, change control settings, or actuate a ticket- or cash-dispensing mechanism. Depending on the design and operational requirements of the SSM to be controlled, the gesture-controlled interface could be designed to respond to specific static gestures, dynamic gestures, or both. Static and dynamic gestures can include stationary or moving hand signals, arm poses or motions, and/or whole-body postures or motions. Static gestures would be recognized on the basis of their shapes; dynamic gestures would be recognized on the basis of both their shapes and their motions. Because dynamic gestures include temporal as well as spatial content, the interface can extract more information from dynamic gestures than from static ones.
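
    The translator subsystem described above amounts to a mapping from recognized gestures to machine commands. A minimal sketch, with invented gesture and command names (the record does not specify any):

```python
# Toy translator subsystem: gesture labels from the recognizer are mapped
# to SSM commands. All names here are hypothetical examples.
GESTURE_COMMANDS = {
    "point_up":    "SCROLL_UP",
    "point_down":  "SCROLL_DOWN",
    "open_palm":   "SHOW_INFO",
    "swipe_right": "DISPENSE_TICKET",
}

def translate(gestures):
    """Convert a sequence of recognized gestures into SSM commands,
    silently dropping anything the recognizer could not classify."""
    return [GESTURE_COMMANDS[g] for g in gestures if g in GESTURE_COMMANDS]

print(translate(["open_palm", "unknown", "swipe_right"]))
# ['SHOW_INFO', 'DISPENSE_TICKET']
```

    A real deployment would likely add per-SSM command tables and confidence thresholds from the recognizer, but the table-lookup core stays the same.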

  4. STRS SpaceWire FPGA Module

    NASA Technical Reports Server (NTRS)

    Lux, James P.; Taylor, Gregory H.; Lang, Minh; Stern, Ryan A.

    2011-01-01

    An FPGA module leverages the previous work from Goddard Space Flight Center (GSFC) relating to NASA's Space Telecommunications Radio System (STRS) project. The STRS SpaceWire FPGA Module is written in the Verilog Register Transfer Level (RTL) language, and it encapsulates an unmodified GSFC core (which is written in VHDL). The module has the necessary inputs/outputs (I/Os) and parameters to integrate seamlessly with the SPARC I/O FPGA Interface module (also developed for the STRS operating environment, OE). Software running on the SPARC processor can access the configuration and status registers within the SpaceWire module. This allows software to control and monitor the SpaceWire functions, and also gives software direct access to what is transmitted and received over the link. SpaceWire data characters can be sent/received through the software interface, as well as through the dedicated interface on the GSFC core. Similarly, SpaceWire time codes can be sent/received through the software interface or through a dedicated interface on the core. This innovation is designed for plug-and-play integration in the STRS OE. The SpaceWire module simplifies the interfaces to the GSFC core, and synchronizes all I/O to a single clock. An interrupt output (with optional masking) identifies time-sensitive events within the module. Test modes were added to allow internal loopback of the SpaceWire link and internal loopback of the client-side data interface.
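
    The software-visible behavior of a maskable interrupt output like the one described can be modeled in a few lines. This is a generic sketch, not the GSFC core's actual register map; the register names and bit layout are invented:

```python
# Hypothetical model of a status register with a maskable interrupt output.
# Bit positions and register names are illustrative only.

class SpaceWireRegs:
    def __init__(self):
        self.status = 0   # event bits set by hardware (e.g. time code received)
        self.mask = 0     # 1 = event is allowed to assert the interrupt

    def post_event(self, bit):
        self.status |= (1 << bit)

    def irq(self):
        # The interrupt output asserts when any unmasked event bit is set.
        return bool(self.status & self.mask)

regs = SpaceWireRegs()
regs.post_event(0)        # hardware latches an event
print(regs.irq())         # False: the event is masked off
regs.mask = 0b1           # software unmasks bit 0
print(regs.irq())         # True: the interrupt output now asserts
```

    Software would typically read `status` in the interrupt handler, service the event, and clear the bit, which is the usual pattern for register-mapped cores of this kind.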

  5. Interface Supports Multiple Broadcast Transceivers for Flight Applications

    NASA Technical Reports Server (NTRS)

    Block, Gary L.; Whitaker, William D.; Dillon, James W.; Lux, James P.; Ahmad, Mohammad

    2011-01-01

    A wireless avionics interface provides a mechanism for managing multiple broadcast transceivers. This interface isolates the control logic required to support multiple transceivers so that the flight application does not have to manage wireless transceivers. All of the logic to select transceivers, detect transmitter and receiver faults, and take autonomous recovery action is contained in the interface, which is not restricted to using wireless transceivers. Wired, wireless, and mixed transceiver technologies are supported. This design's use of broadcast data technology provides inherent cross-strapping of data links, which greatly simplifies the design of redundant flight subsystems. The interface fully exploits the broadcast data link to determine the health of other transceivers, which is used to detect and isolate faults for fault recovery. The interface uses simplified control logic, which can be implemented as an intellectual-property (IP) core in a field-programmable gate array (FPGA). The interface arbitrates the reception of inbound data traffic appearing on multiple receivers and the transmission of outbound traffic. This system also monitors broadcast data traffic to determine the health of transmitters in the network, and then uses this health information to make autonomous decisions for routing traffic through transceivers. Multiple selection strategies are supported, such as having an active transceiver with the secondary transceiver powered off except to send periodic health status reports. Transceivers can also operate in round-robin for load-sharing and graceful degradation.
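
    The two selection strategies mentioned can be sketched briefly. This is an illustrative model, not the IP core's actual logic; the transceiver names and health-map shape are assumptions:

```python
# Sketch of transceiver selection: failover to the first healthy link, or
# round-robin over the healthy set for load sharing. Names are invented.
from itertools import cycle

def pick_primary(health):
    """health: dict of transceiver name -> bool, as derived from monitoring
    the broadcast traffic. Returns the first healthy transceiver, or None."""
    for name, ok in health.items():
        if ok:
            return name
    return None  # no healthy link: a fault the flight application must see

health = {"xcvr_a": False, "xcvr_b": True, "xcvr_c": True}
print(pick_primary(health))          # 'xcvr_b': autonomous failover past xcvr_a

# Round-robin over the healthy transceivers for load sharing:
rr = cycle([n for n, ok in health.items() if ok])
print([next(rr) for _ in range(4)])  # ['xcvr_b', 'xcvr_c', 'xcvr_b', 'xcvr_c']
```

    In the powered-off-secondary strategy, the same health map would simply be refreshed from the secondary's periodic health reports rather than from continuous traffic.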

  6. 75 FR 70128 - 2011 Changes for Domestic Mailing Services

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-17

    ...LOT, RDI, and Five-Digit ZIP. The Postal Service certifies software meeting its standards until the... Delivery Point Validation (DPV) service in conjunction with CASS-Certified address matching software... interface between address-matching software and the LACSLink database service. 1.21.2 Interface...

  7. Biopolymers form a gelatinous microlayer at the air-sea interface when Arctic sea ice melts

    PubMed Central

    Galgani, Luisa; Piontek, Judith; Engel, Anja

    2016-01-01

    The interface layer between ocean and atmosphere is only a couple of micrometers thick but plays a critical role in climate relevant processes, including the air-sea exchange of gas and heat and the emission of primary organic aerosols (POA). Recent findings suggest that low-level cloud formation above the Arctic Ocean may be linked to organic polymers produced by marine microorganisms. Sea ice harbors high amounts of polymeric substances that are produced by cells growing within the sea-ice brine. Here, we report from a research cruise to the central Arctic Ocean in 2012. Our study shows that microbial polymers accumulate at the air-sea interface when the sea ice melts. Proteinaceous compounds represented the major fraction of polymers supporting the formation of a gelatinous interface microlayer and providing a hitherto unrecognized potential source of marine POA. Our study indicates a novel link between sea ice-ocean and atmosphere that may be sensitive to climate change. PMID:27435531

  8. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Digital cameras are now essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology, and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. Photosensor noise can be divided into random and pattern components: temporal noise comprises the random component, while spatial noise comprises the pattern component. Temporal noise can in turn be divided into signal-dependent shot noise and signal-independent dark temporal noise. The most widely used approaches for measuring camera noise characteristics are standards such as EMVA Standard 1288, which allow precise measurement of shot and dark temporal noise but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). Experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time for registering and processing the frames used for temporal noise estimation was measured: on a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
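
    The two-frame principle behind this kind of measurement can be illustrated generically. This is not the authors' ASNT implementation; it is just the standard idea that differencing two frames of a static scene cancels the pattern (spatial) noise, so half the difference variance estimates the temporal noise, which for a Poisson-limited sensor grows linearly with the signal:

```python
# Generic two-frame temporal-noise estimate (illustrative, not ASNT itself).
# The signal level and frame size are arbitrary example values.
import random

random.seed(0)
mean_signal = 400.0  # electrons; Poisson shot noise gives variance ~= mean

def frame(n=10000):
    # Poisson statistics approximated by the Gaussian limit for large signal.
    return [random.gauss(mean_signal, mean_signal ** 0.5) for _ in range(n)]

f1, f2 = frame(), frame()
diff = [a - b for a, b in zip(f1, f2)]
m = sum(diff) / len(diff)
# Var(f1 - f2) = 2 * temporal variance, since pattern noise cancels.
temporal_var = sum((d - m) ** 2 for d in diff) / (len(diff) - 1) / 2.0

print(round(temporal_var))  # close to mean_signal, as Poisson statistics predict
```

    Repeating the estimate at several signal levels yields the noise-versus-signal curve that the paper compares against a Poisson fit.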

  9. Cooperative processing user interfaces for AdaNET

    NASA Technical Reports Server (NTRS)

    Gutzmann, Kurt M.

    1991-01-01

    A cooperative processing user interface (CUI) system shares the task of graphical display generation and presentation between the user's computer and a remote host. The communications link between the two computers is typically a modem or Ethernet. The two main purposes of a CUI are reduction of the amount of data transmitted between user and host machines, and provision of a graphical user interface system to make the system easier to use.

  10. Orbital operations study. Appendix A: Interactivity analysis

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Supplemental analyses conducted to verify that safe, feasible, design concepts exist for accomplishing the attendant interface activities of the orbital operations mission are presented. The data are primarily concerned with functions and concepts common to more than one of the interfacing activities or elements. Specific consideration is given to state vector update, payload deployment, communications links, jet plume impingement, attached element operations, docking and structural interface assessment, and propellant transfer.

  11. SpaceWire Driver Software for Special DSPs

    NASA Technical Reports Server (NTRS)

    Clark, Douglas; Lux, James; Nishimoto, Kouji; Lang, Minh

    2003-01-01

    A computer program provides a high-level C-language interface to electronics circuitry that controls a SpaceWire interface in a system based on a space-qualified version of the ADSP-21020 digital signal processor (DSP). SpaceWire is a spacecraft-oriented standard for packet-switching data-communication networks that comprise nodes connected through bidirectional digital serial links that utilize low-voltage differential signaling (LVDS). The software is tailored to the SMCS-332 application-specific integrated circuit (ASIC) (also available as the TSS901E), which provides three high-speed (150 Mbps) serial point-to-point links compliant with the proposed Institute of Electrical and Electronics Engineers (IEEE) Standard 1355.2 and equivalent European Space Agency (ESA) Standard ECSS-E-50-12. In the specific application of this software, the SpaceWire ASIC was combined with the DSP processor, memory, and control logic in a Multi-Chip Module DSP (MCM-DSP). The software is a collection of low-level driver routines that provide a simple message-passing application programming interface (API) for software running on the DSP. Routines are provided for interrupt-driven access to the two styles of interface provided by the SMCS: (1) the "word at a time" conventional host interface (HOCI); and (2) a higher performance "dual port memory" style interface (COMI).
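
    The shape of such a message-passing driver API can be modeled loosely. The real driver is C code on the DSP; the Python below is only a structural sketch, and all names (SpaceWireLink, send, receive) are invented, not the driver's actual routines:

```python
# Structural sketch of a message-passing driver API: callers work in whole
# packets, while an interrupt-driven back end would move data to the link
# either a word at a time (HOCI-style) or via dual-port memory (COMI-style).
from collections import deque

class SpaceWireLink:
    def __init__(self):
        self._tx = deque()
        self._rx = deque()

    def send(self, packet: bytes):
        self._tx.append(packet)   # whole-message granularity for the caller

    def _isr_drain(self):
        # Stand-in for the interrupt handler; loops the link back for the demo.
        while self._tx:
            self._rx.append(self._tx.popleft())

    def receive(self):
        return self._rx.popleft() if self._rx else None

link = SpaceWireLink()
link.send(b"\x01\x02\x03")
link._isr_drain()
print(link.receive())  # b'\x01\x02\x03'
```

    The point of the abstraction is the same as in the driver: application code never touches the word-at-a-time or dual-port-memory mechanics directly.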

  12. Investigation of Rapid Low-Power Microwave-Induction Heating Scheme on the Cross-Linking Process of the Poly(4-vinylphenol) for the Gate Insulator of Pentacene-Based Thin-Film Transistors

    PubMed Central

    Fan, Ching-Lin; Shang, Ming-Chi; Wang, Shea-Jue; Hsia, Mao-Yuan; Lee, Win-Der; Huang, Bohr-Ran

    2017-01-01

    In this study, a proposed Microwave-Induction Heating (MIH) scheme has been systematically studied to acquire suitable MIH parameters including chamber pressure, microwave power and heating time. The proposed MIH means that the thin indium tin oxide (ITO) metal below the Poly(4-vinylphenol) (PVP) film is heated rapidly by microwave irradiation and the heated ITO metal gate can heat the PVP gate insulator, resulting in PVP cross-linking. It is found that the attenuation of the microwave energy decreases with the decreasing chamber pressure. The optimal conditions are a power of 50 W, a heating time of 5 min, and a chamber pressure of 20 mTorr. When suitable MIH parameters were used, the effect of PVP cross-linking and the device performance were similar to those obtained using traditional oven heating, even though the cross-linking time was significantly decreased from 1 h to 5 min. Besides the gate leakage current, the interface trap state density (Nit) was also calculated to describe the interface status between the gate insulator and the active layer. The lowest interface trap state density can be found in the device with the PVP gate insulator cross-linked by using the optimal MIH condition. Therefore, it is believed that the MIH scheme is a good candidate to cross-link the PVP gate insulator for organic thin-film transistor applications as a result of its features of rapid heating (5 min) and low-power microwave-irradiation (50 W). PMID:28773101

  13. Investigation of Rapid Low-Power Microwave-Induction Heating Scheme on the Cross-Linking Process of the Poly(4-vinylphenol) for the Gate Insulator of Pentacene-Based Thin-Film Transistors.

    PubMed

    Fan, Ching-Lin; Shang, Ming-Chi; Wang, Shea-Jue; Hsia, Mao-Yuan; Lee, Win-Der; Huang, Bohr-Ran

    2017-07-03

    In this study, a proposed Microwave-Induction Heating (MIH) scheme has been systematically studied to acquire suitable MIH parameters including chamber pressure, microwave power and heating time. The proposed MIH means that the thin indium tin oxide (ITO) metal below the Poly(4-vinylphenol) (PVP) film is heated rapidly by microwave irradiation and the heated ITO metal gate can heat the PVP gate insulator, resulting in PVP cross-linking. It is found that the attenuation of the microwave energy decreases with the decreasing chamber pressure. The optimal conditions are a power of 50 W, a heating time of 5 min, and a chamber pressure of 20 mTorr. When suitable MIH parameters were used, the effect of PVP cross-linking and the device performance were similar to those obtained using traditional oven heating, even though the cross-linking time was significantly decreased from 1 h to 5 min. Besides the gate leakage current, the interface trap state density (Nit) was also calculated to describe the interface status between the gate insulator and the active layer. The lowest interface trap state density can be found in the device with the PVP gate insulator cross-linked by using the optimal MIH condition. Therefore, it is believed that the MIH scheme is a good candidate to cross-link the PVP gate insulator for organic thin-film transistor applications as a result of its features of rapid heating (5 min) and low-power microwave-irradiation (50 W).

  14. Java-Library for the Access, Storage and Editing of Calibration Metadata of Optical Sensors

    NASA Astrophysics Data System (ADS)

    Firlej, M.; Kresse, W.

    2016-06-01

    The standardization of the calibration of optical sensors in photogrammetry and remote sensing has been discussed for more than a decade. Projects of the German DGPF and the European EuroSDR led to the abstract International Technical Specification ISO/TS 19159-1:2014 "Calibration and validation of remote sensing imagery sensors and data - Part 1: Optical sensors". This article presents the first software interface for read- and write-access to all metadata elements standardized in the ISO/TS 19159-1. This interface is based on an XML schema that was automatically derived by ShapeChange from the UML model of the Specification. The software interface serves two use cases. First, the more than 300 standardized metadata elements are stored individually according to the XML schema. Second, the camera manufacturers use many administrative data that are not a part of the ISO/TS 19159-1. The new software interface provides a mechanism for input, storage, editing, and output of both types of data. Finally, an output channel towards a usual calibration protocol is provided. The interface is written in Java. The article also addresses observations made when analysing the ISO/TS 19159-1 and compiles a list of proposals for maturing the document, i.e. for an updated version of the Specification.
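
    The two storage cases (standardized elements plus vendor-specific administrative data) can be illustrated in miniature. The library itself is Java and schema-driven; the toy below is Python, and the element names are invented, not the actual ISO/TS 19159-1 schema:

```python
# Toy round trip for calibration metadata: standardized elements alongside
# a vendor "administrative" section. Element names are hypothetical.
import xml.etree.ElementTree as ET

def write_metadata(standard: dict, admin: dict) -> str:
    root = ET.Element("calibration")
    for tag, value in standard.items():
        ET.SubElement(root, tag).text = str(value)
    adm = ET.SubElement(root, "administrative")  # non-standardized vendor data
    for tag, value in admin.items():
        ET.SubElement(adm, tag).text = str(value)
    return ET.tostring(root, encoding="unicode")

def read_focal_length(xml_text: str) -> float:
    return float(ET.fromstring(xml_text).findtext("focalLength"))

doc = write_metadata({"focalLength": 100.5}, {"serviceDate": "2016-06-01"})
print(read_focal_length(doc))  # 100.5
```

    Keeping the administrative data in its own subtree is one simple way to let both kinds of metadata share a single document without polluting the standardized element set.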

  15. Towards Intelligent Environments: An Augmented Reality–Brain–Machine Interface Operated with a See-Through Head-Mount Display

    PubMed Central

    Takano, Kouji; Hata, Naoki; Kansaku, Kenji

    2011-01-01

    The brain–machine interface (BMI) or brain–computer interface is a new interface technology that uses neurophysiological signals from the brain to control external machines or computers. This technology is expected to support daily activities, especially for persons with disabilities. To expand the range of activities enabled by this type of interface, here, we added augmented reality (AR) to a P300-based BMI. In this new system, we used a see-through head-mount display (HMD) to create control panels with flicker visual stimuli to support the user in areas close to controllable devices. When the attached camera detects an AR marker, the position and orientation of the marker are calculated, and the control panel for the pre-assigned appliance is created by the AR system and superimposed on the HMD. The participants were required to control system-compatible devices, and they successfully operated them without significant training. Online performance with the HMD was not different from that using an LCD monitor. Posterior and lateral (right or left) channel selections contributed to operation of the AR–BMI with both the HMD and LCD monitor. Our results indicate that AR–BMI systems operated with a see-through HMD may be useful in building advanced intelligent environments. PMID:21541307

  16. Interaction of strong converging shock wave with SF6 gas bubble

    NASA Astrophysics Data System (ADS)

    Liang, Yu; Zhai, ZhiGang; Luo, XiSheng

    2018-06-01

    Interaction of a strong converging shock wave with an SF6 gas bubble is studied, focusing on the effects of shock intensity and shock shape on interface evolution. Experimentally, the converging shock wave is generated using shock dynamics theory and the gas bubble is created by a soap film technique. The post-shock flow field is captured by schlieren photography combined with a high-speed video camera, and a three-dimensional program is adopted to provide more details of the flow field. After the impact of the strong converging shock wave, a wide, pronged outward jet, which differs from that in the planar-shock or weak converging-shock condition, emerges from the downstream interface pole. This specific phenomenon is considered to be closely associated with shock intensity and shock curvature. Disturbed by the gas bubble, the converging shocks approaching the convergence center have polygonal shapes, and the relationship between shock intensity and shock radius verifies the applicability of polygonal converging shock theory. Subsequently, the motion of the upstream point is discussed, and a modified nonlinear theory considering rarefaction-wave and high-amplitude effects is proposed. In addition, the effects of shock shape on interface morphology and interface scales are elucidated. These results indicate that shock shape, as well as shock strength, plays an important role in interface evolution.

  17. SITHON: A Wireless Network of in Situ Optical Cameras Applied to the Early Detection-Notification-Monitoring of Forest Fires

    PubMed Central

    Tsiourlis, Georgios; Andreadakis, Stamatis; Konstantinidis, Pavlos

    2009-01-01

    The SITHON system, a fully wireless optical imaging system, integrating a network of in-situ optical cameras linking to a multi-layer GIS database operated by Control Operating Centres, has been developed in response to the need for early detection, notification and monitoring of forest fires. This article presents in detail the architecture and the components of SITHON, and demonstrates the first encouraging results of an experimental test with small controlled fires over Sithonia Peninsula in Northern Greece. The system has already been scheduled to be installed in some fire prone areas of Greece. PMID:22408536

  18. PIV measurements of the single-mode Richtmyer-Meshkov instability.

    NASA Astrophysics Data System (ADS)

    Aure, Roger; Jacobs, Jeff

    2006-11-01

    Experiments will be presented in which a system of two gases of different densities (A = 0.66) is impulsively accelerated to produce the Richtmyer-Meshkov (RM) instability. An interface is created by filling the driven section of a 9.8-meter-long vertical shock tube with opposing gas flows of air and sulfur hexafluoride (SF6). The interface forms in the top of the Plexiglas test section where the two gases meet and exit through two slots. The gases are seeded with 0.3 μm polystyrene latex spheres. An initial 2-D perturbation in the form of a standing wave of sinusoidal shape is created by oscillating the driven section in the horizontal direction. The interface between the gases is impulsively accelerated by an M = 1.2 shock wave. One image per experiment is captured with a cooled CCD camera. The image is doubly exposed by a double-pulsed Nd:YAG laser and is analyzed using autocorrelation PIV techniques. Results will be presented showing the velocity and vorticity distribution in the RM flow.
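
    The quoted A = 0.66 is the Atwood number of the gas pair. With nominal room-temperature densities for air and SF6 (the values below are textbook approximations, not the experiment's measured ones), the pure-gas value comes out slightly higher, consistent with some mixing at the slot-formed interface:

```python
# Atwood number A = (rho2 - rho1) / (rho2 + rho1) for an air/SF6 interface.
# Densities are approximate room-temperature values, for illustration only.
rho_air = 1.20  # kg/m^3
rho_sf6 = 6.07  # kg/m^3

atwood = (rho_sf6 - rho_air) / (rho_sf6 + rho_air)
print(round(atwood, 2))  # ~0.67 for pure gases
```
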

  19. Fluid Surface Deformation by Objects in the Cheerios Effect

    NASA Astrophysics Data System (ADS)

    Nguyen, Khoi; Miller, Michael; Mandre, Shreyas; Mandre Lab Team

    2012-11-01

    Small objects floating on a fluid/air interface deform the surface depending on material surface properties, density, and geometry. These objects attract each other through capillary interactions, a phenomenon dubbed the ``cheerios effect.'' The attractive force and torque exerted on these objects by the interface can be estimated if the meniscus deformation is known. In addition, the floating objects can also rotate due to such an interaction. We present a series of experiments focused on visualizing the motions of the floating objects and the deformation of the interface. The experiments involve thin laser-cut acrylic pieces attracting each other on water in a large glass petri dish, with a camera set up to capture the process. Furthermore, optical distortion of a grid pattern is used to visualize the water surface deformation near the edges of the objects. This study of the deformation of the water surface around a floating object, of the attractive/repulsive forces, and of post-contact rotational dynamics is potentially instrumental in the study of colloidal self-assembly.

  20. Guest editorial: Introduction to the special issue on modern control for computer games.

    PubMed

    Argyriou, Vasileios; Kotsia, Irene; Zafeiriou, Stefanos; Petrou, Maria

    2013-12-01

    A typical gaming scenario, as developed over the past 20 years, involves a player interacting with a game using a specialized input device, such as a joystick, a mouse, or a keyboard. Recent technological advances and new sensors (for example, low-cost commodity depth cameras) have enabled the introduction of more elaborate approaches in which the player is able to interact with the game using body pose, facial expressions, actions, and even physiological signals. A new era of games has already started, employing computer vision techniques, brain-computer interface systems, and haptic and wearable devices. The future lies in games that will be intelligent enough to extract not only the player's commands, provided by speech and gestures, but also behavioral cues and emotional states, and to adjust their game plot accordingly in order to ensure a more realistic and satisfying gameplay experience. This special issue on modern control for computer games discusses several interdisciplinary factors that influence a user's input to a game, something directly linked to the gaming experience. These include, but are not limited to, the following: behavioral affective gaming, user satisfaction and perception, motion capture and scene modeling, and complete software frameworks that address several challenges arising in such scenarios.

  1. A new control system hardware architecture for the Hobby-Eberly Telescope prime focus instrument package

    NASA Astrophysics Data System (ADS)

    Ramiller, Chuck; Taylor, Trey; Rafferty, Tom H.; Cornell, Mark E.; Rafal, Marc; Savage, Richard

    2010-07-01

    The Hobby-Eberly Telescope (HET) will be undergoing a major upgrade as a precursor to the HET Dark Energy Experiment (HETDEX). As part of this upgrade, the Prime Focus Instrument Package (PFIP) will be replaced with a new design that supports the HETDEX requirements along with the existing suite of instruments and anticipated future additions. This paper describes the new PFIP control system hardware plus the physical constraints and other considerations driving its design. Because of its location at the top end of the telescope, the new PFIP is essentially a stand-alone remote automation island containing over a dozen subsystems. Within the PFIP, motion controllers and modular IO systems are interconnected using a local Controller Area Network (CAN) bus and the CANopen messaging protocol. CCD cameras that are equipped only with USB 2.0 interfaces are connected to a local Ethernet network via small microcontroller boards running embedded Linux. Links to ground-level systems pass through a 100 m cable bundle and use Ethernet over fiber optic cable exclusively; communications are either direct or through Ethernet/CAN gateways that pass CANopen messages transparently. All of the control system hardware components are commercially available, designed for rugged industrial applications, and rated for extended temperature operation down to -10 °C.
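
    The CANopen messages exchanged over such a bus have a well-defined frame layout. As a concrete illustration, the sketch below builds an expedited SDO download request per the CANopen application layer (CiA 301); the node ID and object-dictionary index used are arbitrary examples, not HET's actual configuration:

```python
# Minimal CANopen expedited-SDO-download frame builder (CiA 301 layout).
# node_id/index/subindex values in the demo are arbitrary examples.
import struct

def sdo_download(node_id: int, index: int, subindex: int, data: bytes):
    """Return (COB-ID, 8-byte payload) for an expedited SDO download (<= 4 bytes)."""
    assert 1 <= len(data) <= 4
    n = 4 - len(data)                # number of unused data bytes
    cmd = 0x23 | (n << 2)            # ccs=1 (download), e=1, s=1
    payload = struct.pack("<BHB", cmd, index, subindex) + data.ljust(4, b"\x00")
    return 0x600 + node_id, payload  # SDO request COB-ID = 0x600 + node ID

cob_id, payload = sdo_download(0x05, 0x6040, 0x00, (0x000F).to_bytes(2, "little"))
print(hex(cob_id))    # 0x605
print(payload.hex())  # 2b4060000f000000
```

    The Ethernet/CAN gateways mentioned above can pass such frames unchanged, which is what makes the PFIP's CAN subsystems reachable from ground level.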

  2. KSC-07pd2209

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - In Orbiter Processing Facility bay 3, STS-120 crew members practice handling tools they will use during the mission. Around the table, at center, dressed in blue flight suits are Mission Specialists Scott E. Parazynski, Douglas H. Wheelock, Paolo A. Nespoli and Expedition 16 Flight Engineer Daniel M. Tani. Between Wheelock and Nespoli is Allison Bolinger, an EVA technician with NASA. In the foreground is Dina Contella, a thermal protection system specialist with NASA. Nespoli is a European Space Agency astronaut from Italy. The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT, which includes harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  3. KSC-07pd2188

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 are, from left, Expedition 16 Flight Engineer Daniel M. Tani; Mission Specialist Douglas H. Wheelock; Pilot George D. Zamka; Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy; Allison Bolinger, an EVA technician with NASA; Mission Specialists Scott E. Parazynski and Stephanie D. Wilson; and Erin Schlichenmaier, of TPS Engineering with United Space Alliance. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  4. KSC-07pd2191

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 are, from left, Expedition 16 Flight Engineer Daniel M. Tani; Mission Specialist Douglas H. Wheelock; Pilot George D. Zamka; Mission Specialist Paolo A. Nespoli (kneeling), a European Space Agency astronaut from Italy; Mission Specialist Scott E. Parazynski; Commander Pamela A. Melroy; Allison Bolinger (kneeling), an EVA technician with NASA; Mission Specialist Stephanie D. Wilson; and Erin Schlichenmaier, with United Space Alliance TPS Engineering. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  5. KSC-07pd2189

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 are, from left, Mission Specialist Douglas H. Wheelock (standing); Pilot George D. Zamka; Mission Specialist Paolo A. Nespoli, a European Space Agency astronaut from Italy; Allison Bolinger (pointing), an EVA technician with NASA; Commander Pamela A. Melroy; Mission Specialists Scott E. Parazynski and Stephanie D. Wilson; two support personnel and Erin Schlichenmaier, with United Space Alliance TPS Engineering. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  6. KSC-07pd2190

    NASA Image and Video Library

    2007-08-03

    KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. Inspecting the thermal protection system, or TPS, tiles under space shuttle Discovery in Orbiter Processing Facility bay 3 are, from left, Expedition 16 Flight Engineer Daniel M. Tani; Mission Specialist Douglas H. Wheelock; Pilot George D. Zamka; Mission Specialist Paolo A. Nespoli (sitting), a European Space Agency astronaut from Italy; Mission Specialist Scott E. Parazynski (pointing); Commander Pamela A. Melroy; Allison Bolinger (kneeling), an EVA technician with NASA; Mission Specialist Stephanie D. Wilson; and Erin Schlichenmaier, with United Space Alliance TPS Engineering. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton

  7. A CLIPS-based tool for aircraft pilot-vehicle interface design

    NASA Technical Reports Server (NTRS)

    Fowler, Thomas D.; Rogers, Steven P.

    1991-01-01

    The Pilot-Vehicle Interface of modern aircraft is the cognitive, sensory, and psychomotor link between the pilot, the avionics modules, and all other systems on board the aircraft. To assist pilot-vehicle interface designers, a C Language Integrated Production System (CLIPS) based tool was developed that allows design information to be stored in a table that can be modified by rules representing design knowledge. Developed for the Apple Macintosh, the tool allows users without any CLIPS programming experience to form simple rules using a point and click interface.
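The rule-and-table mechanism this record describes (design information held in a table that rules representing design knowledge can modify) can be illustrated with a tiny forward-chaining loop. This is a minimal sketch in Python rather than CLIPS, and the attribute names and the example rule are hypothetical, not taken from the actual tool:

```python
# Design information lives in a table of attribute -> value entries;
# rules fire when their condition matches and update the table.

def apply_rules(table, rules):
    """Repeatedly fire matching rules until the table stops changing."""
    changed = True
    while changed:
        changed = False
        for condition, action in rules:
            if condition(table):
                before = dict(table)
                action(table)
                if table != before:
                    changed = True
    return table

# Hypothetical design-knowledge rule: a warning display must be red.
rules = [
    (lambda t: t.get("display_type") == "warning" and t.get("color") != "red",
     lambda t: t.__setitem__("color", "red")),
]

design = {"display_type": "warning", "color": "green"}
apply_rules(design, rules)
print(design["color"])  # red
```

A real CLIPS implementation would express the same rule declaratively with `defrule` and pattern matching over facts; the loop above only mimics the fire-until-quiescent behavior.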

  8. SF3M 2.0: improvement of 3D photo-reconstruction interface based on freely available software

    NASA Astrophysics Data System (ADS)

    Castillo, Carlos; James, Michael R.; Pérez, Rafael; Gómez, Jose A.

    2016-04-01

During recent years, a number of tools based on Structure-from-Motion algorithms have been released for full image-based 3D reconstruction, either freely (e.g. Bundler, PMVS2, VisualSFM, MicMac) or commercially (e.g. Agisoft PhotoScan). The SF3M interface was developed in Matlab® to link software developments (VisualSFM, CloudCompare) and new applications into a semi-automated workflow including reconstruction, georeferencing and point-cloud filtering, and has been tested for gully erosion assessment with terrestrial images (Castillo et al., 2015). The main aim of this work is to provide an improved, freely available and easy-to-use alternative for 3D reconstruction intended for public agencies, non-profit organisations, researchers and other stakeholders interested in 3D modelling. In this communication we present SF3M 2.0, a new version of the graphical user interface. In this case, the SfM module is based on MicMac, an open-source tool (Pierrot-Deseilligny and Cléry, 2011) which provides advanced features such as camera calibration and constrained bundle adjustment using ground control points. SF3M 2.0 will be tested in two scenarios: a) using the same ground-based image set tested in Castillo et al. (2015) to compare the performance of both versions, and b) using aerial images taken from a helium balloon to assess a gully network in a 40-hectare catchment. In this study we explore the advantages of SF3M 2.0, explain its operation and evaluate its accuracy and performance. The tool will also be available for free download. References Castillo, C., James, M.R., Redel-Macías, M.D., Pérez, R., and Gómez, J.A.: SF3M software: 3-D photo-reconstruction for non-expert users and its application to a gully network, SOIL, 1, 583-594. Pierrot-Deseilligny, M. and Cléry, I.: APERO, an Open Source Bundle Adjustment Software for Automatic Calibration and Orientation of a Set of Images. Proceedings of the ISPRS Commission V Symposium, Image Engineering and Vision Metrology, Trento, Italy, 2-4 March 2011.

  9. CCDs in the Mechanics Lab--A Competitive Alternative (Part II).

    ERIC Educational Resources Information Center

    Pinto, Fabrizio

    1995-01-01

    Describes a system of interactive astronomy whereby nonscience students are able to acquire their own images from a room remotely linked to a telescope. Briefly discusses some applications of Charge-Coupled Device cameras (CCDs) in teaching free fall, projectile motion, and the motion of the pendulum. (JRH)

  10. Goodman High Throughput Spectrograph | SOAR

    Science.gov Websites

SPARTAN Near-IR Camera; Ohio State Infrared Imager/Spectrograph (OSIRIS) - NO LONGER AVAILABLE. The Goodman spectrograph on SOAR covers the 320-850 nm wavelength range; the instrument is described in Clemens et al. (2004). Site links include applying for IRAF and, for publishing results based on Goodman data, the ADS link to the 2004 SPIE Goodman Spectrograph paper.

  11. Experimental entanglement of 25 individually accessible atomic quantum interfaces.

    PubMed

    Pu, Yunfei; Wu, Yukai; Jiang, Nan; Chang, Wei; Li, Chang; Zhang, Sheng; Duan, Luming

    2018-04-01

    A quantum interface links the stationary qubits in a quantum memory with flying photonic qubits in optical transmission channels and constitutes a critical element for the future quantum internet. Entanglement of quantum interfaces is an important step for the realization of quantum networks. Through heralded detection of photon interference, we generate multipartite entanglement between 25 (or 9) individually addressable quantum interfaces in a multiplexed atomic quantum memory array and confirm genuine 22-partite (or 9-partite) entanglement. This experimental entanglement of a record-high number of individually addressable quantum interfaces makes an important step toward the realization of quantum networks, long-distance quantum communication, and multipartite quantum information processing.

  12. Interactive Internet Based Pendulum for Learning Mechatronics

    NASA Astrophysics Data System (ADS)

    Sethson, Magnus R.

    2003-01-01

This paper describes an Internet-based remote experimental setup of a double-linked pendulum mechanism for student experiments at the M.Sc. level. Some first-year experience using this web-based setup in classes is reported. In most of the courses given at the division of mechanical engineering systems at Linkoeping Institute of Technology we provide experimental setups to enhance the teaching of M.Sc. students. Many of these experimental setups involve mechatronic systems; disciplines like fluid power, electronics and mechanics, and also software technologies, are used in each experiment. As our campus has recently been split between two different cities, some new concepts for distance learning have been studied. The one described here implements remotely controlled mechatronic setups for teaching basic programming of real-time operating systems and analysis of the dynamics of mechanical systems. The students control the regulators for the pendulum through a web interface and get measurement results and a movie back through their email. The present setup uses a double-linked pendulum that is controlled by a DC motor and monitored through both a camera and angular position sensors. All software needed is hosted on a dual-processor PC running the RedHat 7.1 distribution, complemented with real-time scheduling using DIAPM-RTAI 1.7. The Internet site is presented to the students using PHP, Apache and MySQL. All of the software used originates from the open-source domain. The experience from integrating these technologies, together with security issues and the web-camera interface, is discussed. One of the important lessons from this project so far is the need for good visual feedback, both in terms of video speed and resolution. It has been noticed that when students make mistakes and want to find the cause of the failure, they want clear, large images with high resolution to support their own beliefs about the cause of the failure. Even if a student does not need a high-resolution image to grasp the mechanics and function of the pendulum, they need such high-quality images to gain confidence in the hardware. It is important to support this when the ability for direct hands-on contact with the hardware is taken away. The experience of combining open-source software, real-time scheduling and measurement hardware in a cost-efficient way is also discussed. The pendulum has been publicly available on the Internet but has now been removed due to security issues.

  13. Multi-band infrared camera systems

    NASA Astrophysics Data System (ADS)

    Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John

    1994-12-01

The program resulted in an IR camera system that utilizes a unique MOS-addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid-nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes: by changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.
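Two of the signal-processing features listed in this record, frame averaging (up to 16 frames) and gain/offset compensation, are simple enough to sketch. This is a minimal illustration in Python; the pixel values and calibration numbers are invented for the example, not taken from the delivered systems:

```python
# Frame averaging reduces temporal noise; per-pixel gain/offset
# correction compensates detector non-uniformity.

def average_frames(frames):
    """Average up to 16 frames pixel-by-pixel."""
    n = min(len(frames), 16)
    acc = [0.0] * len(frames[0])
    for frame in frames[:n]:
        for i, px in enumerate(frame):
            acc[i] += px
    return [a / n for a in acc]

def correct(frame, gain, offset):
    """Apply per-pixel gain/offset non-uniformity correction."""
    return [g * (px - o) for px, g, o in zip(frame, gain, offset)]

# Illustrative 2-pixel frames:
frames = [[100, 102], [104, 98], [96, 100], [100, 100]]
avg = average_frames(frames)                    # [100.0, 100.0]
out = correct(avg, gain=[1.0, 2.0], offset=[10, 50])
print(out)                                      # [90.0, 100.0]
```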

  14. An imaging system for PLIF/Mie measurements for a combusting flow

    NASA Technical Reports Server (NTRS)

    Wey, C. C.; Ghorashi, B.; Marek, C. J.; Wey, C.

    1990-01-01

The equipment required to establish an imaging system can be divided into four parts: (1) the light source and beam-shaping optics; (2) camera and recording; (3) image acquisition and processing; and (4) computer and output systems. A pulsed, Nd:YAG-pumped, frequency-doubled dye laser, which can freeze motion in the flowfield, is used as the illumination source. A set of lenses forms the laser beam into a sheet. The induced fluorescence is collected by a UV-enhanced lens and passes through a UV-enhanced microchannel plate intensifier which is optically coupled to a gated solid-state CCD camera. The output of the camera is simultaneously displayed on a monitor and recorded on either a laser videodisc set or a Super VHS VCR. The videodisc set is controlled by a minicomputer via a connection to the RS-232C interface terminals. The imaging system is connected to the host computer by a bus repeater and can be multiplexed between four video input sources. Sample images from a planar shear layer experiment are presented to show the processing capability of the imaging system with the host computer.

  15. The Global Coronal Structure Investigation

    NASA Technical Reports Server (NTRS)

    Golub, Leon

    1998-01-01

    During the past year we have completed the changeover from the NIXT program to the new TXI sounding rocket program. The NIXT effort, aimed at evaluating the viability of the remaining portions of the NIXT hardware and design, has been finished and the portions of the NIXT which are viable and flightworthy, such as filters, mirror mounting hardware, electronics and telemetry interface systems, are now part of the new rocket payload. The backup NIXT multilayer-coated x-ray telescope and its mounting hardware have been completely fabricated and are being stored for possible future use in the TXI rocket. The H-alpha camera design is being utilized in the TXI program for real-time pointing verification and control via telemetry. A new H-alpha camera has been built, with a high-resolution RS170 CCD camera output. Two papers, summarizing scientific results from the NIXT rocket program, have been written and published this year: 1. "The Solar X-ray Corona," by L. Golub, Astrophysics and Space Science, 237, 33 (1996). 2. "Difficulties in Observing Coronal Structure," Keynote Paper, Proceedings STEPWG1 Workshop on Measurements and Analyses of the Solar 3D Magnetic Field, Solar Physics, 174, 99 (1997).

  16. Keyboard and message evaluation for cockpit input to data link

    DOT National Transportation Integrated Search

    1971-11-01

The project reported herein studied some methods for implementation of the man-machine interface of Digital Data Link for Air Traffic Control. An analysis of information transfer requirements indicated that a vocabulary of less than 200 words could y...

  17. Optical sample-position sensing for electrostatic levitation

    NASA Technical Reports Server (NTRS)

    Sridharan, G.; Chung, S.; Elleman, D.; Whim, W. K.

    1989-01-01

A comparative study is conducted of optical position-sensing techniques applicable to sample-levitation systems under micro-G conditions. CCD sensors are compared with one- and two-dimensional position detectors used in electrostatic particle levitation. In principle, the CCD camera method can be improved from current resolution levels of 200 microns through the incorporation of a higher-pixel-count device and a more complex digital signal processor interface. Nevertheless, the one-dimensional position detectors exhibited superior, better-than-one-micron resolution.
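Sub-pixel position sensing with a CCD typically relies on an intensity-weighted centroid over the spot image, which is how resolution finer than the pixel pitch is obtained. A minimal sketch with a synthetic spot (not data from the study):

```python
def centroid(image):
    """Return the (x, y) intensity-weighted centroid of a 2-D pixel array."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += x * v
            sy += y * v
    return sx / total, sy / total

# Synthetic 3x3 spot, slightly brighter to the right of center:
spot = [
    [0, 1, 0],
    [1, 4, 3],
    [0, 1, 0],
]
x, y = centroid(spot)
print(round(x, 2), round(y, 2))  # 1.2 1.0
```

The fractional x value (1.2 rather than the integer pixel index 1) is the sub-pixel estimate; in practice background subtraction and noise set the achievable resolution.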

  18. CDS Re Mix

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    CDS (Change Detection Systems) is a mechanism for rapid visual analysis using complex image alignment algorithms. CDS is controlled with a simple interface that has been designed for use for anyone that can operate a digital camera. A challenge of complex industrial systems like nuclear power plants is to accurately identify changes in systems, structures and components that may critically impact the operation of the facility. CDS can provide a means of early intervention before the issues evolve into safety and production challenges.

  19. Emission Spectroscopy of the Interior of Optically Dense Post-Detonation Fireballs

    DTIC Science & Technology

    2013-03-01

sample. Light from the fiber optics was sent to a spectrograph located in a shielded observation room several meters away from the explosive charge. The spectrograph was constructed from a 1/8 m spectrometer (Oriel) interfaced to a 4096-pixel line-scan camera (Basler Sprint) with a data collection rate... FIG. 3. Time-resolved emission spectra obtained from detonation of 20 g charges of RDX containing 20 wt. % aluminum nanoparticles

  20. Department of the Air Force Supporting Data for FY 1991 Budget Estimates Submitted to Congress January 1990. Descriptive Summaries, Research, Development, Test and Evaluation

    DTIC Science & Technology

    1990-05-16

Redondo Beach, CA. Civilian subcontractors are ITEK (cameras), Lexington, MA; Contraves Goerz (telescopes), Pittsburgh, PA; and Kentron (operations and... Improvements include a higher maximum takeoff weight, improved air-to-air gun sight algorithms, digital flight controls, and improved pilot interface... ambient propagation loss, significant penetration of sea water, and good performance in a nuclear environment. C. (U) JUSTIFICATION FOR PROJECTS LESS

  1. KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (facing camera) aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra (facing camera) aids in Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  2. KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra talks to a technician (off-camera) during Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

    NASA Image and Video Library

    2004-02-03

    KENNEDY SPACE CENTER, FLA. - Astronaut Tim Kopra talks to a technician (off-camera) during Intravehicular Activity (IVA) constraints testing on the Italian-built Node 2, a future element of the International Space Station. The second of three Station connecting modules, the Node 2 attaches to the end of the U.S. Lab and provides attach locations for several other elements. Kopra is currently assigned technical duties in the Space Station Branch of the Astronaut Office, where his primary focus involves the testing of crew interfaces for two future ISS modules as well as the implementation of support computers and operational Local Area Network on ISS. Node 2 is scheduled to launch on mission STS-120, Station assembly flight 10A.

  3. Novel graphene-oxide-coated SPR interfaces for biosensing applications

    NASA Astrophysics Data System (ADS)

    Volkov, V. S.; Stebunov, Yu. V.; Yakubovsky, D. I.; Fedyanin, D. Yu.; Arsenin, A. V.

    2017-09-01

Carbon-allotrope-based nanomaterials possess unique physical and chemical properties, including high surface area, the possibility of pi-stacking interaction with a wide range of biological objects, the rich availability of oxygen-containing functional groups in graphene oxide (GO), and excellent optical properties, which make them ideal candidates for use as a universal immobilization platform in SPR biosensing. Here, we propose a new surface plasmon resonance (SPR) biosensing interface for sensitive and selective detection of small molecules. This interface is based on GO linking layers deposited on the gold/copper surface of SPR sensor chips. To estimate the binding capacity of the GO layers, modification of carboxyl groups to N-hydroxysuccinimide esters was performed in the flow cell of the SPR instrument. For comparison, the same procedure was applied to commercial sensor chips based on linking layers of carboxymethylated dextran.

  4. Development of the science instrument CLUPI: the close-up imager on board the ExoMars rover

    NASA Astrophysics Data System (ADS)

    Josset, J.-L.; Beauvivre, S.; Cessa, V.; Martin, P.

    2017-11-01

The first mission of ESA's Aurora Exploration Programme, ExoMars will demonstrate key flight and in situ enabling technologies, and will pursue fundamental scientific investigations. Planned for launch in 2013, ExoMars will send a robotic rover to the surface of Mars. The Close-UP Imager (CLUPI) instrument is part of the Pasteur Payload of the rover, fixed on the robotic arm. It is a robotic replacement for one of the field geologist's most useful instruments: the hand lens. Imaging of the surfaces of rocks, soils and wind-drift deposits at high resolution is crucial for understanding the geological context of any site where the Pasteur rover may be active on Mars. At the resolution provided by CLUPI (approx. 15 micrometers/pixel), rocks show a plethora of surface and internal structures, to name just a few: crystals in igneous rocks, sedimentary structures such as bedding, fracture mineralization, secondary minerals, details of the surface morphology, sediment components, surface marks in sediments, and soil particles. It is conceivable that even textures resulting from ancient biological activity could be visualized, such as fine lamination due to microbial mats (stromatolites) and textures resulting from colonies of filamentous microbes, potentially present in sediments and in palaeocavities in any rock type. CLUPI is a complete imaging system, consisting of an APS (Active Pixel Sensor) camera with 27° FOV optics. The sensor is sensitive to light between 400 and 900 nm with 12-bit digitization. The fixed-focus optics provide well-focused images of a 4 cm x 2.4 cm rock area at a distance of about 10 cm. This challenging camera system, at less than 200 g, is an independent scientific instrument linked to the rover's on-board computer via a SpaceWire interface. After the science goals and specifications are presented, the development of this complex, high-performance miniaturized imaging system is described.

  5. Web OPAC Interfaces: An Overview.

    ERIC Educational Resources Information Center

    Babu, B. Ramesh; O'Brien, Ann

    2000-01-01

    Discussion of Web-based online public access catalogs (OPACs) focuses on a review of six Web OPAC interfaces in use in academic libraries in the United Kingdom. Presents a checklist and guidelines of important features and functions that are currently available, including search strategies, access points, display, links, and layout. (Author/LRW)

  6. Data Capture and Analysis Using the BBC Microcomputer--an Interfacing Project Applied to Enzyme Kinetics.

    ERIC Educational Resources Information Center

    Jones, Lawrence; Graham, Ian

    1986-01-01

    Reviews the main principles of interfacing and discusses the software developed to perform kinetic data capture and analysis with a BBC microcomputer linked to a recording spectrophotometer. Focuses on the steps in software development. Includes results of a lactate dehydrogenase assay. (ML)

  7. Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+

    PubMed Central

    Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.

    2015-01-01

Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video from the surgeon's point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4, the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851

  8. Experiments on the Richtmyer–Meshkov instability with an imposed, random initial perturbation

    DOE PAGES

    Jacobs, J. W.; Krivets, V. V.; Tsiklashvili, V.; ...

    2013-03-16

A vertical shock tube is used to perform experiments on the Richtmyer–Meshkov instability with a three-dimensional random initial perturbation. A membraneless flat interface is formed by opposed gas flows in which the light and heavy gases enter the shock tube from the top and from the bottom of the shock tube driven section. An air/SF6 gas combination is used, and a Mach number M = 1.2 incident shock wave impulsively accelerates the interface. Initial perturbations on the interface are created by vertically oscillating the gas column within the shock tube to produce Faraday waves on the interface, resulting in a short-wavelength, three-dimensional perturbation. Planar Mie scattering is used to visualize the flow, in which light from a laser sheet is scattered by smoke seeded in the air, and image sequences are captured using three high-speed video cameras. Measurements of the integral penetration depth prior to reshock show two growth behaviors, both having power-law growth with growth exponents in the range found in previous experiments and simulations. Following reshock, all experiments show very consistent linear growth with a growth rate in good agreement with those found in previous studies.
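Growth exponents of the kind this record reports are commonly extracted by fitting the penetration depth to a power law h(t) = a*t^b via linear regression in log space. A minimal sketch on synthetic data (the values are illustrative, not measurements from these experiments):

```python
import math

def fit_power_law(ts, hs):
    """Return (a, b) for h = a * t**b via least squares on log-log data."""
    xs = [math.log(t) for t in ts]
    ys = [math.log(h) for h in hs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    a = math.exp(my - b * mx)
    return a, b

# Synthetic, noise-free data generated with a = 2.0, b = 0.4:
ts = [1.0, 2.0, 4.0, 8.0]
hs = [2.0 * t ** 0.4 for t in ts]
a, b = fit_power_law(ts, hs)
print(round(a, 3), round(b, 3))  # 2.0 0.4
```

With experimental data the same fit is usually restricted to the window after the impulsive acceleration and before reshock, and noise makes the recovered exponent approximate rather than exact.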

  9. Microwave Induced Welding of Carbon Nanotube-Thermoplastic Interfaces for Enhanced Mechanical Strength of 3D Printed Parts

    NASA Astrophysics Data System (ADS)

    Sweeney, Charles; Lackey, Blake; Saed, Mohammad; Green, Micah

Three-dimensional (3D) printed parts produced by fused-filament fabrication of a thermoplastic polymer have become increasingly popular at both the commercial and consumer level. The mechanical integrity of these rapid-prototyped parts, however, is severely limited by the interfilament bond strength between adjacent extruded layers. In this report we propose for the first time a method for welding thermoplastic interfaces of 3D printed parts using the extreme heating response of carbon nanotubes (CNTs) to microwave energy. To achieve this, we developed a coaxial printer filament with a pure polylactide (PLA) core and a CNT composite sheath. This produces parts with a thin electrically percolating network of CNTs at the interfaces between adjacent extruded layers. These interfaces are then welded together upon microwave irradiation at 2.45 GHz. Our patent-pending method has been shown to increase the tensile toughness by 1000% and tensile strength by 35%. We investigated the dielectric properties of the PLA/CNT composites at microwave frequencies and performed in-situ microwave thermometry using a forward-looking infrared (FLIR) camera to characterize the heating response of the PLA/CNT composites upon microwave irradiation.

  10. Semantic-JSON: a lightweight web service interface for Semantic Web contents integrating multiple life science databases

    PubMed Central

    Kobayashi, Norio; Ishii, Manabu; Takahashi, Satoshi; Mochizuki, Yoshiki; Matsushima, Akihiro; Toyoda, Tetsuro

    2011-01-01

    Global cloud frameworks for bioinformatics research databases become huge and heterogeneous; solutions face various diametric challenges comprising cross-integration, retrieval, security and openness. To address this, as of March 2011 organizations including RIKEN published 192 mammalian, plant and protein life sciences databases having 8.2 million data records, integrated as Linked Open or Private Data (LOD/LPD) using SciNetS.org, the Scientists' Networking System. The huge quantity of linked data this database integration framework covers is based on the Semantic Web, where researchers collaborate by managing metadata across public and private databases in a secured data space. This outstripped the data query capacity of existing interface tools like SPARQL. Actual research also requires specialized tools for data analysis using raw original data. To solve these challenges, in December 2009 we developed the lightweight Semantic-JSON interface to access each fragment of linked and raw life sciences data securely under the control of programming languages popularly used by bioinformaticians such as Perl and Ruby. Researchers successfully used the interface across 28 million semantic relationships for biological applications including genome design, sequence processing, inference over phenotype databases, full-text search indexing and human-readable contents like ontology and LOD tree viewers. Semantic-JSON services of SciNetS.org are provided at http://semanticjson.org. PMID:21632604
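The pattern this abstract describes, scripted access to fragments of linked data returned as JSON, can be sketched in a few lines. This example uses Python with a hypothetical payload shape; the field names are illustrative and are not the actual Semantic-JSON schema:

```python
import json

# Hypothetical JSON fragment of linked data, as a service might return it:
payload = """
{
  "record": "gene:XYZ1",
  "links": [
    {"predicate": "expressed_in", "object": "tissue:liver"},
    {"predicate": "ortholog_of",  "object": "gene:Xyz1_mouse"}
  ]
}
"""

data = json.loads(payload)

# Follow only the links matching one predicate:
objects = [link["object"] for link in data["links"]
           if link["predicate"] == "expressed_in"]
print(objects)  # ['tissue:liver']
```

The appeal the authors cite is exactly this: JSON fragments drop directly into the native data structures of scripting languages such as Perl, Ruby, or Python, without a SPARQL layer.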

  11. Contamination detection NDE for cleaning process inspection

    NASA Technical Reports Server (NTRS)

    Marinelli, W. J.; Dicristina, V.; Sonnenfroh, D.; Blair, D.

    1995-01-01

In the joining of multilayer materials, and in welding, the cleanliness of the joining surface may play a large role in the quality of the resulting bond. No non-intrusive techniques are currently available for the rapid measurement of contamination on large or irregularly shaped structures prior to the joining process. An innovative technique for the measurement of contaminant levels in these structures using laser-based imaging is presented. The approach uses an ultraviolet excimer laser to illuminate large and/or irregular surface areas. The UV light induces fluorescence and is scattered from the contaminants. The illuminated area is viewed by an image-intensified CCD (charge-coupled device) camera interfaced to a PC. The camera measures the fluorescence and/or scattering from the contaminants for comparison with established standards. Single-shot measurements of contamination levels are possible. Hence, the technique may be used for on-line NDE testing during manufacturing processes.

  12. FPGA implementation of Santos-Victor optical flow algorithm for real-time image processing: an useful attempt

    NASA Astrophysics Data System (ADS)

    Cobos Arribas, Pedro; Monasterio Huelin Macia, Felix

    2003-04-01

    An FPGA-based hardware implementation of the Santos-Victor optical flow algorithm, useful in robot guidance applications, is described in this paper. The system contains an ALTERA FPGA (20K100), an interface to a digital camera, three VRAM memories holding the input data, and output memories (a VRAM and an EDO) holding the results. The system had previously been used to develop and test other vision algorithms, such as image compression and optical flow calculation with differential and correlation methods. The designed system lets the digital camera, or the FPGA output (the algorithm results), connect to a PC through its FireWire or USB port. The problems encountered on this occasion have motivated the adoption of a different hardware structure for certain vision algorithms with special requirements that need very code-intensive processing.
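
    The correlation method mentioned above can be sketched in software as a minimal block-matching search. This is pure Python and illustrative only; the paper's implementation is FPGA logic, not this code:

```python
def block_match(prev, curr, y, x, b=2, search=2):
    """Find the displacement (dy, dx) of the b x b block at (y, x) in
    `prev` that best matches `curr`, by minimising the sum of absolute
    differences (SAD) over a small search window."""
    best, best_dyx = None, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            sad = 0
            for i in range(b):
                for j in range(b):
                    sad += abs(prev[y + i][x + j] - curr[y + dy + i][x + dx + j])
            if best is None or sad < best:
                best, best_dyx = sad, (dy, dx)
    return best_dyx
```

    An FPGA computes the SADs for all candidate displacements in parallel pipelines, which is what makes real-time operation feasible; the software loop above only states the arithmetic being parallelised.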

  13. Report of the facility definition team spacelab UV-Optical Telescope Facility

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Scientific requirements for the Spacelab Ultraviolet-Optical Telescope (SUOT) facility are presented. Specific programs involving high angular resolution imagery over wide fields, far ultraviolet spectroscopy, precisely calibrated spectrophotometry and spectropolarimetry over a wide wavelength range, and planetary studies, including high resolution synoptic imagery, are recommended. Specifications for the mounting configuration, instrument mounting system, optical parameters, and the pointing and stabilization system are presented. Concepts for the focal plane instruments are defined. The functional requirements of the direct imaging camera, far ultraviolet spectrograph, and the precisely calibrated spectrophotometer are detailed, and the planetary camera concept is outlined. Operational concepts described in detail are: the makeup and functions of shuttle payload crew, extravehicular activity requirements, telescope control and data management, payload operations control room, orbital constraints, and orbital interfaces (stabilization, maneuvering requirements and attitude control, contamination, utilities, and payload weight considerations).

  14. MONICA: A Compact, Portable Dual Gamma Camera System for Mouse Whole-Body Imaging

    PubMed Central

    Xi, Wenze; Seidel, Jurgen; Karkareka, John W.; Pohida, Thomas J.; Milenic, Diane E.; Proffitt, James; Majewski, Stan; Weisenberger, Andrew G.; Green, Michael V.; Choyke, Peter L.

    2009-01-01

    Introduction: We describe a compact, portable dual-gamma camera system (named “MONICA” for MObile Nuclear Imaging CAmeras) for visualizing and analyzing the whole-body biodistribution of putative diagnostic and therapeutic single photon emitting radiotracers in animals the size of mice. Methods: Two identical, miniature pixelated NaI(Tl) gamma cameras were fabricated and installed “looking up” through the tabletop of a compact portable cart. Mice are placed directly on the tabletop for imaging. Camera imaging performance was evaluated with phantoms and field performance was evaluated in a weeklong In-111 imaging study performed in a mouse tumor xenograft model. Results: Tc-99m performance measurements, using a photopeak energy window of 140 keV ± 10%, yielded the following results: spatial resolution (FWHM at 1-cm), 2.2-mm; sensitivity, 149 cps/MBq (5.5 cps/μCi); energy resolution (FWHM), 10.8%; count rate linearity (count rate vs. activity), r2 = 0.99 for 0–185 MBq (0–5 mCi) in the field-of-view (FOV); spatial uniformity, < 3% count rate variation across the FOV. Tumor and whole-body distributions of the In-111 agent were well visualized in all animals in 5-minute images acquired throughout the 168-hour study period. Conclusion: Performance measurements indicate that MONICA is well suited to whole-body single photon mouse imaging. The field study suggests that inter-device communications and user-oriented interfaces included in the MONICA design facilitate use of the system in practice. We believe that MONICA may be particularly useful early in the (cancer) drug development cycle where basic whole-body biodistribution data can direct future development of the agent under study and where logistical factors, e.g. limited imaging space, portability, and, potentially, cost are important. PMID:20346864
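
    Two of the figures quoted in the Results follow from simple arithmetic; a sketch (the 0.037 MBq/µCi conversion is the standard definition, and the FWHM in keV is derived from the quoted 10.8% at 140 keV):

```python
def energy_resolution_pct(fwhm_kev, peak_kev):
    """Percent energy resolution: photopeak FWHM over photopeak energy."""
    return 100.0 * fwhm_kev / peak_kev

def cps_per_mbq(cps_per_uci):
    """Convert sensitivity from cps/uCi to cps/MBq (1 uCi = 0.037 MBq)."""
    return cps_per_uci / 0.037

# 5.5 cps/uCi -> about 148.6 cps/MBq, matching the quoted 149 cps/MBq.
# A 15.12 keV FWHM at the 140 keV peak gives the quoted 10.8%.
```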

  15. The Mars Science Laboratory Curiosity rover Mastcam instruments: Preflight and in-flight calibration, validation, and data archiving

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Godber, A.; McNair, S.; Caplinger, M. A.; Maki, J. N.; Lemmon, M. T.; Van Beek, J.; Malin, M. C.; Wellington, D.; Kinch, K. M.; Madsen, M. B.; Hardgrove, C.; Ravine, M. A.; Jensen, E.; Harker, D.; Anderson, R. B.; Herkenhoff, K. E.; Morris, R. V.; Cisneros, E.; Deen, R. G.

    2017-07-01

    The NASA Curiosity rover Mast Camera (Mastcam) system is a pair of fixed-focal length, multispectral, color CCD imagers mounted 2 m above the surface on the rover's remote sensing mast, along with associated electronics and an onboard calibration target. The left Mastcam (M-34) has a 34 mm focal length, an instantaneous field of view (IFOV) of 0.22 mrad, and a FOV of 20° × 15° over the full 1648 × 1200 pixel span of its Kodak KAI-2020 CCD. The right Mastcam (M-100) has a 100 mm focal length, an IFOV of 0.074 mrad, and a FOV of 6.8° × 5.1° using the same detector. The cameras are separated by 24.2 cm on the mast, allowing stereo images to be obtained at the resolution of the M-34 camera. Each camera has an eight-position filter wheel, enabling it to take Bayer pattern red, green, and blue (RGB) "true color" images, multispectral images in nine additional bands spanning 400-1100 nm, and images of the Sun in two colors through neutral density-coated filters. An associated Digital Electronics Assembly provides command and data interfaces to the rover, 8 Gb of image storage per camera, 11 bit to 8 bit companding, JPEG compression, and acquisition of high-definition video. Here we describe the preflight and in-flight calibration of Mastcam images, the ways that they are being archived in the NASA Planetary Data System, and the ways that calibration refinements are being developed as the investigation progresses on Mars. We also provide some examples of data sets and analyses that help to validate the accuracy and precision of the calibration.
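
    The quoted optical numbers are mutually consistent if one assumes the KAI-2020's 7.4 µm pixel pitch (an assumption here; the pitch is not stated in the abstract). A small sketch of the geometry:

```python
import math

def ifov_mrad(pixel_pitch_um, focal_length_mm):
    """Instantaneous field of view of one pixel, in milliradians.
    (um / mm) carries the factor of 1000 that converts rad to mrad."""
    return pixel_pitch_um / focal_length_mm

def fov_deg(n_pixels, pixel_pitch_um, focal_length_mm):
    """Full field of view across n_pixels, in degrees."""
    half_width_mm = n_pixels * pixel_pitch_um / 1000.0 / 2.0
    return math.degrees(2.0 * math.atan(half_width_mm / focal_length_mm))

# M-34: ifov_mrad(7.4, 34) ~ 0.22 mrad; fov_deg(1648, 7.4, 34) ~ 20.3 deg
# and fov_deg(1200, 7.4, 34) ~ 14.9 deg, matching the quoted 20 x 15 deg.
```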

  16. Bridging research with innovative products: a compact hyperspectral camera for investigating artworks: a feasibility study

    NASA Astrophysics Data System (ADS)

    Cucci, Costanza; Casini, Andrea; Stefani, Lorenzo; Picollo, Marcello; Jussila, Jouni

    2017-07-01

    For more than a decade, a number of studies and research projects have been devoted to customize hyperspectral imaging techniques to the specific needs of conservation and applications in museum context. A growing scientific literature definitely demonstrated the effectiveness of reflectance hyperspectral imaging for non-invasive diagnostics and highquality documentation of 2D artworks. Additional published studies tackle the problems of data-processing, with a focus on the development of algorithms and software platforms optimised for visualisation and exploitation of hyperspectral bigdata sets acquired on paintings. This scenario proves that, also in the field of Cultural Heritage (CH), reflectance hyperspectral imaging has nowadays reached the stage of mature technology, and is ready for the transition from the R&D phase to the large-scale applications. In view of that, a novel concept of hyperspectral camera - featuring compactness, lightness and good usability - has been developed by SPECIM, Spectral Imaging Ltd. (Oulu, Finland), a company in manufacturing products for hyperspectral imaging. The camera is proposed as new tool for novel applications in the field of Cultural Heritage. The novelty of this device relies in its reduced dimensions and weight and in its user-friendly interface, which make this camera much more manageable and affordable than conventional hyperspectral instrumentation. The camera operates in the 400-1000nm spectral range and can be mounted on a tripod. It can operate from short-distance (tens of cm) to long distances (tens of meters) with different spatial resolutions. The first release of the prototype underwent a preliminary in-depth experimentation at the IFAC-CNR laboratories. 
This paper illustrates the feasibility study carried out on the new SPECIM hyperspectral camera, tested under different conditions on laboratory targets and artworks with the specific aim of defining its potentialities and weaknesses in its use in the Cultural Heritage field.

  17. System for critical infrastructure security based on multispectral observation-detection module

    NASA Astrophysics Data System (ADS)

    Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław

    2013-10-01

    Recent terrorist attacks, and the possibility of such actions in the future, have driven the development of security systems for critical infrastructures that embrace both sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on a ring of two-zone fencing and visible-light cameras with illumination, is being displaced by multisensor systems consisting of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - microwave and millimetre wavelengths that detect reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the installation conditions and sensor parameters. This procedure enables the construction of a system with correlated range, resolution, field of view and object identification. An important technical problem in the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatically guiding cameras to an object detected by the radar, tracking the object and localizing it on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, the system provides flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors, monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and reduces operator workload with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. 
The paper presents a structure and some elements of critical infrastructure protection solution which is based on a modular multisensor security system. System description is focused mainly on methodology of selection of sensors parameters. The results of the tests in real conditions are also presented.
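
    The radar-to-camera cueing step can be illustrated with a toy geometric sketch. The frame conventions, offset and function names below are illustrative assumptions, not the system's actual interface:

```python
import math

def cue_camera(target_range_m, target_azimuth_deg, camera_offset_east_m):
    """Pan angle (degrees) a camera should be driven to in order to point
    at a radar detection given by (range, azimuth), when the camera is
    displaced east of the radar in a local east/north frame."""
    az = math.radians(target_azimuth_deg)
    # Target position relative to the radar.
    east = target_range_m * math.sin(az)
    north = target_range_m * math.cos(az)
    # Pan angle as seen from the displaced camera.
    return math.degrees(math.atan2(east - camera_offset_east_m, north))

# A target 100 m due north of the radar requires a camera 10 m to the
# east to pan about -5.7 degrees (slightly west of north).
```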

  18. A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules

    NASA Astrophysics Data System (ADS)

    Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.

    2002-10-01

    We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules resulting in a 1024-pixel camera covering an area of 9.6 cm × 9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm × 3 mm × 5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of Tc-99m. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm and located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using an all-purpose and high-sensitivity parallel hexagonal holes collimator, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
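
    The winner-take-all selection described above can be mimicked in software as a two-level argmax. The data layout is a simplifying assumption; the real system performs this step with a custom analog IC before digitization:

```python
def winner(module_outputs):
    """Given one list of 64 pixel amplitudes per module, return the
    (module index, pixel index) of the largest amplitude overall."""
    peaks = [max(pixels) for pixels in module_outputs]
    m = peaks.index(max(peaks))            # "winner" module
    p = module_outputs[m].index(peaks[m])  # "winner" pixel within module
    return m, p
```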

  19. Low-cost thermo-electric infrared FPAs and their automotive applications

    NASA Astrophysics Data System (ADS)

    Hirota, Masaki; Ohta, Yoshimi; Fukuyama, Yasuhiro

    2008-04-01

    This paper describes three low-cost infrared focal plane arrays (FPAs), having 1,536, 2,304, and 10,800 elements, and experimental vehicle systems. They have low-cost potential because each element consists of p-n polysilicon thermocouples, which allows the use of low-cost ultra-fine microfabrication technology commonly employed in conventional semiconductor manufacturing processes. To increase the responsivity of the FPA, we have developed a precisely patterned Au-black absorber with infrared absorptivity of more than 90%. The FPA with 2,304 elements achieved a high responsivity of 4,300 V/W. To reduce package cost, we developed a vacuum-sealed package integrated with a molded ZnS lens. The camera, aimed at temperature measurement of a passenger cabin, is a compact, lightweight device that measures 45 x 45 x 30 mm and weighs 190 g. The camera achieves a noise-equivalent temperature difference (NETD) of less than 0.7°C from 0 to 40°C. In this paper, we also present several experimental systems that use infrared cameras. One experimental system is a blind-spot pedestrian warning system that employs four infrared cameras. It can detect the infrared radiation emitted from a human body and alerts the driver when a pedestrian is in a blind spot. The system can also prevent the vehicle from moving in the direction of the pedestrian. Another system uses a visible-light camera and infrared sensors to detect the presence of a pedestrian in a rear blind spot and alerts the driver. The third system is a new type of human-machine interface that enables the driver to control the car's audio system without letting go of the steering wheel. Uncooled infrared cameras are still costly, which limits their automotive use to high-end luxury cars at present. To promote widespread use of IR imaging sensors in vehicles, their cost must be reduced further.

  20. Development of the FITS tools package for multiple software environments

    NASA Technical Reports Server (NTRS)

    Pence, W. D.; Blackburn, J. K.

    1992-01-01

    The HEASARC is developing a package of general purpose software for analyzing data files in FITS format. This paper describes the design philosophy which makes the software both machine-independent (it runs on VAXs, Suns, and DEC-stations) and software environment-independent. Currently the software can be compiled and linked to produce IRAF tasks, or alternatively, the same source code can be used to generate stand-alone tasks using one of two implementations of a user-parameter interface library. The machine independence of the software is achieved by writing the source code in ANSI standard Fortran or C, using the machine-independent FITSIO subroutine interface for all data file I/O, and using a standard user-parameter subroutine interface for all user I/O. The latter interface is based on the Fortran IRAF Parameter File interface developed at STScI. The IRAF tasks are built by linking to the IRAF implementation of this parameter interface library. Two other implementations of this parameter interface library, which have no IRAF dependencies, are now available which can be used to generate stand-alone executable tasks. These stand-alone tasks can simply be executed from the machine operating system prompt either by supplying all the task parameters on the command line or by entering the task name after which the user will be prompted for any required parameters. A first release of this FTOOLS package is now publicly available. The currently available tasks are described, along with instructions on how to obtain a copy of the software.
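
    Much of what the FITSIO layer hides is the FITS convention of 80-character header "cards". A minimal parsing sketch (simplified; real FITS parsing must also handle cases such as CONTINUE cards and escaped apostrophes):

```python
def parse_card(card):
    """Parse one 80-character FITS header card of the form
    'KEYWORD = value / comment' into a (keyword, value) pair."""
    key = card[:8].strip()
    if card[8:10] != "= ":          # commentary card: no value indicator
        return key, None
    body = card[10:].split("/", 1)[0].strip()   # drop inline comment
    if body.startswith("'"):                    # string value
        return key, body.strip("'").strip()
    if body in ("T", "F"):                      # logical value
        return key, body == "T"
    try:
        return key, int(body)                   # integer value
    except ValueError:
        return key, float(body)                 # floating-point value
```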

  1. A high speed telemetry data link for an autonomous roving vehicle

    NASA Technical Reports Server (NTRS)

    Cipolle, D. J.

    1980-01-01

    A data link system used on a prototype autonomous roving vehicle is described. This system provides a means of acquiring, formatting, and transmitting information on board the vehicle to a controlling computer. Included is a statement of requirements and the design philosophy. Additionally, interfacing with the rover systems is discussed, along with the overall performance of the telemetry link.

  2. Recombinant human erythropoietin (rHuEPO): cross-linking with disuccinimidyl esters and identification of the interfacing domains in EPO.

    PubMed Central

    Haniu, M.; Narhi, L. O.; Arakawa, T.; Elliott, S.; Rohde, M. F.

    1993-01-01

    Several amino groups of recombinant human erythropoietin are selectively cross-linked by specific cross-linkers, including disuccinimidyl suberate and dithiobis(succinimidyl propionate). Intramolecular cross-links are obtained without significant change of the protein conformation using appropriate concentrations (0.2 mM) of the cross-linkers, which possess an 11-12-Å spacer between the two reacting groups. The intramolecularly cross-linked peptides obtained suggest that several amino groups in erythropoietin (EPO) are positioned within about 12 Å of each other in the solution state. These interfacing amino groups include Lys 20-Lys 154, Lys 45-Lys 140, Lys 52-Lys 154, Lys 52-Lys 140, and Ala 1-Lys 116. A comparison of the cross-linking results between nonglycosylated EPO and glycosylated EPO suggests that both proteins retain highly similar protein conformations. These results fit a structural model similar to that of human growth hormone, in which four alpha-helical bundles and a long stretch of beta-sheet structure are involved in the active protein. PMID:8401229

  3. Performance analysis and enhancement for visible light communication using CMOS sensors

    NASA Astrophysics Data System (ADS)

    Guan, Weipeng; Wu, Yuxiang; Xie, Canyu; Fang, Liangtao; Liu, Xiaowei; Chen, Yingcong

    2018-03-01

    Complementary Metal-Oxide-Semiconductor (CMOS) sensors are widely used in mobile phones and cameras. Hence, it is attractive if these cameras can be used as receivers for visible light communication (VLC). Using the rolling-shutter mechanism can increase the data rate of VLC based on a CMOS camera, and different techniques have been proposed to improve demodulation of the rolling-shutter signal. However, these techniques are too complex. In this work, we demonstrate and analyze the performance of a VLC link using a CMOS camera with different LED luminaires, for the first time to our knowledge. Experimental evaluations comparing their bit-error-rate (BER) performance and demodulation are also performed; they show that simply changing to an LED luminaire with more uniform light output eliminates the blooming effect, which not only reduces the complexity of demodulation but also enhances communication quality. In addition, we propose and demonstrate the use of contrast-limited adaptive histogram equalization to extend the transmission distance and mitigate the influence of background noise. The experimental results show that the BER can be decreased by an order of magnitude using the proposed method.
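
    The rolling-shutter demodulation underlying this work reduces, in the simplest case, to thresholding per-row intensities, since each row integrates light over a slightly different time and a rapidly toggled LED appears as bright/dark stripes. A toy sketch on synthetic stripe data (real frames would first need the equalization step discussed above, and a clock-recovery step to group rows into symbols):

```python
def decode_rows(frame, threshold):
    """Turn a grayscale frame (list of pixel rows) into a bit per row by
    thresholding each row's mean intensity."""
    bits = []
    for row in frame:
        mean = sum(row) / len(row)
        bits.append(1 if mean > threshold else 0)
    return bits

# Bright and dark stripes alternate row by row in this synthetic frame.
frame = [[200] * 4, [10] * 4, [220] * 4, [5] * 4]
# decode_rows(frame, 100) recovers the on/off pattern [1, 0, 1, 0].
```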

  4. RNA–protein binding interface in the telomerase ribonucleoprotein

    PubMed Central

    Bley, Christopher J.; Qi, Xiaodong; Rand, Dustin P.; Borges, Chad R.; Nelson, Randall W.; Chen, Julian J.-L.

    2011-01-01

    Telomerase is a specialized reverse transcriptase containing an intrinsic telomerase RNA (TR) which provides the template for telomeric DNA synthesis. Distinct from conventional reverse transcriptases, telomerase has evolved a unique TR-binding domain (TRBD) in the catalytic telomerase reverse transcriptase (TERT) protein, integral for ribonucleoprotein assembly. Two structural elements in the vertebrate TR, the pseudoknot and CR4/5, bind TERT independently and are essential for telomerase enzymatic activity. However, the details of the TR–TERT interaction have remained elusive. In this study, we employed a photoaffinity cross-linking approach to map the CR4/5-TRBD RNA–protein binding interface by identifying RNA and protein residues in close proximity. Photoreactive 5-iodouridines were incorporated into the medaka CR4/5 RNA fragment and UV cross-linked to the medaka TRBD protein fragment. The cross-linking RNA residues were identified by alkaline partial hydrolysis and cross-linked protein residues were identified by mass spectrometry. Three CR4/5 RNA residues (U182, U187, and U205) were found cross-linking to TRBD amino acids Tyr503, Phe355, and Trp477, respectively. This CR4/5 binding pocket is distinct and separate from the previously proposed T pocket in the Tetrahymena TRBD. Based on homologous structural models, our cross-linking data position the essential loop L6.1 adjacent to the TERT C-terminal extension domain. We thus propose that stem-loop 6.1 facilitates proper TERT folding by interacting with both TRBD and C-terminal extension. Revealing the telomerase CR4/5-TRBD binding interface with single-residue resolution provides important insights into telomerase ribonucleoprotein architecture and the function of the essential CR4/5 domain. PMID:22123986

  5. Smart nanogels at the air/water interface: structural studies by neutron reflectivity

    NASA Astrophysics Data System (ADS)

    Zielińska, Katarzyna; Sun, Huihui; Campbell, Richard A.; Zarbakhsh, Ali; Resmini, Marina

    2016-02-01

    The development of effective transdermal drug delivery systems based on nanosized polymers requires a better understanding of the behaviour of such nanomaterials at interfaces. N-Isopropylacrylamide-based nanogels synthesized with different percentages of N,N'-methylenebisacrylamide as cross-linker, ranging from 10 to 30%, were characterized at physiological temperature at the air/water interface, using neutron reflectivity (NR), with isotopic contrast variation, and surface tension measurements; this allowed us to resolve the adsorbed amount and the volume fraction of nanogels at the interface. A large conformational change for the nanogels results in strong deformations at the interface. As the percentage of cross-linker incorporated in the nanogels becomes higher, more rigid matrices are obtained, although less deformed, and the amount of adsorbed nanogels is increased. The data provide the first experimental evidence of structural changes of nanogels as a function of the degree of cross-linking at the air/water interface. Electronic supplementary information (ESI) available. See DOI: 10.1039/c5nr07538f

  6. Recommendation on Transition from Primary/Secondary Radar to Secondary- Only Radar Capability

    DTIC Science & Technology

    1994-10-01

    Radar Beacon Performance Monitor; RCIU, Remote Control Interface Unit; RCL, Remote Communications Link; R E&D, Research, Engineering and Development; RML, Radar... rate. 3.1.2.5 Maintenance: The current LRRs have limited remote maintenance monitoring (RMM) capabilities via the Remote Control Interface Unit (RCIU)... -1, -2 and FPS-20 radars required an upgrade of some of the radar subsystems, namely the RCIU to respond as an RMS and the CD to interface with radar

  7. Video stroke assessment (VSA) project: design and production of a prototype system for the remote diagnosis of stroke

    NASA Astrophysics Data System (ADS)

    Urias, Adrian R.; Draghic, Nicole; Lui, Janet; Cho, Angie; Curtis, Calvin; Espinosa, Joseluis; Wottawa, Christopher; Wiesmann, William P.; Schwamm, Lee H.

    2005-04-01

    Stroke remains the third most frequent cause of death in the United States and the leading cause of disability in adults. Long-term effects of ischemic stroke can be mitigated by the opportune administration of Tissue Plasminogen Activator (t-PA); however, the decision regarding the appropriate use of this therapy is dependant on timely, effective neurological assessment by a trained specialist. The lack of available stroke expertise is a key barrier preventing frequent use of t-PA. We report here on the development of a prototype research system capable of performing a semi-automated neurological examination from an offsite location via the Internet and a Computed Tomography (CT) scanner to facilitate the diagnosis and treatment of acute stroke. The Video Stroke Assessment (VSA) System consists of a video camera, a camera mounting frame, and a computer with software and algorithms to collect, interpret, and store patient neurological responses to stimuli. The video camera is mounted on a mobility track in front of the patient; camera direction and zoom are remotely controlled on a graphical user interface (GUI) by the specialist. The VSA System also performs a partially-autonomous examination based on the NIH Stroke Scale (NIHSS). Various response data indicative of stroke are recorded, analyzed and transmitted in real time to the specialist. The VSA provides unbiased, quantitative results for most categories of the NIHSS along with video and audio playback to assist in accurate diagnosis. The system archives the complete exam and results.

  8. Touchscreen everywhere: on transferring a normal planar surface to a touch-sensitive display.

    PubMed

    Dai, Jingwen; Chung, Chi-Kit Ronald

    2014-08-01

    We address how a human-computer interface with small device size, large display, and touch-input facility can be made possible by a mere projector and camera. The realization is through the use of a properly embedded structured light sensing scheme that enables a regular light-colored table surface to serve the dual roles of both a projection screen and a touch-sensitive display surface. A random binary pattern is employed to code structured light in pixel accuracy, which is embedded into the regular projection display in a way that the user perceives only regular display but not the structured pattern hidden in the display. With the projection display on the table surface being imaged by a camera, the observed image data, plus the known projection content, can work together to probe the 3-D workspace immediately above the table surface, like deciding if there is a finger present and if the finger touches the table surface, and if so, at what position on the table surface the contact is made. All the decisions hinge upon a careful calibration of the projector-camera-table surface system, intelligent segmentation of the hand in the image data, and exploitation of the homography mapping existing between the projector's display panel and the camera's image plane. Extensive experimentation including evaluation of the display quality, hand segmentation accuracy, touch detection accuracy, trajectory tracking accuracy, multitouch capability and system efficiency are shown to illustrate the feasibility of the proposed realization.
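
    The homography mapping between the camera's image plane and the projector's display panel, central to the touch localization above, acts on points as follows (the matrix used in the example is illustrative, not a calibrated one):

```python
def apply_homography(H, x, y):
    """Map an image point (x, y) through the 3x3 homography H, using
    homogeneous coordinates and dividing out the projective scale w."""
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    u = (H[0][0] * x + H[0][1] * y + H[0][2]) / w
    v = (H[1][0] * x + H[1][1] * y + H[1][2]) / w
    return u, v

# With an illustrative scale-and-shift homography, the camera point
# (3, 4) lands at panel coordinates (7, 8).
H = [[2, 0, 1], [0, 2, 0], [0, 0, 1]]
```

    In practice H is estimated once during calibration from at least four point correspondences between the projected pattern and its camera image.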

  9. Matching brain-machine interface performance to space applications.

    PubMed

    Citi, Luca; Tonet, Oliver; Marinelli, Martina

    2009-01-01

    A brain-machine interface (BMI) is a particular class of human-machine interface (HMI). BMIs have so far been studied mostly as a communication means for people who have little or no voluntary control of muscle activity. For able-bodied users, such as astronauts, a BMI would only be practical if conceived as an augmenting interface. A method is presented for pointing out effective combinations of HMIs and applications of robotics and automation to space. Latency and throughput are selected as performance measures for a hybrid bionic system (HBS), that is, the combination of a user, a device, and a HMI. We classify and briefly describe HMIs and space applications and then compare the performance of classes of interfaces with the requirements of classes of applications, both in terms of latency and throughput. Regions of overlap correspond to effective combinations. Devices requiring simpler control, such as a rover, a robotic camera, or environmental controls are suitable to be driven by means of BMI technology. Free flyers and other devices with six degrees of freedom can be controlled, but only at low-interactivity levels. More demanding applications require conventional interfaces, although they could be controlled by BMIs once the same levels of performance as currently recorded in animal experiments are attained. Robotic arms and manipulators could be the next frontier for noninvasive BMIs. Integrating smart controllers in HBSs could improve interactivity and boost the use of BMI technology in space applications.
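
    The matching method described, overlapping interface performance with application requirements in latency and throughput, can be sketched as a simple interval-overlap test. All numbers and field names below are illustrative placeholders, not values from the paper:

```python
def overlaps(a, b):
    """True when closed intervals a = (lo, hi) and b = (lo, hi) overlap."""
    return a[0] <= b[1] and b[0] <= a[1]

def effective(hmi, app):
    """An HMI class is an effective match for an application class when
    both its latency and throughput ranges overlap the application's."""
    return (overlaps(hmi["latency_s"], app["latency_s"])
            and overlaps(hmi["throughput_bps"], app["throughput_bps"]))

# Illustrative classes: a slow, low-throughput BMI against a
# low-interactivity application such as driving a rover.
bmi = {"latency_s": (0.5, 2.0), "throughput_bps": (1, 20)}
rover = {"latency_s": (1.0, 5.0), "throughput_bps": (5, 50)}
```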

  10. Shock interaction with a two-gas interface in a novel dual-driver shock tube

    NASA Astrophysics Data System (ADS)

    Labenski, John R.

    Fluid instabilities exist at the interface between two fluids of different densities if the flow velocity and density gradient are anti-parallel or if a shock wave crosses the boundary. The former case is called the Rayleigh-Taylor (R-T) instability and the latter the Richtmyer-Meshkov (R-M) instability. Small initial perturbations on the interface destabilize and grow into larger-amplitude structures, leading to turbulent mixing. Instabilities of this type are seen in inertial confinement fusion (ICF) experiments, laser-produced plasmas, supernova explosions, and detonations. A novel dual-driver shock tube was used to investigate the growth rate of the R-M instability. One driver creates an argon-refrigerant interface, and the other, at the opposite end of the driven section, generates a shock that forces the interface with the compressible flow behind the shock. The refrigerant gas in the first driver is seeded with sub-micron oil droplets for visualization of the interface. The interface travels down the driven section past the test section for a fixed amount of time. A stronger shock, of Mach 1.1 to 1.3, drives the interface back past the test section, where flow diagnostics are positioned. Two schlieren systems record the density fluctuations while light-scattering detectors record the density of the refrigerant as a function of position across the interface. A pair of digital cameras takes stereo images of the interface, as mapped out by the tracer particles under illumination by a Q-switched ruby laser. The time the interface is allowed to travel up the driven section sets the interaction time and serves as an experimental control. Comparisons among the schlieren signals, light-scattering detector outputs, and the images quantify the fingered character of the interface and its growth due to shock forcing. The results show that the interface has a distribution of thickness, that interaction with a shock further broadens the interface, and that the growth rate depends on the shock strength.
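    For context, the linear growth rate of the R-M instability is often estimated with Richtmyer's impulsive model, da/dt = k·A·Δu·a₀, with wavenumber k = 2π/λ, post-shock Atwood number A, interface velocity jump Δu, and post-shock amplitude a₀. A minimal sketch with illustrative numbers (not values taken from this experiment):

```python
import math

def rm_growth_rate(wavelength_m, rho_light, rho_heavy, delta_u, amplitude0):
    """Linear R-M growth rate from Richtmyer's impulsive model:
    da/dt = k * A * du * a0. Densities and amplitude are post-shock values."""
    k = 2 * math.pi / wavelength_m                              # wavenumber
    atwood = (rho_heavy - rho_light) / (rho_heavy + rho_light)  # Atwood number
    return k * atwood * delta_u * amplitude0                    # m/s

# Illustrative inputs: argon against a heavier refrigerant vapor, a 1 cm
# perturbation wavelength, a modest velocity jump, and a 0.5 mm amplitude.
rate = rm_growth_rate(0.01, rho_light=1.78, rho_heavy=4.25,
                      delta_u=60.0, amplitude0=5e-4)
print(f"initial growth rate ~ {rate:.2f} m/s")
```

    The impulsive model captures only the early, linear regime; the interface broadening measured in the experiment above occurs in the later, nonlinear stage.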

  11. LINKING THE CMAQ AND HYSPLIT MODELING SYSTEM INTERFACE PROGRAM AND EXAMPLE APPLICATION

    EPA Science Inventory

    A new software tool has been developed to link the Eulerian-based Community Multiscale Air Quality (CMAQ) modeling system with the Lagrangian-based HYSPLIT (HYbrid Single-Particle Lagrangian Integrated Trajectory) model. Both models require many of the same hourly meteorological...

  12. Making and Shaping Participatory Spaces: Resemiotization and Citizenship Agency in South Africa

    ERIC Educational Resources Information Center

    Kerfoot, Caroline

    2011-01-01

    In South Africa, democratic consolidation involves not only building a new state, but also new interfaces between state and society. To strengthen the agency of citizens at these interfaces, recent approaches to development stress the notion of "participatory citizenship." The purpose of this article is to explore the links, rarely…

  13. Magnet measurement interfacing to the G-64 Euro standard bus and testing G-64 modules

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hogrefe, R.L.

    1995-07-01

    The Magnet Measurement system utilizes various modules with a G-64 Euro (Gespac) Standard Interface. All modules are designed to be software controlled, normally under the constraints of the OS-9 operating system with all data transfers to a host computer accomplished by a serial link.

  14. SCAILET - An intelligent assistant for satellite ground terminal operations

    NASA Technical Reports Server (NTRS)

    Shahidi, A. K.; Crapo, J. A.; Schlegelmilch, R. F.; Reinhart, R. C.; Petrik, E. J.; Walters, J. L.; Jones, R. E.

    1992-01-01

    Space communication artificial intelligence for the link evaluation terminal (SCAILET) is an experimenter interface to the link evaluation terminal (LET), developed by NASA through the application of artificial intelligence to an advanced ground terminal. The high-burst-rate (HBR) LET provides the capabilities required for wideband communications experiments with the Advanced Communications Technology Satellite (ACTS). The HBR-LET terminal consists of seven major subsystems and is controlled and monitored by a minicomputer through an IEEE-488 or RS-232 interface. Programming scripts configure the HBR-LET and allow data acquisition, but they are difficult to use, so the full capabilities of the system often go unused. An intelligent assistant module, developed as part of SCAILET, solves problems encountered during configuration of the HBR-LET system. The assistant is a graphical interface, with an expert system running in the background, that lets users configure instrumentation, build programming script sequences, and consult reference documentation. Its ease of use makes SCAILET a superior interface to the ASCII terminal, and continuous monitoring allows nearly flawless configuration and execution of HBR-LET experiments.
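    The scripted-configuration idea can be sketched in miniature: commands are validated before being sent over the serial-style link, which is the kind of mistake-catching the expert system provides. The command names and the mock link below are assumptions for illustration, not the actual HBR-LET command set:

```python
# Hypothetical command vocabulary; the real HBR-LET command set differs.
VALID_COMMANDS = {"SET_RATE", "SET_MODULATION", "START_ACQ"}

class MockSerialLink:
    """Stands in for the RS-232/IEEE-488 link to the terminal subsystems."""
    def __init__(self):
        self.sent = []
    def write(self, line):
        self.sent.append(line)

def run_script(link, script):
    """Validate each (command, argument) pair before sending it, so a bad
    script is rejected instead of mis-configuring a subsystem."""
    for cmd, arg in script:
        if cmd not in VALID_COMMANDS:
            raise ValueError(f"unknown command: {cmd}")
        link.write(f"{cmd} {arg}\r\n")
    return len(link.sent)

link = MockSerialLink()
n = run_script(link, [("SET_RATE", "220e6"), ("START_ACQ", "1")])
print(n)  # 2
```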

  15. Developing A Web-based User Interface for Semantic Information Retrieval

    NASA Technical Reports Server (NTRS)

    Berrios, Daniel C.; Keller, Richard M.

    2003-01-01

    While there are now a number of languages and frameworks that enable computer-based systems to search stored data semantically, the optimal design for effective user interfaces for such systems is still unclear. Such interfaces should mask unnecessary query detail from users, yet still allow them to build queries of arbitrary complexity without significant restrictions. We developed a user interface supporting semantic query generation for SemanticOrganizer, a tool used by scientists and engineers at NASA to construct networks of knowledge and data. Through this interface users can select node types, node attributes and node links to build ad-hoc semantic queries for searching the SemanticOrganizer network.
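    The node-type/attribute/link query pattern described above can be sketched in miniature. The node types, attributes, and link names below are hypothetical stand-ins, not the actual SemanticOrganizer schema:

```python
# Toy in-memory semantic network: typed nodes with attributes and named links.
nodes = {
    "n1": {"type": "Experiment", "attrs": {"year": 2002},
           "links": {"hasSample": ["n2"]}},
    "n2": {"type": "Sample", "attrs": {"origin": "field site"}, "links": {}},
    "n3": {"type": "Experiment", "attrs": {"year": 1999}, "links": {}},
}

def query(node_type, attr_test, required_link=None):
    """Return ids of nodes matching a type, an attribute predicate, and
    (optionally) the presence of a named outgoing link."""
    return [
        nid for nid, n in nodes.items()
        if n["type"] == node_type
        and attr_test(n["attrs"])
        and (required_link is None or n["links"].get(required_link))
    ]

# Ad-hoc query: experiments after 2000 that link to at least one sample.
print(query("Experiment", lambda a: a.get("year", 0) > 2000, "hasSample"))  # ['n1']
```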

  16. Experimental entanglement of 25 individually accessible atomic quantum interfaces

    PubMed Central

    Jiang, Nan; Chang, Wei; Li, Chang; Zhang, Sheng

    2018-01-01

    A quantum interface links the stationary qubits in a quantum memory with flying photonic qubits in optical transmission channels and constitutes a critical element for the future quantum internet. Entanglement of quantum interfaces is an important step for the realization of quantum networks. Through heralded detection of photon interference, we generate multipartite entanglement between 25 (or 9) individually addressable quantum interfaces in a multiplexed atomic quantum memory array and confirm genuine 22-partite (or 9-partite) entanglement. This experimental entanglement of a record-high number of individually addressable quantum interfaces makes an important step toward the realization of quantum networks, long-distance quantum communication, and multipartite quantum information processing. PMID:29725621

  17. Applied Meteorology Unit (AMU) Quarterly Report. First Quarter FY-05

    NASA Technical Reports Server (NTRS)

    Bauman, William; Wheeler, Mark; Lambert, Winifred; Case, Jonathan; Short, David

    2005-01-01

    This report summarizes the Applied Meteorology Unit (AMU) activities for the first quarter of Fiscal Year 2005 (October - December 2004). Tasks reviewed include: (1) Objective Lightning Probability Forecast: Phase I, (2) Severe Weather Forecast Decision Aid, (3) Hail Index, (4) Stable Low Cloud Evaluation, (5) Shuttle Ascent Camera Cloud Obstruction Forecast, (6) Range Standardization and Automation (RSA) and Legacy Wind Sensor Evaluation, (7) Advanced Regional Prediction System (ARPS) Optimization and Training Extension, and (8) User Control Interface for ARPS Data Analysis System (ADAS) Data Ingest.

  18. Theoretical Limits of Lunar Vision Aided Navigation with Inertial Navigation System

    DTIC Science & Technology

    2015-03-26

    camera model. Light reflected or projected from objects in the scene of the outside world is taken in by the aperture (or opening) shaped as a double... model's analog aspects with an analog-to-digital interface converting raw images of the outside world scene into digital information a computer can use to... Figure 2.7. Digital Image Coordinate System. Used with permission [30]. Angular Field of View. The angular field of view is the angle of the world scene
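    The angular field of view mentioned in the snippet follows from the ideal pinhole camera model: FOV = 2·atan(w / 2f) for sensor width w and focal length f. A minimal sketch with an illustrative sensor/lens pair (not values from the thesis):

```python
import math

def angular_fov_deg(sensor_width_mm, focal_length_mm):
    """Angular field of view of an ideal pinhole camera:
    FOV = 2 * atan(sensor_width / (2 * focal_length))."""
    return math.degrees(2 * math.atan(sensor_width_mm / (2 * focal_length_mm)))

# Example: a 36 mm-wide sensor behind a 50 mm lens.
print(f"{angular_fov_deg(36, 50):.1f} degrees")  # 39.6 degrees
```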

  19. KSC-07pd2612

    NASA Image and Video Library

    2007-09-28

    KENNEDY SPACE CENTER, FLA. -- In the Orbiter Processing Facility, STS-122 Mission Specialist Rex Walheim reaches toward the wing of space shuttle Atlantis. The crew is at Kennedy to take part in a crew equipment interface test, or CEIT, which helps familiarize them with equipment and payloads for the mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. STS-122 is targeted for launch in December. Photo credit: NASA/Kim Shiflett

  20. ESTminer: a Web interface for mining EST contig and cluster databases.

    PubMed

    Huang, Yecheng; Pumphrey, Janie; Gingle, Alan R

    2005-03-01

    ESTminer is a Web application and database schema for interactive mining of expressed sequence tag (EST) contig and cluster datasets. The Web interface contains a query frame that allows the selection of contigs/clusters with a specific cDNA library makeup or a threshold number of members. The results are displayed as color-coded tree nodes, where the color indicates the fractional size of each cDNA library component. The nodes are expandable, revealing library statistics as well as EST or contig members, with links to sequence data, GenBank records or user-configurable links. Also, the interface allows 'queries within queries', where the result set of a query is further filtered by the subsequent query. ESTminer is implemented in Java/JSP and the package, including MySQL and Oracle schema creation scripts, is available from http://cggc.agtec.uga.edu/Data/download.asp (contact: agingle@uga.edu).
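    The 'queries within queries' behavior amounts to filtering a previous result set rather than re-querying the full database. A minimal sketch with hypothetical contig records (field names are illustrative, not ESTminer's actual schema):

```python
# Hypothetical contig records: member count plus per-library EST makeup.
contigs = [
    {"id": "ctg1", "members": 12, "libraries": {"leaf": 8, "root": 4}},
    {"id": "ctg2", "members": 3,  "libraries": {"leaf": 3}},
    {"id": "ctg3", "members": 20, "libraries": {"root": 20}},
]

def refine(result_set, predicate):
    """Apply a further filter to an existing result set."""
    return [c for c in result_set if predicate(c)]

# First query: contigs above a threshold number of members...
big = refine(contigs, lambda c: c["members"] >= 10)
# ...then a query within that result: contigs with any 'leaf' library ESTs.
leaf_big = refine(big, lambda c: "leaf" in c["libraries"])
print([c["id"] for c in leaf_big])  # ['ctg1']
```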
