Sample records for pixels downlinked video

  1. Selecting Pixels for Kepler Downlink

    NASA Technical Reports Server (NTRS)

    Bryson, Stephen T.; Jenkins, Jon M.; Klaus, Todd C.; Cote, Miles T.; Quintana, Elisa V.; Hall, Jennifer R.; Ibrahim, Khadeejah; Chandrasekaran, Hema; Caldwell, Douglas A.; Van Cleve, Jeffrey E.

    2010-01-01

    The Kepler mission monitors > 100,000 stellar targets using 42 CCDs of 2200 x 1024 pixels each. Bandwidth constraints prevent the downlink of all 96 million pixels per 30-minute cadence, so the Kepler spacecraft downlinks a specified collection of pixels for each target. These pixels are selected by considering the object brightness, background, and the signal-to-noise of each pixel, and are optimized to maximize the signal-to-noise ratio of the target. This paper describes pixel selection, creation of spacecraft apertures that efficiently capture selected pixels, and aperture assignment to a target. Diagnostic apertures, short-cadence targets and custom specified shapes are discussed.
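
    As a concrete illustration of the selection rule this abstract describes, the toy sketch below greedily grows an aperture in descending per-pixel signal-to-noise order and stops where the aggregate SNR peaks. The arrays, units, and the simple aggregate-SNR objective are illustrative assumptions, not Kepler's actual calibrated noise model.

    ```python
    import numpy as np

    def select_aperture(signal, noise_var):
        """Greedy sketch of SNR-maximizing pixel selection: add pixels in
        descending per-pixel SNR order while the aggregate aperture SNR
        keeps improving. `signal` and `noise_var` are same-shaped 2-D
        arrays in illustrative units."""
        flat_sig = signal.ravel().astype(float)
        flat_var = noise_var.ravel().astype(float)
        order = np.argsort(flat_sig / np.sqrt(flat_var))[::-1]  # best pixels first
        best_snr, n_keep = -np.inf, 0
        cum_sig = cum_var = 0.0
        for k, idx in enumerate(order, start=1):
            cum_sig += flat_sig[idx]
            cum_var += flat_var[idx]
            snr = cum_sig / np.sqrt(cum_var)
            if snr > best_snr:
                best_snr, n_keep = snr, k
        mask = np.zeros(flat_sig.size, dtype=bool)
        mask[order[:n_keep]] = True
        return mask.reshape(signal.shape), best_snr
    ```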

  2. Bubble and Drop Nonlinear Dynamics experiment

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Bubble and Drop Nonlinear Dynamics (BDND) experiment was designed to improve understanding of how the shape and behavior of bubbles respond to ultrasound pressure. By understanding this behavior, it may be possible to counteract complications bubbles cause during materials processing on the ground. This 12-second sequence came from video downlinked from STS-94, July 5, 1997, MET:3/19:15 (approximate). The BDND guest investigator was Gary Leal of the University of California, Santa Barbara. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced fluid dynamics experiments will be a part of investigations planned for the International Space Station. (189KB JPEG, 1293 x 1460 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300163.html.

  3. Bubbles Responding to Ultrasound Pressure

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Bubble and Drop Nonlinear Dynamics (BDND) experiment was designed to improve understanding of how the shape and behavior of bubbles respond to ultrasound pressure. By understanding this behavior, it may be possible to counteract complications bubbles cause during materials processing on the ground. This 12-second sequence came from video downlinked from STS-94, July 5, 1997, MET:3/19:15 (approximate). The BDND guest investigator was Gary Leal of the University of California, Santa Barbara. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced fluid dynamics experiments will be a part of investigations planned for the International Space Station. (435KB, 13-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300162.html.

  4. Innovative Video Diagnostic Equipment for Material Science

    NASA Technical Reports Server (NTRS)

    Capuano, G.; Titomanlio, D.; Soellner, W.; Seidel, A.

    2012-01-01

    Materials science experiments under microgravity increasingly rely on advanced optical systems to determine the physical properties of the samples under investigation. This includes video systems with high spatial and temporal resolution. The acquisition, handling, storage and transmission to ground of the resulting video data are very challenging. Since the available downlink data rate is limited, the capability to compress the video data significantly without compromising the data quality is essential. We report on the development of a Digital Video System (DVS) for EML (Electro Magnetic Levitator) which provides real-time video acquisition, high compression using advanced wavelet algorithms, storage and transmission of a continuous flow of video with different characteristics in terms of image dimensions and frame rates. The DVS is able to operate with the latest generation of high-performance cameras, acquiring high-resolution video images up to 4 Mpixels @ 60 fps or high-frame-rate video images up to about 1000 fps @ 512 x 512 pixels.
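
    To make the bandwidth pressure concrete, here is a back-of-envelope calculation of the raw data rates implied by the two camera modes quoted above; the 8-bit monochrome pixel depth and the 2048 x 2048 reading of "4 Mpixels" are assumptions, as the abstract states neither.

    ```python
    # Raw (uncompressed) data rates for the two quoted DVS camera modes,
    # assuming 8-bit monochrome pixels (bit depth is an assumption).
    def raw_rate_gbps(width, height, fps, bits_per_pixel=8):
        return width * height * fps * bits_per_pixel / 1e9

    print(raw_rate_gbps(2048, 2048, 60))   # ~2.01 Gb/s for ~4 Mpixels @ 60 fps
    print(raw_rate_gbps(512, 512, 1000))   # ~2.10 Gb/s for 512 x 512 @ 1000 fps
    # Both far exceed any realistic payload downlink allocation, which is
    # why the wavelet compression stage is essential.
    ```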

  5. Live Ultra-High Definition from the International Space Station

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; George, Sandy

    2017-01-01

    The first ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a 'Super Session' at the National Association of Broadcasters (NAB) in April 2017. The Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. HEVC may also enable live Virtual Reality video downlinks from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a 'live' event was staged when the UHD coming from the ISS had a latency of 10+ seconds. Finally, the paper will discuss how NASA is leveraging commercial technologies for use on-orbit vs. creating technology as was required during the Apollo Moon Program and early space age.

  6. Engineering a Live UHD Program from the International Space Station

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; George, Sandy

    2017-01-01

    The first-ever live downlink of Ultra-High Definition (UHD) video from the International Space Station (ISS) was the highlight of a “Super Session” at the National Association of Broadcasters (NAB) Show in April 2017. Ultra-High Definition is four times the resolution of “full HD” or “1080P” video. Also referred to as “4K”, the Ultra-High Definition video downlink from the ISS all the way to the Las Vegas Convention Center required considerable planning, pushed the limits of conventional video distribution from a spacecraft, and was the first use of High Efficiency Video Coding (HEVC) from a spacecraft. The live event at NAB will serve as a pathfinder for more routine downlinks of UHD as well as use of HEVC for conventional HD downlinks to save bandwidth. A similar demonstration was conducted in 2006 with the Discovery Channel to demonstrate the ability to stream HDTV from the ISS. This paper will describe the overall workflow and routing of the UHD video, how audio was synchronized even though the video and audio were received many seconds apart from each other, and how the demonstration paves the way for not only more efficient video distribution from the ISS, but also serves as a pathfinder for more complex video distribution from deep space. The paper will also describe how a “live” event was staged when the UHD video coming from the ISS had a latency of 10+ seconds. In addition, the paper will touch on the unique collaboration between the inherently governmental aspects of the ISS, commercial partners Amazon and Elemental, and the National Association of Broadcasters.
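
    The audio-synchronization problem both of these papers mention (audio and video received many seconds apart) is, generically, solved by buffering each stream and pairing samples on presentation timestamps. The sketch below illustrates only that generic idea; the class, stream names, and tolerance are hypothetical and do not describe the actual NAB pipeline.

    ```python
    import heapq
    import itertools

    class TimestampAligner:
        """Toy re-synchronizer for two streams that arrive with very
        different latencies: buffer each in a timestamp-ordered queue and
        release a frame/sample pair only once both sides have reached the
        same presentation time."""
        def __init__(self):
            self.video, self.audio = [], []   # min-heaps keyed by timestamp
            self._seq = itertools.count()     # tie-breaker for equal timestamps

        def push(self, stream, ts, payload):
            heap = self.video if stream == "video" else self.audio
            heapq.heappush(heap, (ts, next(self._seq), payload))

        def pop_synced(self, tolerance=0.02):
            out = []
            while self.video and self.audio:
                vt, at = self.video[0][0], self.audio[0][0]
                if abs(vt - at) <= tolerance:    # close enough: emit together
                    out.append((heapq.heappop(self.video),
                                heapq.heappop(self.audio)))
                elif vt < at:
                    heapq.heappop(self.video)    # video frame with no audio yet
                else:
                    heapq.heappop(self.audio)    # audio sample with no video yet
            return out
    ```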

  7. Portable color multimedia training systems based on monochrome laptop computers (CBT-in-a-briefcase), with spinoff implications for video uplink and downlink in spaceflight operations

    NASA Technical Reports Server (NTRS)

    Scott, D. W.

    1994-01-01

    This report describes efforts to use digital motion video compression technology to develop a highly portable device that would convert 1990-91 era IBM-compatible and/or Macintosh notebook computers into full-color, motion-video capable multimedia training systems. An architecture was conceived that would permit direct conversion of existing laser-disk-based multimedia courses with little or no reauthoring. The project did not physically demonstrate certain critical video keying techniques, but their implementation should be feasible. This investigation of digital motion video has spawned two significant spaceflight projects at MSFC: one to downlink multiple high-quality video signals from Spacelab, and the other to uplink videoconference-quality video in real time and high-quality video off-line, plus investigate interactive, multimedia-based techniques for enhancing onboard science operations. Other airborne or spaceborne spinoffs are possible.

  8. Droplet Combustion Experiment movie

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Droplet Combustion Experiment (DCE) was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1 mission (STS-83, April 4-8, 1997; the shortened mission was reflown as MSL-1R on STS-94). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.1 MB, 12-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300164.html.

  9. SATCOM Supply Versus Demand and the Impact on Remotely Piloted Aircraft ISR

    DTIC Science & Technology

    2016-03-01

    produced a laser transmitter called OPALS, which successfully transmitted both text and video from the International Space Station to a ground control...station. In one test, a video which took 12 hours to upload via traditional radio frequency was downloaded in a mere seven seconds using OPALS.52 ESA...enhanced Ku-band IntelsatEpic satellites to be launched in 2016 will provide 200 Mbps downlink data rate, while OPALS and EDRS provide 1.8 Gbps downlink

  10. Applications and Innovations for Use of High Definition and High Resolution Digital Motion Imagery in Space Operations

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney

    2016-01-01

    The first live High Definition Television (HDTV) from a spacecraft was in November 2006, nearly ten years before the 2016 SpaceOps Conference. Much has changed since then. Now, live HDTV from the International Space Station (ISS) is routine. HDTV cameras stream live video views of the Earth from the exterior of the ISS every day on UStream, and HDTV has even flown around the Moon on a Japanese Space Agency spacecraft. A great deal has been learned about the operations applicability of HDTV and high resolution imagery since that first live broadcast. This paper will discuss the current state of real-time and file-based HDTV and higher resolution video for space operations. A potential roadmap will be provided for further development and innovations of high-resolution digital motion imagery, including gaps in technology enablers, especially for deep space and unmanned missions. Specific topics to be covered in the paper will include: An update on radiation tolerance and performance of various camera types and sensors and ramifications on the future applicability of these types of cameras for space operations; Practical experience with downlinking very large imagery files with breaks in link coverage; Ramifications of larger camera resolutions like Ultra-High Definition, 6,000 [pixels] and 8,000 [pixels] in space applications; Enabling technologies such as the High Efficiency Video Codec, Bundle Streaming Delay Tolerant Networking, Optical Communications and Bayer Pattern Sensors and other similar innovations; Likely future operations scenarios for deep space missions with extreme latency and intermittent communications links.

  11. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 2 2011-10-01 2011-10-01 false Analog video transmissions in the Fixed...) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.211 Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200 MHz...

  12. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 2 2013-10-01 2013-10-01 false Analog video transmissions in the Fixed...) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.211 Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200 MHz...

  13. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 2 2014-10-01 2014-10-01 false Analog video transmissions in the Fixed...) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.211 Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200 MHz...

  14. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 2 2012-10-01 2012-10-01 false Analog video transmissions in the Fixed...) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.211 Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200 MHz...

  15. 47 CFR 25.211 - Analog video transmissions in the Fixed-Satellite Services.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 2 2010-10-01 2010-10-01 false Analog video transmissions in the Fixed...) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Technical Standards § 25.211 Analog video transmissions in the Fixed-Satellite Services. (a) Downlink analog video transmissions in the band 3700-4200 MHz...

  16. Melting a Gold Sample within TEMPUS

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A gold sample is heated by the TEMPUS electromagnetic levitation furnace on STS-94, 1997, MET:10/09:20 (approximate). The sequence shows the sample being positioned electromagnetically and starting to be heated to melting. TEMPUS stands for Tiegelfreies Elektromagnetisches Prozessieren unter Schwerelosigkeit (containerless electromagnetic processing under weightlessness). It was developed by the German Space Agency (DARA) for flight aboard Spacelab. The DARA project scientist was Ivan Egry. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). DARA and NASA are exploring the possibility of flying an advanced version of TEMPUS on the International Space Station. (378KB JPEG, 2380 x 2676 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300191.html.

  17. Burbank uses video camera during installation and routing of HRCS Video Cables

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060104 (1 Feb. 2012) --- NASA astronaut Dan Burbank, Expedition 30 commander, uses a video camera in the Destiny laboratory of the International Space Station during installation and routing of video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  18. Potential digitization/compression techniques for Shuttle video

    NASA Technical Reports Server (NTRS)

    Habibi, A.; Batson, B. H.

    1978-01-01

    The Space Shuttle initially will be using a field-sequential color television system but it is possible that an NTSC color TV system may be used for future missions. In addition to downlink color TV transmission via analog FM links, the Shuttle will use a high resolution slow-scan monochrome system for uplink transmission of text and graphics information. This paper discusses the characteristics of the Shuttle video systems, and evaluates digitization and/or bandwidth compression techniques for the various links. The more attractive techniques for the downlink video are based on a two-dimensional DPCM encoder that utilizes temporal and spectral as well as the spatial correlation of the color TV imagery. An appropriate technique for distortion-free coding of the uplink system utilizes two-dimensional HCK codes.
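
    A minimal spatial DPCM loop makes the core idea concrete: transmit quantized prediction residuals rather than raw pixels, keeping the encoder's reconstruction in lockstep with the decoder's. The left-neighbor predictor and fixed step size below are illustrative simplifications; the encoder evaluated in the paper also exploits temporal and spectral correlation.

    ```python
    import numpy as np

    def dpcm_encode(frame, step=8):
        """Spatial DPCM sketch: predict each pixel from its reconstructed
        left neighbor and quantize the residual (closed-loop prediction,
        so encoder and decoder stay in sync)."""
        frame = frame.astype(np.int32)
        codes = np.zeros_like(frame)    # quantized residuals to entropy-code
        recon = np.zeros_like(frame)    # what the decoder will reproduce
        for r in range(frame.shape[0]):
            prev = 128                  # fixed predictor seed for each row
            for c in range(frame.shape[1]):
                q = int(np.round((frame[r, c] - prev) / step))
                codes[r, c] = q
                prev = int(np.clip(prev + q * step, 0, 255))
                recon[r, c] = prev
        return codes, recon
    ```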

  19. Gold Sample Heating within the TEMPUS Electromagnetic Levitation Furnace

    NASA Technical Reports Server (NTRS)

    2003-01-01

    A gold sample is heated by the TEMPUS electromagnetic levitation furnace on STS-94, 1997, MET:10/09:20 (approximate). The sequence shows the sample being positioned electromagnetically and starting to be heated to melting. TEMPUS stands for Tiegelfreies Elektromagnetisches Prozessieren unter Schwerelosigkeit (containerless electromagnetic processing under weightlessness). It was developed by the German Space Agency (DARA) for flight aboard Spacelab. The DARA project scientist was Ivan Egry. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). DARA and NASA are exploring the possibility of flying an advanced version of TEMPUS on the International Space Station. (460KB, 14-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300190.html.

  20. Laminar Jet Diffusion Flame Burning

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Study of the downlink data from the Laminar Soot Processes (LSP) experiment quickly resulted in discovery of a new mechanism of flame extinction caused by radiation of soot. Scientists found that the flames emit soot sooner than expected. These findings have direct impact on spacecraft fire safety, as well as the theories predicting the formation of soot -- which is a major factor as a pollutant and in the spread of unwanted fires. This sequence, using propane fuel, was taken during STS-94, July 4, 1997, MET:2/05:30 (approximate). LSP investigated fundamental questions regarding soot, a solid byproduct of the combustion of hydrocarbon fuels. The experiment was performed using a laminar jet diffusion flame, which is created by simply flowing fuel -- like ethylene or propane -- through a nozzle and igniting it, much like a butane cigarette lighter. The LSP principal investigator was Gerard Faeth, University of Michigan, Ann Arbor. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). LSP results led to a reflight for extended investigations on the STS-107 research mission in January 2003. Advanced combustion experiments will be a part of investigations planned for the International Space Station. (983KB, 9-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300184.html.

  21. A Series of Laminar Jet Flame

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Study of the downlink data from the Laminar Soot Processes (LSP) experiment quickly resulted in discovery of a new mechanism of flame extinction caused by radiation of soot. Scientists found that the flames emit soot sooner than expected. These findings have direct impact on spacecraft fire safety, as well as the theories predicting the formation of soot -- which is a major factor as a pollutant and in the spread of unwanted fires. This sequence, using propane fuel, was taken during STS-94, July 4, 1997, MET:2/05:30 (approximate). LSP investigated fundamental questions regarding soot, a solid byproduct of the combustion of hydrocarbon fuels. The experiment was performed using a laminar jet diffusion flame, which is created by simply flowing fuel -- like ethylene or propane -- through a nozzle and igniting it, much like a butane cigarette lighter. The LSP principal investigator was Gerard Faeth, University of Michigan, Ann Arbor. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). LSP results led to a reflight for extended investigations on the STS-107 research mission in January 2003. Advanced combustion experiments will be a part of investigations planned for the International Space Station. (249KB JPEG, 1350 x 1524 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300185.html.

  22. Kuipers installs and routes RCS Video Cables in the U.S. Laboratory

    NASA Image and Video Library

    2012-02-01

    ISS030-E-060117 (1 Feb. 2012) --- In the International Space Station's Destiny laboratory, European Space Agency astronaut Andre Kuipers, Expedition 30 flight engineer, routes video cable for the High Rate Communication System (HRCS). HRCS will allow for two additional space-to-ground audio channels and two additional downlink video channels.

  23. Students Speak with the ISS

    NASA Image and Video Library

    2012-11-15

    Students from D.C.'s Stuart-Hobson Middle School participate in a live video downlink with astronauts aboard the International Space Station at the Smithsonian National Air and Space Museum, Thursday, Nov. 15, 2012 in Washington. The downlink is an annual event held in honor of International Education Week, and was co-hosted with the Department of Education and the National Center for Earth and Space Science Education (NCESSE). Photo Credit: (NASA/Carla Cioffi)

  24. Series of Laminar Soot Processes Experiment

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Study of the downlink data from the Laminar Soot Processes (LSP) experiment quickly resulted in discovery of a new mechanism of flame extinction caused by radiation of soot. Scientists found that the flames emit soot sooner than expected. These findings have direct impact on spacecraft fire safety, as well as the theories predicting the formation of soot -- which is a major factor as a pollutant and in the spread of unwanted fires. This sequence was taken July 15, 1997, MET:14/10:34 (approximate) and shows the ignition and extinction of this flame. LSP investigated fundamental questions regarding soot, a solid byproduct of the combustion of hydrocarbon fuels. The experiment was performed using a laminar jet diffusion flame, which is created by simply flowing fuel -- like ethylene or propane -- through a nozzle and igniting it, much like a butane cigarette lighter. The LSP principal investigator was Gerard Faeth, University of Michigan, Ann Arbor. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). LSP results led to a reflight for extended investigations on the STS-107 research mission in January 2003. Advanced combustion experiments will be a part of investigations planned for the International Space Station. (189KB JPEG, 1350 x 1517 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300183.html.

  25. Burning Laminar Jet Diffusion Flame

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Study of the downlink data from the Laminar Soot Processes (LSP) experiment quickly resulted in discovery of a new mechanism of flame extinction caused by radiation of soot. Scientists found that the flames emit soot sooner than expected. These findings have direct impact on spacecraft fire safety, as well as the theories predicting the formation of soot -- which is a major factor as a pollutant and in the spread of unwanted fires. This sequence was taken July 15, 1997, MET:14/10:34 (approximate) and shows the ignition and extinction of this flame. LSP investigated fundamental questions regarding soot, a solid byproduct of the combustion of hydrocarbon fuels. The experiment was performed using a laminar jet diffusion flame, which is created by simply flowing fuel -- like ethylene or propane -- through a nozzle and igniting it, much like a butane cigarette lighter. The LSP principal investigator was Gerard Faeth, University of Michigan, Ann Arbor. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). LSP results led to a reflight for extended investigations on the STS-107 research mission in January 2003. Advanced combustion experiments will be a part of investigations planned for the International Space Station. (518KB, 20-second MPEG, screen 160 x 120 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300182.html.

  26. Two-Level Scheduling for Video Transmission over Downlink OFDMA Networks

    PubMed Central

    Tham, Mau-Luen

    2016-01-01

    This paper presents a two-level scheduling scheme for video transmission over downlink orthogonal frequency-division multiple access (OFDMA) networks. It aims to maximize the aggregate quality of the video users subject to the playback delay and resource constraints, by exploiting the multiuser diversity and the video characteristics. The upper level schedules the transmission of video packets among multiple users based on an overall target bit-error-rate (BER), the importance level of each packet and a resource consumption efficiency factor. The lower level, in turn, renders unequal error protection (UEP) in terms of target BER among the scheduled packets by solving a weighted sum distortion minimization problem, where each user weight reflects the total importance level of the packets that have been scheduled for that user. Frequency-selective power is then water-filled over all the assigned subcarriers in order to leverage the potential channel coding gain. Realistic simulation results demonstrate that the proposed scheme significantly outperforms the state-of-the-art scheduling scheme by up to 6.8 dB in terms of peak-signal-to-noise-ratio (PSNR). A further test evaluates the suitability of equal power allocation, which is the common assumption in the literature. PMID:26906398
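
    The frequency-selective power step named above is classical water-filling; for parallel Gaussian subchannels it has the textbook form sketched below (an unweighted simplification of the paper's weighted formulation, with illustrative variable names).

    ```python
    import numpy as np

    def waterfill(gains, total_power):
        """Water-filling over parallel subcarriers: p_i = max(0, mu - 1/g_i)
        with sum(p_i) = total_power, where g_i is subcarrier i's
        gain-to-noise ratio."""
        g = np.asarray(gains, dtype=float)
        inv = np.sort(1.0 / g)                 # inverse gains, ascending
        for k in range(len(g), 0, -1):         # try the k strongest subcarriers
            mu = (total_power + inv[:k].sum()) / k
            if mu > inv[k - 1]:                # water level covers all k of them
                break
        return np.maximum(0.0, mu - 1.0 / g)

    print(waterfill([2.0, 1.0, 0.1], total_power=1.0))
    # -> [0.75 0.25 0.]: strong subcarriers get most power, very weak ones none
    ```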

  27. Several Flame Balls Burning

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Structure of Flameballs at Low Lewis Numbers (SOFBALL) experiments aboard the Space Shuttle in 1997 produced a series of stunningly successful burns. This sequence was taken during STS-94, July 12, 1997, MET:10/08:18 (approximate). It was thought these extremely dim flameballs (1/20 the power of a kitchen match) could last up to 200 seconds -- in fact, they can last for at least 500 seconds. This has ramifications in fuel-spray design in combustion engines, as well as fire safety in space. The SOFBALL principal investigator was Paul Ronney, University of Southern California, Los Angeles. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (925KB, 9-second MPEG spanning 10 minutes, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300186.html.

  28. Flame Balls

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Structure of Flameballs at Low Lewis Numbers (SOFBALL) experiments aboard the Space Shuttle in 1997 produced a series of stunningly successful burns. This sequence was taken during STS-94, July 12, 1997, MET:10/08:18 (approximate). It was thought these extremely dim flameballs (1/20 the power of a kitchen match) could last up to 200 seconds -- in fact, they can last for at least 500 seconds. This has ramifications in fuel-spray design in combustion engines, as well as fire safety in space. The SOFBALL principal investigator was Paul Ronney, University of Southern California, Los Angeles. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (563KB JPEG, 1798 x 1350 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300187.html.

  29. Video image stabilization and registration--plus

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor)

    2009-01-01

    A method of stabilizing a video image displayed in multiple video fields of a video sequence includes the steps of: subdividing a selected area of a first video field into nested pixel blocks; determining horizontal and vertical translation of each of the pixel blocks in each of the pixel block subdivision levels from the first video field to a second video field; and determining translation of the image from the first video field to the second video field by determining a change in magnification of the image from the first video field to the second video field in each of horizontal and vertical directions, and determining shear of the image from the first video field to the second video field in each of the horizontal and vertical directions.
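
    The per-block translation estimate at the heart of this method can be illustrated with a brute-force block search; the sum-of-absolute-differences cost and the 8-pixel search radius below are illustrative choices, not values from the patent.

    ```python
    import numpy as np

    def block_translation(block, field, top, left, radius=8):
        """Find the (dy, dx) shift of `block` (cut from the first field at
        [top, left]) that minimizes the sum of absolute differences in the
        second `field`. The patent applies this to nested pixel blocks at
        several subdivision levels."""
        h, w = block.shape
        best = (0, 0, np.inf)
        for dy in range(-radius, radius + 1):
            for dx in range(-radius, radius + 1):
                r, c = top + dy, left + dx
                if r < 0 or c < 0 or r + h > field.shape[0] or c + w > field.shape[1]:
                    continue                    # candidate window falls off the field
                sad = np.abs(field[r:r + h, c:c + w].astype(int)
                             - block.astype(int)).sum()
                if sad < best[2]:
                    best = (dy, dx, sad)
        return best[:2]
    ```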

  30. Method and System for Temporal Filtering in Video Compression Systems

    NASA Technical Reports Server (NTRS)

    Lu, Ligang; He, Drake; Jagmohan, Ashish; Sheinin, Vadim

    2011-01-01

    Three related innovations combine improved non-linear motion estimation, video coding, and video compression. The first system comprises a method in which side information is generated using an adaptive, non-linear motion model. This method enables extrapolating and interpolating a visual signal, including determining the first motion vector between the first pixel position in a first image to a second pixel position in a second image; determining a second motion vector between the second pixel position in the second image and a third pixel position in a third image; determining a third motion vector between the first pixel position in the first image and the second pixel position in the second image, the second pixel position in the second image, and the third pixel position in the third image using a non-linear model; and determining a position of the fourth pixel in a fourth image based upon the third motion vector. For the video compression element, the video encoder has low computational complexity and high compression efficiency. The disclosed system comprises a video encoder and a decoder. The encoder converts the source frame into a space-frequency representation, estimates the conditional statistics of at least one vector of space-frequency coefficients with similar frequencies, and is conditioned on previously encoded data. It estimates an encoding rate based on the conditional statistics and applies a Slepian-Wolf code with the computed encoding rate. The method for decoding includes generating a side-information vector of frequency coefficients based on previously decoded source data and encoder statistics and previous reconstructions of the source frequency vector. It also performs Slepian-Wolf decoding of a source frequency vector based on the generated side-information and the Slepian-Wolf code bits. The video coding element includes receiving a first reference frame having a first pixel value at a first pixel position, a second reference frame having a second pixel value at a second pixel position, and a third reference frame having a third pixel value at a third pixel position. It determines a first motion vector between the first pixel position and the second pixel position, a second motion vector between the second pixel position and the third pixel position, and a fourth pixel value for a fourth frame based upon a linear or nonlinear combination of the first pixel value, the second pixel value, and the third pixel value. A stationary filtering process determines the estimated pixel values. The parameters of the filter may be predetermined constants.
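
    For the side-information step, a constant-acceleration model is one simple instance of an adaptive non-linear motion model (the patent's actual model is not specified in this abstract). Assuming equally spaced frames, the fourth position follows from the three tracked positions like this:

    ```python
    import numpy as np

    def extrapolate_position(p1, p2, p3):
        """Quadratic (constant-acceleration) extrapolation of a pixel's
        position in a fourth frame from its positions in three frames."""
        p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
        v = p3 - p2                 # most recent velocity estimate
        a = (p3 - p2) - (p2 - p1)   # change between successive motion vectors
        return p3 + v + a           # position predicted for the fourth frame

    # A pixel accelerating to the right: columns go 0, 1, 3, then...
    print(extrapolate_position((0, 0), (0, 1), (0, 3)))   # -> [0. 6.]
    ```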

  31. OPALS: A COTS-based Tech Demo of Optical Communications

    NASA Technical Reports Server (NTRS)

    Oaida, Bogdan

    2012-01-01

    I. Objective: Deliver video from the ISS to an optical ground terminal via an optical communications link. a) JPL Phaeton/Early Career Hire (ECH) training project. b) Implemented as a Class-D payload. c) Downlink at approx. 30 Mb/s. II. Flight System: a) Optical head: beacon acquisition camera, downlink transmitter, 2-axis gimbal. b) Sealed container: laser, avionics, power distribution, digital I/O board. III. Implementation: a) Ground station: Optical Communications Telescope Laboratory at Table Mountain Facility. b) Flight system mounted to the ISS FRAM as a standard I/F, attached externally on an Express Logistics Carrier.

  32. Students Speak with the ISS

    NASA Image and Video Library

    2012-11-15

    Leland Melvin, NASA Associate Administrator for Education and two-time space shuttle astronaut, answers a question from a student in a live video downlink at the Smithsonian National Air and Space Museum, Thursday, Nov. 15, 2012 in Washington. The students, participants from the Student Spaceflight Experiments Program (SSEP) conducted a live conversation with astronauts aboard the International Space Station. The downlink is an annual event held in honor of International Education Week, and was co-hosted with the Department of Education and the National Center for Earth and Space Science Education (NCESSE). Photo Credit: (NASA/Carla Cioffi)

  33. The Effects of Radiation on Imagery Sensors in Space

    NASA Technical Reports Server (NTRS)

    Mathis, Dylan

    2007-01-01

    Recent experience using high definition video on the International Space Station reveals camera pixel degradation due to particle radiation to be a much more significant problem with high definition cameras than with standard definition video. Although it may at first appear that increased pixel density on the imager is the logical explanation for this, the ISS implementations of high definition suggest a more complex causal and mediating factor mix. The degree of damage seems to vary from one type of camera to another, and this variation prompts a reconsideration of the possible factors in pixel loss, such as imager size, number of pixels, pixel aperture ratio, imager type (CCD or CMOS), method of error correction/concealment, and the method of compression used for recording or transmission. The problem of imager pixel loss due to particle radiation is not limited to out-of-atmosphere applications. Since particle radiation increases with altitude, it is not surprising to find anecdotal evidence that video cameras subject to many hours of airline travel show an increased incidence of pixel loss. This is even evident in some standard definition video applications, and pixel loss due to particle radiation only stands to become a more salient issue considering the continued diffusion of high definition video cameras in the marketplace.

  34. Droplet Combustion Experiment Operates

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 12, 1997, MET:11/07:00 (approximate). DCE used various fuels -- in drops ranging from 1 mm (0.04 inches) to 5 mm (0.2 inches) -- and mixtures of oxidizers and inert gases to learn more about the physics of combustion in the simplest burning configuration, a sphere. The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. The experiment elapsed time is shown at the bottom of the composite image. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (119KB JPEG, 658 x 982 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300171.html.

  35. Burning Fuel Droplet

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 4, 1997, MET:2/05:40 (approximate). The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. DCE used various fuels -- in drops ranging from 1 mm (0.04 inches) to 5 mm (0.2 inches) -- and mixtures of oxidizers and inert gases to learn more about the physics of combustion in the simplest burning configuration, a sphere. The experiment elapsed time is shown at the bottom of the composite image. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (121KB JPEG, 654 x 977 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300169.html.

  36. Kadenza: Kepler/K2 Raw Cadence Data Reader

    NASA Astrophysics Data System (ADS)

    Barentsen, Geert; Cardoso, José Vinícius de Miranda

    2018-03-01

    Kadenza enables time-critical data analyses to be carried out with NASA's Kepler Space Telescope by letting users convert Kepler's raw data files into user-friendly Target Pixel Files as soon as they are downlinked from the spacecraft. The primary motivation for this tool is to enable the microlensing, supernova, and exoplanet communities to create quicklook lightcurves for transient events which require rapid follow-up.

  37. Radiofrequency-electromagnetic field exposures in kindergarten children.

    PubMed

    Bhatt, Chhavi Raj; Redmayne, Mary; Billah, Baki; Abramson, Michael J; Benke, Geza

    2017-09-01

    The aim of this study was to assess environmental and personal radiofrequency-electromagnetic field (RF-EMF) exposures in kindergarten children. Ten children and 20 kindergartens in Melbourne, Australia, participated in personal and environmental exposure measurements, respectively. Order statistics of RF-EMF exposures were computed for 16 frequency bands between 88 MHz and 5.8 GHz. Of the 16 bands, the three highest sources of environmental RF-EMF exposures were: Global System for Mobile Communications (GSM) 900 MHz downlink (82 mV/m); Universal Mobile Telecommunications System (UMTS) 2100 MHz downlink (51 mV/m); and GSM 900 MHz uplink (45 mV/m). Similarly, the three highest personal exposure sources were: GSM 900 MHz downlink (50 mV/m); UMTS 2100 MHz downlink, GSM 900 MHz uplink and GSM 1800 MHz downlink (20 mV/m); and Frequency Modulation radio, Wi-Fi 2.4 GHz and Digital Video Broadcasting-Terrestrial (10 mV/m). The median environmental exposures were: 179 mV/m (total all bands), 123 mV/m (total mobile phone base station downlinks), 46 mV/m (total mobile phone base station uplinks), and 16 mV/m (Wi-Fi 2.4 GHz). Similarly, the median personal exposures were: 81 mV/m (total all bands), 62 mV/m (total mobile phone base station downlinks), 21 mV/m (total mobile phone base station uplinks), and 9 mV/m (Wi-Fi 2.4 GHz). The measurements showed that environmental RF-EMF exposure levels exceeded the personal RF-EMF exposure levels at kindergartens.

  38. 47 CFR 79.102 - Closed caption decoder requirements for digital television receivers and converter boxes.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... COMMISSION (CONTINUED) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING... separated from the underlying video by a sufficient number of background pixels to insure the foreground is... the trailing “white” pixels of the last character on a row do not bleed into the underlying video. (i...

  39. 47 CFR 79.102 - Closed caption decoder requirements for digital television receivers and converter boxes.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... COMMISSION (CONTINUED) BROADCAST RADIO SERVICES CLOSED CAPTIONING AND VIDEO DESCRIPTION OF VIDEO PROGRAMMING... separated from the underlying video by a sufficient number of background pixels to insure the foreground is... the trailing “white” pixels of the last character on a row do not bleed into the underlying video. (i...

  40. Video requirements for materials processing experiments in the space station US laboratory

    NASA Technical Reports Server (NTRS)

    Baugher, Charles R.

    1989-01-01

    Full utilization of the potential of materials research on the Space Station can be achieved only if adequate means are available for interactive experimentation between the on-board science facilities and ground-based investigators. Extensive video interfaces linking these elements are the only practical way to establish such a relationship. Because of the limited downlink capability, a comprehensive complement of on-board video processing and video compression is needed. The application of video compression will be an absolute necessity, since its effectiveness will directly impact the quantity of data available to ground investigator teams and their ability to review the effects of process changes and the experiment progress. Video data compression utilization on the Space Station is discussed.

  41. Error-free holographic frames encryption with CA pixel-permutation encoding algorithm

    NASA Astrophysics Data System (ADS)

    Li, Xiaowei; Xiao, Dan; Wang, Qiong-Hua

    2018-01-01

    Video data must be kept secure during network transmission; cryptography is the technique that renders video data unreadable to unauthorized users. In this paper, we propose a holographic frames encryption technique based on a cellular automata (CA) pixel-permutation encoding algorithm. The concise pixel-permutation algorithm is used to address the drawbacks of traditional CA encoding methods. The effectiveness of the proposed video encoding method is demonstrated by simulation examples.
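
    The core of any pixel-permutation cipher is a keyed shuffle of pixel positions that the receiver can invert. In the sketch below, a seeded pseudorandom generator stands in for the paper's cellular-automata keystream (an assumption made for brevity).

    ```python
    import numpy as np

    def permute_frame(frame, key):
        """Scramble pixel positions with a key-seeded permutation. A seeded
        NumPy generator substitutes for the CA-derived permutation here."""
        rng = np.random.default_rng(key)
        perm = rng.permutation(frame.size)
        cipher = np.empty_like(frame.ravel())
        cipher[perm] = frame.ravel()          # pixel i moves to slot perm[i]
        return cipher.reshape(frame.shape), perm

    def unpermute_frame(cipher, perm):
        """Invert the scramble: read pixels back out of their permuted slots."""
        return cipher.ravel()[perm].reshape(cipher.shape)
    ```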

  42. Impulsive noise removal from color video with morphological filtering

    NASA Astrophysics Data System (ADS)

    Ruchay, Alexey; Kober, Vitaly

    2017-09-01

    This paper deals with impulse noise removal from color video. The proposed algorithm employs switching filtering for denoising: corrupted pixels are detected by means of a novel morphological filtering and then replaced with estimates derived from uncorrupted pixels in previous frames. With the help of computer simulation we show that the proposed algorithm removes impulse noise from color video well. The performance of the proposed algorithm is compared in terms of image restoration metrics with that of common successful algorithms.
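
    A minimal switching filter in the spirit described above: flag pixels that a morphological opening/closing moves by more than a threshold, then replace only the flagged pixels. The 3 x 3 windows and threshold are illustrative, and this single-frame detector only approximates the paper's, which also draws on uncorrupted pixels from previous frames.

    ```python
    import numpy as np
    from scipy.ndimage import grey_closing, grey_opening, median_filter

    def remove_impulse_noise(frame, thresh=40):
        """Switching-filter sketch: morphological detection of impulses,
        median replacement of only the detected pixels."""
        opened = grey_opening(frame, size=(3, 3))   # suppresses bright spikes
        closed = grey_closing(frame, size=(3, 3))   # fills dark spikes
        corrupted = ((frame.astype(int) - opened > thresh) |
                     (closed.astype(int) - frame > thresh))
        out = frame.copy()
        out[corrupted] = median_filter(frame, size=3)[corrupted]
        return out
    ```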

  43. Tri-state delta modulation system for Space Shuttle digital TV downlink

    NASA Technical Reports Server (NTRS)

    Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.

    1981-01-01

    Future requirements for Shuttle Orbiter downlink communication may include transmission of digital video which, in addition to black and white, may also be either field-sequential or NTSC color format. The use of digitized video could provide for picture privacy at the expense of additional onboard hardware, together with an increased bandwidth due to the digitization process. A general objective for the Space Shuttle application is to develop a digitization technique that is compatible with data rates in the 20-30 Mbps range but still provides good quality pictures. This paper describes a tri-state delta modulation/demodulation (TSDM) technique which is a good compromise between implementation complexity and performance. The unique feature of TSDM is that it provides for efficient run-length encoding of constant-intensity segments of a TV picture. Axiomatix has developed a hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV and field-sequential color. The hardware complexity of this TSDM implementation is summarized in the paper.
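
    The tri-state idea fits in a few lines: each sample becomes one of three symbols, and constant-intensity picture segments map to long runs of zeros that run-length coding then collapses. The step size and dead band below are illustrative, not the parameters of the Axiomatix hardware.

    ```python
    def tsdm_encode(samples, step=4, deadband=2):
        """Tri-state delta modulation sketch: emit +1/-1 when the tracked
        estimate lags/leads the input, 0 inside the dead band."""
        est, symbols = 0, []
        for s in samples:
            err = s - est
            if err > deadband:
                symbols.append(+1); est += step
            elif err < -deadband:
                symbols.append(-1); est -= step
            else:
                symbols.append(0)      # constant segment: run-length friendly
        return symbols

    def run_length(symbols):
        """Collapse the symbol stream into (symbol, count) runs."""
        runs = []
        for s in symbols:
            if runs and runs[-1][0] == s:
                runs[-1][1] += 1
            else:
                runs.append([s, 1])
        return runs
    ```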

  44. Video of Tissue Grown in Space in NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Principal investigator Leland Chung grew prostate cancer and bone stromal cells aboard the Space Shuttle Columbia during the STS-107 mission. Although the experiment samples were lost along with the ill-fated spacecraft and crew, he did obtain downlinked video of the experiment that indicates the enormous potential of growing tissues in microgravity. Cells grown aboard Columbia had grown far larger tissue aggregates at day 5 than did the cells grown in a NASA bioreactor on the ground.

  45. Evaluating video digitizer errors

    NASA Astrophysics Data System (ADS)

    Peterson, C.

    2016-01-01

    Analog output video cameras remain popular for recording meteor data. Although these cameras uniformly employ electronic detectors with fixed pixel arrays, the digitization process requires resampling the horizontal lines as they are output in order to reconstruct the pixel data, usually resulting in a new data array of different horizontal dimensions than the native sensor. Pixel timing is not provided by the camera, and must be reconstructed based on line sync information embedded in the analog video signal. Using a technique based on hot pixels, I present evidence that jitter, sync detection, and other timing errors introduce both position and intensity errors which are not present in cameras which internally digitize their sensors and output the digital data directly.

  46. 47 CFR 79.102 - Closed caption decoder requirements for digital television receivers and converter boxes.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... COMMISSION (CONTINUED) BROADCAST RADIO SERVICES ACCESSIBILITY OF VIDEO PROGRAMMING Apparatus § 79.102 Closed... separated from the underlying video by a sufficient number of background pixels to insure the foreground is... the trailing “white” pixels of the last character on a row do not bleed into the underlying video. (i...

  47. Mentoring for Youth in Schools and Communities. National Satellite Videoconference. [Videotape].

    ERIC Educational Resources Information Center

    Department of Justice, Washington, DC. Office of Juvenile Justice and Delinquency Prevention.

    This videotape presents the National Satellite Videoconference on mentoring for youth. The video opens with a discussion of mentoring and presents panelists who make statements about mentoring and youth programs and respond to questions called in by videoconference participants at approximately 500 downlinked sites. Panelists were: (1) Shay…

  48. Telecommunications in Higher Education: Creating New Information Sources.

    ERIC Educational Resources Information Center

    Brown, Fred D.

    1986-01-01

    Discusses the telecommunications systems in operation at Buena Vista College in Iowa. Describes the systems' uses in linking all offices and classrooms on the campus, downlinking satellite communications through a dish, transmitting audio and video information to any set of defined studio or classroom space, and teleconferencing. (TW)

  49. Adaptive optics correction into single mode fiber for a low Earth orbiting space to ground optical communication link using the OPALS downlink.

    PubMed

    Wright, Malcolm W; Morris, Jeffery F; Kovalik, Joseph M; Andrews, Kenneth S; Abrahamson, Matthew J; Biswas, Abhijit

    2015-12-28

    An adaptive optics (AO) testbed was integrated with the Optical PAyload for Lasercomm Science (OPALS) ground station telescope at the Optical Communications Telescope Laboratory (OCTL) as part of the free-space laser communications experiment with the flight system on board the International Space Station (ISS). Atmospheric-turbulence-induced aberrations on the optical downlink were adaptively corrected during an overflight of the ISS so that the transmitted laser signal could be continuously and efficiently coupled into a single-mode fiber. A stable output Strehl ratio of around 0.6 was demonstrated, along with the recovery of a 50 Mbps encoded high-definition (HD) video transmission from the ISS at the output of the single-mode fiber. This proof-of-concept demonstration validates multi-Gbps optical downlinks from fast-slewing low-Earth-orbiting (LEO) spacecraft to ground assets, in a manner that potentially allows seamless space-to-ground connectivity for future high-data-rate networks.

  50. Fuel Droplet Burning During Droplet Combustion Experiment

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 4, 1997, MET:2/05:40 (approximate). The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. DCE used various fuels -- in drops ranging from 1 mm (0.04 inches) to 5 mm (0.2 inches) -- and mixtures of oxidizers and inert gases to learn more about the physics of combustion in the simplest burning configuration, a sphere. The experiment elapsed time is shown at the bottom of the composite image. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.4MB, 13-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300168.html.

  51. Droplet Combustion Experiment on STS-94

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 12, 1997, MET:11/07:00 (approximate). DCE used various fuels -- in drops ranging from 1 mm (0.04 inches) to 5 mm (0.2 inches) -- and mixtures of oxidizers and inert gases to learn more about the physics of combustion in the simplest burning configuration, a sphere. The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. The experiment elapsed time is shown at the bottom of the composite image. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17, 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.3MB, 13-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300170.html.

  52. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2002-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  53. Video Image Stabilization and Registration

    NASA Technical Reports Server (NTRS)

    Hathaway, David H. (Inventor); Meyer, Paul J. (Inventor)

    2003-01-01

    A method of stabilizing and registering a video image in multiple video fields of a video sequence provides accurate determination of the image change in magnification, rotation and translation between video fields, so that the video fields may be accurately corrected for these changes in the image in the video sequence. In a described embodiment, a key area of a key video field is selected which contains an image which it is desired to stabilize in a video sequence. The key area is subdivided into nested pixel blocks and the translation of each of the pixel blocks from the key video field to a new video field is determined as a precursor to determining change in magnification, rotation and translation of the image from the key video field to the new video field.

  54. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution is incompatible with the high frame rate of ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead uses sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.

  55. Predictable Programming on a Precision Timed Architecture

    DTIC Science & Technology

    2008-04-18

    Application: A Video Game Figure 6: Structure of the Video Game Example Inspired by an example game supplied with the Hydra development board [17...we implemented a simple video game in C targeted to our PRET architecture. Our example centers on rendering graphics and is otherwise fairly simple...background image. Figure 10: A Screen Dump From Our Video Game Ultimately, each displayed pixel is one of only four colors, but the pixels in

  16. Geology of the Icy Galilean Satellites: Understanding Crustal Processes and Geologic Histories Through the JIMO Mission

    NASA Technical Reports Server (NTRS)

    Figueredo, P. H.; Tanaka, K.; Senske, D.; Greeley, R.

    2003-01-01

    Knowledge of the geology, style and time history of crustal processes on the icy Galilean satellites is necessary to understanding how these bodies formed and evolved. Data from the Galileo mission have provided a basis for detailed geologic and geophysical analysis. Due to constrained downlink, Galileo Solid State Imaging (SSI) data consisted of global coverage at a ~1 km/pixel ground sampling and representative, widely spaced regional maps at ~200 m/pixel. These two data sets provide a general means to extrapolate units identified at higher resolution to lower resolution data. A sampling of key sites at much higher resolution (tens of m/pixel) allows evaluation of processes on local scales. We are currently producing the first global geological map of Europa using Galileo global and regional-scale data. This work is demonstrating the necessity and utility of planet-wide contiguous image coverage at global, regional, and local scales.

  17. High-Order Model and Dynamic Filtering for Frame Rate Up-Conversion.

    PubMed

    Bao, Wenbo; Zhang, Xiaoyun; Chen, Li; Ding, Lianghui; Gao, Zhiyong

    2018-08-01

    This paper proposes a novel frame rate up-conversion method through a high-order model and dynamic filtering (HOMDF) for video pixels. Unlike the constant-brightness and linear-motion assumptions in traditional methods, the intensity and position of video pixels are both modeled with high-order polynomials in time. The key problem of our method is then to estimate the polynomial coefficients that represent each pixel's intensity variation, velocity, and acceleration. We propose to solve it with two energy objectives: one minimizes the auto-regressive prediction error of the intensity variation from its past samples, and the other minimizes the video frame's reconstruction error along the motion trajectory. To address the optimization problem for these coefficients efficiently, we propose a dynamic filtering solution inspired by the video's temporal coherence: the optimal estimate of the coefficients is reformulated as a dynamic fusion of the prior estimate from the pixel's temporal predecessor and the maximum likelihood estimate from the current observation. Finally, frame rate up-conversion is implemented using motion-compensated interpolation by pixel-wise intensity variation and motion trajectory. Benefiting from the advanced model and dynamic filtering, the interpolated frame has much better visual quality. Extensive experiments on natural and synthesized videos demonstrate the superiority of HOMDF over state-of-the-art methods in both subjective and objective comparisons.
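
    As a toy illustration of the high-order idea (not the authors' HOMDF code; the second-order truncation and central-difference estimates are assumptions), a pixel position observed in three consecutive frames determines a velocity and an acceleration, from which an intermediate position can be interpolated:

        import numpy as np

        def interp_position(p_prev, p_curr, p_next, t=0.5):
            # Central differences give velocity v and acceleration a at the current frame.
            p_prev, p_curr, p_next = map(np.asarray, (p_prev, p_curr, p_next))
            v = (p_next - p_prev) / 2.0
            a = p_next - 2.0 * p_curr + p_prev
            # Second-order trajectory: p(t) = p0 + v*t + a*t^2/2
            return p_curr + v * t + 0.5 * a * t * t

        # Pixel at (0,0), (2,1), (6,2) in three frames -> position at the midpoint
        print(interp_position([0, 0], [2, 1], [6, 2]))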

  18. Computational imaging with a single-pixel detector and a consumer video projector

    NASA Astrophysics Data System (ADS)

    Sych, D.; Aksenov, M.

    2018-02-01

    Single-pixel imaging is a novel, rapidly developing imaging technique that employs spatially structured illumination and a single-pixel detector. In this work, we experimentally demonstrate a fully operational modular single-pixel imaging system. Light patterns in our setup are created with the help of a computer-controlled digital micromirror device from a consumer video projector. We investigate how different working modes and settings of the projector affect the quality of reconstructed images. We develop several image reconstruction algorithms and compare their performance for real imaging. We also discuss the potential use of the single-pixel imaging system for quantum applications.
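
    A minimal end-to-end simulation of the measurement model (an assumption-level sketch, not the authors' setup or reconstruction algorithms): random binary patterns stand in for the projector's micromirror device, the detector value is the pattern-scene inner product, and a simple correlation estimate recovers the image:

        import numpy as np

        rng = np.random.default_rng(0)
        N, M = 16, 4000                                    # image side, pattern count
        scene = np.zeros((N, N)); scene[4:12, 6:10] = 1.0  # synthetic object

        patterns = rng.integers(0, 2, size=(M, N, N)).astype(float)
        meas = np.tensordot(patterns, scene)               # single-pixel (bucket) values
        # Correlation reconstruction: measurement-weighted average of zero-mean patterns.
        recon = np.tensordot(meas - meas.mean(), patterns - 0.5, axes=1) / M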

  19. Reflectance Prediction Modelling for Residual-Based Hyperspectral Image Coding

    PubMed Central

    Xiao, Rui; Gao, Junbin; Bossomaier, Terry

    2016-01-01

    A Hyperspectral (HS) image provides observational powers beyond human vision capability but represents more than 100 times the data of a traditional image. To transmit and store the huge volume of an HS image, we argue that a fundamental shift is required from the existing “original pixel intensity”-based coding approaches using traditional image coders (e.g., JPEG2000) to “residual”-based approaches using a video coder, for better compression performance. A modified video coder is required to exploit spatial-spectral redundancy using pixel-level reflectance modelling, because HS images differ from traditional videos in the spectral and shape characteristics of their panchromatic imagery. In this paper a novel coding framework using Reflectance Prediction Modelling (RPM) within the latest video coding standard, High Efficiency Video Coding (HEVC), is proposed for HS images. An HS image presents a wealth of data in which every pixel is a vector across the spectral bands. By quantitative comparison and analysis of the pixel vector distribution along spectral bands, we conclude that modelling can predict the distribution and correlation of the pixel vectors across bands. To exploit the distribution of the known pixel vectors, we estimate a predicted current spectral band from the previous bands using Gaussian mixture-based modelling. The predicted band is used as an additional reference band, together with the immediately previous band, when applying HEVC: every spectral band of an HS image is treated as an individual frame of a video. In this paper, we compare the proposed method with mainstream encoders. The experimental results are validated on three types of HS datasets with different wavelength ranges. The proposed method outperforms the existing mainstream HS encoders in terms of rate-distortion performance of HS image compression. PMID:27695102

  20. Onboard Science and Applications Algorithm for Hyperspectral Data Reduction

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Davies, Ashley G.; Silverman, Dorothy; Mandl, Daniel

    2012-01-01

    An onboard processing mission concept is under development for a possible Direct Broadcast capability for the HyspIRI mission, a hyperspectral remote sensing mission under consideration for launch in the next decade. The concept would intelligently subsample the data spectrally and spatially, as well as generate science products onboard, to enable return of key rapid-response science and applications information despite limited downlink bandwidth. This rapid data delivery concept focuses on wildfires and volcanoes as primary applications, but also applies to vegetation, coastal flooding, dust, and snow/ice. Operationally, the HyspIRI team would define a set of spatial regions of interest where specific algorithms would be executed. For example, known coastal areas would have certain products or bands downlinked, ocean areas might have other bands downlinked, and during fire seasons other areas would be processed for active fire detections. Ground operations would automatically generate the mission plans specifying the highest priority tasks executable within onboard computation, setup, and data downlink constraints. The spectral bands of the TIR (thermal infrared) instrument can accurately detect the thermal signature of fires and send down alerts, as well as the thermal and VSWIR (visible to short-wave infrared) data corresponding to the active fires. Active volcanism also produces a distinctive thermal signature that can be detected onboard to enable spatial subsampling. Onboard algorithms and ground-based algorithms suitable for onboard deployment are mature. On HyspIRI, the algorithm would perform a table-driven temperature inversion from several spectral TIR bands, and then trigger downlink of the entire spectrum for each of the hot pixels identified. Ocean and coastal applications include sea surface temperature (using a small spectral subset of TIR data, but requiring considerable ancillary data) and ocean color applications to track biological activity such as harmful algal blooms. Measuring surface water extent to track flooding is another rapid-response product leveraging VSWIR spectral information.
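
    A schematic sketch of the triage logic (the trigger band, the 400 K threshold, and the cube dimensions are illustrative assumptions, not HyspIRI parameters): flag thermally hot pixels in one TIR band, then mark only those pixels' full spectra for downlink:

        import numpy as np

        def hot_pixel_mask(tir_band_k, threshold_k=400.0):
            # Boolean mask of pixels whose brightness temperature exceeds the threshold.
            return tir_band_k > threshold_k

        def select_spectra(cube, mask):
            # Full spectra (bands x n_hot) for flagged pixels only.
            return cube[:, mask]

        cube = np.random.default_rng(1).uniform(280, 320, size=(210, 64, 64))
        cube[:, 10, 20] = 450.0                  # synthetic fire pixel
        mask = hot_pixel_mask(cube[100])         # one TIR band acts as the trigger
        print(select_spectra(cube, mask).shape)  # (210, n_hot) -> small downlink product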

  1. Rapid Damage Assessment. Volume II. Development and Testing of Rapid Damage Assessment System.

    DTIC Science & Technology

    1981-02-01

    ... pixels/s; Camera Line Rate: 732.4 lines/s; Pixels per Line: 1728 video, 314 blank, 4 line number (binary), 2 run number (BCD), 2048 total; Pixel Resolution: 8 bits ... [con]sists of an LSI-11 microprocessor, a VDI-200 video display processor, an FD-2 dual floppy diskette subsystem, an FT-1 function key-trackball module ... COMPONENT LIST FOR IMAGE PROCESSOR SYSTEM: VDI-200 Display Processor; Racks, Table; FD-2 Dual Floppy Diskette Subsystem; FT-1 ...

  2. Capturing and displaying microscopic images used in medical diagnostics and forensic science using 4K video resolution - an application in higher education.

    PubMed

    Maier, Hans; de Heer, Gert; Ortac, Ajda; Kuijten, Jan

    2015-11-01

    To enable analysis, interpretation and evaluation of microscopic images used in medical diagnostics and forensic science, video images for educational purposes were made at the very high resolution of 4096 × 2160 pixels (4K), four times as many pixels as High-Definition Video (1920 × 1080 pixels). The unprecedented resolution makes it possible to see details that remain invisible in any other video format. The images of the specimens (blood cells, tissue sections, hair, fibre, etc.) are recorded using a 4K video camera attached to a light microscope. After processing, this resulted in very sharp and highly detailed images. This material was then used in education for classroom discussion. Spoken explanation by experts in the field of medical diagnostics and forensic science was also added to the high-resolution video images to make them suitable for self-study. © 2015 The Authors. Journal of Microscopy published by John Wiley & Sons Ltd on behalf of Royal Microscopical Society.

  3. Pixel-by-Pixel Estimation of Scene Motion in Video

    NASA Astrophysics Data System (ADS)

    Tashlinskii, A. G.; Smirnov, P. V.; Tsaryov, M. G.

    2017-05-01

    The paper considers the effectiveness of motion estimation in video using pixel-by-pixel recurrent algorithms. The algorithms use stochastic gradient descent to find the inter-frame shifts of all pixels of a frame; these vectors form a shift vector field. As estimated parameters of the vectors, the paper studies their projections and polar parameters. It considers two methods for estimating the shift vector field. The first method uses a stochastic gradient descent algorithm to sequentially process all nodes of the image row by row. It processes each row bidirectionally, i.e., from left to right and from right to left; subsequent joint processing of the results compensates for the inertia of the recursive estimation. The second method uses correlation between rows to increase processing efficiency. It processes rows one after another, changing direction after each row, and uses the obtained values to form the resulting estimate. The paper studies two criteria for forming it: minimum of the gradient estimate and maximum of the correlation coefficient. The paper gives examples of experimental results of pixel-by-pixel estimation for a video with a moving object, and of estimation of a moving object's trajectory using the shift vector field.
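
    A one-dimensional toy version of the recurrent estimation (the step size, the 1-D simplification, and the simple averaging of the two passes are my assumptions, not the authors' algorithm): each pixel's shift estimate starts from its neighbor's and is refined by a stochastic-gradient step, with a left-to-right and a right-to-left pass combined to offset the recursion's inertia:

        import numpy as np

        def row_shift_estimates(ref_row, cur_row, mu=0.1):
            # Estimate a per-pixel shift s such that cur_row[i] ~ ref_row[i + s].
            ref_row, cur_row = np.asarray(ref_row, float), np.asarray(cur_row, float)
            n, grad = len(ref_row), np.gradient(np.asarray(ref_row, float))
            passes = []
            for idx in (range(n), range(n - 1, -1, -1)):  # left-to-right, then reverse
                s, est = 0.0, np.zeros(n)
                for i in idx:
                    j = int(np.clip(round(i + s), 0, n - 1))
                    err = cur_row[i] - ref_row[j]
                    s += mu * err * np.sign(grad[j])      # stochastic-gradient update
                    est[i] = s
                passes.append(est)
            return 0.5 * (passes[0] + passes[1])          # joint bidirectional estimate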

  4. Fast and efficient search for MPEG-4 video using adjacent pixel intensity difference quantization histogram feature

    NASA Astrophysics Data System (ADS)

    Lee, Feifei; Kotani, Koji; Chen, Qiu; Ohmi, Tadahiro

    2010-02-01

    In this paper, a fast search algorithm for MPEG-4 video clips in a video database is proposed. An adjacent pixel intensity difference quantization (APIDQ) histogram, previously applied reliably to human face recognition, is utilized as the feature vector of each VOP (video object plane). Instead of the fully decompressed video sequence, partially decoded data, namely the DC sequence of the video object, is extracted from the video stream. Combined with active search, a temporal pruning algorithm, fast and robust video search can be realized. The proposed search algorithm has been evaluated on a total of 15 hours of video containing TV programs such as dramas, talk shows, and news, searching for 200 given MPEG-4 video clips, each 15 seconds long. Experimental results show the proposed algorithm can detect a similar video clip in merely 80 ms, achieving an Equal Error Rate (EER) of 2% in the drama and news categories, which is more accurate and robust than conventional fast video search algorithms.
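
    A minimal sketch of an APIDQ-style feature (the quantization boundaries and L1 matching rule are assumptions, not the paper's exact design): intensity differences between horizontally adjacent pixels are quantized and histogrammed into a per-frame feature vector:

        import numpy as np

        BINS = (-256, -32, -16, -8, -4, 4, 8, 16, 32, 256)   # illustrative levels

        def apidq_histogram(frame):
            # Differences between horizontally adjacent pixels, quantized by BINS.
            diff = np.diff(np.asarray(frame, int), axis=1).ravel()
            hist, _ = np.histogram(diff, bins=BINS)
            return hist / hist.sum()                         # normalized feature vector

        def feature_distance(h1, h2):
            return float(np.abs(h1 - h2).sum())              # L1 distance for matching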

  5. Burning Heptane Droplets on STS-94

    NASA Technical Reports Server (NTRS)

    2003-01-01

    Fuel ignites and burns in the Droplet Combustion Experiment (DCE) on STS-94 on July 11, 1997. This round of experiments burned heptane droplets in 1/2 atmosphere pressure consisting of oxygen and helium. During this mission, scientists have seen for the first time droplets which stop burning due to heat loss by radiation. From these data, the investigators hope to understand the physical and chemical processes that take place in droplet combustion in different environments, including conditions under which the flames extinguish, the chemistry of the combustion reaction, and the production of pollutants such as nitrogen oxides and soot particles. The DCE was designed to investigate the fundamental combustion aspects of single, isolated droplets under different pressures and ambient oxygen concentrations for a range of droplet sizes varying between 2 and 5 mm. The DCE principal investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (983KB, 9-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300172.html.

  6. Balancing Uplink and Downlink under Asymmetric Traffic Environments Using Distributed Receive Antennas

    NASA Astrophysics Data System (ADS)

    Sohn, Illsoo; Lee, Byong Ok; Lee, Kwang Bok

    Recently, multimedia services have been increasing with the widespread use of various wireless applications such as web browsers, real-time video, and interactive games, which results in traffic asymmetry between the uplink and downlink. Hence, time division duplex (TDD) systems, which provide advantages in efficient bandwidth utilization under asymmetric traffic environments, have become one of the most important issues in future mobile cellular systems. It is known that two types of intercell interference, referred to as crossed-slot interference, additionally arise in TDD systems: the performance of uplink transmission is degraded by BS-to-BS crossed-slot interference, and that of downlink transmission by MS-to-MS crossed-slot interference. The resulting performance imbalance between the uplink and downlink makes network deployment severely inefficient. Previous works have proposed intelligent time slot allocation algorithms to mitigate the crossed-slot interference problem, but they require centralized control, which causes large signaling overhead in the network. In this paper, we propose to change the shape of the cellular structure itself: the conventional cellular structure is easily transformed into the proposed cellular structure with distributed receive antennas (DRAs). We set up a statistical Markov chain traffic model and analyze the bit error performance of the conventional and proposed cellular structures under asymmetric traffic environments. Numerical results show that the uplink and downlink performances of the proposed cellular structure become balanced with a proper number of DRAs. Extending the conventional cellular structure with DRAs is thus a remarkably cost-effective way to support asymmetric traffic environments in future mobile cellular systems.

  7. Chinese Spacesuit Analysis

    NASA Technical Reports Server (NTRS)

    Croog, Lewis

    2010-01-01

    In 2008, China became only the third nation to perform an Extravehicular Activity (EVA) from a spacecraft. The Chinese spacesuit and life support system were assessed from video downlinked during the EVA, and spacesuit characteristics were identified from those assessments. The spacesuits were compared against the Russian Orlan spacesuit and the U.S. Extravehicular Mobility Unit (EMU). China's plans for future missions were also presented.

  8. Chaos based video encryption using maps and Ikeda time delay system

    NASA Astrophysics Data System (ADS)

    Valli, D.; Ganesan, K.

    2017-12-01

    Chaos-based cryptosystems are an efficient method for high-speed, highly secure multimedia encryption because of their elegant features, such as randomness, mixing, ergodicity, and sensitivity to initial conditions and control parameters. In this paper, two chaos-based cryptosystems are proposed: one uses a higher-dimensional 12D chaotic map and the other is based on the Ikeda delay differential equation (DDE); both are suitable for designing a real-time, secure, symmetric video encryption scheme. These encryption schemes employ a substitution box (S-box) to diffuse the relationship between pixels of the plain video and the cipher video, along with diffusion of the current input pixel with the previous cipher pixel, called cipher block chaining (CBC). The proposed method enhances robustness against statistical, differential, and chosen/known plaintext attacks. Detailed analysis is carried out in this paper to demonstrate the security and uniqueness of the proposed scheme.
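
    A compact illustration of the two named ingredients, a chaotic keystream plus S-box substitution with cipher block chaining (the logistic map stands in for the paper's 12D map and Ikeda DDE; the map, seed, S-box, and IV are all assumptions):

        import numpy as np

        def logistic_keystream(n, x0=0.3141592, r=3.9999):
            # Chaotic byte stream from iterating the logistic map x <- r*x*(1-x).
            ks, x = np.empty(n, dtype=np.uint8), x0
            for i in range(n):
                x = r * x * (1.0 - x)
                ks[i] = int(x * 256) & 0xFF
            return ks

        def encrypt_frame(pixels, sbox, iv=0x5A):
            ks = logistic_keystream(pixels.size)
            out, prev = np.empty_like(pixels.ravel()), iv
            for i, p in enumerate(pixels.ravel()):
                # Substitution, then chaining with the previous cipher pixel (CBC).
                c = sbox[p ^ ks[i]] ^ prev
                out[i] = c
                prev = c
            return out.reshape(pixels.shape)

        sbox = np.random.default_rng(7).permutation(256).astype(np.uint8)
        frame = np.random.default_rng(1).integers(0, 256, (8, 8), dtype=np.uint8)
        cipher = encrypt_frame(frame, sbox)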

  9. Fast image interpolation for motion estimation using graphics hardware

    NASA Astrophysics Data System (ADS)

    Kelly, Francis; Kokaram, Anil

    2004-05-01

    Motion estimation and compensation are the key to high-quality video coding. Block-matching motion estimation is used in most video codecs, including MPEG-2, MPEG-4, H.263 and H.26L. Motion estimation is also a key component in the digital restoration of archived video and in post-production and special effects in the movie industry. Sub-pixel accurate motion vectors can improve the quality of the vector field and lead to more efficient video coding; however, sub-pixel accuracy requires interpolation of the image data. Image interpolation is a key requirement of many image processing algorithms, and it can often be a bottleneck in these applications, especially in motion estimation, due to the large number of pixels involved. In this paper we propose using commodity computer graphics hardware for fast image interpolation. We use the full-search block-matching algorithm to illustrate the problems and limitations of using graphics hardware in this way.
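
    A CPU reference for the interpolation kernel such hardware accelerates (a sketch; graphics hardware performs the equivalent bilinear weighting in its texture units): sampling an image at a sub-pixel position, as needed when evaluating half- or quarter-pixel motion vector candidates:

        import numpy as np

        def bilinear(img, y, x):
            # Weighted average of the four pixels surrounding (y, x).
            y0, x0 = int(np.floor(y)), int(np.floor(x))
            dy, dx = y - y0, x - x0
            y1, x1 = min(y0 + 1, img.shape[0] - 1), min(x0 + 1, img.shape[1] - 1)
            top = (1 - dx) * img[y0, x0] + dx * img[y0, x1]
            bot = (1 - dx) * img[y1, x0] + dx * img[y1, x1]
            return (1 - dy) * top + dy * bot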

  10. ACE: Automatic Centroid Extractor for real time target tracking

    NASA Technical Reports Server (NTRS)

    Cameron, K.; Whitaker, S.; Canaris, J.

    1990-01-01

    A high performance video image processor has been implemented which is capable of grouping contiguous pixels from a raster-scan image into objects and then calculating centroid information for each object in a frame. The algorithm employed to group pixels is very efficient and is guaranteed to work properly for all convex shapes as well as most concave shapes. Processing speeds are adequate for real-time processing of video images with pixel rates of up to 20 million pixels per second. Pixels may be up to 8 bits wide. The processor is designed to interface directly to a transputer serial link communications channel with no additional hardware. The full custom VLSI processor was implemented in a 1.6 µm CMOS process and measures 7200 µm on a side.
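
    A software analogue of what the chip computes (an assumption-level sketch, not the VLSI logic; the threshold value is illustrative): group contiguous above-threshold pixels and return each object's intensity-weighted centroid:

        import numpy as np
        from scipy import ndimage

        def centroids(frame, threshold=50):
            frame = np.asarray(frame)
            labels, n = ndimage.label(frame > threshold)   # group contiguous pixels
            # Intensity-weighted centroid (row, col) of every labeled object.
            return [ndimage.center_of_mass(frame, labels, i) for i in range(1, n + 1)]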

  11. Remote stereoscopic video play platform for naked eyes based on the Android system

    NASA Astrophysics Data System (ADS)

    Jia, Changxin; Sang, Xinzhu; Liu, Jing; Cheng, Mingsheng

    2014-11-01

    As people's quality of life has improved significantly, traditional 2D video technology can no longer meet the urgent desire for better video quality, which has led to the rapid development of 3D video technology. At the same time, people want to watch 3D video on portable devices. To achieve this, we set up a remote stereoscopic video play platform. The platform consists of a server and clients: the server transmits video in different formats, and the client receives the remote video for subsequent decoding and pixel restructuring. We utilize and improve Live555 as the video transmission server. Live555 is a cross-platform open-source project that provides streaming-media solutions such as the RTSP protocol and supports transmission of multiple video formats. At the receiving end, we use our laboratory's own player for Android, which has all the basic functions of an ordinary player and can play normal 2D video, as the basic structure for redevelopment; RTSP is implemented in this structure for communication. To achieve stereoscopic display, pixels are rearranged in the player's decoding part, which is native code called through the JNI interface so that video frames can be extracted more effectively. The video formats we process are left-right, top-bottom, and nine-grid. The design and development employ a number of key technologies from Android application development, including various forms of wireless transmission, pixel restructuring, and JNI calls. After updates and optimizations, the video player can play remote 3D video well anytime and anywhere and meet people's requirements.

  12. High-speed imaging using CMOS image sensor with quasi pixel-wise exposure

    NASA Astrophysics Data System (ADS)

    Sonoda, T.; Nagahara, H.; Endo, K.; Sugiyama, Y.; Taniguchi, R.

    2017-02-01

    Several recent studies in compressive video sensing have realized scene capture beyond the fundamental trade-off between spatial resolution and temporal resolution by using random space-time sampling. However, most of these studies showed higher-frame-rate video produced by simulation experiments or by an optically simulated random-sampling camera, because there are currently no commercially available image sensors with random exposure or sampling capabilities. We fabricated a prototype complementary metal oxide semiconductor (CMOS) image sensor with quasi pixel-wise exposure timing that can realize nonuniform space-time sampling. The prototype sensor can reset exposures independently by column and fix the exposure duration by row for each 8×8 pixel block. This CMOS sensor is not fully controllable pixel by pixel and has line-dependent controls, but it offers flexibility compared with regular CMOS or charge-coupled device sensors with global or rolling shutters. We propose a method that uses this flexibility to realize pseudo-random sampling for high-speed video acquisition, and we reconstruct the high-speed video sequence from the images produced by pseudo-random sampling using an over-complete dictionary.

  13. "They may be pixels, but they're MY pixels:" developing a metric of character attachment in role-playing video games.

    PubMed

    Lewis, Melissa L; Weber, René; Bowman, Nicholas David

    2008-08-01

    This paper proposes a new and reliable metric for measuring character attachment (CA), the connection felt by a video game player toward a video game character. Results of construct validity analyses indicate that the proposed CA scale has a significant relationship with self-esteem, addiction, game enjoyment, and time spent playing games; all of these relationships are predicted by theory. Additionally, CA levels for role-playing games differ significantly from CA levels of other character-driven games.

  14. Video repairing under variable illumination using cyclic motions.

    PubMed

    Jia, Jiaya; Tai, Yu-Wing; Wu, Tai-Pang; Tang, Chi-Keung

    2006-05-01

    This paper presents a complete system capable of synthesizing a large number of pixels that are missing due to occlusion or damage in an uncalibrated input video. These missing pixels may correspond to the static background or cyclic motions of the captured scene. Our system employs user-assisted video layer segmentation, while the main processing in video repair is fully automatic. The input video is first decomposed into the color and illumination videos. The necessary temporal consistency is maintained by tensor voting in the spatio-temporal domain. Missing colors and illumination of the background are synthesized by applying image repairing. Finally, the occluded motions are inferred by spatio-temporal alignment of collected samples at multiple scales. We experimented on our system with some difficult examples with variable illumination, where the capturing camera can be stationary or in motion.

  15. Mars Color Imager (MARCI) on the Mars Climate Orbiter

    USGS Publications Warehouse

    Malin, M.C.; Bell, J.F.; Calvin, W.; Clancy, R.T.; Haberle, R.M.; James, P.B.; Lee, S.W.; Thomas, P.C.; Caplinger, M.A.

    2001-01-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 x 6 x 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 x 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals. Copyright 2001 by the American Geophysical Union.

  16. The Mars Color Imager (MARCI) on the Mars Climate Orbiter

    NASA Astrophysics Data System (ADS)

    Malin, M. C.; Calvin, W.; Clancy, R. T.; Haberle, R. M.; James, P. B.; Lee, S. W.; Thomas, P. C.; Caplinger, M. A.

    2001-08-01

    The Mars Color Imager, or MARCI, experiment on the Mars Climate Orbiter (MCO) consists of two cameras with unique optics and identical focal plane assemblies (FPAs), Data Acquisition System (DAS) electronics, and power supplies. Each camera is characterized by small physical size and mass (~6 × 6 × 12 cm, including baffle; <500 g), low power requirements (<2.5 W, including power supply losses), and high science performance (1000 × 1000 pixel, low noise). The Wide Angle (WA) camera will have the capability to map Mars in five visible and two ultraviolet spectral bands at a resolution of better than 8 km/pixel under the worst case downlink data rate. Under better downlink conditions the WA will provide kilometer-scale global maps of atmospheric phenomena such as clouds, hazes, dust storms, and the polar hood. Limb observations will provide additional detail on atmospheric structure at 1/3 scale-height resolution. The Medium Angle (MA) camera is designed to study selected areas of Mars at regional scale. From 400 km altitude its 6° FOV, which covers ~40 km at 40 m/pixel, will permit all locations on the planet except the poles to be accessible for image acquisitions every two mapping cycles (roughly 52 sols). Eight spectral channels between 425 and 1000 nm provide the ability to discriminate both atmospheric and surface features on the basis of composition. The primary science objectives of MARCI are to (1) observe Martian atmospheric processes at synoptic scales and mesoscales, (2) study details of the interaction of the atmosphere with the surface at a variety of scales in both space and time, and (3) examine surface features characteristic of the evolution of the Martian climate over time. MARCI will directly address two of the three high-level goals of the Mars Surveyor Program: Climate and Resources. Life, the third goal, will be addressed indirectly through the environmental factors associated with the other two goals.

  17. Video image processor on the Spacelab 2 Solar Optical Universal Polarimeter /SL2 SOUP/

    NASA Technical Reports Server (NTRS)

    Lindgren, R. W.; Tarbell, T. D.

    1981-01-01

    The SOUP instrument is designed to obtain diffraction-limited digital images of the sun with high photometric accuracy. The Video Processor originated from the requirement to provide onboard real-time image processing, both to reduce the telemetry rate and to provide meaningful video displays of scientific data to the payload crew. This original concept has evolved into a versatile digital processing system with a multitude of other uses in the SOUP program. The central element in the Video Processor design is a 16-bit central processing unit based on 2900 family bipolar bit-slice devices. All arithmetic, logical and I/O operations are under control of microprograms, stored in programmable read-only memory and initiated by commands from the LSI-11. Several functions of the Video Processor are described, including interface to the High Rate Multiplexer downlink, cosmetic and scientific data processing, scan conversion for crew displays, focus and exposure testing, and use as ground support equipment.

  18. A Benchmark Dataset and Saliency-guided Stacked Autoencoders for Video-based Salient Object Detection.

    PubMed

    Li, Jia; Xia, Changqun; Chen, Xiaowu

    2017-10-12

    Image-based salient object detection (SOD) has been extensively studied in past decades. However, video-based SOD is much less explored due to the lack of large-scale video datasets within which salient objects are unambiguously defined and annotated. Toward this end, this paper proposes a video-based SOD dataset that consists of 200 videos. In constructing the dataset, we manually annotate all objects and regions over 7,650 uniformly sampled keyframes and collect the eye-tracking data of 23 subjects who free-view all videos. From the user data, we find that salient objects in a video can be defined as objects that consistently pop out throughout the video, and objects with such attributes can be unambiguously annotated by combining manually annotated object/region masks with eye-tracking data of multiple subjects. To the best of our knowledge, it is currently the largest dataset for video-based salient object detection. Based on this dataset, this paper proposes an unsupervised baseline approach for video-based SOD using saliency-guided stacked autoencoders. In the proposed approach, multiple spatiotemporal saliency cues are first extracted at the pixel, superpixel and object levels. With these saliency cues, stacked autoencoders are constructed in an unsupervised manner that automatically infers a saliency score for each pixel by progressively encoding the high-dimensional saliency cues gathered from the pixel and its spatiotemporal neighbors. In experiments, the proposed unsupervised approach is compared with 31 state-of-the-art models on the proposed dataset and outperforms 30 of them, including 19 image-based classic (unsupervised or non-deep learning) models, six image-based deep learning models, and five video-based unsupervised models. Moreover, benchmarking results show that the proposed dataset is very challenging and has the potential to boost the development of video-based SOD.

  19. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    Advances in very large-scale integration and recent work in the field of bandwidth efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the CODEC are described, and performance results are provided.

  20. Digital CODEC for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A.

    1991-01-01

    Advances in very large scale integration and recent work in the field of bandwidth-efficient digital modulation techniques have combined to make digital video processing technically feasible and potentially cost competitive for broadcast quality television transmission. A hardware implementation was developed for a DPCM (differential pulse code modulation)-based digital television bandwidth compression algorithm which processes standard NTSC composite color television signals and produces broadcast quality video in real time at an average of 1.8 bits/pixel. The data compression algorithm and the hardware implementation of the codec are described, and performance results are provided.

  1. The Youngest Crater on Charon?

    NASA Image and Video Library

    2015-10-29

    NASA's New Horizons scientists have discovered a striking contrast between one of the fresh craters on Pluto's largest moon Charon and a neighboring crater dotting the moon's Pluto-facing hemisphere. The crater, informally named Organa, caught scientists' attention as they were studying New Horizons' highest-resolution infrared compositional scan of Charon. Organa and portions of the surrounding material ejected from it show infrared absorption at wavelengths of about 2.2 microns, indicating that the crater is rich in frozen ammonia -- and, from what scientists have seen so far, unique on Pluto's largest moon. The infrared spectrum of nearby Skywalker crater, for example, is similar to the rest of Charon's craters and surface, with features dominated by ordinary water ice. This composite image is based on observations from the New Horizons Ralph/LEISA instrument made at 10:25 UT (6:25 a.m. EDT) on July 14, 2015, when New Horizons was 50,000 miles (81,000 kilometers) from Charon. The spatial resolution is 3 miles (5 kilometers) per pixel. The LEISA data were downlinked Oct. 1-4, 2015, and processed into a map of Charon's 2.2 micron ammonia-ice absorption band. Long Range Reconnaissance Imager (LORRI) panchromatic images used as the background in this composite were taken about 8:33 UT (4:33 a.m. EDT) July 14 at a resolution of 0.6 miles (0.9 kilometers) per pixel and downlinked Oct. 5-6. The ammonia absorption map from LEISA is shown in green on the LORRI image. The region covered by the yellow box is 174 miles across (280 kilometers). http://photojournal.jpl.nasa.gov/catalog/PIA20036

  2. Droplet Suspended on a Wire Begins Ignition

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Fiber Supported Droplet Combustion Experiment completed a number of successful burns on STS-94, July 11, 1997, MET:9/17:40 (approximate). The photo shows a droplet of 95% heptane and 5% hexadecane, suspended and positioned by the fiber wire, just as it is being ignited by the glowing coil beneath. Study of the physical properties of burning fuel from this experiment is expected to contribute to more efficient use of fossil fuels and reduction of combustion by-products on Earth. The sequence is from a time-lapse movie (34 seconds condensed to 12 seconds), and clearly shows particles emanating from the droplet during the burn. The droplet shrank to nothing as it was consumed. FSDC-2 studied fundamental phenomena related to liquid fuel droplet combustion in air. Pure fuels and mixtures of fuels were burned as isolated single and dual droplets with and without forced air convection. The FSDC guest investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.2 MB, 11-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300180.html.

  3. Ignition of Droplet Suspended on a Wire

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Fiber Supported Droplet Combustion Experiment completed a number of successful burns on STS-94, July 11, 1997, MET:9/17:40 (approximate). The photo shows a droplet of 95% heptane and 5% hexadecane, suspended and positioned by the fiber wire, just as it is being ignited by the glowing coil beneath. Study of the physical properties of burning fuel from this experiment is expected to contribute to more efficient use of fossil fuels and reduction of combustion by-products on Earth. The sequence is from a time-lapse movie (34 seconds condensed to 12 seconds), and clearly shows particles emanating from the droplet during the burn. The droplet shrank to nothing as it was consumed. FSDC-2 studied fundamental phenomena related to liquid fuel droplet combustion in air. Pure fuels and mixtures of fuels were burned as isolated single and dual droplets with and without forced air convection. The FSDC guest investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (133KB JPEG, 656 x 741 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300181.html.

  4. A high sensitivity 20Mfps CMOS image sensor with readout speed of 1Tpixel/sec for visualization of ultra-high speed phenomena

    NASA Astrophysics Data System (ADS)

    Kuroda, R.; Sugawa, S.

    2017-02-01

    Ultra-high speed (UHS) CMOS image sensors with on-chip analog memories placed on the periphery of the pixel array, for the visualization of UHS phenomena, are overviewed in this paper. The developed UHS CMOS image sensors consist of 400H×256V pixels and 128 memories/pixel, and a readout speed of 1 Tpixel/sec is obtained, enabling 10 Mfps full-resolution video capture of 128 consecutive frames and 20 Mfps half-resolution video capture of 256 consecutive frames. The first development model was employed in a high-speed video camera and put into practical use in 2012. Through the development of dedicated process technologies, photosensitivity improvement and power consumption reduction were achieved simultaneously, and the improved version has been used since 2015 in a commercial high-speed video camera that offers 10 Mfps with ISO 16,000 photosensitivity. Due to the improved photosensitivity, clear images can be captured and analyzed even under low-light conditions, such as under a microscope, as well as in capturing UHS light emission phenomena.

  5. What is the Value of Space Exploration? - A Prairie Perspective

    NASA Technical Reports Server (NTRS)

    1995-01-01

    The symposium addressed different topics within space exploration and was fed, using satellite downlinks, to several communities in North Dakota, the first symposium of its type ever held. The specific topics, presented by community members within the state of North Dakota, were the economic, cultural, scientific and technical, political, educational, and social value of space exploration. Included is a 22-minute VHS video cassette highlighting the symposium.

  6. An Efficient, FPGA-Based, Cluster Detection Algorithm Implementation for a Strip Detector Readout System in a Time Projection Chamber Polarimeter

    NASA Technical Reports Server (NTRS)

    Gregory, Kyle J.; Hill, Joanne E. (Editor); Black, J. Kevin; Baumgartner, Wayne H.; Jahoda, Keith

    2016-01-01

    A fundamental challenge in a spaceborne application of a gas-based Time Projection Chamber (TPC) for observation of X-ray polarization is handling the large amount of data collected. The TPC polarimeter described uses the APV-25 Application Specific Integrated Circuit (ASIC) to readout a strip detector. Two dimensional photoelectron track images are created with a time projection technique and used to determine the polarization of the incident X-rays. The detector produces a 128x30 pixel image per photon interaction with each pixel registering 12 bits of collected charge. This creates challenging requirements for data storage and downlink bandwidth with only a modest incidence of photons and can have a significant impact on the overall mission cost. An approach is described for locating and isolating the photoelectron track within the detector image, yielding a much smaller data product, typically between 8x8 pixels and 20x20 pixels. This approach is implemented using a Microsemi RT-ProASIC3-3000 Field-Programmable Gate Array (FPGA), clocked at 20 MHz and utilizing 10.7k logic gates (14% of FPGA), 20 Block RAMs (17% of FPGA), and no external RAM. Results will be presented, demonstrating successful photoelectron track cluster detection with minimal impact to detector dead-time.
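
    A software sketch of the track-isolation step (the threshold and margin values are assumptions, not the FPGA parameters): threshold the 128x30 charge image, take the bounding box of above-threshold pixels, and crop with a small margin to yield the reduced data product:

        import numpy as np

        def isolate_track(image, threshold=100, margin=2):
            image = np.asarray(image)
            ys, xs = np.nonzero(image > threshold)       # above-threshold pixels
            if ys.size == 0:
                return None                              # no track in this event
            y0 = max(ys.min() - margin, 0)
            y1 = min(ys.max() + margin + 1, image.shape[0])
            x0 = max(xs.min() - margin, 0)
            x1 = min(xs.max() + margin + 1, image.shape[1])
            return image[y0:y1, x0:x1]                   # much smaller data product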

  7. The trigger system of the JEM-EUSO Project

    NASA Astrophysics Data System (ADS)

    Bertaina, M.; Ebisuzaki, T.; Hamada, T.; Ikeda, H.; Kawasai, Y.; Sawabe, T.; Takahashi, Y.; JEM-EUSO Collaboration

    The trigger system of JEM-EUSO must address several major challenges: (a) cope with the limited downlink transmission rate from the ISS to Earth by performing severe on-board, real-time data reduction; (b) use very fast, low-power, radiation-hard electronics; and (c) achieve high signal-over-noise performance and flexibility, in order to lower the energy threshold of the detector as much as possible, adjust the system to a variable nightglow background, and trigger on different categories of events (images persisting on the same pixels or crossing large portions of the entire focal surface). Based on these stringent requirements, the main ingredients of the trigger logic are: the Gate Time Unit (GTU); the minimum number Nthresh of photo-electrons that must pile up in a pixel within a GTU for the pixel to fire; the persistency level Npers, the number of consecutive GTUs for which fired pixels must remain over threshold; and the localization and correlation in space and time of the fired pixels, which distinguish a real EAS from an accidental background enhancement. The core of the trigger logic is the Track Trigger Algorithm that has been specifically developed for this purpose. Its characteristics, preliminary performance, and possible implementation on an FPGA or DSP are discussed, together with a general overview of the architecture of the JEM-EUSO triggering system.
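
    A schematic software model of such a persistence trigger (the GTU counts and the Nthresh and Npers values below are placeholders, not JEM-EUSO settings): a pixel fires when its photoelectron count in one GTU exceeds Nthresh, and a trigger is issued when the same pixel fires in Npers consecutive GTUs:

        import numpy as np

        def persistence_trigger(counts, n_thresh=3, n_pers=5):
            # counts: array of shape (n_gtu, n_pixels), photoelectrons per GTU per pixel.
            fired = np.asarray(counts) > n_thresh
            run = np.zeros(fired.shape[1], dtype=int)
            for gtu in range(fired.shape[0]):
                run = np.where(fired[gtu], run + 1, 0)   # consecutive-GTU run length
                if np.any(run >= n_pers):
                    return gtu, np.flatnonzero(run >= n_pers)  # trigger GTU and pixels
            return None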

  8. Precise color images a high-speed color video camera system with three intensified sensors

    NASA Astrophysics Data System (ADS)

    Oki, Sachio; Yamakawa, Masafumi; Gohda, Susumu; Etoh, Takeharu G.

    1999-06-01

    High-speed imaging systems have been used in many fields of science and engineering. Although high-speed camera systems have improved considerably, most applications only capture high-speed motion pictures. In some fields of science and technology, however, it is useful to obtain other information, such as the temperature of combustion flames, thermal plasma, and molten materials, and recent digital high-speed video technology should be able to extract such information from those objects. For this purpose, we have developed a high-speed video camera system with three intensified sensors and a cubic-prism image splitter. The maximum frame rate is 40,500 pps (pictures per second) at 64 X 64 pixels and 4,500 pps at 256 X 256 pixels, with 256 (8-bit) intensity levels per pixel. The camera system can store more than 1,000 pictures continuously in solid-state memory. To obtain precise color images from this camera system, we need a digital technique, consisting of a computer program and ancillary instruments, to correct the displacement between images taken from the two or three image sensors and to calibrate the relationship between incident light intensity and the corresponding digital output signals. In this paper, a digital technique for pixel-based displacement adjustment is proposed. Although the displacement of the corresponding circle was more than 8 pixels in the original images, it was adjusted to within 0.2 pixels at most by this method.

  9. An efficient interpolation filter VLSI architecture for HEVC standard

    NASA Astrophysics Data System (ADS)

    Zhou, Wei; Zhou, Xin; Lian, Xiaocong; Liu, Zhenyu; Liu, Xiaoxiang

    2015-12-01

    The next-generation video coding standard, High-Efficiency Video Coding (HEVC), is especially efficient for coding high-resolution video such as 8K ultra-high-definition (UHD) video. Fractional motion estimation in HEVC presents a significant challenge in clock latency and area cost, as it consumes more than 40 % of the total encoding time and thus results in high computational complexity. With the aim of supporting 8K-UHD video applications, an efficient interpolation filter VLSI architecture for HEVC is proposed in this paper. First, a new interpolation filter algorithm based on an 8-pixel interpolation unit is proposed; it saves 19.7 % of processing time on average with acceptable coding quality degradation. Based on the proposed algorithm, an efficient interpolation filter VLSI architecture, composed of a reused interpolation data path, an efficient memory organization, and a reconfigurable pipelined interpolation filter engine, is presented to reduce the hardware area and achieve high throughput. The final VLSI implementation requires only 37.2k gates in a standard 90-nm CMOS technology at an operating frequency of 240 MHz. The proposed architecture can be reused for either half-pixel or quarter-pixel interpolation, which saves about 131,040 bits of RAM. The processing latency of the proposed VLSI architecture supports real-time processing of 4:2:0-format 7680 × 4320 video sequences at 78 fps.
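
    For reference, HEVC's half-sample luma interpolation uses a symmetric 8-tap filter; the scalar sketch below (plain integer arithmetic as an illustration, not the paper's reconfigurable pipeline) shows the multiply-accumulate pattern that a reused data path implements for both the horizontal and vertical passes:

        import numpy as np

        HALF_PEL = np.array([-1, 4, -11, 40, 40, -11, 4, -1])  # HEVC half-pel luma taps

        def half_pel_sample(samples):
            # Half-pixel value between samples[3] and samples[4], for 8-bit video.
            acc = int(np.dot(HALF_PEL, samples[:8]))
            return int(np.clip((acc + 32) >> 6, 0, 255))       # +32 rounds, >>6 divides by 64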

  10. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed in CS video to enhance the video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to process motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127

  11. Digital codec for real-time processing of broadcast quality video signals at 1.8 bits/pixel

    NASA Technical Reports Server (NTRS)

    Shalkhauser, Mary JO; Whyte, Wayne A., Jr.

    1989-01-01

    The authors present the hardware implementation of a digital television bandwidth compression algorithm which processes standard NTSC (National Television Systems Committee) composite color television signals and produces broadcast-quality video in real time at an average of 1.8 b/pixel. The sampling rate used with this algorithm results in 768 samples over the active portion of each video line by 512 active video lines per video frame. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The nonadaptive predictor and multilevel Huffman coder combine to set this technique apart from prior-art DPCM encoding algorithms. The authors describe the data compression algorithm and the hardware implementation of the codec and provide performance results.
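
    A scalar sketch of the DPCM loop described above (the uniform quantizer and fixed predictor here are simplifications; the codec's nonadaptive predictor, nonuniform quantizer, and multilevel Huffman coder are not reproduced): each pixel is predicted from the previous reconstructed pixel and only the quantized prediction error is coded:

        import numpy as np

        def dpcm_encode(line, step=8):
            pred, symbols = 128, []
            for p in line:
                q = round((int(p) - pred) / step)        # quantized prediction error
                symbols.append(q)                        # symbols would feed the entropy coder
                pred = int(np.clip(pred + q * step, 0, 255))  # track the decoder's reconstruction
            return symbols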

  12. Adaptive Video Streaming Using Bandwidth Estimation for 3.5G Mobile Network

    NASA Astrophysics Data System (ADS)

    Nam, Hyeong-Min; Park, Chun-Su; Jung, Seung-Won; Ko, Sung-Jea

    Currently deployed mobile networks, including High Speed Downlink Packet Access (HSDPA), offer only best-effort Quality of Service (QoS). In wireless best-effort networks, bandwidth variation is a critical problem, especially for mobile devices with small buffers, because it leads to packet losses caused by buffer overflow as well as picture freezing due to high transmission delay or buffer underflow. In this paper, in order to provide seamless video streaming over HSDPA, we propose an efficient real-time video streaming method that consists of available bandwidth (AB) estimation for the HSDPA network and transmission rate control to prevent buffer overflows/underflows. In the proposed method, the client estimates the AB, and the estimate is fed back to the server through real-time transport control protocol (RTCP) packets. The server then adaptively adjusts the transmission rate according to the estimated AB and the buffer state obtained from the RTCP feedback information. Experimental results show that the proposed method achieves seamless video streaming over the HSDPA network, providing higher video quality and lower transmission delay.

  13. Initial results from a video-laser rangefinder device

    Treesearch

    Neil A. Clark

    2000-01-01

    Three hundred and nine width measurements at various heights to 10 m on a metal light pole were calculated from video images captured with a prototype video-laser rangefinder instrument. Data were captured at distances from 6 to 15 m. The endpoints for the width measurements were manually selected to the nearest pixel from individual video frames. Chi-square...

  14. Particle Engulfment and Pushing by Solidifying Interfaces: USMP-4 One Year Report

    NASA Technical Reports Server (NTRS)

    Stefanescu, D. M.; Juretzko, F. R.; Catalina, A. V.; Sen, S.; Curreri, P.; Schmitt, C.

    1999-01-01

    The experiment Particle Pushing and Engulfment by Solidifying Interfaces (PEP) was conducted during the USMP-4 (United States Microgravity Payload-4) mission on board the shuttle Columbia in November 1997. This experiment has its place within the framework of a long-term scientific effort to understand the physics of particle pushing. The first flight experiment of this kind was performed with a metal matrix composite on board STS-78 in the summer of 1996. The use of opaque matrices limits the evaluation to pre- and post-flight comparison of particle locations within the sample. By using transparent matrices, the interaction of one or multiple particles with an advancing solid/liquid (SL) interface can be studied in situ. If these observations can be transmitted directly from the orbiter to the scientists by video downlink, real-time execution of the experiment is possible in a microgravity environment. Part of this experiment was extensive training of the payload specialists to perform the experiment in orbit, further enhanced by the availability of video downlink and direct communication with the astronauts. Even though the PEP experiment is aimed at understanding the interaction of a liquid/solid interface with insoluble particles, and thus is fundamental in scope, the prospective applications are not: they range from improved metal matrix composites to understanding and preventing the frost heaving that affects roads.

  15. Video Conferences through the Internet: How to Survive in a Hostile Environment

    PubMed Central

    Fernández, Carlos; Fernández-Navajas, Julián; Sequeira, Luis; Casadesus, Luis

    2014-01-01

    This paper analyzes and compares two different video conference solutions, widely used in corporate and home environments, with a special focus on the mechanisms used for adapting the traffic to the network status. The results show how these mechanisms are able to provide a good quality in the hostile environment of the public Internet, a best effort network without delay or delivery guarantees. Both solutions are evaluated in a laboratory, where different network impairments (bandwidth limit, delay, and packet loss) are set, in both the uplink and the downlink, and the reaction of the applications is measured. The tests show how these solutions modify their packet size and interpacket time, in order to increase or reduce the sent data. One of the solutions also uses a scalable video codec, able to adapt the traffic to the network status and to the end devices. PMID:24605066

  16. Digital TV processing system

    NASA Technical Reports Server (NTRS)

    1975-01-01

    Two digital video data compression systems directly applicable to the Space Shuttle TV Communication System were described. (1) For the uplink, a low-rate monochrome data compressor is used. The compression is achieved by using a motion detection technique in the Hadamard domain; to transform the variable source rate into a fixed rate, an adaptive rate buffer is provided. (2) For the downlink, a color data compressor is considered. The compression is achieved first by an intra-color transformation of the original signal vector into a vector with lower information entropy. Two-dimensional data compression techniques are then applied to the Hadamard-transformed components of this vector. Mathematical models and data reliability analyses were also provided for the above video data compression techniques transmitted over a channel-coded Gaussian channel. It was shown that substantial gains can be achieved by the combination of video source and channel coding.
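
    A small sketch of the uplink idea (the 8×8 block size and the change threshold are assumptions): transform co-located blocks of consecutive frames into the Hadamard domain and keep only the coefficients that changed significantly:

        import numpy as np
        from scipy.linalg import hadamard

        H = hadamard(8)                                   # 8x8 Hadamard matrix

        def changed_coeffs(block_prev, block_curr, thresh=64):
            # block_prev, block_curr: co-located 8x8 numpy blocks of two frames.
            t_prev = H @ block_prev @ H.T                 # 2-D Hadamard transform
            t_curr = H @ block_curr @ H.T
            moved = np.abs(t_curr - t_prev) > thresh      # motion-detection mask
            return np.argwhere(moved), t_curr[moved]      # positions and values to send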

  17. Video Image Tracking Engine

    NASA Technical Reports Server (NTRS)

    Howard, Richard T. (Inventor); Bryan, ThomasC. (Inventor); Book, Michael L. (Inventor)

    2004-01-01

    A method and system for processing an image, including capturing an image and storing it as image pixel data. Each image pixel datum is stored in a respective memory location having a corresponding address. Threshold pixel data are selected from the image pixel data, and linear spot segments are identified from the selected threshold pixel data. The positions of only the first pixel and the last pixel of each linear segment are saved. Movement of one or more objects is tracked by comparing the positions of the first and last pixels of a linear segment present in the captured image with the respective first and last pixel positions in subsequent captured images. Alternatively, additional data for each linear segment are saved, such as the sum of pixel values and the weighted sum of pixel values (i.e., each threshold pixel value multiplied by that pixel's x-location).
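
    A line-scan sketch of the storage scheme (function and variable names are mine, not the patent's): within each row, keep only each above-threshold run's first and last pixel positions, plus the run's pixel sum and x-weighted sum:

        import numpy as np

        def row_segments(row, threshold):
            row = np.asarray(row)
            above = row > threshold
            edges = np.diff(above.astype(int))
            starts = list(np.flatnonzero(edges == 1) + 1)
            ends = list(np.flatnonzero(edges == -1))
            if above[0]:
                starts.insert(0, 0)                  # run starts at the row's first pixel
            if above[-1]:
                ends.append(len(row) - 1)            # run ends at the row's last pixel
            segments = []
            for s, e in zip(starts, ends):
                vals = row[s:e + 1].astype(int)
                x = np.arange(s, e + 1)
                # (first pixel, last pixel, pixel sum, x-weighted pixel sum)
                segments.append((s, e, int(vals.sum()), int((vals * x).sum())))
            return segments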

  18. Fluorescence imaging to quantify crop residue cover

    NASA Technical Reports Server (NTRS)

    Daughtry, C. S. T.; Mcmurtrey, J. E., III; Chappelle, E. W.

    1994-01-01

    Crop residues, the portion of the crop left in the field after harvest, can be an important management factor in controlling soil erosion. Methods to quantify residue cover are needed that are rapid, accurate, and objective. Scenes with known amounts of crop residue were illuminated with longwave ultraviolet (UV) radiation, and fluorescence images were recorded with an intensified video camera fitted with a 453 to 488 nm band-pass filter. A light colored soil and a dark colored soil were used as background for the weathered soybean stems. Residue cover was determined by counting the proportion of the pixels in the image with fluorescence values greater than a threshold. Soil pixels had the lowest gray levels in the images. The values of the soybean residue pixels spanned nearly the full range of the 8-bit video data. Classification accuracies typically were within 3 (absolute units) of measured cover values. Video imaging can provide an intuitive understanding of the fraction of the soil covered by residue.
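
    The thresholding step described above amounts to counting the fraction of bright pixels. A minimal sketch, assuming the fluorescence image is an 8-bit grayscale numpy array (names are illustrative):

        import numpy as np

        def residue_cover_fraction(fluorescence_image, threshold):
            """Fraction of pixels whose fluorescence exceeds a soil/residue threshold.
            Bright (residue) pixels divided by total pixels gives fractional cover."""
            residue_pixels = np.count_nonzero(fluorescence_image > threshold)
            return residue_pixels / fluorescence_image.size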

  19. Normalized Temperature Contrast Processing in Flash Infrared Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2016-01-01

    The paper presents further development of the normalized contrast processing of the flash infrared thermography method given by the author in US 8,577,120 B1. Methods of computing normalized image (pixel intensity) contrast and normalized temperature contrast are provided, including converting one from the other. Methods of assessing the emissivity of the object, afterglow heat flux, reflection temperature change, and temperature video imaging during flash thermography are provided. Temperature imaging and normalized temperature contrast imaging provide certain advantages over pixel-intensity normalized contrast processing by reducing the effect of reflected energy in images and measurements, providing better quantitative data. The subject matter for this paper mostly comes from US 9,066,028 B1 by the author. Examples of normalized image processing and normalized temperature processing video images are provided. Examples of surface temperature video images, surface temperature rise video images, and simple contrast video images are also provided. Temperature video imaging in flash infrared thermography allows better comparison with flash thermography simulation using commercial software, which provides temperature video as its output. Temperature imaging also allows easy comparison of the surface temperature change with the camera temperature sensitivity, or noise equivalent temperature difference (NETD), to assess the probability of detection (POD) of anomalies.

  20. Model-based video segmentation for vision-augmented interactive games

    NASA Astrophysics Data System (ADS)

    Liu, Lurng-Kuo

    2000-04-01

    This paper presents an architecture and algorithms for model-based video object segmentation and its application to vision-augmented interactive games. We are especially interested in real-time, low-cost, vision-based applications that can be implemented in software on a PC. We use different models for the background and a player object. The object segmentation algorithm is performed at two different levels: pixel level and object level. At the pixel level, the segmentation algorithm is formulated as a maximum a posteriori probability (MAP) problem. The statistical likelihood of each pixel is calculated and used in the MAP problem. Object-level segmentation is used to improve segmentation quality by utilizing information about the spatial and temporal extent of the object. The concept of an active region, defined based on a motion histogram and trajectory prediction, is introduced to indicate the possibility of a video object region for both background and foreground modeling. It also reduces the overall computational complexity. In contrast with other applications, the proposed video object segmentation system is able to create background and foreground models on the fly, even without introductory background frames. Furthermore, we apply different rates of self-tuning on the scene model so that the system can adapt to the environment when there is a scene change. We applied the proposed video object segmentation algorithms to several prototype virtual interactive games. In our prototype vision-augmented interactive games, a player can immerse himself/herself inside a game and can virtually interact with other animated characters in a real-time manner without being constrained by helmets, gloves, special sensing devices, or the background environment. Potential applications of the proposed algorithms include human-computer gesture interfaces and object-based video coding such as MPEG-4 video coding.
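
    The pixel-level MAP rule can be sketched as follows, assuming per-pixel Gaussian likelihoods for the background and player models and a fixed foreground prior; this illustrates the general formulation, not the paper's exact model.

        import numpy as np

        def map_segment(frame, bg_mean, bg_var, fg_mean, fg_var, prior_fg=0.3):
            """Label each pixel foreground/background by the maximum a posteriori
            rule, assuming per-pixel Gaussian likelihoods for each model
            (illustrative sketch; the prior and variances are assumptions)."""
            def log_gauss(x, mean, var):
                return -0.5 * np.log(2 * np.pi * var) - (x - mean) ** 2 / (2 * var)
            log_post_fg = log_gauss(frame, fg_mean, fg_var) + np.log(prior_fg)
            log_post_bg = log_gauss(frame, bg_mean, bg_var) + np.log(1 - prior_fg)
            return log_post_fg > log_post_bg  # True where the player object is more likely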

  1. Video Altimeter and Obstruction Detector for an Aircraft

    NASA Technical Reports Server (NTRS)

    Delgado, Frank J.; Abernathy, Michael F.; White, Janis; Dolson, William R.

    2013-01-01

    Video-based altimetric and obstruction-detection systems for aircraft have been partially developed. The hardware of a system of this type includes a downward-looking video camera, a video digitizer, a Global Positioning System receiver or other means of measuring the aircraft velocity relative to the ground, a gyroscope-based or other attitude-determination subsystem, and a computer running altimetric and/or obstruction-detection software. From the digitized video data, the altimetric software computes the pixel velocity in an appropriate part of the video image and the corresponding angular relative motion of the ground within the field of view of the camera. Then, by use of trigonometric relationships among the aircraft velocity, the attitude of the camera, the angular relative motion, and the altitude, the software computes the altitude. The obstruction-detection software performs somewhat similar calculations as part of a larger task in which it uses the pixel velocity data from the entire video image to compute a depth map, which can be correlated with a terrain map, showing locations of potential obstructions. The depth map can be used as a real-time hazard display and/or to update an obstruction database.
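
    The altitude computation reduces to a simple trigonometric relation near nadir. A minimal sketch, assuming level flight and a nadir-pointing camera (the flight software additionally folds in camera attitude, as the record notes):

        def altitude_from_pixel_flow(ground_speed_mps, pixel_velocity_px_s,
                                     focal_length_px):
            """Estimate altitude for a nadir-pointing camera (illustrative geometry).

            The angular rate at which ground features cross the field of view is
            pixel velocity divided by focal length (both in pixel units); for level
            flight the altitude is ground speed divided by that angular rate."""
            angular_rate = pixel_velocity_px_s / focal_length_px  # rad/s
            return ground_speed_mps / angular_rate

    For example, a ground speed of 50 m/s with a measured flow of 100 pixels/s through a 1000-pixel focal length gives 50 / (100 / 1000) = 500 m.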

  2. Series of Two Droplets on Fiber Approaching Ignition

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Fiber-Supported Droplet Combustion (FSDC) experiment uses two droplets positioned on the fiber wire, instead of the usual one. Two droplets more closely simulate the environment in engines, which ignite many fuel droplets at once. The behavior of the burning was also unexpected -- the droplets moved together after ignition, generating quite a bit of data for understanding the interaction of fuel droplets while they burn. Because FSDC is backlit (the bright glow behind the drops), you cannot see the glow of the droplets while they burn -- instead, you see them shrink! The small blobs left on the wire after the burn are the beads used to center the fuel droplet on the wire. This image was taken on STS-94, July 12, 1997, MET:10/19:13 (approximate). FSDC-2 studied fundamental phenomena related to liquid fuel droplet combustion in air. Pure fuels and mixtures of fuels were burned as isolated single and dual droplets, with and without forced air convection. The FSDC guest investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (251KB JPEG, 1350 x 1523 pixels; downlinked video, higher quality not available) The MPG from which this composite was made is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300179.html

  3. First Global Analysis of Saturation Artifacts in the VIIRS Infrared Channels and the Effects of Detector Aggregation

    NASA Astrophysics Data System (ADS)

    Wang, J.; Polivka, T. N.; Hyer, E. J.; Peterson, D. A.

    2014-12-01

    Unlike previous space-borne Earth-observing sensors, the Visible Infrared Imaging Radiometer Suite (VIIRS) employs aggregation to reduce downlink bandwidth requirements and preserve spatial resolution across the swath. To examine the potentially deleterious impacts of aggregation when encountering detector saturation, nearly four months of NOAA's Nightfire product, which contains a subset of the hottest observed nighttime pixels, were analyzed. An empirical method for identifying saturation was devised. The 3.69 µm band (M12) was the most frequently saturating band, with 0.15% of the Nightfire pixels at or near the ~359 K hard saturation limit; possible saturation also occurred in M14, M15, and M16. Artifacts consistent with detector saturation were seen with M12 temperatures as low as 330 K in the scene center. This partial saturation and aggregation influence must be considered when using VIIRS radiances for quantitative characterization of hot emission sources such as fires and gas flaring.

  4. Low-power coprocessor for Haar-like feature extraction with pixel-based pipelined architecture

    NASA Astrophysics Data System (ADS)

    Luo, Aiwen; An, Fengwei; Fujita, Yuki; Zhang, Xiangyu; Chen, Lei; Jürgen Mattausch, Hans

    2017-04-01

    Intelligent analysis of image and video data requires image-feature extraction as an important processing capability for machine-vision realization. A coprocessor with a pixel-based pipeline (CFEPP) architecture is developed for real-time Haar-like cell-based feature extraction. Synchronization with the image sensor's pixel frequency and immediate use of each input pixel in the feature-construction process avoid the dependence on memory-intensive conventional strategies such as integral-image construction or frame buffers. One 180 nm CMOS prototype can extract the 1680-dimensional Haar-like feature vectors applied in the speeded-up robust features (SURF) scheme, using an on-chip memory of only 96 kb (kilobits). Additionally, a low power dissipation of only 43.45 mW at 1.8 V supply voltage is achieved during VGA video processing at 120 MHz frequency with more than 325 fps. The Haar-like feature-extraction coprocessor is further evaluated in the practical application of vehicle recognition, achieving the expected high accuracy, which is comparable to previous work.

  5. Fuzzy Filtering Method for Color Videos Corrupted by Additive Noise

    PubMed Central

    Ponomaryov, Volodymyr I.; Montenegro-Monroy, Hector; Nino-de-Rivera, Luis

    2014-01-01

    A novel method for the denoising of color videos corrupted by additive noise is presented in this paper. The proposed technique consists of three principal filtering steps: spatial, spatiotemporal, and spatial postprocessing. In contrast to other state-of-the-art algorithms, during the first spatial step, the eight gradient values in different directions for pixels located in the vicinity of a central pixel, as well as the R, G, and B channel correlation between the analogous pixels in different color bands, are taken into account. These gradient values give information about the level of contamination; the designed fuzzy rules are then used to preserve the image features (textures, edges, sharpness, chromatic properties, etc.). In the second step, two neighboring video frames are processed together. Possible local motions between neighboring frames are estimated using a block-matching procedure in eight directions to perform interframe filtering. In the final step, the edges and smoothed regions in the current frame are distinguished for final postprocessing filtering. Numerous simulation results confirm that this novel 3D fuzzy method performs better than other state-of-the-art techniques in terms of objective criteria (PSNR, MAE, NCD, and SSIM) as well as subjective perception via the human visual system on different color videos. PMID:24688428
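
    The eight-direction gradient computation in the first spatial step can be sketched as below for a single channel; the method's inter-channel correlation and fuzzy rules are omitted, and the names and neighbor ordering are illustrative.

        import numpy as np

        # Offsets for the eight neighbors of a pixel (N, NE, E, SE, S, SW, W, NW).
        DIRECTIONS = [(-1, 0), (-1, 1), (0, 1), (1, 1),
                      (1, 0), (1, -1), (0, -1), (-1, -1)]

        def eight_direction_gradients(channel, r, c):
            """Absolute differences between a central (interior) pixel and its
            eight neighbors; large values suggest noise or an edge, and would
            feed the fuzzy rules in the full method."""
            center = int(channel[r, c])
            return [abs(center - int(channel[r + dr, c + dc])) for dr, dc in DIRECTIONS]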

  6. Automated extraction of temporal motor activity signals from video recordings of neonatal seizures based on adaptive block matching.

    PubMed

    Karayiannis, Nicolaos B; Sami, Abdul; Frost, James D; Wise, Merrill S; Mizrahi, Eli M

    2005-04-01

    This paper presents an automated procedure developed to extract quantitative information from video recordings of neonatal seizures in the form of motor activity signals. This procedure relies on optical flow computation to select anatomical sites located on the infants' body parts. Motor activity signals are extracted by tracking selected anatomical sites during the seizure using adaptive block matching. A block of pixels is tracked throughout a sequence of frames by searching for the most similar block of pixels in subsequent frames; this search is facilitated by employing various update strategies to account for the changing appearance of the block. The proposed procedure is used to extract temporal motor activity signals from video recordings of neonatal seizures and other events not associated with seizures.
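
    At its core, adaptive block matching is a search for the minimum-difference block in the next frame. Below is a minimal exhaustive-search sketch using a sum-of-absolute-differences (SAD) metric; the window sizes are illustrative, and the paper's appearance-update strategies are omitted.

        import numpy as np

        def match_block(prev_frame, next_frame, top, left, size=16, search=8):
            """Track a block by exhaustive search: find the displacement within a
            +/-search window that minimizes the sum of absolute differences (SAD)."""
            template = prev_frame[top:top + size, left:left + size].astype(np.int32)
            best, best_sad = (0, 0), np.inf
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    y, x = top + dy, left + dx
                    if (y < 0 or x < 0 or y + size > next_frame.shape[0]
                            or x + size > next_frame.shape[1]):
                        continue
                    candidate = next_frame[y:y + size, x:x + size].astype(np.int32)
                    sad = np.abs(candidate - template).sum()
                    if sad < best_sad:
                        best_sad, best = sad, (dy, dx)
            return best  # displacement of the anatomical site between frames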

  7. Human Flight to Lunar and Beyond - Re-Learning Operations Paradigms

    NASA Technical Reports Server (NTRS)

    Kenny, Ted; Statman, Joseph

    2016-01-01

    For the first time since the Apollo era, NASA is planning to send astronauts on flights beyond Low-Earth Orbit (LEO). The Human Space Flight (HSF) program started with a successful initial flight in Earth orbit in December 2014. The program will continue with two Exploration Missions (EM) to Lunar orbit: EM-1 will be unmanned, and EM-2, carrying astronauts, will follow. NASA established a multi-center team to address the communications and related navigation needs. This paper will focus on the lessons learned by the team while planning for the portions of these missions beyond Earth orbit. Many of these lessons had to be re-learned, as the HSF program had operated for many years only in Earth orbit. Fortunately, the experience base from tracking robotic missions in deep space with the Deep Space Network (DSN), together with close interaction with the HSF community to understand its unique needs (e.g., 2-way voice), resulted in a ConOps that leverages both the deep-space robotic and the human LEO experiences. Several examples will be used to highlight the unique operational needs of HSF missions beyond Earth orbit, including: - Navigation. At LEO, HSF missions can rely on Global Positioning System (GPS) devices for orbit determination. For Lunar-and-beyond HSF missions, techniques such as precision 2-way and 3-way Doppler and ranging, Delta-Difference-of-Range, and eventually on-board navigation will be used. - Impact of latency - the delay associated with Round-Trip Light Time (RTLT). Imagine trying to have a 2-way discussion (audio or video) with an astronaut with a 2-3 second delay inserted (for Lunar distances) or a 20-minute delay (for Mars distances). - Balanced communications link. For robotic missions, there has been a heavy emphasis on downlink data rates, bringing back science data from the instruments on board the spacecraft. Uplink data rates were of secondary importance, used to send commands to the spacecraft. The ratio of downlink-to-uplink data rates was often 10:1 or more. For HSF, rates for uplink and downlink, at least for high-quality video, need to be similar.

  8. Aerial surveillance vehicles augment security at shipping ports

    NASA Astrophysics Data System (ADS)

    Huck, Robert C.; Al Akkoumi, Muhammad K.; Cheng, Samuel; Sluss, James J., Jr.; Landers, Thomas L.

    2008-10-01

    With the ever-present threat to commerce, both political and economic, technological innovations provide a means to secure the transportation infrastructure that will allow efficient and uninterrupted freight-flow operations for trade. Currently, freight coming into United States ports is "spot checked" upon arrival and stored in a container yard while awaiting the next mode of transportation. For the most part, only fences and security patrols protect these container storage yards. To augment these measures, the authors propose the use of aerial surveillance vehicles equipped with video cameras and wireless video downlinks to provide a birds-eye view of port facilities to security control centers and security patrols on the ground. The initial investigation described in this paper demonstrates the use of unmanned aerial surveillance vehicles as a viable method for providing video surveillance of container storage yards. This research provides the foundation for a follow-on project to use autonomous aerial surveillance vehicles coordinated with autonomous ground surveillance vehicles for enhanced port security applications.

  9. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and the ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway, providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  10. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  11. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras carried on UAVs. Moving targets in HD video taken by a UAV always occupy a minority of the pixels, and the background of the frame is usually moving because of the motion of the UAV. The high computational cost of the algorithms prevents running them at the full resolution of the frame. Hence, to solve the problem of moving target detection in UAV video, we propose a heterogeneous CPU-GPU moving target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. In order to achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough to solve the problem.
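
    The two-stage idea, background registration followed by frame differencing, can be sketched on the CPU with OpenCV as below; the parameter values are illustrative, and the paper's GPU offloading is omitted.

        import cv2
        import numpy as np

        def detect_moving_targets(prev_gray, curr_gray, diff_thresh=25):
            """Register the previous frame to the current one (compensating UAV
            egomotion with a homography from tracked corners), then difference."""
            pts_prev = cv2.goodFeaturesToTrack(prev_gray, maxCorners=400,
                                               qualityLevel=0.01, minDistance=8)
            pts_curr, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, curr_gray,
                                                           pts_prev, None)
            good = status.ravel() == 1
            H, _ = cv2.findHomography(pts_prev[good], pts_curr[good], cv2.RANSAC)
            registered = cv2.warpPerspective(prev_gray, H,
                                             (curr_gray.shape[1], curr_gray.shape[0]))
            diff = cv2.absdiff(curr_gray, registered)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask  # nonzero where small moving targets remain after registration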

  12. Background: Preflight Screening, In-flight Capabilities, and Postflight Testing

    NASA Technical Reports Server (NTRS)

    Gibson, Charles Robert; Duncan, James

    2009-01-01

    Recommendations for minimal in-flight capabilities: Retinal Imaging - provide in-flight capability for the visual monitoring of ocular health (specifically, imaging of the retina and optic nerve head) with the capability of downlinking video/still images. Tonometry - provide more accurate and reliable in-flight capability for measuring intraocular pressure. Ultrasound - explore the capabilities of the current on-board system for monitoring ocular health. We currently have limited in-flight capabilities on board the International Space Station for performing an internal ocular health assessment: visual acuity, direct ophthalmoscope, ultrasound, and tonometry (Tonopen).

  13. Producing a Live HDTV Program from Space

    NASA Technical Reports Server (NTRS)

    Grubbs, Rodney; Fontanot, Carlos; Hames, Kevin

    2007-01-01

    By the year 2000, NASA had flown HDTV camcorders on three Space Shuttle missions: STS-95, STS-93 and STS-99. All three flights of these camcorders were accomplished with cooperation from the Japanese space agency (then known as NASDA and now known as JAXA). The cameras were large broadcast-standard cameras provided by NASDA and flight certified by both NASA and NASDA. The high-definition video shot during these missions was spectacular. Waiting for the return of the tapes to Earth emphasized the next logical step: finding a way to downlink the HDTV live from space. Both the Space Shuttle and the International Space Station (ISS) programs were interested in live HDTV from space, but neither had the resources to fully fund the technology. Technically, downlinking from the ISS was the most effective approach. Only when the Japanese broadcaster NHK and the Japanese space agency expressed interest in covering a Japanese astronaut's journey to the ISS did the project become possible. Together, JAXA and NHK offered equipment, technology, and funding toward the project. In return, NHK asked for a live HDTV downlink during one of its broadcast programs. NASA and the ISS Program sought a US partner to broadcast a live HDTV program and approached the Discovery Channel. The Discovery Channel had proposed a live HDTV project in response to NASA's previous call for offers. The Discovery Channel agreed to provide additional resources. With the final partner in place, the project was under way. Engineers in the Avionics Systems Division at NASA's Johnson Space Center (JSC) had already studied the various options for downlinking HDTV from the ISS. They concluded that the easiest way was to compress the HDTV so that the resulting data stream would "look" like a payload data stream. The flight system would consist of a professional HDTV camcorder with live HD-SDI output, an HDTV MPEG-2 encoder, and a packetizer/protocol converter.

  14. Quantitative Spatial and Temporal Analysis of Fluorescein Angiography Dynamics in the Eye

    PubMed Central

    Hui, Flora; Nguyen, Christine T. O.; Bedggood, Phillip A.; He, Zheng; Fish, Rebecca L.; Gurrell, Rachel; Vingrys, Algis J.; Bui, Bang V.

    2014-01-01

    Purpose We describe a novel approach to analyzing fluorescein angiography to investigate fluorescein flow dynamics in the rat posterior retina, as well as to identify abnormal areas following laser photocoagulation. Methods Experiments were undertaken in adult Long Evans rats. Using a rodent retinal camera, videos were acquired at 30 frames per second for 30 seconds following intravenous introduction of sodium fluorescein in a group of control animals (n = 14). Videos were image-registered and analyzed using principal components analysis across all pixels in the field. This returns fluorescence intensity profiles from which we extract the half-rise (time to 50% brightness), the half-fall (time for 50% decay back toward an offset), and the offset (plateau level of fluorescence). We applied this analysis to video fluorescein angiography data collected 30 minutes following laser photocoagulation in a separate group of rats (n = 7). Results Pixel-by-pixel analysis of video angiography clearly delineates differences in the temporal profiles of arteries, veins, and capillaries in the posterior retina. We find no difference in half-rise, half-fall, or offset among the four quadrants (inferior, nasal, superior, temporal). We also found little difference with eccentricity. By expressing the parameters at each pixel as the number of standard deviations from the average of the entire field, we could clearly identify the spatial extent of the laser injury. Conclusions This simple registration and analysis provides a way to monitor the size of vascular injury, to highlight areas of subtle vascular leakage, and to quantify vascular dynamics not possible using current fluorescein angiography approaches. It can be applied in both laboratory and clinical settings for in vivo dynamic fluorescent imaging of vasculature. PMID:25365578
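
    The per-pixel profile parameters can be sketched as below for one registered intensity trace sampled at 30 frames per second; the plateau estimate and indexing conventions are illustrative simplifications.

        import numpy as np

        def profile_parameters(intensity, fps=30.0):
            """Half-rise, half-fall, and offset from one pixel's intensity profile.
            Assumes a single filling/washout transit (illustrative only)."""
            peak_idx = int(np.argmax(intensity))
            peak = intensity[peak_idx]
            offset = float(np.mean(intensity[-int(fps):]))  # plateau: last ~1 s
            half_up = 0.5 * peak
            half_down = offset + 0.5 * (peak - offset)
            rise_idx = int(np.argmax(intensity[:peak_idx + 1] >= half_up))
            fall_rel = int(np.argmax(intensity[peak_idx:] <= half_down))
            return rise_idx / fps, (peak_idx + fall_rel) / fps, offset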

  15. Highly Reflective Multi-stable Electrofluidic Display Pixels

    NASA Astrophysics Data System (ADS)

    Yang, Shu

    Electronic papers (E-papers) are displays that mimic the appearance of printed paper while retaining the features of conventional electronic displays, such as the ability to browse websites and play videos. The motivation for creating paper-like displays comes from the facts that reading on paper causes the least eye fatigue, owing to paper's reflective and light-diffusive nature, and that, unlike existing commercial displays, no energy is expended to sustain the displayed image. To achieve the visual effect of a paper print, an ideal E-paper has to be highly reflective, with good contrast ratio and full-color capability. To sustain the image with zero power consumption, the display pixels need to be bistable, meaning that the "on" and "off" states are both lowest-energy states. A pixel can change its state only when sufficient external energy is supplied. Many emerging technologies are competing to demonstrate the first ideal E-paper device. However, none is able to achieve satisfactory visual effect, bistability, and video speed at the same time. Challenges come from either inherent physical/chemical properties or the fabrication process. Electrofluidic display is one of the most promising E-paper technologies. It has successfully demonstrated high reflectivity, brilliant color, and video-speed operation by moving a colored pigment dispersion between visible and invisible locations with the electrowetting force. However, the pixel design did not allow image bistability. Presented in this dissertation are multi-stable electrofluidic display pixels that are able to sustain grayscale levels without any power consumption, while keeping the favorable features of the previous-generation electrofluidic display. The pixel design, the fabrication method using multiple-layer dry-film photoresist lamination, and the physical/optical characterizations are discussed in detail. Based on the pixel structure, preliminary results of a simplified design and fabrication method are demonstrated. As advanced research topics regarding the device's optical performance, an optical model for evaluating reflective displays' light out-coupling efficiency is first established to guide the pixel design; furthermore, aluminum surface diffusers are analytically modeled and then fabricated onto multi-stable electrofluidic display pixels to demonstrate truly "white" multi-stable electrofluidic display modules. The achieved results establish the multi-stable electrofluidic display as an excellent candidate for the ultimate E-paper device, especially for large-scale signage applications.

  16. Active-Pixel Image Sensor With Analog-To-Digital Converters

    NASA Technical Reports Server (NTRS)

    Fossum, Eric R.; Mendis, Sunetra K.; Pain, Bedabrata; Nixon, Robert H.

    1995-01-01

    Proposed single-chip integrated-circuit image sensor contains 128 x 128 array of active pixel sensors at 50-micrometer pitch. Output terminals of all pixels in each given column connected to analog-to-digital (A/D) converter located at bottom of column. Pixels scanned in semiparallel fashion, one row at a time; during time allocated to scanning row, outputs of all active pixel sensors in row fed to respective A/D converters. Design of chip based on complementary metal oxide semiconductor (CMOS) technology, and individual circuit elements fabricated according to 2-micrometer CMOS design rules. Active pixel sensors designed to operate at video rate of 30 frames/second, even at low light levels. A/D scheme based on first-order Sigma-Delta modulation.

  17. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single-pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single-pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The obtained video framerates were double those of state-of-the-art systems, ranging from 22 Hz at 32×32 resolution down to 0.75 Hz for a 128×128 resolution image. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.

  18. Extracting 3d Semantic Information from Video Surveillance System Using Deep Learning

    NASA Astrophysics Data System (ADS)

    Zhang, J. S.; Cao, J.; Mao, B.; Shen, D. Q.

    2018-04-01

    At present, intelligent video analysis technology is widely used in various fields. Object tracking is an important part of intelligent video surveillance, but traditional target tracking based on the pixel coordinate system of the image still has some unavoidable problems. Pixel-based target tracking cannot reflect the real position information of targets, and it is difficult to track objects across scenes. Based on an analysis of Zhengyou Zhang's camera calibration method, this paper presents a method of target tracking in the target's space coordinate system, obtained by converting the 2-D pixel coordinates of the target into 3-D coordinates. The experimental results show that our method can recover the real position changes of targets well and can also accurately obtain the trajectory of the target in space.
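
    Once intrinsics K and extrinsics (R, t) are known from a Zhang-style calibration, a pixel can be back-projected onto a known ground plane. A minimal numpy sketch, assuming the plane Z = 0 and illustrative names:

        import numpy as np

        def pixel_to_ground(u, v, K, R, t, ground_z=0.0):
            """Back-project pixel (u, v) onto the plane Z = ground_z, given
            intrinsics K and extrinsics (R, t) mapping world to camera frame."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray in camera frame
            cam_center = -R.T @ t            # camera position in world frame
            ray_world = R.T @ ray_cam        # ray direction in world frame
            s = (ground_z - cam_center[2]) / ray_world[2]  # scale to hit the plane
            return cam_center + s * ray_world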

  19. A new generation of small pixel pitch/SWaP cooled infrared detectors

    NASA Astrophysics Data System (ADS)

    Espuno, L.; Pacaud, O.; Reibel, Y.; Rubaldo, L.; Kerlain, A.; Péré-Laperne, N.; Dariel, A.; Roumegoux, J.; Brunner, A.; Kessler, A.; Gravrand, O.; Castelein, P.

    2015-10-01

    Following clear technological trends, the cooled IR detector market now demands smaller, more efficient, and higher performance products. This demand pushes product development towards constant innovation in detectors, read-out circuits, proximity electronics boards, and coolers. Sofradir was first to show a 10μm focal plane array (FPA) at DSS 2012, and announced the DAPHNIS 10μm product line back in 2014. This pixel pitch is a key enabler for infrared detectors with increased resolution. Sofradir recently achieved outstanding product demonstrations at this pixel pitch, which clearly demonstrate the benefits of adopting 10μm pixel pitch focal-plane-array-based detectors. Both HD and XGA Daphnis 10μm products also benefit from a global video datapath efficiency improvement by transitioning to digital video interfaces. Moreover, innovative smart-pixel functionalities drastically increase product versatility. In addition to this strong push towards a higher pixel density, Sofradir acknowledges the need for smaller and lower power cooled infrared detectors. Together with straightforward system interfaces and better overall performance, the latest technological advances in SWaP-C (Size, Weight, Power and Cost) Sofradir products enable the advent of a new generation of high-performance portable and agile systems (handheld thermal imagers, unmanned aerial vehicles, light gimbals, etc.). This paper focuses on those features and performances that can make an actual difference in the field.

  20. The Automatically Triggered Video or Imaging Station (ATVIS): An Inexpensive Way to Catch Geomorphic Events on Camera

    NASA Astrophysics Data System (ADS)

    Wickert, A. D.

    2010-12-01

    To understand how single events can affect landscape change, we must catch the landscape in the act. Direct observations are rare and often dangerous. While video is a good alternative, commercially available video systems for field installation cost $11,000, weigh ~100 pounds (45 kg), and shoot 640x480 pixel video at 4 frames per second. This is the same resolution as a cheap point-and-shoot camera, with a frame rate that is nearly an order of magnitude worse. To overcome these limitations of resolution, cost, and portability, I designed and built a new observation station. This system, called ATVIS (Automatically Triggered Video or Imaging Station), costs $450-500 and weighs about 15 pounds. It can take roughly 3 hours of 1280x720 pixel video, 6.5 hours of 640x480 video, or 98,000 1600x1200 pixel photos (one photo every 7 seconds for 8 days). The design calls for a simple Canon point-and-shoot camera fitted with custom firmware that allows 5V pulses through its USB cable to trigger it to take a picture or to initiate or stop video recording. These pulses are provided by a programmable microcontroller that can take input from either sensors or a data logger. The design is easily modifiable to a variety of camera and sensor types, and can also be used for continuous time-lapse imagery. We currently have prototypes set up at a gully near West Bijou Creek on the Colorado high plains and at tributaries to Marble Canyon in northern Arizona. Hopefully, a relatively inexpensive and portable system such as this will allow geomorphologists to supplement sensor networks with photo or video monitoring and allow them to see, and better quantify, the fantastic array of processes that modify landscapes as they unfold. Camera station set up at Badger Canyon, Arizona. Inset: view into box. Clockwise from bottom right: camera, microcontroller (blue), DC converter (red), solar charge controller, 12V battery. Materials and installation assistance courtesy of Ron Griffiths and the USGS Grand Canyon Monitoring and Research Center.

  1. Hyperspectral Imaging and Obstacle Detection for Robotics Navigation

    DTIC Science & Technology

    2005-09-01

    anatomy and diffraction process. ... Technical Specifications of the System: A. Brimrose AOTF Video Adaptor Specifications: Material TeO2, Active ... sampled from glass case on person 2's belt: 530 pixels ... pick-up white, sampled from body panels of pick-up: 600 pixels ... pick-up blue, sampled from ...

  2. Voting based object boundary reconstruction

    NASA Astrophysics Data System (ADS)

    Tian, Qi; Zhang, Like; Ma, Jingsheng

    2005-07-01

    A voting-based object boundary reconstruction approach is proposed in this paper. Morphological techniques have been adopted in many video object extraction applications to reconstruct missing pixels. However, when the missing areas become large, morphological processing cannot produce good results. Recently, tensor voting has attracted attention, and it can be used for boundary estimation on curves or irregular trajectories. However, the complexity of saliency tensor creation limits its application in real-time systems. An alternative approach based on tensor voting is introduced in this paper. Rather than creating saliency tensors, we use a "2-pass" method for orientation estimation. In the first pass, a Sobel detector is applied to a coarse boundary image to get the gradient map. In the second pass, each pixel casts decreasing weights based on its gradient information, and the direction with the maximum weight sum is selected as the correct orientation of the pixel. After the orientation map is obtained, pixels begin linking edges or intersections along their direction. The approach is applied to various video surveillance clips under different conditions, and the experimental results demonstrate significant improvement in the accuracy of the final extracted objects.

  3. A Pixel Correlation Technique for Smaller Telescopes to Measure Doubles

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2013-04-01

    Pixel correlation uses the same reduction techniques as speckle imaging but relies on autocorrelation among captured pixel hits rather than true speckles. A video camera operating at exposures (8-66 milliseconds) similar to lucky imaging captures 400-1,000 video frames. The AVI files are converted to bitmap images and analyzed using the interferometric algorithms in REDUC, using all frames. This results in a series of correlograms from which theta and rho can be measured. Results using a 20 cm (8") Dall-Kirkham working at f/22.5 are presented for doubles with separations between 1" and 5.7" under average seeing conditions. I conclude that this form of visualizing and analyzing visual double stars is a viable alternative to lucky imaging that can be employed with telescopes that are too small in aperture to capture a sufficient number of speckles for true speckle interferometry.
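
    The autocorrelation step can be sketched with the Wiener-Khinchin relation, averaging over the captured frames; this is a generic illustration, not the REDUC implementation.

        import numpy as np

        def mean_autocorrelation(frames):
            """Average spatial autocorrelation over a stack of short-exposure
            frames, computed via FFT (Wiener-Khinchin); a double star appears
            as a pair of symmetric sidelobes whose offset gives rho and theta."""
            acc = np.zeros_like(frames[0], dtype=np.float64)
            for frame in frames:
                f = np.fft.fft2(frame - frame.mean())
                acc += np.fft.fftshift(np.abs(np.fft.ifft2(np.abs(f) ** 2)))
            return acc / len(frames)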

  4. Microgravity Science Glovebox (MSG), Space Science's Past, Present and Future Aboard the International Space Station (ISS)

    NASA Technical Reports Server (NTRS)

    Spivey, Reggie; Spearing, Scott; Jordan, Lee

    2012-01-01

    The Microgravity Science Glovebox (MSG) is a double-rack facility aboard the International Space Station (ISS) which accommodates science and technology investigations in a "workbench"-type environment. The MSG has been operating on the ISS since July 2002 and is currently located in the US Laboratory Module. The MSG has been used for over 10,000 hours of scientific payload operations and is planned to continue for the life of the ISS. The facility has an enclosed working volume that is held at a negative pressure with respect to the crew living area. This allows the facility to provide two levels of containment for small parts, particulates, fluids, and gases. This containment approach protects the crew from possibly hazardous operations that take place inside the MSG work volume and gives researchers a controlled, pristine environment for their needs. Research investigations operating inside the MSG are provided a large 255-liter enclosed work space; 1000 watts of dc power via a versatile supply interface (120, 28, ±12, and 5 Vdc); 1000 watts of cooling capability; video and data recording with real-time downlink; ground commanding capabilities; access to the ISS Vacuum Exhaust and Vacuum Resource Systems; and a gaseous nitrogen supply. These capabilities make the MSG one of the most utilized facilities on the ISS. MSG investigations have involved research in cryogenic fluid management, fluid physics, spacecraft fire safety, materials science, combustion, and plant growth technologies. Modifications to the MSG facility are currently under way to expand its capabilities and provide for investigations involving life science and biological research. In addition, the MSG video system is being replaced with a state-of-the-art digital video system with high-definition/high-speed capabilities and near-real-time downlink capabilities. This paper will provide an overview of the MSG facility, a synopsis of the research that has already been accomplished in the MSG, and an overview of the facility enhancements that will shortly be available to future investigators.

  5. The High Definition Earth Viewing (HDEV) Payload

    NASA Technical Reports Server (NTRS)

    Muri, Paul; Runco, Susan; Fontanot, Carlos; Getteau, Chris

    2017-01-01

    The High Definition Earth Viewing (HDEV) payload enables long-term experimentation with four commercial-off-the-shelf (COTS) high-definition video cameras mounted on the exterior of the International Space Station. The payload enables testing of the cameras in the space environment. The HDEV cameras transmit imagery continuously to an encoder that then sends the video signal via Ethernet through the space station for downlink. The encoder, cameras, and other electronics are enclosed in a box pressurized to approximately one atmosphere, containing dry nitrogen, to provide a level of protection for the electronics from the space environment. The encoded video format supports streaming live video of Earth for viewing online. Camera sensor types include charge-coupled device and complementary metal-oxide semiconductor. Received imagery data are analyzed on the ground to evaluate camera sensor performance. Since payload deployment, minimal degradation in imagery quality has been observed. The HDEV payload continues to operate by live streaming and analyzing imagery. Results from the experiment reduce risk in the selection of cameras that could be considered for future use on the International Space Station and other spacecraft. This paper discusses the payload development, end-to-end architecture, experiment operation, resulting image analysis, and future work.

  6. Small Moving Vehicle Detection in a Satellite Video of an Urban Area

    PubMed Central

    Yang, Tao; Wang, Xiwen; Yao, Bowei; Li, Jing; Zhang, Yanning; He, Zhannan; Duan, Wencheng

    2016-01-01

    Vehicle surveillance of a wide area allows us to learn much about daily activities and traffic information. With the rapid development of remote sensing, satellite video has become an important data source for vehicle detection, providing a broader field of surveillance. Existing work generally focuses on aerial video with moderately sized objects, based on feature extraction. However, the moving vehicles in satellite video imagery range from just a few pixels to dozens of pixels and exhibit low contrast with respect to the background, which makes it hard to obtain usable appearance or shape information. In this paper, we look into the problem of moving vehicle detection in satellite imagery. To the best of our knowledge, this is the first work to deal with moving vehicle detection from satellite videos. Our approach consists of two stages: first, through foreground motion segmentation and trajectory accumulation, a scene motion heat map is dynamically built. Following this, a novel saliency-based background model which intensifies moving objects is presented to segment the vehicles in the hot regions. Qualitative and quantitative experiments on a sequence from a recent Skybox satellite video dataset demonstrate that our approach achieves a high detection rate and a low false alarm rate simultaneously. PMID:27657091
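
    The first stage, accumulating trajectories into a motion heat map, can be sketched as an exponentially decayed sum of per-frame foreground masks; the decay constant and normalization are illustrative choices.

        import numpy as np

        def accumulate_motion_heat_map(foreground_masks, decay=0.95):
            """Build a motion heat map by exponentially accumulating per-frame
            foreground masks; persistent trajectories grow hot, noise fades."""
            heat = np.zeros_like(foreground_masks[0], dtype=np.float64)
            for mask in foreground_masks:
                heat = decay * heat + (mask > 0)
            # hot regions mark roads / moving-vehicle areas
            return heat / max(float(heat.max()), 1e-9)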

  7. Fiber optic cable-based high-resolution, long-distance VGA extenders

    NASA Astrophysics Data System (ADS)

    Rhee, Jin-Geun; Lee, Iksoo; Kim, Heejoon; Kim, Sungjoon; Koh, Yeon-Wan; Kim, Hoik; Lim, Jiseok; Kim, Chur; Kim, Jungwon

    2013-02-01

    Remote transfer of high-resolution video information is finding more applications in detached-display settings for large facilities such as theaters, sports complexes, airports, and security facilities. Active optical cables (AOCs) provide a promising approach for extending both the transmittable resolution and the distances that standard copper-based cables cannot reach. In addition to standard digital formats such as HDMI, high-resolution, long-distance transfer of VGA-format signals is important for applications where high-resolution analog video ports must also be supported, such as military/defense applications and high-resolution video camera links. In this presentation we present the development of compressionless, high-resolution (up to WUXGA, 1920x1200), long-distance (up to 2 km) VGA extenders based on a serialization technique. We employed asynchronous serial transmission and clock regeneration techniques, which enable lower-cost implementation of VGA extenders by removing the need for clock transmission and large memory at the receiver. Two 3.125-Gbps transceivers are used in parallel to meet the required maximum video data rate of 6.25 Gbps. As the data are transmitted asynchronously, a 24-bit pixel-clock time stamp is employed to regenerate the video pixel clock accurately at the receiver side. In parallel with the video information, stereo audio and RS-232 control signals are transmitted as well.

  8. Two Droplets on Wire Approaching Ignition

    NASA Technical Reports Server (NTRS)

    2003-01-01

    The Fiber-Supported Droplet Combustion (FSDC) experiment uses two droplets positioned on the fiber wire, instead of the usual one. Two droplets more closely simulate the environment in engines, which ignite many fuel droplets at once. The behavior of the burning was also unexpected -- the droplets moved together after ignition, generating quite a bit of data for understanding the interaction of fuel droplets while they burn. This MPEG movie (1.3 MB) shows a time-lapse of this burn (3x speed). Because FSDC is backlit (the bright glow behind the drops), you cannot see the glow of the droplets while they burn -- instead, you see them shrink! The small blobs left on the wire after the burn are the beads used to center the fuel droplet on the wire. This image was taken on STS-94, July 12, 1997, MET:10/19:13 (approximate). FSDC-2 studied fundamental phenomena related to liquid fuel droplet combustion in air. Pure fuels and mixtures of fuels were burned as isolated single and dual droplets, with and without forced air convection. The FSDC guest investigator was Forman Williams, University of California, San Diego. The experiment was part of the space research investigations conducted during the Microgravity Science Laboratory-1R mission (STS-94, July 1-17 1997). Advanced combustion experiments will be a part of investigations planned for the International Space Station. (1.3MB, 12-second MPEG, screen 320 x 240 pixels; downlinked video, higher quality not available) A still JPG composite of this movie is available at http://mix.msfc.nasa.gov/ABSTRACTS/MSFC-0300178.html.

  9. Enhanced Handoff Scheme for Downlink-Uplink Asymmetric Channels in Cellular Systems

    PubMed Central

    2013-01-01

    In the latest cellular networks, data services like SNS and UCC can create asymmetric packet generation rates over the downlink and uplink channels. This asymmetry can lead to a downlink-uplink asymmetric channel condition being experienced by cell-edge users. This paper proposes a handoff scheme to cope effectively with downlink-uplink asymmetric channels. The proposed handoff scheme exploits the uplink channel quality as well as the downlink channel quality to determine the appropriate timing and direction of handoff. We first introduce downlink and uplink channel models that consider intercell interference, to verify the downlink-uplink channel asymmetry. Based on these results, we propose an enhanced handoff scheme that exploits both the uplink and downlink channel qualities to reduce the handoff-call dropping probability and the service interruption time. The simulation results show that the proposed handoff scheme reduces the handoff-call dropping probability by about 30% and increases satisfaction of the service interruption time requirement by about 7% under high offered load, compared to conventional mobile-assisted handoff. The proposed handoff scheme is especially efficient when the uplink QoS requirement is much stricter than the downlink QoS requirement or the uplink channel quality is worse than the downlink channel quality. PMID:24501576
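
    The core idea, letting the weaker of the two links drive the handoff decision, can be sketched as below. The bottleneck rule and hysteresis value are illustrative assumptions, not the paper's exact criterion.

        def should_handoff(dl_serving_db, ul_serving_db,
                           dl_target_db, ul_target_db, hysteresis_db=2.0):
            """Hand off only when the target cell is better on the weaker link
            (illustrative rule combining downlink and uplink channel quality)."""
            serving = min(dl_serving_db, ul_serving_db)  # bottleneck link, serving cell
            target = min(dl_target_db, ul_target_db)     # bottleneck link, target cell
            return target > serving + hysteresis_db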

  10. Thermal-Polarimetric and Visible Data Collection for Face Recognition

    DTIC Science & Technology

    2016-09-01

    ... pixels • Spectral range: 7.5–13 μm • Analog image output: NTSC analog video • Digital image output: FireWire radiometric, 14-bit digital video to PC ... The analog video was not used for this study. The radiometric, 14-bit digital data provided temperature measurement information for comparison. ... References: Choi J, Hu S, Young SS, Davis LS. Thermal to visible face recognition. Proc. SPIE 8371, Sensing ...

  11. NTV_VideoFile_Expedtion53-Thanksgiving-Message

    NASA Image and Video Library

    2017-11-16

    Aboard the International Space Station, Expedition 53 Commander Randy Bresnik and his crewmates, NASA’s Joe Acaba and Mark Vande Hei and Paolo Nespoli of the European Space Agency (ESA) offered their thoughts about being in orbit during the Thanksgiving holiday and the meals and food they will enjoy in orbit during a series of messages downlinked on Nov. 15. Bresnik and Nespoli are in the final weeks of their five and a half month mission on the station, heading for a landing in Kazakhstan on Dec. 14, while Acaba and Vande Hei will remain in orbit through late February.

  12. Students Speak with the ISS

    NASA Image and Video Library

    2012-11-15

    International Space Station Expedition 33 flight engineer Kevin Ford (on screen) answers questions from students during a downlink event held in honor of International Education Week at the Smithsonian National Air and Space Museum, Thursday, Nov. 15, 2012 in Washington. Seen next to Ford is Exp. 33 Commander Sunita Williams. More than 9,500 student participants from the Student Spaceflight Experiments Program (SSEP) around the country took part in the live video event. This was a joint venture between the Department of Education and the National Center for Earth and Space Science Education (NCESSE). Photo Credit: (NASA/Carla Cioffi)

  13. Students Speak with the ISS

    NASA Image and Video Library

    2012-11-15

    International Space Station Expedition 33 Commander Sunita Williams (on screen) answers questions from students during a downlink event held in honor of International Education Week at the Smithsonian National Air and Space Museum, Thursday, Nov. 15, 2012 in Washington. Seen next to Williams is Exp. 33 Flight Engineer Kevin Ford. More than 9,500 student participants from the Student Spaceflight Experiments Program (SSEP) around the country took part in the live video event. This was a joint venture between the Department of Education and the National Center for Earth and Space Science Education (NCESSE). Photo Credit: (NASA/Carla Cioffi)

  14. Microgravity

    NASA Image and Video Library

    2001-01-24

    Experiments with colloidal solutions of plastic microspheres suspended in a liquid serve as models of how molecules interact and form crystals. Through the Dynamics of Colloidal Disorder-Order Transition (CDOT) experiment, Paul Chaikin of Princeton University has identified effects that are attributable to Earth's gravity and demonstrated that experiments are needed in the microgravity of orbit. Space experiments have produced unexpected dendritic (snowflake-like) structures. To date, the largest hard-sphere crystal grown is a 3 mm single crystal grown at the cool end of a ground sample. At least two additional flight experiments are planned aboard the International Space Station. This image is from a video downlink.

  15. Conformal, Transparent Printed Antenna Developed for Communication and Navigation Systems

    NASA Technical Reports Server (NTRS)

    Lee, Richard Q.; Simons, Rainee N.

    1999-01-01

    Conformal, transparent printed antennas have advantages over conventional antennas in terms of space reuse and aesthetics. Because of their compactness and thin profile, these antennas can be mounted on video displays for efficient integration in communication systems such as palmtop computers, digital telephones, and flat-panel television displays. As an array of multiple elements, the antenna subsystem may save weight by reusing space (via vertical stacking) on photovoltaic arrays or on Earth-facing sensors. Also, the antenna could go unnoticed on automobile windshields or building windows, enabling satellite uplinks and downlinks or other emerging high-frequency communications.

  16. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicle count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail it can obtain, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The video camera must be placed at least 6 meters above street level to achieve proper acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow identifying and counting all vehicles in the image sequences. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.

  17. On-board B-ISDN fast packet switching architectures. Phase 2: Development. Proof-of-concept architecture definition report

    NASA Technical Reports Server (NTRS)

    Shyy, Dong-Jye; Redman, Wayne

    1993-01-01

    For the next-generation packet-switched communications satellite system with onboard processing and spot-beam operation, a reliable onboard fast packet switch is essential to route packets from different uplink beams to different downlink beams. The rapid emergence of point-to-point services such as video distribution, and the large demand for video conferencing, distributed data processing, and network management, make the multicast function essential to a fast packet switch (FPS). The satellite's inherent broadcast features give the satellite network an advantage over the terrestrial network in providing multicast services. This report evaluates alternative multicast FPS architectures for onboard baseband switching applications and selects a candidate for subsequent breadboard development. Architecture evaluation and selection will be based on the study performed in phase 1, 'Onboard B-ISDN Fast Packet Switching Architectures', and other switch architectures which have become commercially available as large-scale integration (LSI) devices.

  18. Measuring and Estimating Normalized Contrast in Infrared Flash Thermography

    NASA Technical Reports Server (NTRS)

    Koshti, Ajay M.

    2013-01-01

    Infrared flash thermography (IRFT) is used to detect void-like flaws in a test object. The IRFT technique involves heating the part surface with a flash from flash lamps. The post-flash evolution of the part surface temperature is sensed by an IR camera in terms of the pixel intensity of the image pixels. The IR technique involves recording the IR video image data and analyzing the data using the normalized pixel intensity and temperature contrast analysis method to characterize void-like flaws for depth and width. This work introduces a new definition of the normalized IR pixel intensity contrast and normalized surface temperature contrast. A procedure is provided to compute the pixel intensity contrast from the camera's pixel intensity evolution data. The pixel intensity contrast and the corresponding surface temperature contrast differ but are related. This work provides a method to estimate the temperature evolution and the normalized temperature contrast from the measured pixel intensity evolution data and some additional measurements made during data acquisition.
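
    One common way to form a normalized contrast is to compare each pixel's post-flash evolution against a flaw-free reference region. A sketch of that definition follows; the report's exact formulation, including afterglow and reflection corrections, may differ.

        import numpy as np

        def normalized_intensity_contrast(pixel_intensity, reference_intensity):
            """Normalized pixel-intensity contrast for flash thermography: the
            difference between a test pixel's post-flash evolution and a
            flaw-free reference region, normalized by the reference (one common
            definition, not necessarily the report's exact formulation)."""
            pixel = np.asarray(pixel_intensity, dtype=np.float64)
            ref = np.asarray(reference_intensity, dtype=np.float64)
            return (pixel - ref) / ref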

  19. 4K x 2K pixel color video pickup system

    NASA Astrophysics Data System (ADS)

    Sugawara, Masayuki; Mitani, Kohji; Shimamoto, Hiroshi; Fujita, Yoshihiro; Yuyama, Ichiro; Itakura, Keijirou

    1998-12-01

    This paper describes the development of an experimental super-high-definition color video camera system. During the past several years there has been much interest in super-high-definition images as the next-generation image medium. One of the difficulties in implementing a super-high-definition motion imaging system is constructing the image-capturing section (camera). Even state-of-the-art semiconductor technology cannot realize an image sensor with enough pixels and a high enough output data rate for super-high-definition images. The present study is an attempt to fill this gap. The authors solve the problem with a new imaging method in which four HDTV sensors are attached to a new color separation optics so that their pixel sample patterns form a checkerboard pattern. A series of imaging experiments demonstrates that this technique is an effective approach to capturing super-high-definition moving images in the present situation, where no image sensors exist for such images.

  20. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we propose a video enhancement algorithm to improve the output video of an infrared camera. Video obtained by an infrared camera can be very dark when there is no clear target. In this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; a method is introduced to solve the problem of the final cluster centers lying too close to each other in some cases. For the other frame images, the initial cluster centers are taken from the final cluster centers of the previous frame, and each sub-image is histogram-equalized after segmentation based on K-means clustering. Histogram equalization spreads the gray values of the image over the whole gray range, and the gray-level range of each sub-image is determined by the ratio of its pixels to those of the whole frame. Experimental results show that this algorithm improves the contrast of infrared video of dim scenes where the night target is not obvious, and adaptively reduces, within a certain range, the negative effect of overexposed pixels.
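
    A minimal sketch of the scheme as described: 1-D k-means over gray levels with spread-out initial centers (a simple guard against centers collapsing together, standing in for the paper's method), warm-started from the previous frame's final centers, followed by per-cluster histogram equalization into a gray range proportional to each cluster's pixel share. Parameter values are illustrative.

        import numpy as np

        def enhance_frame(gray, k=3, centers=None):
            # 1-D k-means over gray levels; `centers` warm-starts from the
            # previous frame's final centers, as the abstract describes.
            pix = gray.astype(float).ravel()
            if centers is None:
                centers = np.linspace(pix.min(), pix.max(), k)  # spread-out init
            for _ in range(20):
                labels = np.abs(pix[:, None] - centers[None, :]).argmin(axis=1)
                centers = np.array([pix[labels == i].mean() if np.any(labels == i)
                                    else centers[i] for i in range(k)])
            # with ordered initial centers, 1-D k-means typically keeps the
            # clusters ordered by gray level; each cluster gets an output
            # range proportional to its pixel share of the frame
            shares = np.bincount(labels, minlength=k) / pix.size
            bounds = np.concatenate(([0.0], np.cumsum(shares))) * 255
            out = np.empty_like(pix)
            for i in range(k):
                sel = labels == i
                if not sel.any():
                    continue
                ranks = pix[sel].argsort().argsort()   # equalization via rank CDF
                out[sel] = bounds[i] + ranks / max(sel.sum() - 1, 1) * (bounds[i + 1] - bounds[i])
            return out.reshape(gray.shape).astype(np.uint8), centers

        frame = (np.random.rand(120, 160) * 60).astype(np.uint8)   # dark IR-like frame
        enhanced, final_centers = enhance_frame(frame)   # pass centers to the next frame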

  1. Video Salient Object Detection via Fully Convolutional Networks.

    PubMed

    Wang, Wenguan; Shen, Jianbing; Shao, Ling

    This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) deep video saliency model training with the absence of sufficiently large and pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules, for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimate. We advance the state-of-the-art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).

  2. Standards for efficient employment of wide-area motion imagery (WAMI) sensors

    NASA Astrophysics Data System (ADS)

    Randall, L. Scott; Maenner, Paul F.

    2013-05-01

    Airborne Wide Area Motion Imagery (WAMI) sensors provide the opportunity for continuous high-resolution surveillance of geographic areas covering tens of square kilometers. This is both a blessing and a curse. Data volumes from "gigapixel-class" WAMI sensors are orders of magnitude greater than for traditional "megapixel-class" video sensors. The amount of data greatly exceeds the capacities of downlinks to ground stations, and even if this were not true, the geographic coverage is too large for effective human monitoring. Although collected motion imagery is recorded on the platform, typically only small "windows" of the full field of view are transmitted to the ground; the full set of collected data can be retrieved from the recording device only after the mission has concluded. Thus, the WAMI environment presents several difficulties: (1) data is too massive for downlink; (2) human operator selection and control of the video windows may not be effective; (3) post-mission storage and dissemination may be limited by inefficient file formats; and (4) unique system implementation characteristics may thwart exploitation by available analysis tools. To address these issues, the National Geospatial-Intelligence Agency's Motion Imagery Standards Board (MISB) is developing relevant standard data exchange formats: (1) moving target indicator (MTI) and tracking metadata to support tipping and cueing of WAMI windows using "watch boxes" and "trip wires"; (2) control channel commands for positioning the windows within the full WAMI field of view; and (3) a full-field-of-view spatiotemporal tiled file format for efficient storage, retrieval, and dissemination. The authors previously provided an overview of this suite of standards. This paper describes the latest progress, with specific concentration on a detailed description of the spatiotemporal tiled file format.

  3. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high, which creates a barrier to the use of video analytics. Automating the calibration allows for a short configuration time and for the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large-scale surveillance systems. We show an autocalibration method based entirely on pedestrian detections in surveillance video from multiple non-overlapping cameras. In this paper, we present the two main components of automatic calibration. The first is intra-camera geometry estimation, which yields an estimate of the tilt angle, focal length and camera height, important for the conversion from pixels to meters and vice versa. The second is inter-camera topology inference, which yields an estimate of the distance between cameras, important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.

  4. The AAPM/RSNA physics tutorial for residents: digital fluoroscopy.

    PubMed

    Pooley, R A; McKinney, J M; Miller, D A

    2001-01-01

    A digital fluoroscopy system is most commonly configured as a conventional fluoroscopy system (tube, table, image intensifier, video system) in which the analog video signal is converted to and stored as digital data. Other methods of acquiring the digital data (eg, digital or charge-coupled device video and flat-panel detectors) will become more prevalent in the future. Fundamental concepts related to digital imaging in general include binary numbers, pixels, and gray levels. Digital image data allow the convenient use of several image processing techniques including last image hold, gray-scale processing, temporal frame averaging, and edge enhancement. Real-time subtraction of digital fluoroscopic images after injection of contrast material has led to widespread use of digital subtraction angiography (DSA). Additional image processing techniques used with DSA include road mapping, image fade, mask pixel shift, frame summation, and vessel size measurement. Peripheral angiography performed with an automatic moving table allows imaging of the peripheral vasculature with a single contrast material injection.
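
    Two of the named processing steps, temporal frame averaging and DSA with a mask pixel shift, reduce to a few lines of numpy. The sketch below is illustrative only and ignores the detector calibration a clinical system performs before subtraction.

        import numpy as np

        def temporal_average(frames, n=4):
            # Temporal frame averaging: quantum noise drops roughly as sqrt(n),
            # at the cost of lag on moving structures.
            return np.mean(np.stack(frames[-n:]), axis=0)

        def dsa(mask_frame, contrast_frame, pixel_shift=(0, 0)):
            # Digital subtraction angiography: subtract the pre-contrast mask
            # from a post-injection frame in the log domain (attenuation is
            # multiplicative). `pixel_shift` is the manual "mask pixel shift"
            # used to compensate patient motion between mask and fill.
            mask = np.roll(mask_frame.astype(float), pixel_shift, axis=(0, 1))
            return np.log1p(contrast_frame.astype(float)) - np.log1p(mask)

        frames = [np.random.poisson(100, (512, 512)) for _ in range(8)]
        averaged = temporal_average(frames)
        vessels = dsa(frames[0], frames[-1], pixel_shift=(1, -2))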

  5. Simple Pixel Structure Using Video Data Correction Method for Nonuniform Electrical Characteristics of Polycrystalline Silicon Thin-Film Transistors and Differential Aging Phenomenon of Organic Light-Emitting Diodes

    NASA Astrophysics Data System (ADS)

    In, Hai-Jung; Kwon, Oh-Kyong

    2010-03-01

    A simple pixel structure using a video data correction method is proposed to compensate for electrical characteristic variations of driving thin-film transistors (TFTs) and the degradation of organic light-emitting diodes (OLEDs) in active-matrix OLED (AMOLED) displays. The proposed method senses the electrical characteristic variations of TFTs and OLEDs and stores them in external memory. The nonuniform emission current of TFTs and the aging of OLEDs are corrected by modulating video data using the stored data. Experimental results show that the emission current error due to electrical characteristic variation of driving TFTs is in the range from -63.1 to 61.4% without compensation, but is decreased to the range from -1.9 to 1.9% with the proposed correction method. The luminance error due to the degradation of an OLED is less than 1.8% when the proposed correction method is used for a 50% degraded OLED.
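
    The correction itself can be pictured as a per-pixel linear map whose parameters come from the sensed data held in external memory. The sketch below assumes an emission model I = gain x data + offset, a simplification of the paper's measured characteristics; the panel size and parameter values are synthetic.

        import numpy as np

        # Per-pixel correction parameters measured once and kept in external
        # memory: `gain` stands in for driving-TFT current variation and
        # `offset` for OLED aging (synthetic values here).
        H, W = 4, 6
        gain = np.random.normal(1.0, 0.05, (H, W))
        offset = np.random.normal(0.0, 2.0, (H, W))

        def correct(video_frame):
            # Modulate incoming video data so the emitted current, not the
            # data, is uniform: solve I_target = gain * data' + offset.
            out = (video_frame.astype(float) - offset) / gain
            return np.clip(np.round(out), 0, 255).astype(np.uint8)

        frame = np.full((H, W), 128, dtype=np.uint8)   # uniform input gray
        print(correct(frame))                           # pre-distorted data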

  6. Evolving discriminators for querying video sequences

    NASA Astrophysics Data System (ADS)

    Iyengar, Giridharan; Lippman, Andrew B.

    1997-01-01

    In this paper we present a framework for content-based query and retrieval of information from large video databases. This framework enables content-based retrieval of video sequences by characterizing them using motion, texture and colorimetry cues. The characterization is biologically inspired and results in a compact parameter space where every segment of video is represented by an 8-dimensional vector. Searching and retrieval are done accurately in real time in this parameter space. Using this characterization, we then evolve a set of discriminators using genetic programming. Experiments indicate that these discriminators are capable of analyzing and characterizing video: the VideoBook is able to search and retrieve video sequences with 92% accuracy in real time. The experiments thus demonstrate that the characterization is capable of extracting higher-level structure from raw pixel values.

  7. On scalable lossless video coding based on sub-pixel accurate MCTF

    NASA Astrophysics Data System (ADS)

    Yea, Sehoon; Pearlman, William A.

    2006-01-01

    We propose two approaches to scalable lossless coding of motion video. Both achieve an SNR-scalable bitstream, up to lossless reconstruction, based upon sub-pixel-accurate MCTF-based wavelet video coding. The first approach is based upon a two-stage encoding strategy where a lossy reconstruction layer is augmented by a following residual layer in order to obtain (nearly) lossless reconstruction. The key advantages of this approach include 'on-the-fly' determination of the bit budget distribution between the lossy and residual layers, freedom to use almost any progressive lossy video coding scheme as the first layer, and an added feature of near-lossless compression. The second approach capitalizes on the fact that the invertibility of MCTF can be maintained with arbitrary sub-pixel accuracy, even in the presence of an extra truncation step for lossless reconstruction, thanks to the lifting implementation. Experimental results show that the proposed schemes achieve compression ratios not obtainable by intra-frame coders such as Motion JPEG-2000, thanks to their inter-frame coding nature. They are also shown to outperform the state-of-the-art non-scalable inter-frame coder H.264 (JM) in lossless mode, with the added benefit of bitstream embeddedness.
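
    The invertibility-under-rounding property that the second approach relies on comes from the lifting structure. It is easiest to see in the integer Haar (S) transform, the simplest lifting pair and the temporal filter of an MCTF with no motion compensation; this is a sketch of the property, not the paper's sub-pixel MCTF.

        import numpy as np

        def lift_forward(x):
            # Predict/update lifting steps in integer arithmetic: the rounding
            # (floor division by 2) happens *inside* a lifting step, so it can
            # be undone exactly on the inverse side.
            x0, x1 = x[0::2].astype(int), x[1::2].astype(int)
            d = x1 - x0                    # predict: detail signal
            s = x0 + (d >> 1)              # update: approximation (floor(d/2))
            return s, d

        def lift_inverse(s, d):
            x0 = s - (d >> 1)              # undo update with the same rounding
            x1 = d + x0                    # undo predict
            out = np.empty(s.size + d.size, dtype=int)
            out[0::2], out[1::2] = x0, x1
            return out

        x = np.random.randint(0, 256, size=16)
        assert np.array_equal(lift_inverse(*lift_forward(x)), x)  # exact reconstruction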

  8. Using ARINC 818 Avionics Digital Video Bus (ADVB) for military displays

    NASA Astrophysics Data System (ADS)

    Alexander, Jon; Keller, Tim

    2007-04-01

    ARINC 818 Avionics Digital Video Bus (ADVB) is a new digital video interface and protocol standard developed especially for high bandwidth uncompressed digital video. The first draft of this standard, released in January of 2007, has been advanced by ARINC and the aerospace community to meet the acute needs of commercial aviation for higher performance digital video. This paper analyzes ARINC 818 for use in military display systems found in avionics, helicopters, and ground vehicles. The flexibility of ARINC 818 for the diverse resolutions, grayscales, pixel formats, and frame rates of military displays is analyzed as well as the suitability of ARINC 818 to support requirements for military video systems including bandwidth, latency, and reliability. Implementation issues relevant to military displays are presented.

  9. MT3825BA: a 384×288-25µm ROIC for uncooled microbolometer FPAs

    NASA Astrophysics Data System (ADS)

    Eminoglu, Selim; Gulden, M. Ali; Bayhan, Nusret; Incedere, O. Samet; Soyer, S. Tuncer; Ustundag, Cem M. B.; Isikhan, Murat; Kocak, Serhat; Turan, Ozge; Yalcin, Cem; Akin, Tayfun

    2014-06-01

    This paper reports the development of a new microbolometer Readout Integrated Circuit (ROIC) called MT3825BA. It has a format of 384 × 288 and a pixel pitch of 25 μm. MT3825BA is Mikro-Tasarim's second microbolometer ROIC product, developed specifically for resistive surface-micromachined microbolometer detector arrays using high-TCR pixel materials, such as VOx and a-Si. MT3825BA has a system-on-chip architecture, where all the timing, biasing, and pixel non-uniformity correction (NUC) operations in the ROIC are applied using on-chip circuitry, simplifying the use and system integration of this ROIC. The ROIC is designed to support pixel resistance values ranging from 30 KΩ to 100 KΩ. MT3825BA is operated using a conventional row-based readout method, where pixels in the array are read out row by row and the bias applied to each pixel in a given row is updated at the beginning of each line period according to the line-based NUC data. The NUC data is applied continuously on a row-by-row basis using the serial programming interface, which is also used to program user-configurable features of the ROIC, such as readout gain, integration time, and number of analog video outputs. MT3825BA has a total of 4 analog video outputs and 2 analog reference outputs, placed at the top and bottom of the ROIC, which can be programmed to operate in 1, 2, and 4-output modes, supporting frame rates well above 60 fps at a 3 MHz pixel output rate. The pixels in the array are read out with respect to reference pixels implemented above and below the actual array pixels. The bias voltage of the pixels can be programmed over a 1.0 V range to compensate for changes in the detector resistance values due to variations coming from the manufacturing process or changes in the operating temperature. The ROIC has an on-chip integrated temperature sensor with a sensitivity of better than 5 mV / K, and the output of the temperature sensor can be read out as part of the analog video stream. MT3825BA can be used to build microbolometer FPAs with an NETD value below 100 mK using a microbolometer detector array fabrication technology with a detector resistance value up to 100 KΩ, a sufficiently high TCR magnitude (≥ 2 % / K), and a sufficiently low pixel thermal conductance (Gth ≤ 20 nW / K). MT3825BA measures 13.0 mm × 13.5 mm and is fabricated on 200 mm CMOS wafers. The microbolometer ROIC wafers are engineered to have a flat surface finish to simplify wafer-level detector fabrication and wafer-level vacuum packaging (WLVP). The ROIC runs on 3.3 V analog and 1.8 V digital supplies, and dissipates less than 85 mW in the 2-output mode at 30 fps. Mikro-Tasarim provides tested ROIC wafers and offers compact test electronics and software for its ROIC customers to shorten their FPA and camera development cycles.

  10. Pixel decomposition for tracking in low resolution videos

    NASA Astrophysics Data System (ADS)

    Govinda, Vivekanand; Ralph, Jason F.; Spencer, Joseph W.; Goulermas, John Y.; Yang, Lihua; Abbas, Alaa M.

    2008-04-01

    This paper describes a novel set of algorithms that allows indoor activity to be monitored using data from very low resolution imagers and other non-intrusive sensors. The objects are not resolved but activity may still be determined. This allows the use of such technology in sensitive environments where privacy must be maintained. Spectral un-mixing algorithms from remote sensing were adapted for this environment. These algorithms allow the fractional contributions from different colours within each pixel to be estimated and this is used to assist in the detection and monitoring of small objects or sub-pixel motion.
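
    Linear spectral unmixing of a single pixel reduces to a small least-squares problem once endmember signatures are fixed. The sketch below uses made-up RGB endmembers and clips and renormalizes the abundances as a cheap stand-in for properly constrained unmixing.

        import numpy as np

        # Assumed endmember colours for an indoor scene: dark floor, red
        # garment, skin tone. Columns are endmembers, rows are bands (RGB).
        E = np.array([[0.2, 0.2, 0.2],
                      [0.8, 0.1, 0.1],
                      [0.7, 0.6, 0.5]]).T

        def unmix(pixel):
            # Least-squares fractional abundances of each endmember in one
            # pixel; clipping + renormalizing approximates the usual
            # non-negativity and sum-to-one constraints.
            a, *_ = np.linalg.lstsq(E, pixel, rcond=None)
            a = np.clip(a, 0, None)
            s = a.sum()
            return a / s if s > 0 else a

        mixed = 0.6 * E[:, 0] + 0.4 * E[:, 1]   # pixel 60% floor, 40% garment
        print(unmix(mixed))                     # ~[0.6, 0.4, 0.0]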

  11. Development of a 300,000-pixel ultrahigh-speed high-sensitivity CCD

    NASA Astrophysics Data System (ADS)

    Ohtake, H.; Hayashida, T.; Kitamura, K.; Arai, T.; Yonai, J.; Tanioka, K.; Maruyama, H.; Etoh, T. Goji; Poggemann, D.; Ruckelshausen, A.; van Kuijk, H.; Bosiers, Jan T.

    2006-02-01

    We are developing an ultrahigh-speed, high-sensitivity broadcast camera that is capable of capturing clear, smooth slow-motion video even where lighting is limited, such as at professional baseball games played at night. In earlier work, we developed an ultrahigh-speed broadcast color camera1) using three 80,000-pixel ultrahigh-speed, high-sensitivity CCDs2). This camera had about ten times the sensitivity of standard high-speed cameras and enabled an entirely new style of presentation for sports broadcasts and science programs. Increasing the pixel count is crucially important for applying ultrahigh-speed, high-sensitivity CCDs to HDTV broadcasting. This paper summarizes our experimental development aimed at improving the resolution of the CCD even further: a new ultrahigh-speed, high-sensitivity CCD that increases the pixel count four-fold to 300,000 pixels.

  12. Design and evaluation of a payload to support plant growth onboard COMET 1

    NASA Technical Reports Server (NTRS)

    Hoehn, A.; Kliss, M. H.; Luttges, M. W.; Robinson, M. C.; Stodieck, L. S.

    1992-01-01

    The paper describes the design and the operation principles of the Plant Module for Autonomous Space Support (P-MASS), designed to provide life support for a variety of plants, algae, and bacteria in low earth orbit during the maiden flight of COMET-1, scheduled for 1993. During flight (scheduled to continue for 30 days), both color video images and collected environmental data (including light intensity, temperature, relative humidity, CO2 and O2 concentrations, soil moisture, and nutrients released) will be downlinked to earth several times a day. These data will also be stored within the payload and retrieved from it after reentry and recovery.

  13. Microgravity

    NASA Image and Video Library

    1991-04-03

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  14. Microgravity

    NASA Image and Video Library

    1995-08-29

    The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.

  15. Materials Research Conducted Aboard the International Space Station: Facilities Overview, Operational Procedures, and Experimental Outcomes

    NASA Technical Reports Server (NTRS)

    Grugel, Richard N.; Luz, Paul; Smith, Guy; Spivey, Reggie; Jeter, Linda; Gillies, Donald; Hua, Fay; Anilkumar, A. V.

    2007-01-01

    The Microgravity Science Glovebox (MSG) and Maintenance Work Area (MWA) are facilities aboard the International Space Station (ISS) that were used to successfully conduct experiments in support of, respectively, the Pore Formation and Mobility Investigation (PFMI) and the In-Space Soldering Investigation (ISSI). The capabilities of these facilities are briefly discussed and then demonstrated by presenting "real-time" and subsequently down-linked video-taped examples from the abovementioned experiments. Data interpretation, ISS telescience, some lessons learned, and the need for such facilities for conducting work in support of understanding materials behavior, particularly fluid processing and transport scenarios, in low-gravity environments are discussed.

  16. Materials Research Conducted Aboard the International Space Station: Facilities Overview, Operational Procedures, and Experimental Outcomes

    NASA Technical Reports Server (NTRS)

    Grugel, R. N.; Luz, P.; Smith, G. A.; Spivey, R.; Jeter, L.; Gillies, D. C.; Hua, F.; Anilkumar, A. V.

    2006-01-01

    The Microgravity Science Glovebox (MSG) and Maintenance Work Area (MWA) are facilities aboard the International Space Station (ISS) that were used to successfully conduct experiments in support of, respectively, the Pore Formation and Mobility Investigation (PFMI) and the In-Space Soldering Investigation (ISSI). The capabilities of these facilities are briefly discussed and then demonstrated by presenting real-time and subsequently down-linked video-taped examples from the abovementioned experiments. Data interpretation, ISS telescience, some lessons learned, and the need for such facilities for conducting work in support of understanding materials behavior, particularly fluid processing and transport scenarios, in low-gravity environments are discussed.

  17. Video distribution system cost model

    NASA Technical Reports Server (NTRS)

    Gershkoff, I.; Haspert, J. K.; Morgenstern, B.

    1980-01-01

    A cost model that can be used to systematically identify the costs of procuring and operating satellite-linked communications systems is described. The user defines a network configuration by specifying the location of each participating site, the interconnection requirements, and the transmission paths available for the uplink (studio to satellite), downlink (satellite to audience), and voice talkback (between audience and studio) segments of the network. The model uses this information to calculate the least expensive signal distribution path for each participating site. Cost estimates are broken down by capital, installation, lease, operations and maintenance. The design of the model permits flexibility in specifying network and cost structure.

  18. Digital cinema system using JPEG2000 movie of 8-million pixel resolution

    NASA Astrophysics Data System (ADS)

    Fujii, Tatsuya; Nomura, Mitsuru; Shirai, Daisuke; Yamaguchi, Takahiro; Fujii, Tetsuro; Ono, Sadayasu

    2003-05-01

    We have developed a prototype digital cinema system that can store, transmit and display extra-high-quality movies of 8-million-pixel resolution, using the JPEG2000 coding algorithm. The image quality is 4 times that of HDTV in resolution, enabling conventional films to be replaced with digital cinema archives. Using wide-area optical gigabit IP networks, cinema contents are distributed and played back as a video-on-demand (VoD) system. The system consists of three main devices: a video server, a real-time JPEG2000 decoder, and a large-venue LCD projector. All digital movie data are compressed by JPEG2000 and stored in advance. Coded streams of 300-500 Mbps can be continuously transmitted from the PC server using TCP/IP. The decoder performs real-time decompression at 24/48 frames per second, using 120 parallel JPEG2000 processing elements. The received streams are expanded into 4.5 Gbps raw video signals. The prototype LCD projector uses 3 pieces of 3840×2048-pixel reflective LCD panels (D-ILA) to show RGB 30-bit color movies fed by the decoder. The brightness exceeds 3000 ANSI lumens on a 300-inch screen. The refresh rate is set to 96 Hz to thoroughly eliminate flicker while preserving compatibility with cinema movies of 24 frames per second.

  19. Snapshot spectral and polarimetric imaging; target identification with multispectral video

    NASA Astrophysics Data System (ADS)

    Bartlett, Brent D.; Rodriguez, Mikel D.

    2013-05-01

    As the number of pixels continues to grow in consumer and scientific imaging devices, it has become feasible to collect the incident light field. In this paper, an imaging device built around light field imaging is used to collect multispectral and polarimetric imagery in snapshot fashion. The sensor is described and a video data set is shown, highlighting the advantage of snapshot spectral imaging. Several novel computer vision approaches are applied to the video cubes to perform scene characterization and target identification. It is shown how the addition of spectral and polarimetric data to the video stream allows for multi-target identification and tracking not possible with traditional RGB video collection.

  20. Using computer-based video analysis in the study of fidgety movements.

    PubMed

    Adde, Lars; Helbostad, Jorunn L; Jensenius, Alexander Refsum; Taraldsen, Gunnar; Støen, Ragnhild

    2009-09-01

    Absence of fidgety movements (FM) in high-risk infants is a strong marker for later cerebral palsy (CP). FMs can be classified by the General Movement Assessment (GMA), based on Gestalt perception of the infant's movement pattern. More objective movement analysis may be provided by computer-based technology. The aim of this study was to explore the feasibility of computer-based video analysis of infants' spontaneous movements in classifying non-fidgety versus fidgety movements. GMA was performed on video material from the fidgety period in 82 term and preterm infants at low and high risk of developing CP. The same videos were analysed using software called the General Movement Toolbox (GMT), with visualisation of the infant's movements for qualitative analyses. Variables derived from the calculation of the displacement of pixels from one video frame to the next were used for quantitative analyses. Visual representations from GMT showed easily recognisable patterns of FMs. Of the eight quantitative variables derived, the variability in the displacement of the spatial centre of active pixels in the image had the highest sensitivity (81.5%) and specificity (70.0%) in classifying FMs. By setting triage thresholds at 90% sensitivity and specificity for FM, the need for further referral was reduced by 70%. Video recordings can be used for qualitative and quantitative analyses of FMs provided by GMT. GMT is easy to implement in clinical practice, and may provide assistance in detecting infants without FMs.
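
    The kind of quantity described, the variability of the centre of active pixels from frame to frame, can be sketched with plain frame differencing. The threshold and synthetic frames below are illustrative and are not GMT's actual computation.

        import numpy as np

        def motion_centroid_variability(frames, thresh=15):
            # For each consecutive frame pair, find "active" pixels (absolute
            # frame difference above a threshold) and their spatial centroid;
            # return the centroids and their per-axis standard deviation.
            cents = []
            for prev, cur in zip(frames[:-1], frames[1:]):
                active = np.abs(cur.astype(int) - prev.astype(int)) > thresh
                ys, xs = np.nonzero(active)
                if xs.size:
                    cents.append((xs.mean(), ys.mean()))
            c = np.array(cents)
            return c, c.std(axis=0)

        frames = [np.random.randint(0, 255, (120, 160), dtype=np.uint8)
                  for _ in range(10)]                  # stand-in video frames
        centroids, variability = motion_centroid_variability(frames)
        print("centroid variability (x, y):", variability)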

  1. Extracting Maximum Total Water Levels from Video "Brightest" Images

    NASA Astrophysics Data System (ADS)

    Brown, J. A.; Holman, R. A.; Stockdon, H. F.; Plant, N. G.; Long, J.; Brodie, K.

    2016-02-01

    An important parameter for predicting storm-induced coastal change is the maximum total water level (TWL). Most studies estimate the TWL as the sum of slowly varying water levels, including tides and storm surge, and the extreme runup parameter R2%, which includes wave setup and swash motions over minutes to seconds. Typically, R2% is measured using video remote sensing data, where cross-shore timestacks of pixel intensity are digitized to extract the horizontal runup timeseries. However, this technique must be repeated at multiple alongshore locations to resolve alongshore variability, and can be tedious and time consuming. We seek an efficient, video-based approach that yields a synoptic estimate of TWL that accounts for alongshore variability and can be applied during storms. In this work, the use of a video product termed the "brightest" image is tested; this represents the highest intensity of each pixel captured during a 10-minute collection period. Image filtering and edge detection techniques are applied to automatically determine the shoreward edge of the brightest region (i.e., the swash zone) at each alongshore pixel. The edge represents the horizontal position of the maximum TWL along the beach during the collection period, and is converted to vertical elevations using measured beach topography. This technique is evaluated using video and topographic data collected every half-hour at Duck, NC, during differing hydrodynamic conditions. Relationships between the maximum TWL estimates from the brightest images and various runup statistics computed using concurrent runup timestacks are examined, and errors associated with mapping the horizontal results to elevations are discussed. This technique is invaluable, as it can be used to routinely estimate maximum TWLs along a coastline from a single brightest image product, and provides a means for examining alongshore variability of TWLs at high alongshore resolution. These advantages will be useful in validating numerical hydrodynamic models and improving coastal change predictions.
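
    The "brightest" product itself is simply a per-pixel maximum over the collection period. The edge pick below uses a fixed threshold as a crude stand-in for the paper's image filtering and edge detection, and assumes rows run cross-shore with land at row 0.

        import numpy as np

        def brightest_image(frames):
            # Per-pixel maximum over the 10-minute collection: each wave's
            # bright swash leaves a trace, so the landward edge of the bright
            # band marks the maximum runup excursion.
            return np.stack(frames).max(axis=0)

        def shoreward_edge(bright, level):
            # For each alongshore column, the first cross-shore row (counted
            # from land) whose brightness exceeds `level`; mapping these rows
            # to elevations then requires the measured beach topography.
            return (bright > level).argmax(axis=0)

        frames = [np.random.randint(0, 255, (100, 200), dtype=np.uint8)
                  for _ in range(600)]                 # stand-in video stack
        edge_rows = shoreward_edge(brightest_image(frames), level=200)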

  2. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video.

    PubMed

    Ghosh, Tonmoy; Fattah, Shaikh Anowarul; Wahid, Khan A

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage is the long reviewing time, which is laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, this paper proposes an automatic bleeding detection method for WCE video based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining the local block features of the three color planes of RGB color space, an index value is defined. A color histogram, extracted from these index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which adds no extra computational burden for feature extraction. Extensive experimentation on several WCE videos and 2300 images collected from a publicly available database yields very satisfactory bleeding frame and zone detection performance in comparison with some of the existing methods. For bleeding frame detection, the accuracy, sensitivity, and specificity obtained with the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and for bleeding zone detection, a precision of 95.75% is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and can effectively detect bleeding frames and zones in continuous WCE video data.
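
    A coarse sketch of a CHOBS-style descriptor: a local block statistic (here simply the block mean) per colour plane, quantized and fused into one index value, then histogrammed. The block size, bin counts and choice of statistic are illustrative, and the paper's exact features and PCA reduction step are omitted.

        import numpy as np

        def chobs_histogram(img, block=3, bins_per_plane=8):
            h, w, _ = img.shape
            r = block // 2
            idx = np.zeros((h - 2 * r, w - 2 * r), dtype=int)
            for plane in range(3):
                p = img[:, :, plane].astype(float)
                # block mean per pixel; cumulative sums would be faster,
                # the explicit loop is kept for clarity
                means = np.array([[p[i - r:i + r + 1, j - r:j + r + 1].mean()
                                   for j in range(r, w - r)]
                                  for i in range(r, h - r)])
                q = np.minimum((means / 256 * bins_per_plane).astype(int),
                               bins_per_plane - 1)
                idx = idx * bins_per_plane + q       # fuse the three plane codes
            hist = np.bincount(idx.ravel(), minlength=bins_per_plane ** 3)
            return hist / hist.sum()                 # normalized colour-texture feature

        img = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
        feat = chobs_histogram(img)   # 512-bin descriptor for a classifier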

  3. Science Planning for the Solar Probe Plus NASA Mission

    NASA Astrophysics Data System (ADS)

    Kusterer, M. B.; Fox, N. J.; Turner, F. S.; Vandegriff, J. D.

    2015-12-01

    With a planned launch in 2018, there are a number of challenges for the Science Planning Team (SPT) of the Solar Probe Plus mission. The geometry of the celestial bodies and the spacecraft during some of the Solar Probe Plus mission orbits causes limited uplink and downlink opportunities. The payload teams must manage the volume of data that they write to the spacecraft solid-state recorders (SSR) for their individual instruments for downlink to the ground. The aim is to write the instrument data to the spacecraft SSR for downlink before a set of downlink opportunities large enough to get the data to the ground, and before the start of another data collection cycle. The SPT also intends to coordinate observations with other spacecraft and ground-based systems. To add further complexity, two of the spacecraft payloads have the capability to write large volumes of data to their internal payload SSRs while sending a smaller "survey" portion of the data to the spacecraft SSR for downlink. The instrument scientists would then view the survey data on the ground, determine the most interesting data on their payload SSR, and send commands to transfer that data from the payload SSR to the spacecraft SSR for downlink. The timing required for downlink and analysis of the survey data, identifying uplink opportunities for commanding data transfers, and finding downlink opportunities big enough for the selected data within the data collection period is critical. To address these challenges, the Solar Probe Plus Science Working Group has designed an orbit-type-optimized data file priority downlink scheme to downlink high-priority survey data quickly. This file priority scheme maximizes the reaction time that the payload teams have to perform the survey-and-selected-data method on orbits where the downlink and uplink availability supports it. An interactive display and analysis science planning tool is being designed for the SPT to use as an aid to planning. The tool will integrate the data file priority downlink scheme, payload data volume allocations, spacecraft ephemeris, attitude, downlink and uplink schedules, spacecraft and payload activities, and the ephemerides of other spacecraft. A prototype of the tool is in development using notional inputs obtained from the spacecraft engineering teams.

  4. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.

  5. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2011-12-01

    This paper concerns the problem of automatically stitching video in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and a boundary resampling algorithm blends the images. Simulation results demonstrate the efficiency of our method.
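
    The matching-and-blending pipeline maps directly onto OpenCV primitives. The papers match key points with SURF; the sketch below substitutes ORB (SURF ships only in OpenCV's contrib/nonfree builds), uses hypothetical file names, and pastes rather than blends the seam.

        import cv2
        import numpy as np

        img1 = cv2.imread("cam_a.png")   # overlapping views from two cameras
        img2 = cv2.imread("cam_b.png")   # (hypothetical file names)

        orb = cv2.ORB_create(2000)
        k1, d1 = orb.detectAndCompute(img1, None)
        k2, d2 = orb.detectAndCompute(img2, None)
        matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
        matches = sorted(matches, key=lambda m: m.distance)[:200]

        src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
        dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
        H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # img1 -> img2 plane

        # Warp img1 into img2's frame and drop img2 on top; overlapping pixels
        # come from the homography. A real system would blend or resample
        # along the seam instead of pasting.
        h, w = img2.shape[:2]
        pano = cv2.warpPerspective(img1, H, (w * 2, h))
        pano[0:h, 0:w] = img2
        cv2.imwrite("pano.png", pano)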

  6. Investigation of correlation classification techniques

    NASA Technical Reports Server (NTRS)

    Haskell, R. E.

    1975-01-01

    A two-step classification algorithm for processing multispectral scanner data was developed and tested. The first step is a single pass clustering algorithm that assigns each pixel, based on its spectral signature, to a particular cluster. The output of that step is a cluster tape in which a single integer is associated with each pixel. The cluster tape is used as the input to the second step, where ground truth information is used to classify each cluster using an iterative method of potentials. Once the clusters have been assigned to classes the cluster tape is read pixel-by-pixel and an output tape is produced in which each pixel is assigned to its proper class. In addition to the digital classification programs, a method of using correlation clustering to process multispectral scanner data in real time by means of an interactive color video display is also described.

  7. Luminance compensation for AMOLED displays using integrated MIS sensors

    NASA Astrophysics Data System (ADS)

    Vygranenko, Yuri; Fernandes, Miguel; Louro, Paula; Vieira, Manuela

    2017-05-01

    Active-matrix organic light-emitting diode (AMOLED) displays are ideal for future TV applications due to their ability to faithfully reproduce real images. However, pixel luminance can be affected by the instability of driver TFTs and by aging of the OLEDs. This paper reports on a pixel driver utilizing a metal-insulator-semiconductor (MIS) sensor for luminance control of the OLED element. In the proposed pixel architecture for bottom-emission AMOLEDs, the embedded MIS sensor shares the same layer stack with back-channel-etched a-Si:H TFTs to maintain fabrication simplicity. The pixel design for a large-area HD display is presented. External electronics perform image processing to modify incoming video using correction parameters for each pixel in the backplane, as well as sensor data processing to update the correction parameters. The luminance-adjusting algorithm is based on realistic models of the pixel circuit elements to predict the relation between the programming voltage and OLED luminance. SPICE modeling of the sensing part of the backplane is performed to demonstrate its feasibility. Details of the pixel circuit functionality, including the sensing and programming operations, are also discussed.

  8. Full-frame video stabilization with motion inpainting.

    PubMed

    Matsushita, Yasuyuki; Ofek, Eyal; Ge, Weina; Tang, Xiaoou; Shum, Heung-Yeung

    2006-07-01

    Video stabilization is an important video enhancement technology which aims at removing annoying shaky motion from videos. We propose a practical and robust approach of video stabilization that produces full-frame stabilized videos with good visual quality. While most previous methods end up with producing smaller size stabilized videos, our completion method can produce full-frame videos by naturally filling in missing image parts by locally aligning image data of neighboring frames. To achieve this, motion inpainting is proposed to enforce spatial and temporal consistency of the completion in both static and dynamic image areas. In addition, image quality in the stabilized video is enhanced with a new practical deblurring algorithm. Instead of estimating point spread functions, our method transfers and interpolates sharper image pixels of neighboring frames to increase the sharpness of the frame. The proposed video completion and deblurring methods enabled us to develop a complete video stabilizer which can naturally keep the original image quality in the stabilized videos. The effectiveness of our method is confirmed by extensive experiments over a wide variety of videos.

  9. Spacecraft Reed-Solomon downlink module

    NASA Technical Reports Server (NTRS)

    Luong, Huy H. (Inventor); Donaldson, James A. (Inventor); Wood, Steven H. (Inventor)

    1998-01-01

    Apparatus and method for providing downlink frames to be transmitted from a spacecraft to a ground station. Each downlink frame includes a synchronization pattern and a transfer frame. The apparatus may comprise a monolithic Reed-Solomon downlink (RSDL) encoding chip coupled to data buffers for storing transfer frames. The RSDL chip includes a timing device, a bus interface, a timing and control unit, a synchronization pattern unit, a Reed-Solomon encoding unit, and a bus arbiter.
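
    The frame structure is easy to sketch: a known synchronization pattern prepended to a Reed-Solomon-encoded transfer frame. The sketch below uses the CCSDS attached sync marker and a generic RS codec from the reedsolo package with 32 parity bytes; note that the CCSDS RS(255,223) code uses a specific generator polynomial and dual-basis symbol representation that this default codec does not match, and the patent's chip details are not reproduced.

        from reedsolo import RSCodec   # generic RS codec, pip install reedsolo

        SYNC = bytes.fromhex("1ACFFC1D")   # CCSDS attached sync marker
        rsc = RSCodec(32)                  # 32 parity bytes -> (255, 223) shape

        def downlink_frame(transfer_frame: bytes) -> bytes:
            # Prepend a known synchronization pattern to an RS-encoded
            # transfer frame so the ground station can lock onto frame
            # boundaries and correct channel errors.
            assert len(transfer_frame) == 223, "one codeword per frame, no interleave"
            return SYNC + bytes(rsc.encode(transfer_frame))

        frame = bytes(223)                 # dummy all-zero transfer frame
        dl = downlink_frame(frame)         # 4 + 255 bytes on the wire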

  10. Multi-pass encoding of hyperspectral imagery with spectral quality control

    NASA Astrophysics Data System (ADS)

    Wasson, Steven; Walker, William

    2015-05-01

    Multi-pass encoding is a technique employed in the field of video compression that maximizes the quality of an encoded video sequence within the constraints of a specified bit rate. This paper presents research where multi-pass encoding is extended to the field of hyperspectral image compression. Unlike video, which is primarily intended to be viewed by a human observer, hyperspectral imagery is processed by computational algorithms that generally attempt to classify the pixel spectra within the imagery. As such, these algorithms are more sensitive to distortion in the spectral dimension of the image than they are to perceptual distortion in the spatial dimension. The compression algorithm developed for this research, which uses the Karhunen-Loeve transform for spectral decorrelation followed by a modified H.264/Advanced Video Coding (AVC) encoder, maintains a user-specified spectral quality level while maximizing the compression ratio throughout the encoding process. The compression performance may be considered near-lossless in certain scenarios. For qualitative purposes, this paper presents the performance of the compression algorithm for several Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) and Hyperion datasets using spectral angle as the spectral quality assessment function. Specifically, the compression performance is illustrated in the form of rate-distortion curves that plot spectral angle versus bits per pixel per band (bpppb).
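
    Spectral angle, the quality function used here, is the angle between the original and reconstructed spectrum of a pixel and is insensitive to uniform scaling. The band count and noise level below are illustrative.

        import numpy as np

        def spectral_angle(a, b):
            # Angle (radians) between two pixel spectra.
            a = np.asarray(a, float).ravel()
            b = np.asarray(b, float).ravel()
            cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
            return np.arccos(np.clip(cos, -1.0, 1.0))

        orig = np.random.rand(224)                        # AVIRIS-like 224-band spectrum
        recon = orig + np.random.normal(0, 0.01, 224)     # mild coding distortion
        print("spectral angle: %.3f deg" % np.degrees(spectral_angle(orig, recon)))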

  11. Efficient space-time sampling with pixel-wise coded exposure for high-speed imaging.

    PubMed

    Liu, Dengyu; Gu, Jinwei; Hitomi, Yasunobu; Gupta, Mohit; Mitsunaga, Tomoo; Nayar, Shree K

    2014-02-01

    Cameras face a fundamental trade-off between spatial and temporal resolution. Digital still cameras can capture images with high spatial resolution, but most high-speed video cameras have relatively low spatial resolution. It is hard to overcome this trade-off without incurring a significant increase in hardware costs. In this paper, we propose techniques for sampling, representing, and reconstructing the space-time volume to overcome this trade-off. Our approach has two important distinctions compared to previous works: 1) We achieve sparse representation of videos by learning an overcomplete dictionary on video patches, and 2) we adhere to practical hardware constraints on sampling schemes imposed by architectures of current image sensors, which means that our sampling function can be implemented on CMOS image sensors with modified control units in the future. We evaluate components of our approach, sampling function and sparse representation, by comparing them to several existing approaches. We also implement a prototype imaging system with pixel-wise coded exposure control using a liquid crystal on silicon device. System characteristics such as field of view and modulation transfer function are evaluated for our imaging system. Both simulations and experiments on a wide range of scenes show that our method can effectively reconstruct a video from a single coded image while maintaining high spatial resolution.

  12. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a growing presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases with a higher measurement rate than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in the choice of camera type for any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  13. Graded zooming

    DOEpatents

    Coffland, Douglas R.

    2006-04-25

    A system for increasing the far-field resolution of video or still-frame images while maintaining full coverage in the near field. The system includes a camera connected to a computer. The computer applies a specific zoom scale factor to each line of pixels, continuously increasing the scale factor from the bottom line to the top, so as to capture the scene in the near field yet maintain resolution in the far field.
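
    The idea reduces to resampling each row with its own horizontal scale factor. A minimal nearest-neighbour sketch, assuming the scale grows linearly from 1.0 at the bottom row to a chosen maximum at the top; the patent's actual scaling profile is not specified here.

        import numpy as np

        def graded_zoom(img, max_scale=2.0):
            # Resample each pixel row about the image centre with a scale
            # factor that increases linearly from 1.0 at the bottom row
            # (near field, full coverage) to `max_scale` at the top row
            # (far field, magnified).
            h, w = img.shape[:2]
            out = np.empty_like(img)
            for row in range(h):
                scale = 1.0 + (max_scale - 1.0) * (h - 1 - row) / (h - 1)
                xs = (np.arange(w) - w / 2) / scale + w / 2
                xs = np.clip(np.round(xs).astype(int), 0, w - 1)
                out[row] = img[row, xs]
            return out

        img = np.random.randint(0, 255, (100, 120), dtype=np.uint8)
        zoomed = graded_zoom(img)   # top rows magnified, bottom row unchanged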

  14. The Lancashire telemedicine ambulance.

    PubMed

    Curry, G R; Harrop, N

    1998-01-01

    An emergency ambulance was equipped with three video cameras and a system for transmitting slow-scan video pictures through a cellular telephone link to a hospital accident and emergency department. Video pictures were transmitted at a resolution of 320 x 240 pixels and a frame rate of 15 pictures/min. In addition, a helmet-mounted camera was used with a wireless transmission link to the ambulance and thence to the hospital. Speech was transmitted by a second hand-held cellular telephone. The equipment was installed in 1996-7 and video recordings of actual ambulance journeys were made in July 1997. The technical feasibility of the telemedicine ambulance has been demonstrated and further clinical assessment is now in progress.

  15. Digital Watermarking: From Concepts to Real-Time Video Applications

    DTIC Science & Technology

    1999-01-01

    includes still-image, video, audio, and geometry data among others, the fundamental concept of steganography can be transferred from the field of ... size of the message, which should be as small as possible. Some commercially available algorithms for image watermarking forego the secure-watermarking ... image compression. The image's luminance component is divided into 8 x 8 pixel blocks. The algorithm selects a sequence of blocks and applies the

  16. Proceedings of the Image Understanding Workshop, Held at Los Angeles, California November 7-8, 1979

    DTIC Science & Technology

    1979-11-01

    We later incorporated a CCD field delay to remove the interlace and provide a processing capability on adjacent lines of video. This ... is, let us change notation such that i,j are running indices over the entire frame. Then the center pixel to lower right pixel combination instead ... we have what we feel is a very attractive solution to inter-processor communication, and processor-to-outside-world communication. The strategy

  17. Study of Potential Standardization of Digital Freeze Frame Video Codecs.

    DTIC Science & Technology

    1984-01-01

    and MAR track an input clock over a very wide range. These are dependent on the modem used in any specific application. Interface connectors are those ... terminals, 56K bit digital transmission sets). We have a limited custom capability and are not in the custom unit business. ... are designed for narrowband operation. We build our own modems, which send pixels at a rate of 1969 pixels/second. Grey scale information is

  18. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  19. A single pixel camera video ophthalmoscope

    NASA Astrophysics Data System (ADS)

    Lochocki, B.; Gambin, A.; Manzanera, S.; Irles, E.; Tajahuerce, E.; Lancis, J.; Artal, P.

    2017-02-01

    There are several ophthalmic devices for imaging the retina, from fundus cameras capable of imaging the whole fundus to scanning ophthalmoscopes with photoreceptor resolution. Unfortunately, these devices are sensitive to a variety of ocular conditions, such as defocus and media opacities, which usually degrade image quality. Here, we demonstrate a novel approach to imaging the retina in real time using a single-pixel camera, which has the potential to circumvent those optical restrictions. The imaging procedure is as follows: a set of spatially coded patterns is projected rapidly onto the retina using a digital micromirror device, and the intensity of the inner product between each pattern and the retina is measured with a photomultiplier module. An image of the retina is then reconstructed computationally. The obtained image resolution is up to 128 x 128 pixels at real-time video frame rates of up to 11 fps. Experimental results obtained in an artificial eye confirm the tolerance against defocus compared to a conventional multi-pixel-array-based system. Furthermore, the use of multiplexed illumination offers an SNR improvement, allowing lower illumination of the eye and hence greater patient comfort. In addition, the proposed system could enable imaging in wavelength ranges where cameras are not available.
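
    The reconstruction step can be sketched with Hadamard patterns, whose orthogonality makes recovery a single transform. Real systems project 0/1 patterns from the DMD and form +/-1 measurements differentially; the scene and resolution below are synthetic.

        import numpy as np
        from scipy.linalg import hadamard

        n = 32                       # 32x32 image -> 1024 patterns for full rank
        H = hadamard(n * n)          # rows are the +/-1 projection patterns

        scene = np.random.rand(n, n)            # stands in for the retina
        x = scene.ravel()

        # One photodetector sample per pattern: the inner product of the
        # projected pattern and the scene reflectance.
        measurements = H @ x

        # Sylvester Hadamard matrices satisfy H @ H = N * I, so recovery is
        # just the (scaled) transform of the measurement vector.
        recovered = (H.T @ measurements) / (n * n)
        assert np.allclose(recovered, x)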

  20. Web life: Ice Flows

    NASA Astrophysics Data System (ADS)

    2016-11-01

    Computer and video gamers of a certain vintage will have fond memories of Lemmings, a game in which players must shepherd pixelated, suicidal rodents around a series of obstacles to reach safety. At first glance, Ice Flows is strikingly similar.

  1. Probabilistic choice between symmetric disparities in motion stereo matching for a lateral navigation system

    NASA Astrophysics Data System (ADS)

    Ershov, Egor; Karnaukhov, Victor; Mozerov, Mikhail

    2016-02-01

    Two consecutive frames of a lateral navigation camera video sequence can be considered a good approximation to an epipolar stereo pair. To overcome edge-aware inaccuracy caused by occlusion, we propose a model that matches the current frame both to the next frame and to the previous one. The positive disparity of matching to the previous frame has a symmetric negative disparity to the next frame. The proposed algorithm performs a probabilistic choice, for each matched pixel, between the cost of the positive disparity and that of its symmetric disparity. A disparity map obtained by optimization over the cost volume composed of the proposed probabilistic choice is more accurate than the traditional left-to-right and right-to-left disparity-map cross-check, and our algorithm needs half as many computational operations per pixel as the cross-check technique. The effectiveness of our approach is demonstrated on synthetic data and real video sequences with ground-truth values.
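
    The paper's exact probabilistic rule is not spelled out in the abstract; as an illustration, the sketch below fuses the two matching-cost volumes with a per-pixel soft-min, which favours the lower (presumably less occluded) of the two symmetric costs at every pixel and disparity.

        import numpy as np

        def fuse_symmetric_costs(cost_next, cost_prev):
            # cost_next[y, x, d]: cost of matching to the next frame at
            # disparity d; cost_prev is assumed already indexed by the
            # mirrored (symmetric negative) disparity. Soft-min weighting:
            # lower cost -> higher weight.
            c = np.stack([cost_next, cost_prev])        # (2, H, W, D)
            w = np.exp(-c)
            return (w * c).sum(axis=0) / w.sum(axis=0)  # fused cost volume

        H, W, D = 4, 5, 8                                # toy volume sizes
        fused = fuse_symmetric_costs(np.random.rand(H, W, D),
                                     np.random.rand(H, W, D))
        disparity_map = fused.argmin(axis=2)             # winner-take-all readout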

  2. Remote Visualization and Remote Collaboration On Computational Fluid Dynamics

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    A new technology has been developed for remote visualization that provides remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as fluid dynamics simulations or measurements). Based on this technology, some World Wide Web sites on the Internet are providing fluid dynamics data for educational or testing purposes. This technology is also being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics and wind tunnel testing. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit).

  3. Photovoltaic Pixels for Neural Stimulation: Circuit Models and Performance.

    PubMed

    Boinagrov, David; Lei, Xin; Goetz, Georges; Kamins, Theodore I; Mathieson, Keith; Galambos, Ludwig; Harris, James S; Palanker, Daniel

    2016-02-01

    Photovoltaic conversion of pulsed light into pulsed electric current enables optically-activated neural stimulation with miniature wireless implants. In photovoltaic retinal prostheses, patterns of near-infrared light projected from video goggles onto subretinal arrays of photovoltaic pixels are converted into patterns of current to stimulate the inner retinal neurons. We describe a model of these devices and evaluate the performance of photovoltaic circuits, including the electrode-electrolyte interface. Characteristics of the electrodes measured in saline with various voltages, pulse durations, and polarities were modeled as voltage-dependent capacitances and Faradaic resistances. The resulting mathematical model of the circuit yielded dynamics of the electric current generated by the photovoltaic pixels illuminated by pulsed light. Voltages measured in saline with a pipette electrode above the pixel closely matched results of the model. Using the circuit model, our pixel design was optimized for maximum charge injection under various lighting conditions and for different stimulation thresholds. To speed discharge of the electrodes between the pulses of light, a shunt resistor was introduced and optimized for high frequency stimulation.
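
    To make the circuit dynamics concrete, here is a toy first-order simulation of a simplified pixel equivalent circuit: a photocurrent source charging the electrode capacitance, discharged through Faradaic and shunt resistances. All component values are illustrative assumptions; the paper's model uses voltage-dependent capacitances and resistances measured in saline.

        import numpy as np

        # Illustrative values only, not taken from the paper.
        C = 3e-9        # electrode capacitance [F]
        R_far = 1e6     # Faradaic resistance [ohm]
        R_shunt = 5e5   # discharge shunt between pulses [ohm]
        I_ph = 1e-6     # photocurrent during a light pulse [A]

        dt, T = 1e-6, 20e-3
        t = np.arange(0, T, dt)
        pulse = (t % 5e-3) < 1e-3          # 1 ms light pulses at 200 Hz
        v = np.zeros_like(t)
        for k in range(1, len(t)):
            i_in = I_ph if pulse[k] else 0.0
            # Euler step: C dv/dt = I_photo - v/R_far - v/R_shunt
            v[k] = v[k-1] + (i_in - v[k-1]/R_far - v[k-1]/R_shunt) * dt / C
        # v now traces charging during pulses and shunt-assisted discharge between them.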

  4. Organic Light-Emitting Diode-on-Silicon Pixel Circuit Using the Source Follower Structure with Active Load for Microdisplays

    NASA Astrophysics Data System (ADS)

    Kwak, Bong-Choon; Lim, Han-Sin; Kwon, Oh-Kyong

    2011-03-01

    In this paper, we propose a pixel circuit immune to the electrical characteristic variation of organic light-emitting diodes (OLEDs) for organic light-emitting diode-on-silicon (OLEDoS) microdisplays with a 0.4 inch video graphics array (VGA) resolution and a 6-bit gray scale. The proposed pixel circuit is implemented using five p-channel metal oxide semiconductor field-effect transistors (MOSFETs) and one storage capacitor. The proposed pixel circuit has a source follower with a diode-connected transistor as an active load for improving the immunity against the electrical characteristic variation of OLEDs. The deviation in the measured emission current ranges from -0.165 to 0.212 least significant bit (LSB) among 11 samples while the anode voltage of the OLED is 0 V. Also, the deviation in the measured emission current ranges from -0.262 to 0.272 LSB in pixel samples while the anode voltage of the OLED varies from 0 to 2.5 V owing to the electrical characteristic variation of OLEDs.

  5. Objective analysis of image quality of video image capture systems

    NASA Astrophysics Data System (ADS)

    Rowberg, Alan H.

    1990-07-01

    As Picture Archiving and Communication System (PACS) technology has matured, video image capture has become a common way of capturing digital images from many modalities. While digital interfaces, such as those which use the ACR/NEMA standard, will become more common in the future, and are preferred because of the accuracy of image transfer, video image capture will be the dominant method in the short term, and may continue to be used for some time because of the low cost and high speed often associated with such devices. Currently, virtually all installed systems use methods of digitizing the video signal that is produced for display on the scanner viewing console itself. A series of digital test images has been developed for display on either a GE CT9800 or a GE Signa MRI scanner. These images have been captured with each of five commercially available image capture systems, and the resultant images digitally transferred on floppy disk to a PC1286 computer containing Optimast image analysis software. Here the images can be displayed in a comparative manner for visual evaluation, in addition to being analyzed statistically. Each of the images has been designed to support certain tests, including noise, accuracy, linearity, gray scale range, stability, slew rate, and pixel alignment. These image capture systems vary widely in these characteristics, in addition to the presence or absence of other artifacts, such as shading and moiré patterns. Other accessories such as video distribution amplifiers and noise filters can also add or modify artifacts seen in the captured images, often giving unusual results. Each image is described, together with the tests which were performed using it. One image contains alternating black and white lines, each one pixel wide, after equilibration strips ten pixels wide. While some systems have a slew rate fast enough to track this correctly, others blur it to an average shade of gray and do not resolve the lines, or give horizontal or vertical streaking. While many of these results are significant from an engineering standpoint alone, there are clinical implications, and some anatomy or pathology may not be visualized if an image capture system is used improperly.

  6. Bringing "Scientific Expeditions" Into the Schools

    NASA Technical Reports Server (NTRS)

    Watson, Val; Lasinski, T. A. (Technical Monitor)

    1995-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D, high resolution, dynamic, interactive viewing of scientific data (such as simulations or measurements of fluid dynamics). The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects in computational fluid dynamics (CFD) and wind tunnel testing. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: 1. The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. 2. The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). 3. A rich variety of guided expeditions through the data can be included easily. 4. A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. 5. The scenes can be viewed in 3D using stereo vision. 6. The network bandwidth used for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.)

  7. Fast 3D Net Expeditions: Tools for Effective Scientific Collaboration on the World Wide Web

    NASA Technical Reports Server (NTRS)

    Watson, Val; Chancellor, Marisa K. (Technical Monitor)

    1996-01-01

    Two new technologies, the FASTexpedition and Remote FAST, have been developed that provide remote, 3D (three dimensional), high resolution, dynamic, interactive viewing of scientific data. The FASTexpedition permits one to access scientific data from the World Wide Web, take guided expeditions through the data, and continue with self controlled expeditions through the data. Remote FAST permits collaborators at remote sites to simultaneously view an analysis of scientific data being controlled by one of the collaborators. Control can be transferred between sites. These technologies are now being used for remote collaboration in joint university, industry, and NASA projects. Also, NASA Ames Research Center has initiated a project to make scientific data and guided expeditions through the data available as FASTexpeditions on the World Wide Web for educational purposes. Previously, remote visualization of dynamic data was done using video format (transmitting pixel information) such as video conferencing or MPEG (Motion Picture Expert Group) movies on the Internet. The concept for this new technology is to send the raw data (e.g., grids, vectors, and scalars) along with viewing scripts over the Internet and have the pixels generated by a visualization tool running on the viewer's local workstation. The visualization tool that is currently used is FAST (Flow Analysis Software Toolkit). The advantages of this new technology over using video format are: (1) The visual is much higher in resolution (1280x1024 pixels with 24 bits of color) than typical video format transmitted over the network. (2) The form of the visualization can be controlled interactively (because the viewer is interactively controlling the visualization tool running on his workstation). (3) A rich variety of guided expeditions through the data can be included easily. (4) A capability is provided for other sites to see a visual analysis of one site as the analysis is interactively performed. Control of the analysis can be passed from site to site. (5) The scenes can be viewed in 3D using stereo vision. (6) The network bandwidth for the visualization using this new technology is much smaller than when using video format. (The measured peak bandwidth used was 1 Kbit/sec whereas the measured bandwidth for a small video picture was 500 Kbits/sec.) This talk will illustrate the use of these new technologies and present a proposal for using these technologies to improve science education.

  8. Can low-cost motion-tracking systems substitute a Polhemus system when researching social motor coordination in children?

    PubMed

    Romero, Veronica; Amaral, Joseph; Fitzpatrick, Paula; Schmidt, R C; Duncan, Amie W; Richardson, Michael J

    2017-04-01

    Functionally stable and robust interpersonal motor coordination has been found to play an integral role in the effectiveness of social interactions. However, the motion-tracking equipment required to record and objectively measure the dynamic limb and body movements during social interaction has been very costly, cumbersome, and impractical within a non-clinical or non-laboratory setting. Here we examined whether three low-cost motion-tracking options (Microsoft Kinect skeletal tracking of either one limb or the whole body, and a video-based pixel change method) can be employed to investigate social motor coordination. Of particular interest was the degree to which these low-cost methods could capture and index the coordination dynamics that occurred between a child and an experimenter in three simple social motor coordination tasks, in comparison to a more expensive, laboratory-grade motion-tracking system (i.e., a Polhemus Latus system). Overall, the results demonstrated that these low-cost systems cannot substitute for the Polhemus system in some tasks. However, the lower-cost Microsoft Kinect skeletal tracking and video pixel change methods were successfully able to index differences in social motor coordination in tasks that involved larger-scale, naturalistic whole body movements, which can be cumbersome and expensive to record with a Polhemus. At the same time, we found the Kinect to be particularly vulnerable to occlusion and the pixel change method to movements that cross the video frame midline. Therefore, particular care needs to be taken in choosing the motion-tracking system best suited to the particular research.

  9. A Multiple-Window Video Embedding Transcoder Based on H.264/AVC Standard

    NASA Astrophysics Data System (ADS)

    Li, Chih-Hung; Wang, Chung-Neng; Chiang, Tihao

    2007-12-01

    This paper proposes a low-complexity multiple-window video embedding transcoder (MW-VET) based on the H.264/AVC standard for various applications that require video embedding services, including picture-in-picture (PIP), multichannel mosaic, screen-split, pay-per-view, channel browsing, commercial and logo insertion, and other visual information embedding services. The MW-VET embeds multiple foreground pictures at macroblock-aligned positions. It improves the transcoding speed with three block-level adaptive techniques: slice group based transcoding (SGT), a reduced frame memory transcoder (RFMT), and syntax level bypassing (SLB). The SGT utilizes prediction from the slice-aligned data partitions in the original bitstreams such that the transcoder simply merges the bitstreams by parsing. When the prediction comes from the newly covered area without slice-group data partitions, the pixels at the affected macroblocks are transcoded with the RFMT based on the concept of partial reencoding to minimize the number of refined blocks. The RFMT employs motion vector remapping (MVR) and intra mode switching (IMS) to handle intercoded blocks and intracoded blocks, respectively. The pixels outside the macroblocks that are affected by a newly covered reference frame are transcoded by the SLB. Experimental results show that, compared to the cascaded pixel domain transcoder (CPDT) with the highest complexity, our MW-VET can reduce the processing complexity by a factor of 25 and retain rate-distortion performance close to that of the CPDT. At certain bit rates, the MW-VET can achieve up to 1.5 dB quality improvement in peak signal-to-noise ratio (PSNR).

  10. A hardware architecture for real-time shadow removal in high-contrast video

    NASA Astrophysics Data System (ADS)

    Verdugo, Pablo; Pezoa, Jorge E.; Figueroa, Miguel

    2017-09-01

    Broadcasting an outdoor sports event in daytime is a challenging task due to the high contrast between shadowed and brightly lit areas within the same scene. Commercial cameras typically do not handle the high dynamic range of such scenes in a proper manner, resulting in broadcast streams with very little shadow detail. We propose a hardware architecture for real-time shadow removal in high-resolution video, which reduces the shadow effect and simultaneously improves shadow details. The algorithm operates only on the shadow portions of each video frame, thus improving the results and producing more realistic images than algorithms that operate on the entire frame, such as simplified Retinex and histogram shifting. The architecture receives an input in the RGB color space, transforms it into the YIQ space, and uses color information from both spaces to produce a mask of the shadow areas present in the image. The mask is then filtered using a connected components algorithm to eliminate false positives and negatives. The hardware uses pixel information at the edges of the mask to estimate the illumination ratio between light and shadow in the image, which is then used to correct the shadow area. Our prototype implementation simultaneously processes up to 7 video streams of 1920×1080 pixels at 60 frames per second on a Xilinx Kintex-7 XC7K325T FPGA.
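
    A software sketch of the pipeline stages named above (YIQ conversion, shadow masking, connected-component filtering, ratio estimation at the mask border, and shadow-only correction) might look as follows; the thresholds and minimum component area are illustrative assumptions, not the hardware implementation.

        import numpy as np
        import cv2

        def remove_shadows(bgr, y_thresh=0.35, min_area=500):
            # RGB -> YIQ via the standard NTSC matrix; Y is the luma channel.
            rgb = bgr[..., ::-1].astype(np.float32) / 255.0
            M = np.array([[0.299, 0.587, 0.114],
                          [0.596, -0.274, -0.322],
                          [0.211, -0.523, 0.312]], dtype=np.float32)
            yiq = rgb @ M.T

            # Candidate shadow mask from low luma; keep only large connected
            # components to reject false positives.
            mask = (yiq[..., 0] < y_thresh).astype(np.uint8)
            n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
            clean = np.zeros_like(mask)
            for i in range(1, n):
                if stats[i, cv2.CC_STAT_AREA] > min_area:
                    clean[labels == i] = 1

            # Estimate the lit/shadow illumination ratio from pixels just outside
            # the mask edge (assumes both regions are non-empty).
            border = cv2.dilate(clean, np.ones((5, 5), np.uint8)) - clean
            ratio = yiq[..., 0][border == 1].mean() / max(yiq[..., 0][clean == 1].mean(), 1e-6)

            out = rgb.copy()
            out[clean == 1] *= ratio   # brighten only the shadow region
            return (np.clip(out, 0, 1)[..., ::-1] * 255).astype(np.uint8)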

  11. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
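
    The first two stages (court-line pixel classification and line extraction by a Hough transform) can be approximated with OpenCV as below; the brightness and local-contrast thresholds are illustrative stand-ins for the paper's color and texture tests.

        import cv2
        import numpy as np

        def court_line_candidates(frame_bgr):
            # Court lines are bright and thin: keep pixels that are bright in
            # absolute terms and brighter than their local neighborhood.
            gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY).astype(np.int16)
            local = cv2.blur(gray, (15, 15))
            line_pixels = ((gray > 180) & (gray > local + 20)).astype(np.uint8) * 255

            # A probabilistic Hough transform extracts candidate line segments,
            # to be matched against the user-supplied court model.
            return cv2.HoughLinesP(line_pixels, rho=1, theta=np.pi / 180,
                                   threshold=80, minLineLength=60, maxLineGap=5)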

  12. Real-time heart rate measurement for multi-people using compressive tracking

    NASA Astrophysics Data System (ADS)

    Liu, Lingling; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Dong, Liquan; Ma, Feilong; Pang, Zongguang; Cai, Zhi; Zhang, Yachu; Hua, Peng; Yuan, Ruifeng

    2017-09-01

    The rise of the aging population has created a demand for inexpensive, unobtrusive, automated health care solutions. Image photoplethysmography (IPPG) aids in the development of these solutions by allowing physiological signals to be extracted from video data. However, the main deficiencies of recent IPPG methods are that they are non-automated, non-real-time, and susceptible to motion artifacts (MA). In this paper, a real-time heart rate (HR) detection method for multiple subjects simultaneously is proposed and realized using the open computer vision (OpenCV) library. It consists of automatically capturing facial video of multiple subjects through a webcam, detecting the region of interest (ROI) in the video, reducing the false detection rate with our improved Adaboost algorithm, reducing MA with our improved compressive tracking (CT) algorithm, a wavelet noise-suppression algorithm for denoising, and multithreading for higher detection speed. For comparison, HR was measured simultaneously using a medical pulse oximetry device for every subject during all sessions. Experimental results on a data set of 30 subjects show that the maximum average absolute error of heart rate estimation is less than 8 beats per minute (BPM), and per-frame processing approaches real time: in experiments with video recordings of ten subjects at a resolution of 600×800 pixels, the average processing speed was about 17 frames per second (fps).
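
    The spectral core of such IPPG methods reduces to a few lines: average the green channel over the tracked face ROI in each frame, window the trace, and pick the dominant frequency in the physiological band. This sketch assumes the ROI trace is already available and omits the paper's Adaboost detection, compressive tracking, and wavelet denoising stages.

        import numpy as np

        def estimate_hr(green_means, fps):
            """Estimate heart rate (BPM) from the mean green-channel value of a
            tracked face ROI in each frame (the basic IPPG signal). Assumes the
            trace spans at least several seconds so the band contains FFT bins."""
            x = np.asarray(green_means, dtype=float)
            x = x - x.mean()
            spectrum = np.abs(np.fft.rfft(x * np.hanning(len(x))))
            freqs = np.fft.rfftfreq(len(x), d=1.0 / fps)
            band = (freqs >= 0.7) & (freqs <= 4.0)   # 42-240 BPM physiological band
            return 60.0 * freqs[band][np.argmax(spectrum[band])]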

  13. Potential Utility of a 4K Consumer Camera for Surgical Education in Ophthalmology.

    PubMed

    Ichihashi, Tsunetomo; Hirabayashi, Yutaka; Nagahara, Miyuki

    2017-01-01

    Purpose. We evaluated the potential utility of a cost-effective 4K consumer video system for surgical education in ophthalmology. Setting. Tokai University Hachioji Hospital, Tokyo, Japan. Design. Experimental study. Methods. Eyes that underwent cataract surgery, glaucoma surgery, vitreoretinal surgery, or oculoplastic surgery between February 2016 and April 2016 were recorded at 17.2 million pixels using a high-definition digital video camera (LUMIX DMC-GH4, Panasonic, Japan) and at 0.41 million pixels using a conventional analog video camera (MKC-501, Ikegami, Japan). Motion pictures of two cases for each surgery type were evaluated and classified as having poor, normal, or excellent visibility. Results. The 4K video system was easily installed by reading the instructions, without technical expertise. The details of the 4K surgical pictures were greatly improved over those of the conventional pictures, and the visual effect for surgical education was significantly improved. Motion pictures were stored for approximately 11 h on 512 GB SD memory. The total price of this system was USD 8000, which is very low compared with a commercial system. Conclusion. This 4K consumer camera was able to record and play back the surgical field with high-definition visibility on a 4K monitor and is a low-cost, high-performing alternative for surgical facilities.

  14. Contact-free heart rate measurement using multiple video data

    NASA Astrophysics Data System (ADS)

    Hung, Pang-Chan; Lee, Kual-Zheng; Tsai, Luo-Wei

    2013-10-01

    In this paper, we propose a contact-free heart rate measurement method that analyzes sequential images from multiple video sources. In the proposed method, skin-like pixels are first detected in each video stream to extract color features. These color features are synchronized and analyzed by independent component analysis. A representative component is finally selected among the independent component candidates to measure the HR, which achieves under 2% deviation on average compared with a pulse oximeter in a controlled environment. The advantages of the proposed method include: 1) it uses low-cost, highly accessible camera devices; 2) it reduces user discomfort through contact-free measurement; and 3) it achieves a low error rate and high stability by integrating multiple video data.

  15. CHOBS: Color Histogram of Block Statistics for Automatic Bleeding Detection in Wireless Capsule Endoscopy Video

    PubMed Central

    Ghosh, Tonmoy; Wahid, Khan A.

    2018-01-01

    Wireless capsule endoscopy (WCE) is the most advanced technology for visualizing the whole gastrointestinal (GI) tract in a non-invasive way. Its major disadvantage is the long review time, which is laborious because continuous manual intervention is necessary. In order to reduce the burden on the clinician, in this paper, an automatic bleeding detection method for WCE video is proposed based on the color histogram of block statistics, namely CHOBS. A single pixel in a WCE image may be distorted due to the capsule motion in the GI tract. Instead of considering individual pixel values, a block surrounding each pixel is chosen for extracting local statistical features. By combining local block features of the three color planes of RGB color space, an index value is defined. A color histogram, extracted from those index values, provides a distinguishable color texture feature. A feature reduction technique utilizing the color histogram pattern and principal component analysis is proposed, which can drastically reduce the feature dimension. For bleeding zone detection, blocks are classified using the extracted local features, which incurs no additional computational burden for feature extraction. Extensive experimentation on several WCE videos and 2300 images collected from a publicly available database shows very satisfactory bleeding frame and zone detection performance in comparison with some existing methods. In the case of bleeding frame detection, the accuracy, sensitivity, and specificity obtained by the proposed method are 97.85%, 99.47%, and 99.15%, respectively, and in the case of bleeding zone detection, 95.75% precision is achieved. The proposed method offers not only a low feature dimension but also highly satisfactory bleeding detection performance, and it can even effectively detect bleeding frames and zones in a continuous WCE video stream. PMID:29468094
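
    In the spirit of the block statistics described above, the sketch below pools simple statistics of the block around a pixel over the three RGB planes; the particular statistics, block size, and function name are illustrative assumptions rather than the exact CHOBS feature construction.

        import numpy as np

        def block_features(img_rgb, x, y, k=3):
            # Local statistics of the (2k+1) x (2k+1) block around pixel (x, y),
            # pooled over the three RGB planes. These per-block features would
            # then feed the index/histogram construction and the classifier.
            block = img_rgb[max(y - k, 0):y + k + 1, max(x - k, 0):x + k + 1, :]
            feats = []
            for c in range(3):
                plane = block[..., c].astype(float)
                feats += [plane.mean(), plane.std()]
            return np.array(feats)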

  16. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. The CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was sequentially switched. This increased the recording capacity to 288 images, a factor-of-two increase over that of the conventional ultrahigh-speed camera. A problem with the camera was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by approximately a factor of two. By using a beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  17. High speed imager test station

    DOEpatents

    Yates, George J.; Albright, Kevin L.; Turko, Bojan T.

    1995-01-01

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment.

  18. High speed imager test station

    DOEpatents

    Yates, G.J.; Albright, K.L.; Turko, B.T.

    1995-11-14

    A test station enables the performance of a solid state imager (herein called a focal plane array or FPA) to be determined at high image frame rates. A programmable waveform generator is adapted to generate clock pulses at determinable rates for clocking light-induced charges from an FPA. The FPA is mounted on an imager header board for placing the imager in operable proximity to level shifters for receiving the clock pulses and outputting pulses effective to clock charge from the pixels forming the FPA. Each of the clock level shifters is driven by leading and trailing edge portions of the clock pulses to reduce power dissipation in the FPA. Analog circuits receive output charge pulses clocked from the FPA pixels. The analog circuits condition the charge pulses to cancel noise in the pulses and to determine and hold a peak value of the charge for digitizing. A high speed digitizer receives the peak signal value and outputs a digital representation of each one of the charge pulses. A video system then displays an image associated with the digital representation of the output charge pulses clocked from the FPA. In one embodiment, the FPA image is formatted to a standard video format for display on conventional video equipment. 12 figs.

  19. The Kepler Full Frame Images

    NASA Technical Reports Server (NTRS)

    Dotson, Jessie L.; Batalha, Natalie; Bryson, Stephen T.; Caldwell, Douglas A.; Clarke, Bruce D.

    2010-01-01

    NASA's exoplanet discovery mission Kepler provides uninterrupted 1-min and 30-min optical photometry of a 100 square degree field over a 3.5 yr nominal mission. Downlink bandwidth is filled at these short cadences by selecting only detector pixels specific to 10^5 preselected stellar targets. The majority of the Kepler field, comprising 4 x 10^6 m_v < 20 sources, is sampled at a much lower 1-month cadence in the form of a full-frame image. The Full Frame Images (FFIs) are calibrated by the Science Operations Center at NASA Ames Research Center. The Kepler team employ these images for astrometric and photometric reference but make the images available to the astrophysics community through the Multimission Archive at STScI (MAST). The full-frame images provide a resource for potential Kepler Guest Observers to select targets and plan observing proposals, while also providing a freely available long-cadence legacy of photometric variation across a swathe of the Galactic disk.

  20. Video image processing

    NASA Technical Reports Server (NTRS)

    Murray, N. D.

    1985-01-01

    Current technology projections indicate a lack of availability of special purpose computing for Space Station applications. Potential functions for video image special purpose processing are being investigated, such as smoothing, enhancement, restoration and filtering, data compression, feature extraction, object detection and identification, pixel interpolation/extrapolation, spectral estimation and factorization, and vision synthesis. Also, architectural approaches are being identified and a conceptual design generated. Computationally simple algorithms will be researched and their image/vision effectiveness determined. Suitable algorithms will be implemented within an overall architectural approach that will provide image/vision processing at video rates that are flexible, selectable, and programmable. Information is given in the form of charts, diagrams and outlines.

  1. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    NASA Technical Reports Server (NTRS)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  2. Unmanned Vehicle Guidance Using Video Camera/Vehicle Model

    NASA Technical Reports Server (NTRS)

    Sutherland, T.

    1999-01-01

    A video guidance sensor (VGS) system has flown on both STS-87 and STS-95 to validate a single camera/target concept for vehicle navigation. The main part of the image algorithm was the subtraction of two consecutive images using software. For a nominal-size image of 256 x 256 pixels, this subtraction can take a large portion of the time between successive frames in standard-rate video, leaving very little time for other computations. The purpose of this project was to integrate the software subtraction into hardware to speed up the subtraction process and allow for more complex algorithms to be performed, both in hardware and software.

  3. Programmable remapper for image processing

    NASA Technical Reports Server (NTRS)

    Juday, Richard D. (Inventor); Sampsell, Jeffrey B. (Inventor)

    1991-01-01

    A video-rate coordinate remapper includes a memory for storing a plurality of transformations as look-up tables for remapping input images from one coordinate system to another. Such transformations are operator selectable. The remapper includes a collective processor by which certain input pixels of an input image are transformed to a portion of the output image in a many-to-one relationship. The remapper includes an interpolative processor by which the remaining input pixels of the input image are transformed to another portion of the output image in a one-to-many relationship. The invention includes certain specific transforms for creating output images useful for compensating certain visual defects of visually impaired people. The invention also includes means for shifting input pixels and means for scrolling the output matrix.
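
    In software, the same look-up-table idea maps directly onto OpenCV's remap; the 2x center magnification below is merely an example of the kind of operator-selectable transform the patent describes, not one of its specific transforms.

        import cv2
        import numpy as np

        def remap_with_tables(img, map_x, map_y):
            # map_x/map_y are float32 look-up tables: output pixel (i, j) is
            # sampled from input location (map_x[i, j], map_y[i, j]), with
            # interpolation covering the one-to-many case.
            return cv2.remap(img, map_x, map_y, interpolation=cv2.INTER_LINEAR)

        h, w = 480, 640
        img = np.zeros((h, w, 3), np.uint8)            # placeholder input image
        jj, ii = np.meshgrid(np.arange(w, dtype=np.float32),
                             np.arange(h, dtype=np.float32))
        map_x = (jj - w / 2) / 2.0 + w / 2   # sample from a half-size window
        map_y = (ii - h / 2) / 2.0 + h / 2   # around the center: 2x magnification
        zoomed = remap_with_tables(img, map_x, map_y)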

  4. Optimization of CMOS image sensor utilizing variable temporal multisampling partial transfer technique to achieve full-frame high dynamic range with superior low light and stop motion capability

    NASA Astrophysics Data System (ADS)

    Kabir, Salman; Smith, Craig; Armstrong, Frank; Barnard, Gerrit; Schneider, Alex; Guidash, Michael; Vogelsang, Thomas; Endsley, Jay

    2018-03-01

    Differential binary pixel technology is a threshold-based timing, readout, and image reconstruction method that utilizes the subframe partial charge transfer technique in a standard four-transistor (4T) pixel CMOS image sensor to achieve high dynamic range video with stop motion. This technology improves the low-light signal-to-noise ratio (SNR) by up to 21 dB. The method is verified in silicon using a Taiwan Semiconductor Manufacturing Company 65 nm, 1.1 μm pixel technology, 1-megapixel test chip array and is compared with a traditional 4x oversampling technique using full charge transfer to show the low-light SNR superiority of the presented technology.

  5. Amorphous In-Ga-Zn-O Thin Film Transistor Current-Scaling Pixel Electrode Circuit for Active-Matrix Organic Light-Emitting Displays

    NASA Astrophysics Data System (ADS)

    Chen, Charlene; Abe, Katsumi; Fung, Tze-Ching; Kumomi, Hideya; Kanicki, Jerzy

    2009-03-01

    In this paper, we analyze the application of amorphous In-Ga-Zn-O thin film transistors (a-InGaZnO TFTs) to a current-scaling pixel electrode circuit that could be used for 3-in. quarter video graphics array (QVGA) full color active-matrix organic light-emitting displays (AM-OLEDs). Simulation results, based on a-InGaZnO TFT and OLED experimental data, show that both device sizes and operational voltages can be reduced when compared to the same circuit using hydrogenated amorphous silicon (a-Si:H) TFTs. Moreover, the a-InGaZnO TFT pixel circuit can compensate for the drive TFT threshold voltage variation (ΔVT) within an acceptable operating error range.

  6. Microgravity

    NASA Image and Video Library

    1998-01-01

    Astronaut John Blaha replaces an exhausted media bag and filled waste bag with fresh bags to continue a bioreactor experiment aboard space station Mir in 1996. NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators. This image is from a video downlink. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC).

  7. Particle Engulfment and Pushing

    NASA Technical Reports Server (NTRS)

    2001-01-01

    As a liquefied metal solidifies, particles dispersed in the liquid are either pushed ahead of or engulfed by the moving solidification front. Similar effects can be seen when the ground freezes and pushes large particles out of the soil. The Particle Engulfment and Pushing (PEP) experiment, conducted aboard the fourth U.S. Microgravity Payload (USMP-4) mission in 1997, used glass and plastic beads suspended in a transparent liquid. The liquid was then frozen, trapping or pushing the particles as the solidifying front moved. This simulated the formation of advanced alloys and composite materials. Such studies help scientists to understand how to improve the processes for making advanced materials on Earth. The principal investigator is Dr. Doru Stefanescu of the University of Alabama. This image is from a video downlink.

  8. Microgravity

    NASA Image and Video Library

    1997-07-01

    Onboard Space Shuttle Columbia (STS-94), Mission Specialist Donald A. Thomas observes an experiment in the glovebox aboard the Spacelab Science Module. Thomas is looking through an eye-piece of a camcorder and recording his observations on tape for post-flight analysis. Other cameras inside the glovebox are also recording other angles of the experiment or downlinking video to the experiment teams on the ground. The glovebox is thought of as a safety cabinet with a closed front and a negative pressure differential to prevent spillage and contamination and to allow for manipulation of the experiment sample when its containment has to be opened for observation, microscopy, and photography. Although not visible in this view, the glovebox is equipped with windows on the top and each side for these observations.

  9. STS-99 Crew Interviews: Janet L. Kavandi

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This NASA JSC video release is one in a series of space shuttle astronaut interviews and was recorded Aug. 9, 1999. Mission Specialist, Janet L. Kavandi, Ph.D. provides answers to questions regarding her role in the Shuttle Radar Topography Mission (SRTM), mission objectives, which center on the three-dimensional mapping of the entire Earth's surface, shuttle imaging radar, payload mast deploy and retraction, data recording vs. downlinking, the fly cast maneuver, applications of recorded data, international participation (DLR), the National Imaging and Mapping Agency (NIMA), and EarthCam (educational middle school project). The interview is summed up by Dr. Kavandi explaining that the mission's objective, if successful, will result in the most complete high-resolution digital topographic database of the Earth.

  10. Microgravity

    NASA Image and Video Library

    2001-01-24

    As a liquefied metal solidifies, particles dispersed in the liquid are either pushed ahead of or engulfed by the moving solidification front. Similar effects can be seen when the ground freezes and pushes large particles out of the soil. The Particle Engulfment and Pushing (PEP) experiment, conducted aboard the fourth U.S. Microgravity Payload (USMP-4) mission in 1997, used glass and plastic beads suspended in a transparent liquid. The liquid was then frozen, trapping or pushing the particles as the solidifying front moved. This simulated the formation of advanced alloys and composite materials. Such studies help scientists to understand how to improve the processes for making advanced materials on Earth. The principal investigator is Dr. Doru Stefanescu of the University of Alabama. This image is from a video downlink.

  11. NASA Bioreactor

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Astronaut John Blaha replaces an exhausted media bag and filled waste bag with fresh bags to continue a bioreactor experiment aboard space station Mir in 1996. NASA-sponsored bioreactor research has been instrumental in helping scientists to better understand normal and cancerous tissue development. In cooperation with the medical community, the bioreactor design is being used to prepare better models of human colon, prostate, breast and ovarian tumors. Cartilage, bone marrow, heart muscle, skeletal muscle, pancreatic islet cells, liver and kidney are just a few of the normal tissues being cultured in rotating bioreactors by investigators. This image is from a video downlink. The work is sponsored by NASA's Office of Biological and Physical Research. The bioreactor is managed by the Biotechnology Cell Science Program at NASA's Johnson Space Center (JSC).

  12. Biotube

    NASA Technical Reports Server (NTRS)

    Richards, Stephanie E. (Compiler); Levine, Howard G.; Romero, Vergel

    2016-01-01

    Biotube was developed for plant gravitropic research investigating the potential for magnetic fields to orient plant roots as they grow in microgravity. Prior to flight, experimental seeds are placed into seed cassettes, each capable of containing up to 10 seeds, which are inserted between two magnets located within one of three Magnetic Field Chambers (MFCs). Biotube is stored within an International Space Station (ISS) stowage locker and provides three levels of containment for chemical fixatives. Features include monitoring of temperature, fixative/preservative delivery to specimens, and real-time video imaging downlink. Biotube's primary subsystems are: (1) The Water Delivery System, which automatically activates and controls the delivery of water (to initiate seed germination). (2) The Fixative Storage and Delivery System, which stores and delivers chemical fixative or RNAlater to each seed cassette. (3) The Digital Imaging System, consisting of 4 charge-coupled device (CCD) cameras, a video multiplexer, a lighting multiplexer, and 16 infrared light-emitting diodes (LEDs) that provide illumination while the photos are being captured. (4) The Command and Data Management System, which provides overall control of the integrated subsystems, graphical user interface, system status and error message display, image display, and other functions.

  13. 47 CFR 101.82 - Reimbursement and relocation expenses in the 2110-2150 MHz and 2160-2200 MHz bands.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... for space-to-Earth downlink in the 2130-2150 or 2180-2200 MHz bands) relocates an incumbent paired...) Cost-sharing obligations for MSS (space-to-Earth downlinks). For an MSS space-to-Earth downlink, the... standard successor, relative to the relocated microwave link. Subsequently entering MSS space-to-Earth...

  14. 47 CFR 101.82 - Reimbursement and relocation expenses in the 2110-2150 MHz and 2160-2200 MHz bands.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... for space-to-Earth downlink in the 2130-2150 or 2180-2200 MHz bands) relocates an incumbent paired...) Cost-sharing obligations for MSS (space-to-Earth downlinks). For an MSS space-to-Earth downlink, the... standard successor, relative to the relocated microwave link. Subsequently entering MSS space-to-Earth...

  15. 47 CFR 101.82 - Reimbursement and relocation expenses in the 2110-2150 MHz and 2160-2200 MHz bands.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... for space-to-Earth downlink in the 2130-2150 or 2180-2200 MHz bands) relocates an incumbent paired...) Cost-sharing obligations for MSS (space-to-Earth downlinks). For an MSS space-to-Earth downlink, the... standard successor, relative to the relocated microwave link. Subsequently entering MSS space-to-Earth...

  16. 47 CFR 101.82 - Reimbursement and relocation expenses in the 2110-2150 MHz and 2160-2200 MHz bands.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... for space-to-Earth downlink in the 2130-2150 or 2180-2200 MHz bands) relocates an incumbent paired...) Cost-sharing obligations for MSS (space-to-Earth downlinks). For an MSS space-to-Earth downlink, the... standard successor, relative to the relocated microwave link. Subsequently entering MSS space-to-Earth...

  17. 47 CFR 101.82 - Reimbursement and relocation expenses in the 2110-2150 MHz and 2160-2200 MHz bands.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... for space-to-Earth downlink in the 2130-2150 or 2180-2200 MHz bands) relocates an incumbent paired...) Cost-sharing obligations for MSS (space-to-Earth downlinks). For an MSS space-to-Earth downlink, the... standard successor, relative to the relocated microwave link. Subsequently entering MSS space-to-Earth...

  18. Eliminating Bias In Acousto-Optical Spectrum Analysis

    NASA Technical Reports Server (NTRS)

    Ansari, Homayoon; Lesh, James R.

    1992-01-01

    A scheme for digital processing of video signals in an acousto-optical spectrum analyzer provides real-time correction for signal-dependent spectral bias. The spectrum analyzer is described in "Two-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18092); related apparatus is described in "Three-Dimensional Acousto-Optical Spectrum Analyzer" (NPO-18122). The essence of the correction is to average over the digitized outputs of the pixels in each CCD row and to subtract this average from the digitized output of each pixel in the row. The signal is processed electro-optically with reference-function signals to form a two-dimensional spectral image in the CCD camera.
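
    The described correction amounts to a one-line array operation, sketched here under the assumption that the digitized CCD frame is available as a 2-D array (the function name is illustrative):

        import numpy as np

        def remove_row_bias(ccd_frame):
            # Average the digitized outputs along each CCD row and subtract the
            # row mean from every pixel in that row, cancelling the
            # signal-dependent spectral bias described above.
            frame = ccd_frame.astype(float)
            return frame - frame.mean(axis=1, keepdims=True)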

  19. Programmable architecture for pixel level processing tasks in lightweight strapdown IR seekers

    NASA Astrophysics Data System (ADS)

    Coates, James L.

    1993-06-01

    Typical processing tasks associated with missile IR seeker applications are described, and a straw man suite of algorithms is presented. A fully programmable multiprocessor architecture is realized on a multimedia video processor (MVP) developed by Texas Instruments. The MVP combines the elements of RISC, floating point, advanced DSPs, graphics processors, display and acquisition control, RAM, and external memory. Front end pixel level tasks typical of missile interceptor applications, operating on 256 x 256 sensor imagery, can be processed at frame rates exceeding 100 Hz in a single MVP chip.

  20. Review of Fusion Systems and Contributing Technologies for SIHS-TD (Examen des Systemes de Fusion et des Technologies d’Appui pour la DT SIHS)

    DTIC Science & Technology

    2007-03-31

    [Only indexing fragments of the abstract are recoverable.] The record lists vendors (Nivisys, Insight Technology, Elcan, FLIR Systems, Stanford Photonics) and hardware categories (sensor fusion processors, video processing boards), notes that the SPIE Digital Library contains more than 70,000 full-text optics and photonics papers, and describes test detectors: a Stanford Photonics XR-Mega-10 Extreme 1400 x 1024 pixel ICCD (33 msec exposure, no binning) and an Andor EEV iXon.

  1. OpenMP Parallelization and Optimization of Graph-based Machine Learning Algorithms

    DTIC Science & Technology

    2016-05-01

    [Only indexing fragments of the abstract are recoverable.] The test data are composed of hyperspectral video sequences recording the release of chemical plumes at the Dugway Proving Ground; 329 frames of the video are used. Each frame is a hyperspectral image of dimension 128 × 320 × 129, where 129 is the dimension of the channel of each pixel. A nested for-loop calculates the values of W_XY by formula (1) of the paper and stores the corresponding values in an array.

  2. Adaptive temporal compressive sensing for video with motion estimation

    NASA Astrophysics Data System (ADS)

    Wang, Yeru; Tang, Chaoying; Chen, Yueting; Feng, Huajun; Xu, Zhihai; Li, Qi

    2018-04-01

    In this paper, we present an adaptive reconstruction method for temporal compressive imaging with pixel-wise exposure. The motion of objects is first estimated from images interpolated using a designed coding mask. With the help of the motion estimates, image blocks are classified according to their degree of motion and reconstructed with the corresponding dictionary, trained beforehand. Both simulation and experimental results show that the proposed method can obtain accurate motion information before reconstruction and efficiently reconstruct the compressive video.

  3. Design of a motion JPEG (M/JPEG) adapter card

    NASA Astrophysics Data System (ADS)

    Lee, D. H.; Sudharsanan, Subramania I.

    1994-05-01

    In this paper we describe the design of a high performance JPEG (Joint Photographic Experts Group) Micro Channel adapter card. The card, tested on a range of PS/2 platforms (models 50 to 95), can complete JPEG operations on a 640 by 240 pixel image within 1/60 of a second, thus enabling real-time capture and display of high quality digital video. The card accepts digital pixels from either a YUV 4:2:2 or an RGB 4:4:4 pixel bus and has been shown to handle up to 2.05 MBytes/second of compressed data. The compressed data is transmitted to a host memory area by Direct Memory Access operations. The card uses a single C-Cube CL550 JPEG processor that complies with baseline JPEG. We give broad descriptions of the hardware that controls the video interface, the CL550, and the system interface. Some critical design points that enhance the overall performance of M/JPEG systems are pointed out. The control of the adapter card is achieved by interrupt-driven software that runs under DOS. The software performs a variety of tasks, including change of color space (RGB or YUV), change of quantization and Huffman tables, odd and even field control, and some diagnostic operations.

  4. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in the world coordinate system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between pixels in the regions of interest are found under multiple constraints, and the targets are positioned by pixel clustering. The algorithm provides appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem in which a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences can be obtained by determining the optimal solution of this linear programming problem, and the three-dimensional positions of the targets can then be obtained by pixel clustering. Finally, we verified the algorithm in experiments with multiple cameras, which showed that it has high accuracy and robustness.
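
    The special case of this correspondence problem in which each pixel in one view pairs with at most one pixel in the other is the classic linear assignment problem, which can be solved directly; the cost matrix below is an assumed placeholder for the paper's line-of-sight distance, grayscale, and height constraints.

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def correspond(cost):
            # cost[i, j]: combined line-of-sight distance, grayscale-difference,
            # and height penalty for pairing ROI pixel i in camera A with ROI
            # pixel j in camera B. The optimal one-to-one pairing minimizing the
            # total cost is found in polynomial time.
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows, cols))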

  5. ViBe: a universal background subtraction algorithm for video sequences.

    PubMed

    Barnich, Olivier; Van Droogenbroeck, Marc

    2011-06-01

    This paper presents a technique for motion detection that incorporates several innovative mechanisms. For example, our proposed technique stores, for each pixel, a set of values taken in the past at the same location or in the neighborhood. It then compares this set to the current pixel value in order to determine whether that pixel belongs to the background, and adapts the model by choosing randomly which values to substitute from the background model. This approach differs from those based upon the classical belief that the oldest values should be replaced first. Finally, when the pixel is found to be part of the background, its value is propagated into the background model of a neighboring pixel. We describe our method in full detail (including pseudo-code and the parameter values used) and compare it to other background subtraction techniques. Efficiency figures show that our method outperforms recent and proven state-of-the-art methods in terms of both computation speed and detection rate. We also analyze the performance of a version of our algorithm downscaled to the absolute minimum of one comparison and one byte of memory per pixel. It appears that even such a simplified version of our algorithm performs better than mainstream techniques.
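
    A simplified grayscale sketch of the mechanisms listed above (per-pixel sample sets, conservative random replacement, and spatial propagation) follows. N = 20 samples, a matching radius of 20, two required matches, and a subsampling factor of 16 are the commonly cited defaults from the paper; the +/-1-pixel neighbor choice simplifies the 8-neighborhood, and the sample set is assumed to be initialized from the first frame.

        import numpy as np

        N, R, MIN_MATCHES, PHI = 20, 20, 2, 16

        def vibe_step(frame, samples, rng):
            """samples: (N, H, W) past values per pixel. Returns the foreground
            mask and updates `samples` in place (simplified grayscale variant)."""
            dist = np.abs(samples.astype(int) - frame.astype(int))
            background = (dist < R).sum(axis=0) >= MIN_MATCHES

            # Conservative, random-in-time update: each background pixel has a
            # 1/PHI chance of overwriting one randomly chosen sample...
            update = background & (rng.integers(0, PHI, frame.shape) == 0)
            idx = rng.integers(0, N, frame.shape)
            ys, xs = np.nonzero(update)
            samples[idx[ys, xs], ys, xs] = frame[ys, xs]

            # ...and the same chance of propagating its value into the model of
            # a random neighbor (spatial diffusion, +/-1-pixel shift here).
            prop = background & (rng.integers(0, PHI, frame.shape) == 0)
            ys, xs = np.nonzero(prop)
            ny = np.clip(ys + rng.integers(-1, 2, ys.size), 0, frame.shape[0] - 1)
            nx = np.clip(xs + rng.integers(-1, 2, xs.size), 0, frame.shape[1] - 1)
            samples[rng.integers(0, N, ys.size), ny, nx] = frame[ys, xs]
            return ~background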

  6. Miniaturized LEDs for flat-panel displays

    NASA Astrophysics Data System (ADS)

    Radauscher, Erich J.; Meitl, Matthew; Prevatte, Carl; Bonafede, Salvatore; Rotzoll, Robert; Gomez, David; Moore, Tanya; Raymond, Brook; Cok, Ronald; Fecioru, Alin; Trindade, António Jose; Fisher, Brent; Goodwin, Scott; Hines, Paul; Melnik, George; Barnhill, Sam; Bower, Christopher A.

    2017-02-01

    Inorganic light emitting diodes (LEDs) serve as bright pixel-level emitters in displays, from indoor/outdoor video walls with pixel sizes ranging from one to thirty millimeters to micro displays with more than one thousand pixels per inch. Pixel sizes that fall between those ranges, roughly 50 to 500 microns, are some of the most commercially significant ones, including flat panel displays used in smart phones, tablets, and televisions. Flat panel displays that use inorganic LEDs as pixel level emitters (μILED displays) can offer levels of brightness, transparency, and functionality that are difficult to achieve with other flat panel technologies. Cost-effective production of μILED displays requires techniques for precisely arranging sparse arrays of extremely miniaturized devices on a panel substrate, such as transfer printing with an elastomer stamp. Here we present lab-scale demonstrations of transfer printed μILED displays and the processes used to make them. Demonstrations include passive matrix μILED displays that use conventional off-the-shelf drive ASICs and active matrix μILED displays that use miniaturized pixel-level control circuits from CMOS wafers. We present a discussion of key considerations in the design and fabrication of highly miniaturized emitters for μILED displays.

  7. Weighted-MSE based on saliency map for assessing video quality of H.264 video streams

    NASA Astrophysics Data System (ADS)

    Boujut, H.; Benois-Pineau, J.; Hadar, O.; Ahmed, T.; Bonnet, P.

    2011-01-01

    The human vision system is very complex and has been studied for many years, specifically for purposes of efficient encoding of visual content, e.g. video from digital TV. There is physiological and psychological evidence indicating that viewers do not pay equal attention to all exposed visual information, but focus only on certain areas, known as the focus of attention (FOA) or saliency regions. In this work, we propose a novel objective quality assessment metric for the perceptual quality of decoded video sequences affected by transmission errors and packet losses. The proposed method weights the mean square error (MSE) at each pixel according to the computed saliency map, yielding a weighted MSE (WMSE). Our method was validated through subjective quality experiments.
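
    The weighting itself reduces to a normalized sum, sketched below under the assumption that the saliency map is non-negative, not identically zero, and aligned with the frames:

        import numpy as np

        def wmse(ref, dist, saliency):
            # Per-pixel squared error weighted by the saliency map, so errors in
            # likely focus-of-attention regions dominate the score.
            err = (ref.astype(float) - dist.astype(float)) ** 2
            w = saliency / saliency.sum()
            return (w * err).sum()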

  8. 47 CFR 27.19 - Requirements for operation of base and fixed stations in the 600 MHz downlink band in close...

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... stations in the 600 MHz downlink band in close proximity to Radio Astronomy Observatories. 27.19 Section 27... base and fixed stations in the 600 MHz downlink band in close proximity to Radio Astronomy Observatories. (a) Licensees must make reasonable efforts to protect the radio astronomy observatory at Green...

  9. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US in patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive to install in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with those from a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of using low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton modules (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit; the right and left cameras record videos of the right and left foot, respectively. A compatible embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurements of the feet obtained using the low-cost camera against the gold-standard, high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in the temperature measurements between the cameras was calculated for each subject, and the results show that this difference (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.

  10. Real-time detection of small and dim moving objects in IR video sequences using a robust background estimator and a noise-adaptive double thresholding

    NASA Astrophysics Data System (ADS)

    Zingoni, Andrea; Diani, Marco; Corsini, Giovanni

    2016-10-01

    We developed an algorithm for automatically detecting small and poorly contrasted (dim) moving objects in real time, within video sequences acquired through a steady infrared camera. The algorithm is suitable for different situations since it is independent of the background characteristics and of changes in illumination. Unlike other solutions, small objects of any size (down to a single pixel), either hotter or colder than the background, can be successfully detected. The algorithm is based on accurately estimating the background at the pixel level and then rejecting it. A novel approach permits background estimation to be robust to changes in the scene illumination and to noise, and not to be biased by the transit of moving objects. Care was taken to avoid computationally costly procedures, in order to ensure real-time performance even using low-cost hardware. The algorithm was tested on a dataset of 12 video sequences acquired in different conditions, providing promising results in terms of detection rate and false alarm rate, independently of background and object characteristics. In addition, the detection map was produced frame by frame in real time, using cheap commercial hardware. The algorithm is particularly suitable for applications in the fields of video surveillance and computer vision. Its reliability and speed permit it to be used in critical situations such as search and rescue, defence, and disaster monitoring.
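
    The abstract does not give the estimator itself; the sketch below is a generic stand-in showing the overall structure (pixel-level background estimate, robust noise estimate, double thresholding with hysteresis), not the authors' algorithm:

        import numpy as np
        from scipy import ndimage

        def detect_moving(frames, alpha=0.02, k_low=3.0, k_high=5.0):
            """Toy detector for a steady camera: per-pixel background with
            noise-adaptive double (hysteresis) thresholding."""
            bg = None
            for f in frames:
                if bg is None:
                    bg = f.astype(np.float64)
                    continue
                resid = f - bg
                # robust noise estimate via the median absolute deviation
                sigma = 1.4826 * np.median(np.abs(resid - np.median(resid)))
                strong = np.abs(resid) > k_high * sigma
                weak = np.abs(resid) > k_low * sigma
                # keep weak pixels only if connected to a strong detection
                labels, _ = ndimage.label(weak)
                keep = np.unique(labels[strong])
                mask = np.isin(labels, keep[keep > 0])
                # update the background only outside detections
                bg += alpha * np.where(mask, 0.0, resid)
                yield mask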

  11. Joint denoising, demosaicing, and chromatic aberration correction for UHD video

    NASA Astrophysics Data System (ADS)

    Jovanov, Ljubomir; Philips, Wilfried; Damstra, Klaas Jan; Ellenbroek, Frank

    2017-09-01

    High-resolution video capture is crucial for numerous applications such as surveillance, security, industrial inspection, medical imaging and digital entertainment. In the last two decades, we have witnessed a dramatic increase in the spatial resolution and the maximal frame rate of video capturing devices. Further resolution increases pose numerous challenges. As pixel size shrinks, the amount of light collected per pixel also decreases, raising the noise level. Moreover, the reduced pixel size makes lens imprecisions more pronounced, which especially applies to chromatic aberrations; even when high-quality lenses are used, some chromatic aberration artefacts remain. Noise levels additionally increase with higher frame rates. To reduce the complexity and the price of the camera, one sensor captures all three colors by relying on Color Filter Arrays. To obtain a full-resolution color image, the missing color components have to be interpolated, i.e. demosaicked, which is more challenging than at lower resolutions due to the increased noise and aberrations. In this paper, we propose a new method which jointly performs chromatic aberration correction, denoising and demosaicking. By jointly reducing all artefacts, we reduce the overall complexity of the system and the introduction of new artefacts. To reduce possible flicker we also perform temporal video enhancement. We evaluate the proposed method on a number of publicly available UHD sequences and on sequences recorded in our studio.

  12. Simulation of Downlink Synchronization for a Frequency-Hopped Satellite Communication System

    DTIC Science & Technology

    1992-04-01

    SIMULATION OF DOWNLINK SYNCHRONIZATION FOR A FREQUENCY-HOPPED SATELLITE COMMUNICATION SYSTEM (U) ... is offset by an increase in complexity while establishing the communication link, termed synchronization. This document describes a downlink synchronization process that involves the transmission of synchronization hops by the satellite and a two-step ground terminal synchronization procedure.

  13. Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera

    NASA Astrophysics Data System (ADS)

    Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi

    2015-08-01

    A Fizeau interferometer with instantaneous phase-shifting capability using a Wollaston prism is designed. To measure dynamic phase changes of objects, a high-speed video camera with a shutter speed of 10^-5 s is used with a pixelated phase mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface, it is possible to make the reference and object beams with orthogonal polarization states coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/sec.
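
    For a pixelated phase-mask camera, the wrapped phase can be recovered per superpixel with the four-bucket formula; the sketch below assumes a 2x2 layout carrying 0, pi/2, pi and 3pi/2 shifts (a common arrangement, not necessarily this instrument's):

        import numpy as np

        def phase_from_pixelated_mask(frame):
            """Wrapped phase from one frame, assuming a 2x2 superpixel
            with shifts 0 (top-left), pi/2 (top-right),
            3pi/2 (bottom-left) and pi (bottom-right)."""
            i0 = frame[0::2, 0::2].astype(np.float64)   # 0
            i1 = frame[0::2, 1::2].astype(np.float64)   # pi/2
            i2 = frame[1::2, 1::2].astype(np.float64)   # pi
            i3 = frame[1::2, 0::2].astype(np.float64)   # 3pi/2
            return np.arctan2(i3 - i1, i0 - i2)         # four-bucket algorithm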

  14. Video framerate, resolution and grayscale tradeoffs for undersea telemanipulator

    NASA Technical Reports Server (NTRS)

    Ranadive, V.; Sheridan, T. B.

    1981-01-01

    The product of frame rate (F) in frames per second, resolution (R) in total pixels, and grayscale (G) in bits equals the transmission bit rate in bits per second. Thus, for a fixed channel capacity there are tradeoffs between F, R, and G in the actual sampling of the picture for a particular manual control task, in the present case remote undersea manipulation. A manipulator was used in master/slave mode to study these tradeoffs. Images were systematically degraded from 28 frames per second, 128 x 128 pixels, and 16 levels (4 bits) of grayscale, with various FRG combinations constructed from a real-time digitized (charge-injection) video camera. It was found that frame rate, resolution, and grayscale could be independently reduced without preventing the operator from accomplishing his/her task. Threshold points were found beyond which degradation would prevent any successful performance. A general conclusion is that a well-trained operator can perform familiar remote manipulator tasks with a considerably degraded picture, down to 50 kbits/sec.
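
    The F x R x G product is easy to check numerically; the figures below reproduce the study's baseline and one combination near the reported ~50 kbit/s threshold:

        def channel_rate(frames_per_s, total_pixels, gray_bits):
            """Bit rate implied by the F x R x G product, in bits/s."""
            return frames_per_s * total_pixels * gray_bits

        # Baseline: 28 fps, 128 x 128 pixels, 4-bit grayscale
        print(channel_rate(28, 128 * 128, 4))   # 1,835,008 bits/s (~1.8 Mbit/s)
        # One degraded combination near the ~50 kbit/s regime:
        print(channel_rate(7, 64 * 64, 2))      # 57,344 bits/s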

  15. Computer-aided analysis for the Mechanics of Granular Materials (MGM) experiment

    NASA Technical Reports Server (NTRS)

    Parker, Joey K.

    1986-01-01

    The Mechanics of Granular Materials (MGM) program is planned to provide experimental determinations of the mechanics of granular materials under very low gravity conditions. The initial experiments will use small glass beads as the granular material, and precise tracking of individual beads during the test is desired. Real-time video images of the experimental specimen were taken with a television camera and subsequently digitized by a frame grabber installed in a microcomputer. Easily identified red tracer beads were randomly scattered throughout the test specimen. A set of Pascal programs was written for processing and analyzing the digitized images. Filtering the image with Laplacian, dilation, and blurring filters, followed by a threshold function, produced a binary (black on white) image which clearly identified the red beads. The centroid and area of each bead were then determined. Analyzing a series of the images determined individual red bead displacements throughout the experiment. The system can provide displacement accuracies on the order of 0.5 to 1 pixel if the image is taken directly from the video camera. Digitizing an image from a video cassette recorder introduces an additional repeatability error of 0.5 to 1 pixel. Other programs were written to provide hardcopy prints of the digitized images on a dot-matrix printer.
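
    A modern equivalent of that processing chain (threshold, then centroid and area per blob) fits in a few lines; the original used Pascal with additional Laplacian/dilation/blur filtering, which is omitted here, and the threshold polarity is an assumption:

        import numpy as np
        from scipy import ndimage

        def bead_centroids(gray, thresh):
            """Centroid and pixel area of each connected blob in a
            thresholded frame (tracer beads assumed darker than background)."""
            binary = gray < thresh
            labels, n = ndimage.label(binary)
            cents = ndimage.center_of_mass(binary, labels, range(1, n + 1))
            areas = ndimage.sum(binary, labels, range(1, n + 1))
            return list(zip(cents, areas))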

  16. A gaze-contingent display to study contrast sensitivity under natural viewing conditions

    NASA Astrophysics Data System (ADS)

    Dorr, Michael; Bex, Peter J.

    2011-03-01

    Contrast sensitivity has been extensively studied over the last decades and there are well-established models of early vision that were derived by presenting the visual system with synthetic stimuli such as sine-wave gratings near threshold contrasts. Natural scenes, however, contain a much wider distribution of orientations, spatial frequencies, and both luminance and contrast values. Furthermore, humans typically move their eyes two to three times per second under natural viewing conditions, but most laboratory experiments require subjects to maintain central fixation. We here describe a gaze-contingent display capable of performing real-time contrast modulations of video in retinal coordinates, thus allowing us to study contrast sensitivity when dynamically viewing dynamic scenes. Our system is based on a Laplacian pyramid for each frame that efficiently represents individual frequency bands. Each output pixel is then computed as a locally weighted sum of pyramid levels to introduce local contrast changes as a function of gaze. Our GPU implementation achieves real-time performance with more than 100 fps on high-resolution video (1920 by 1080 pixels) and a synthesis latency of only 1.5ms. Psychophysical data show that contrast sensitivity is greatly decreased in natural videos and under dynamic viewing conditions. Synthetic stimuli therefore only poorly characterize natural vision.
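
    A simplified CPU sketch of such a display stage (their GPU shader, calibration, and latency handling are omitted, and the eccentricity weighting below is illustrative):

        import cv2
        import numpy as np

        def foveated_reconstruction(frame, gaze_xy, levels=5, falloff=2.0):
            """Rebuild a grayscale frame from its Laplacian pyramid with
            gaze-dependent band gains: finer bands fade faster with
            eccentricity from the gaze point."""
            h, w = frame.shape[:2]
            gp = [frame.astype(np.float32)]
            for _ in range(levels):
                gp.append(cv2.pyrDown(gp[-1]))
            lp = [gp[i] - cv2.pyrUp(gp[i + 1], dstsize=(gp[i].shape[1], gp[i].shape[0]))
                  for i in range(levels)]
            out = gp[levels]
            for i in range(levels - 1, -1, -1):
                out = cv2.pyrUp(out, dstsize=(lp[i].shape[1], lp[i].shape[0]))
                lh, lw = lp[i].shape[:2]
                yy, xx = np.mgrid[0:lh, 0:lw].astype(np.float32)
                # eccentricity from gaze, expressed in full-frame units
                ecc = np.hypot(xx * w / lw - gaze_xy[0],
                               yy * h / lh - gaze_xy[1]) / max(h, w)
                gain = np.clip(1.0 - falloff * ecc * (levels - i) / levels, 0.0, 1.0)
                out = out + gain * lp[i]
            return out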

  17. Real-time distributed video coding for 1K-pixel visual sensor networks

    NASA Astrophysics Data System (ADS)

    Hanca, Jan; Deligiannis, Nikos; Munteanu, Adrian

    2016-07-01

    Many applications in visual sensor networks (VSNs) demand the low-cost wireless transmission of video data. In this context, distributed video coding (DVC) has proven its potential to achieve state-of-the-art compression performance while maintaining low computational complexity of the encoder. Despite their proven capabilities, current DVC solutions overlook hardware constraints, and this renders them unsuitable for practical implementations. This paper introduces a DVC architecture that offers highly efficient wireless communication in real-world VSNs. The design takes into account the severe computational and memory constraints imposed by practical implementations on low-resolution visual sensors. We study performance-complexity trade-offs for feedback-channel removal, propose learning-based techniques for rate allocation, and investigate various simplifications of side information generation yielding real-time decoding. The proposed system is evaluated against H.264/AVC intra, Motion-JPEG, and our previously designed DVC prototype for low-resolution visual sensors. Extensive experimental results on various data show significant improvements in multiple configurations. The proposed encoder achieves real-time performance on a 1k-pixel visual sensor mote. Real-time decoding is performed on a Raspberry Pi single-board computer or a low-end notebook PC. To the best of our knowledge, the proposed codec is the first practical DVC deployment on low-resolution VSNs.

  18. Reduction of ETS-VI Laser Communication Equipment Optical-Downlink Telemetry Collected During GOLD

    NASA Technical Reports Server (NTRS)

    Toyoshima, M.; Araki, K.; Arimoto, Y.; Toyoda, M.; Jeganathan, M.; Wilson, K.; Lesh, J. R.

    1997-01-01

    Free-space laser communications experiments were conducted between the laser communication equipment (LCE) on board the Japanese Engineering Test Satellite VI (ETS-VI) and the ground station located at the Table Mountain Facility (TMF) during late 1995 and early 1996. This article describes the on-line data reduction process used to decode LCE telemetry (called E2) downlinked on the optical carrier during the Ground/Orbiter Lasercomm Demonstration (GOLD) experiments. The LCE has the capability of transmitting real-time sensor and status information at 128 kbps by modulating the onboard diode laser. The optical downlink was detected on the ground, bit synchronized, and the resulting data stream stored on a data recorder. The recorded data were subsequently decoded by on-line data processing that included cross-correlation of the known telemetry data format and the downlink data stream. Signals obtained from the processing can be useful not only in evaluating the characteristics of the LCE but also in understanding uplink and downlink signal quality.
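
    The cross-correlation step can be illustrated compactly; the pattern and bit mapping below are illustrative, not the actual E2 telemetry format:

        import numpy as np

        def find_frame_sync(bits, sync_pattern):
            """Locate a known frame-sync pattern in a recorded downlink
            bit stream by cross-correlation over {-1,+1} symbols."""
            b = 2.0 * np.asarray(bits, dtype=np.float64) - 1.0
            p = 2.0 * np.asarray(sync_pattern, dtype=np.float64) - 1.0
            corr = np.correlate(b, p, mode="valid")
            offset = int(np.argmax(corr))
            return offset, corr[offset] / len(p)   # position, normalized peak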

  19. Automatic attention-based prioritization of unconstrained video for compression

    NASA Astrophysics Data System (ADS)

    Itti, Laurent

    2004-06-01

    We apply a biologically motivated algorithm that selects visually salient regions of interest in video streams to multiply-foveated video compression. Regions of high encoding priority are selected based on nonlinear integration of low-level visual cues, mimicking processing in primate occipital and posterior parietal cortex. A dynamic foveation filter then blurs (foveates) every frame, increasingly with distance from high-priority regions. Two variants of the model (one with continuously variable blur proportional to saliency at every pixel, and the other with blur proportional to distance from three independent foveation centers) are validated against eye fixations from 4-6 human observers on 50 video clips (synthetic stimuli, video games, outdoor day and night home video, television newscasts, sports, talk shows, etc.). Significant overlap is found between human and algorithmic foveations on every clip with one variant, and on 48 out of 50 clips with the other. Substantial reductions in compressed file size, to about half the original size on average, are obtained for foveated compared to unfoveated clips. These results suggest a general-purpose usefulness of the algorithm in improving compression ratios of unconstrained video.
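
    A toy version of the second variant (blur increasing with distance from foveation centers), assuming a grayscale frame and a small fixed set of blur levels:

        import cv2
        import numpy as np

        def foveate(frame, centers, sigma_max=8.0):
            """Blend progressively blurred copies of a grayscale frame
            according to distance from the nearest foveation center."""
            h, w = frame.shape[:2]
            yy, xx = np.mgrid[0:h, 0:w].astype(np.float32)
            d = np.full((h, w), np.inf, dtype=np.float32)
            for cx, cy in centers:
                d = np.minimum(d, np.hypot(xx - cx, yy - cy))
            d /= d.max()
            f32 = frame.astype(np.float32)
            blurred = [f32] + [cv2.GaussianBlur(f32, (0, 0), sigma_max * k / 3.0)
                               for k in (1, 2, 3)]
            idx = np.clip((d * 3).astype(int), 0, 3)   # blur level per pixel
            out = np.zeros_like(f32)
            for k in range(4):
                m = idx == k
                out[m] = blurred[k][m]
            return out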

  20. Survey and Analysis of Environmental Requirements for Shipboard Electronic Equipment Applications. Appendix B. Volume 3.

    DTIC Science & Technology

    1991-07-31

    memory banks Up to 1.25MByte SRAM 5 planes of 2048 x 1024 pixels Programmable video parameters max 720 x 512 pixels Sixteen colors TTL RGBI standard...bit I/O extension bus (VLXbus) Up to 2048 KByte 0-wait state static RAM BTT (Built-In-Test) PAL selectable dual ported VMEbus address Two RS-232/422...16, 25, or 33 MHz) A16/24:D08/16 VMEbus interface 8/16-bit I/O Extension bus (VLXbus) Up to 2048 KByte 32-bit wide static RAM -- 0-wait state at 16

  1. OPALS: Mission System Operations Architecture for an Optical Communications Demonstration on the ISS

    NASA Technical Reports Server (NTRS)

    Abrahamson, Matthew J.; Sindiy, Oleg V.; Oaida, Bogdan V.; Fregoso, Santos; Bowles-Martinez, Jessica N.; Kokorowski, Michael; Wilkerson, Marcus W.; Konyha, Alexander L.

    2014-01-01

    In spring 2014, the Optical PAyload for Lasercomm Science (OPALS) will launch to the International Space Station (ISS) to demonstrate space-to-ground optical communications. During a 90-day baseline mission, OPALS will downlink high quality, short duration videos to the Optical Communications Telescope Laboratory (OCTL) in Wrightwood, California. To achieve mission success, interfaces to the ISS payload operations infrastructure are established. For OPALS, the interfaces facilitate activity planning, hazardous laser operations, commanding, and telemetry transmission. In addition, internal processes such as pointing prediction and data processing satisfy the technical requirements of the mission. The OPALS operations team participates in Operational Readiness Tests (ORTs) with external partners to exercise coordination processes and train for the overall mission. The tests have provided valuable insight into operational considerations on the ISS.

  2. Chemical-garden formation, morphology, and composition. II. Chemical gardens in microgravity.

    PubMed

    Cartwright, Julyan H E; Escribano, Bruno; Sainz-Díaz, C Ignacio; Stodieck, Louis S

    2011-04-05

    We studied the growth of metal-ion silicate chemical gardens under Earth gravity (1 g) and microgravity (μg) conditions. Identical sets of reaction chambers from an automated system (the Silicate Garden Habitat or SGHab) were used in both cases. The μg experiment was performed on board the International Space Station (ISS) within a temperature-controlled setup that provided still and video images of the experiment downlinked to the ground. Calcium chloride, manganese chloride, cobalt chloride, and nickel sulfate were used as seed salts in sodium silicate solutions of several concentrations. The formation and growth of osmotic envelopes and microtubes was much slower under μg conditions. In 1 g, buoyancy forces caused tubes to grow upward, whereas a random orientation for tube growth was found under μg conditions.

  3. Quality evaluation of motion-compensated edge artifacts in compressed video.

    PubMed

    Leontaris, Athanasios; Cosman, Pamela C; Reibman, Amy R

    2007-04-01

    Little attention has been paid to an impairment common in motion-compensated video compression: the addition of high-frequency (HF) energy as motion compensation displaces blocking artifacts off block boundaries. In this paper, we employ an energy-based approach to measure this motion-compensated edge artifact, using both compressed bitstream information and decoded pixels. We evaluate the performance of our proposed metric, along with several blocking and blurring metrics, on compressed video in two ways. First, ordinal scales are evaluated through a series of expectations that a good quality metric should satisfy: the objective evaluation. Then, the best performing metrics are subjectively evaluated. The same subjective data set is finally used to obtain interval scales to gain more insight. Experimental results show that we accurately estimate the percentage of the added HF energy in compressed video.

  4. Automatic blood detection in capsule endoscopy video

    NASA Astrophysics Data System (ADS)

    Novozámský, Adam; Flusser, Jan; Tachecí, Ilja; Sulík, Lukáš; Bureš, Jan; Krejcar, Ondřej

    2016-12-01

    We propose two automatic methods for detecting bleeding in wireless capsule endoscopy videos of the small intestine. The first uses solely the color information, whereas the second incorporates assumptions about the shape and size of the blood spot. The main original contribution is the definition of a new color space that provides good separability between blood pixels and the intestinal wall. Both methods can be applied individually, or their results can be fused together for the final decision. We evaluate their individual performance and various fusion rules on real data, manually annotated by an endoscopist.

  5. Downlink Multihop Transmission Technique for Asymmetric Traffic Accommodation in DS-CDMA/FDD Cellular Communications

    NASA Astrophysics Data System (ADS)

    Mori, Kazuo; Naito, Katsuhiro; Kobayashi, Hideo

    This paper proposes an asymmetric traffic accommodation scheme using a multihop transmission technique for CDMA/FDD cellular communication systems. The proposed scheme applies multihop transmission to downlink packet transmissions that would require large transmission power as single-hop transmissions, in order to increase the downlink capacity. In these multihop transmissions, the vacant uplink band is used for transmissions from relay stations to destination mobile stations, which leads to further capacity enhancement in the downlink. The relay route selection method and power control method for the multihop transmissions are also investigated in the proposed scheme. The proposed scheme is evaluated by computer simulation, and the results show that it achieves better system performance.

  6. Second Harmonic Imaging improves Echocardiograph Quality on board the International Space Station

    NASA Technical Reports Server (NTRS)

    Garcia, Kathleen; Sargsyan, Ashot; Hamilton, Douglas; Martin, David; Ebert, Douglas; Melton, Shannon; Dulchavsky, Scott

    2008-01-01

    Ultrasound (US) capabilities have been part of the Human Research Facility (HRF) on board the International Space Station (ISS) since 2001. The US equipment on board the ISS includes a first-generation Tissue Harmonic Imaging (THI) option. Harmonic imaging (HI) uses the second harmonic response of the tissue to the ultrasound beam and produces robust tissue detail and signal. Since this is first-generation THI, there are inherent limitations in tissue penetration. As a breakthrough technology, HI extensively advanced the field of ultrasound. In cardiac applications, it drastically improves endocardial border detection and has become a common imaging modality. US images were captured and stored as JPEG stills from the ISS video downlink. US images with and without the harmonic imaging option were randomized and provided to volunteers without medical education or US skills for identification of the endocardial border. The results were processed and analyzed using applicable statistical calculations. Measurements in US images using HI improved measurement consistency and reproducibility among observers when compared to fundamental imaging. HI has been embraced by the imaging community at large as it improves the quality and data validity of US studies, especially in difficult-to-image cases. Even with the limitations of first-generation THI, HI improved the quality and measurability of many of the downlinked images from the ISS and should be an option utilized for cardiac imaging on board the ISS in all future space missions.

  7. Research on compression performance of ultrahigh-definition videos

    NASA Astrophysics Data System (ADS)

    Li, Xiangqun; He, Xiaohai; Qing, Linbo; Tao, Qingchuan; Wu, Di

    2017-11-01

    With the popularization of high-definition (HD) images and videos (1920×1080 pixels and above), there are now 4K (3840×2160) television signals and 8K (8192×4320) ultrahigh-definition videos. The demand for HD images and videos is increasing continuously, along with the data volume. Storage and transmission cannot be properly addressed simply by expanding hard-disk capacity and upgrading transmission equipment. Based on full use of the high-efficiency video coding (HEVC) standard, super-resolution reconstruction technology, and the correlation between intra- and inter-prediction, we first put forward a "division-compensation"-based strategy to further improve the compression performance of a single image and frame I. Then, using this approach together with the HEVC encoder and decoder, a video compression coding framework is designed, with HEVC used inside the framework. Finally, with super-resolution reconstruction technology, the reconstructed video quality is further improved. Experiments show that the performance of the proposed compression method for a single image (frame I) and video sequences is superior to that of HEVC in a low bit rate environment.

  8. Superimpose methods for uncooled infrared camera applied to the micro-scale thermal characterization of composite materials

    NASA Astrophysics Data System (ADS)

    Morikawa, Junko

    2015-05-01

    A mobile apparatus for quantitative micro-scale thermography using a micro-bolometer was developed, based on our original techniques: an achromatic lens design to capture micro-scale images in the long-wave infrared, video signal superimposing for real-time emissivity correction, and pseudo-acceleration of the time frame. The instrument was designed to fit in a 17 cm x 28 cm x 26 cm carrying box. The video signal synthesizer enables direct digital recording of monitored temperature and positioning data. The digital signal embedded in each image is decoded on readout, with the protocol to encode/decode the measured data defined by us. The mixed signals of the IR camera and the imposed data were applied to pixel-by-pixel emissivity corrections and to pseudo-acceleration of periodic thermal phenomena. Because the emissivity of industrial materials and biological tissues is usually inhomogeneous, each pixel has a different temperature dependence. The time-scale resolution for periodic thermal events was improved with the "pseudo-acceleration" algorithm, which reduces noise by integrating multiple image frames while keeping time resolution. The anisotropic thermal properties of composite materials such as cellular-plastic thermal insulation and biometric composite materials were analyzed using these techniques.
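
    Per-pixel emissivity correction can be approximated with a graybody Stefan-Boltzmann model; this is a textbook simplification of the radiometric chain, not the authors' implementation:

        import numpy as np

        def emissivity_correct(t_apparent_k, emissivity, t_ambient_k=293.15):
            """Graybody correction: the camera sees roughly
            eps*T_obj^4 + (1-eps)*T_amb^4, solved here for T_obj.
            t_apparent_k: per-pixel apparent temperature (K);
            emissivity: per-pixel emissivity map."""
            rad = t_apparent_k.astype(np.float64) ** 4
            obj = (rad - (1.0 - emissivity) * t_ambient_k ** 4) / emissivity
            return np.clip(obj, 0.0, None) ** 0.25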

  9. Liquid crystal television custom drive circuit

    NASA Astrophysics Data System (ADS)

    Loudin, Jeffrey A.

    1994-03-01

    A new drive circuit for the liquid crystal display (LCD) of the InFocus TVT-6000TM video projector is currently under development at the U.S. Army Missile Command. The new circuit will allow individual pixel control of the LCD. This paper will discuss results of the effort to date.

  10. Notions of Technology and Visual Literacy

    ERIC Educational Resources Information Center

    Stankiewicz, Mary Ann

    2004-01-01

    For many art educators, the word "technology" conjures up visions of overhead projectors and VCRs, video and digital cameras, computers equipped with graphic programs and presentation software, digital labs where images rendered in pixels replace the debris of charcoal dust and puddled paints. One forgets that visual literacy and technology have…

  11. In-situ calibration of nonuniformity in infrared staring and modulated systems

    NASA Astrophysics Data System (ADS)

    Black, Wiley T.

    Infrared cameras can directly measure the apparent temperature of objects, providing thermal imaging. However, the raw output from most infrared cameras suffers from a strong, often limiting noise source called nonuniformity. Manufacturing imperfections in infrared focal planes lead to high pixel-to-pixel sensitivity to electronic bias, focal plane temperature, and other effects. The resulting imagery can only provide useful thermal imaging after a nonuniformity calibration has been performed. Traditionally, these calibrations are performed by momentarily blocking the field of view with a flat-temperature plate or blackbody cavity. However, because the pattern is a coupling of manufactured sensitivities with operational variations, periodic recalibration is required, sometimes on the order of tens of seconds. A class of computational methods called Scene-Based Nonuniformity Correction (SBNUC) has been researched for over 20 years, in which the nonuniformity calibration is estimated in digital processing by analysis of the video stream in the presence of camera motion. The most sophisticated SBNUC methods can completely and robustly eliminate the high-spatial-frequency component of nonuniformity with only an initial reference calibration or potentially no physical calibration. I will demonstrate a novel algorithm that advances these SBNUC techniques to support all spatial frequencies of nonuniformity correction. Long-wave infrared microgrid polarimeters are a class of camera that incorporate a microscale per-pixel wire-grid polarizer directly affixed to each pixel of the focal plane. These cameras have the capability of simultaneously measuring thermal imagery and polarization in a robust integrated package with no moving parts. I will describe the necessary adaptations of my SBNUC method to operate on this class of sensor, as well as demonstrate SBNUC performance on LWIR polarimetry video collected on the UA mall.
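
    For orientation, the classic constant-statistics SBNUC baseline is shown below; it assumes enough camera motion that every pixel sees the same scene statistics over time, and it is a textbook method, not the dissertation's algorithm:

        import numpy as np

        def constant_statistics_nuc(frames):
            """Per-pixel gain/offset from temporal statistics: after
            correction every pixel has the same temporal mean and std."""
            stack = np.stack([f.astype(np.float64) for f in frames])
            mu = stack.mean(axis=0)            # per-pixel temporal mean
            sd = stack.std(axis=0) + 1e-9      # per-pixel temporal std
            gain = sd.mean() / sd
            offset = mu.mean() - gain * mu
            return gain, offset                # corrected = gain * raw + offset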

  12. Studies on image compression and image reconstruction

    NASA Technical Reports Server (NTRS)

    Sayood, Khalid; Nori, Sekhar; Araj, A.

    1994-01-01

    During this six-month period our work concentrated on three somewhat different areas. We examined and developed a number of error concealment schemes for use in a variety of video coding environments. This work is described in an accompanying (draft) Master's thesis, in which we describe the application of these techniques to the MPEG video coding scheme. We felt that the unique frame-ordering approach used in the MPEG scheme would be a challenge to any error concealment/error recovery technique. We continued our work in the vector quantization area and developed a new type of vector quantizer, which we call a scan predictive vector quantizer. The scan predictive VQ was tested on data processed at Goddard to approximate Landsat 7 HRMSI resolution and compared favorably with existing VQ techniques. A paper describing this work is included. The third area is concerned more with reconstruction than compression. While there is a variety of efficient lossless image compression schemes, they all share the property that they use past data to encode future data, whether via taking differences, context modeling, or building dictionaries. When encoding large images, this common property becomes a common flaw. When the user wishes to decode just a portion of the image, the requirement that the past history be available forces the decoding of a significantly larger portion of the image than desired. Even with intelligent partitioning of the image dataset, the number of pixels decoded may be four times the number of pixels requested. We have developed an adaptive scanning strategy which can be used with any lossless compression scheme and which lowers the additional number of pixels to be decoded to about 7 percent of the number of pixels requested. A paper describing these results is included.

  13. Video scrambling for privacy protection in video surveillance: recent results and validation framework

    NASA Astrophysics Data System (ADS)

    Dufaux, Frederic

    2011-06-01

    The issue of privacy in video surveillance has drawn a lot of interest lately. However, thorough performance analysis and validation is still lacking, especially regarding the fulfillment of privacy-related requirements. In this paper, we first review recent Privacy Enabling Technologies (PET). Next, we discuss pertinent evaluation criteria for effective privacy protection. We then put forward a framework to assess the capacity of PET solutions to hide distinguishing facial information and to conceal identity. We conduct comprehensive and rigorous experiments to evaluate the performance of face recognition algorithms applied to images altered by PET. Results show the ineffectiveness of naïve PET such as pixelization and blur. Conversely, they demonstrate the effectiveness of more sophisticated scrambling techniques to foil face recognition.

  14. Capture and playback synchronization in video conferencing

    NASA Astrophysics Data System (ADS)

    Shae, Zon-Yin; Chang, Pao-Chi; Chen, Mon-Song

    1995-03-01

    Packet-switching-based video conferencing has emerged as one of the most important multimedia applications. Lip synchronization can be disrupted in the packet network as a result of network properties: packet delay jitter at the capture end, network delay jitter, packet loss, packets arriving out of sequence, local clock mismatch, and video playback overlay with the graphics system. The synchronization problem becomes more demanding given the real-time and multiparty requirements of video conferencing applications. Some of the above-mentioned problems can be solved in more advanced network architectures, as ATM has promised. This paper presents solutions to these problems that can be applied at end-station terminals in today's massively deployed packet-switching networks. The playback scheme in the end station consists of two units: a compression-domain buffer management unit and a pixel-domain buffer management unit. The pixel-domain buffer management unit is responsible for removing the annoying frame-shearing effect in the display. The compression-domain buffer management unit is responsible for parsing the incoming packets to identify the complete data blocks in the compressed data stream which can be decoded independently. It is also responsible for concealing the effects of clock mismatch, lip synchronization, packet loss, out-of-sequence arrival, and network jitter. This scheme can also be applied to the multiparty teleconferencing environment. Some of the schemes presented in this paper have been implemented in the Multiparty Multimedia Teleconferencing (MMT) system prototype at the IBM Watson Research Center.

  15. Thermal imagers: from ancient analog video output to state-of-the-art video streaming

    NASA Astrophysics Data System (ADS)

    Haan, Hubertus; Feuchter, Timo; Münzberg, Mario; Fritze, Jörg; Schlemmer, Harry

    2013-06-01

    The video output of thermal imagers stayed constant over almost two decades. When the famous Common Modules were employed a thermal image at first was presented to the observer in the eye piece only. In the early 1990s TV cameras were attached and the standard output was CCIR. In the civil camera market output standards changed to digital formats a decade ago with digital video streaming being nowadays state-of-the-art. The reasons why the output technique in the thermal world stayed unchanged over such a long time are: the very conservative view of the military community, long planning and turn-around times of programs and a slower growth of pixel number of TIs in comparison to consumer cameras. With megapixel detectors the CCIR output format is not sufficient any longer. The paper discusses the state-of-the-art compression and streaming solutions for TIs.

  16. Compressed-domain video indexing techniques using DCT and motion vector information in MPEG video

    NASA Astrophysics Data System (ADS)

    Kobla, Vikrant; Doermann, David S.; Lin, King-Ip; Faloutsos, Christos

    1997-01-01

    Development of various multimedia applications hinges on the availability of fast and efficient storage, browsing, indexing, and retrieval techniques. Given that video is typically stored efficiently in a compressed format, if we can analyze the compressed representation directly, we can avoid the costly overhead of decompressing and operating at the pixel level. Compressed-domain parsing of video has been presented in earlier work, where a video clip is divided into shots, subshots, and scenes. In this paper, we describe key frame selection, feature extraction, and indexing and retrieval techniques that are directly applicable to MPEG compressed video. We develop a frame-type-independent representation of the various types of frames present in an MPEG video, in which all frames can be considered equivalent. Features are derived from the available DCT, macroblock, and motion vector information and mapped to a low-dimensional space where they can be accessed with standard database techniques. The spatial information is used as the primary index while the temporal information is used to enhance the robustness of the system during the retrieval process. The techniques presented enable fast archiving, indexing, and retrieval of video. Our operational prototype typically takes a fraction of a second to retrieve similar video scenes from our database, with over 95% success.

  17. Robust video super-resolution with registration efficiency adaptation

    NASA Astrophysics Data System (ADS)

    Zhang, Xinfeng; Xiong, Ruiqin; Ma, Siwei; Zhang, Li; Gao, Wen

    2010-07-01

    Super-Resolution (SR) is a technique to construct a high-resolution (HR) frame by fusing a group of low-resolution (LR) frames describing the same scene. The effectiveness of conventional super-resolution techniques, when applied to video sequences, relies strongly on the efficiency of motion alignment achieved by image registration. Unfortunately, such efficiency is limited by the motion complexity in the video and the capability of the adopted motion model. In image regions with severe registration errors, annoying artifacts usually appear in the produced super-resolution video. This paper proposes a robust video super-resolution technique that adapts itself to the spatially varying registration efficiency. The reliability of each reference pixel is measured by the corresponding registration error and incorporated into the optimization objective function of SR reconstruction. This makes the SR reconstruction highly immune to registration errors, as outliers with higher registration errors are assigned lower weights in the objective function. In particular, we carefully design a mechanism to assign weights according to registration errors. The proposed super-resolution scheme has been tested with various video sequences, and experimental results clearly demonstrate the effectiveness of the proposed method.
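
    The paper's weighting mechanism is its own contribution and is not given in the abstract; as a generic illustration only, a Gaussian mapping from registration error to weight could look like this:

        import numpy as np

        def registration_weights(ref_warped, target, sigma=4.0):
            """Per-pixel weight that decays with registration error, so
            badly aligned reference pixels contribute little to the SR
            objective (illustrative Gaussian mapping, not the paper's)."""
            err = np.abs(ref_warped.astype(np.float64) -
                         target.astype(np.float64))
            return np.exp(-(err ** 2) / (2.0 * sigma ** 2))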

  18. Three-directional motion-compensation mask-based novel look-up table on graphics processing units for video-rate generation of digital holographic videos of three-dimensional scenes.

    PubMed

    Kwon, Min-Woo; Kim, Seung-Cheol; Kim, Eun-Soo

    2016-01-20

    A three-directional motion-compensation mask-based novel look-up table method is proposed and implemented on graphics processing units (GPUs) for video-rate generation of digital holographic videos of three-dimensional (3D) scenes. Since the proposed method is designed to be well matched with the software and memory structures of GPUs, the number of compute-unified-device-architecture kernel function calls can be significantly reduced. This results in a great increase in the computational speed of the proposed method, allowing video-rate generation of computer-generated hologram (CGH) patterns of 3D scenes. Experimental results reveal that the proposed method can generate 39.8 frames per second of 1920×1080-pixel Fresnel CGH patterns for the test 3D video scenario with 12,088 object points on dual GPU boards of NVIDIA GTX TITANs, confirming the feasibility of the proposed method in practical applications of electroholographic 3D displays.

  19. Feasibility of video codec algorithms for software-only playback

    NASA Astrophysics Data System (ADS)

    Rodriguez, Arturo A.; Morse, Ken

    1994-05-01

    Software-only video codecs can provide good playback performance in desktop computers with a 486 or 68040 CPU running at 33 MHz without special hardware assistance. Typically, playback of compressed video can be categorized into three tasks: the actual decoding of the video stream, color conversion, and the transfer of decoded video data from system RAM to video RAM. By current standards, good playback performance is the decoding and display of video streams of 320 by 240 (or larger) compressed frames at 15 (or greater) frames per second. Software-only video codecs have evolved by modifying and tailoring existing compression methodologies to suit video playback in desktop computers. In this paper we examine the characteristics used to evaluate software-only video codec algorithms, namely: image fidelity (i.e., image quality), bandwidth (i.e., compression), ease of decoding (i.e., playback performance), memory consumption, compression-to-decompression asymmetry, scalability, and delay. We discuss the tradeoffs among these variables and the compromises that can be made to achieve low numerical complexity for software-only playback. Frame-differencing approaches are described, since software-only video codecs typically employ them to enhance playback performance. To complement other papers that appear in this session of the Proceedings, we review methods derived from binary pattern image coding, since these methods are amenable to software-only playback. In particular, we introduce a novel approach called pixel distribution image coding.
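
    Frame differencing for conditional replenishment can be sketched in a few lines; the block size and threshold below are illustrative:

        import numpy as np

        def changed_blocks(prev, curr, block=16, thresh=5.0):
            """Flag blocks whose mean absolute difference from the previous
            frame exceeds a threshold; only those need re-encoding."""
            h, w = curr.shape
            flags = []
            for y in range(0, h - h % block, block):
                for x in range(0, w - w % block, block):
                    d = np.abs(curr[y:y + block, x:x + block].astype(np.int16) -
                               prev[y:y + block, x:x + block].astype(np.int16)).mean()
                    if d > thresh:
                        flags.append((y, x))
            return flags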

  20. Micro-pixelation and color mixing in biological photonic structures (presentation video)

    NASA Astrophysics Data System (ADS)

    Bartl, Michael H.; Nagi, Ramneet K.

    2014-03-01

    The world of insects displays myriad hues of coloration effects produced by elaborate nano-scale architectures built into wings and exoskeleton. For example, we have recently found many weevils possess photonic architectures with cubic lattices. In this talk, we will present high-resolution three-dimensional reconstructions of weevil photonic structures with diamond and gyroid lattices. Moreover, by reconstructing entire scales we found arrays of single-crystalline domains, each oriented such that only selected crystal faces are visible to an observer. This pixel-like arrangement is key to the angle-independent coloration typical of weevils—a strategy that could enable a new generation of coating technologies.

  1. An improved real time superresolution FPGA system

    NASA Astrophysics Data System (ADS)

    Lakshmi Narasimha, Pramod; Mudigoudar, Basavaraj; Yue, Zhanfeng; Topiwala, Pankaj

    2009-05-01

    In numerous computer vision applications, enhancing the quality and resolution of captured video can be critical. Acquired video is often grainy and of low quality due to motion, transmission bottlenecks, etc. Post-processing can enhance it. Superresolution greatly decreases camera jitter to deliver smooth, stabilized, high-quality video. In this paper, we extend previous work on a real-time superresolution application implemented in ASIC/FPGA hardware. A gradient-based technique is used to register the frames at the sub-pixel level. Once we obtain the high-resolution grid, we use an improved regularization technique in which the image is iteratively modified by applying back-projection to get a sharp and undistorted image. The algorithm was first tested in software and then migrated to hardware, achieving 320x240 -> 1280x960 at about 30 fps, a 16X superresolution in total pixels. Various input parameters, such as the size of the input image, the enlarging factor, and the number of nearest neighbors, can be tuned conveniently by the user. We use a maximum word size of 32 bits to implement the algorithm in Matlab Simulink as well as in FPGA hardware, which gives a fine balance between the number of bits and performance. The proposed system is robust and highly efficient. We show the performance improvement of the hardware superresolution over the software version (C code).

  2. Impulsive noise suppression in color images based on the geodesic digital paths

    NASA Astrophysics Data System (ADS)

    Smolka, Bogdan; Cyganek, Boguslaw

    2015-02-01

    In the paper a novel filtering design based on the concept of exploration of the pixel neighborhood by digital paths is presented. The paths start from the boundary of a filtering window and reach its center. The cost of transitions between adjacent pixels is defined in the hybrid spatial-color space. Then, an optimal path of minimum total cost, leading from pixels of the window's boundary to its center is determined. The cost of an optimal path serves as a degree of similarity of the central pixel to the samples from the local processing window. If a pixel is an outlier, then all the paths starting from the window's boundary will have high costs and the minimum one will also be high. The filter output is calculated as a weighted mean of the central pixel and an estimate constructed using the information on the minimum cost assigned to each image pixel. So, first the costs of optimal paths are used to build a smoothed image and in the second step the minimum cost of the central pixel is utilized for construction of the weights of a soft-switching scheme. The experiments performed on a set of standard color images, revealed that the efficiency of the proposed algorithm is superior to the state-of-the-art filtering techniques in terms of the objective restoration quality measures, especially for high noise contamination ratios. The proposed filter, due to its low computational complexity, can be applied for real time image denoising and also for the enhancement of video streams.

  3. Characterization of multiport solid state imagers at megahertz data rates

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yates, G.J.; Pena, C.R.; Turko, B.T.

    1994-08-01

    Test results obtained from two recently developed multiport Charge-Coupled Devices (CCDs) operated at pixel rates in the 10-to-100 MHz range will be presented. The CCDs were evaluated in Los Alamos National Laboratory's High Speed Solid State Imager Test Station (HSTS), which features PC-based programmable clock waveform generation (Tektronix DAS 9200) and synchronously clocked Digital Sampling Oscilloscopes (DSOs) (LeCroy 9424/9314 series) for CCD pixel data acquisition, analysis and storage. The HSTS also provided specially designed optical pinhole array test patterns in the 5-to-50 micron diameter range for use with Xenon strobe and pulsed laser light sources to simultaneously provide multiple single-pixel illumination patterns to study CCD point-spread-function (PSF) and pixel smear characteristics. The two CCDs tested, EEV model CCD-13 and EG&G Reticon model HSO512J, are both 512 × 512 pixel arrays with eight (8) and sixteen (16) video output ports respectively. Both devices are generically Frame Transfer CCDs (FT CCDs) designed for parallel bi-directional vertical readout to augment their multiport design for increased pixel rates over common single-port serial readout architecture. Although both CCDs were tested similarly, differences in their designs precluded normalization or any direct comparison of test results. Rate-dependent parameters investigated include S/N, PSF, and MTF. The performance observed for the two imagers at various pixel rates from selected typical output ports is discussed.

  4. Situational Awareness: A Feasibility Investigation of Near-Threshold Skills Development

    DTIC Science & Technology

    1994-01-01

    management and analysis. TS 1 consisted of a medium resolution (640-pixel x 240- line) 13-in. Magnavox RGB Monitor 80 (Model CM 8562 color video monitor), a...timed relaxation period between each training block (20 trials) and a 10- to 15-min rest period between each training protocol. Refreshments and snacks

  5. Motion video compression system with neural network having winner-take-all function

    NASA Technical Reports Server (NTRS)

    Fang, Wai-Chi (Inventor); Sheu, Bing J. (Inventor)

    1997-01-01

    A motion video data system includes a compression system, including an image compressor, an image decompressor correlative to the image compressor having an input connected to an output of the image compressor, a feedback summing node having one input connected to an output of the image decompressor, a picture memory having an input connected to an output of the feedback summing node, apparatus for comparing an image stored in the picture memory with a received input image and deducing therefrom pixels having differences between the stored image and the received image and for retrieving from the picture memory a partial image including those pixels only and applying the partial image to another input of the feedback summing node, whereby to produce at the output of the feedback summing node an updated decompressed image, and a subtraction node having one input connected to receive the received image and another input connected to receive the partial image so as to generate a difference image, the image compressor having an input connected to receive the difference image whereby to produce a compressed difference image at the output of the image compressor.

  6. Opals: Mission System Operations Architecture for an Optical Communications Demonstration on the ISS

    NASA Technical Reports Server (NTRS)

    Abrahamson, Matthew J.; Sindiy, Oleg V.; Oaida, Bogdan V.; Fregoso, Santos; Bowles-Martinez, Jessica N.; Kokorowski, Michael; Wilkerson, Marcus W.; Konyha, Alexander L.

    2014-01-01

    In April of 2014, the Optical PAyload for Lasercomm Science (OPALS) Flight System (FS) launched to the International Space Station (ISS) to demonstrate space-to-ground optical communications. During a planned 90-day baseline mission, the OPALS FS will downlink high quality, short duration videos to the Optical Communications Telescope Laboratory (OCTL) ground station in Wrightwood, California. Interfaces to the ISS payload operations infrastructure have been established to facilitate activity planning, hazardous laser operations, commanding, and telemetry transmission. In addition, internal processes, such as pointing prediction and data processing, satisfy the technical requirements of the mission. The OPALS operations team participates in Operational Readiness Tests (ORTs) with external partners to exercise coordination processes and train for the overall mission. The ORTs have provided valuable insight into operational considerations for the instrument on the ISS.

  7. Digital TV tri-state delta modulation system for Space Shuttle ku-band downlink

    NASA Technical Reports Server (NTRS)

    Udalov, S.; Huth, G. K.; Roberts, D.; Batson, B. H.

    1982-01-01

    A tri-state delta modulation/demodulation (TSDM) technique which provides efficient run-length coding of constant-intensity segments of a TV picture is described. Aspects of the hardware implementation of a high-speed TSDM transmitter and receiver for black-and-white TV, field-sequential color, or NTSC-format color are reviewed. Run-length encoding of the TSDM output can consistently reduce the required channel data rate to well below one bit per sample. Compared with a bi-state delta modulation system, the present technique eliminates granularity in the reconstructed video without degrading rise or fall times. TSDM uses about 40 chips when handling the luminance information in a color link. A possible overall space and ground functional configuration to accommodate Shuttle digital TV with scrambling for privacy is presented.
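
    The tri-state idea and the run-length payoff can be sketched as follows; the step and dead-band values are illustrative, not the flight hardware's parameters:

        def tsdm_encode(samples, step=4, deadband=2):
            """Tri-state delta modulation: emit +1/-1 when the tracking
            estimate must rise/fall, 0 inside a dead band. Long zero runs
            correspond to constant-intensity picture segments."""
            est, symbols = 0, []
            for s in samples:
                err = int(s) - est
                if err > deadband:
                    symbols.append(+1); est += step
                elif err < -deadband:
                    symbols.append(-1); est -= step
                else:
                    symbols.append(0)
            return symbols

        def run_length(symbols):
            """Collapse the symbol stream into (symbol, run) pairs."""
            out, i = [], 0
            while i < len(symbols):
                j = i
                while j < len(symbols) and symbols[j] == symbols[i]:
                    j += 1
                out.append((symbols[i], j - i))
                i = j
            return out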

  8. Development of the ISS EMU Dashboard Software

    NASA Technical Reports Server (NTRS)

    Bernard, Craig; Hill, Terry R.

    2011-01-01

    The EMU (Extra-Vehicular Mobility Unit) Dashboard was developed at NASA's Johnson Space Center to aid real-time mission support for the ISS (International Space Station) and Shuttle EMU space suit by time-synchronizing downlinked video, space suit data, and audio from the mission control audio loops. Once the input streams are synchronized and recorded, the data can be replayed almost instantly, which has proven invaluable in understanding in-flight hardware anomalies and playing back information conveyed by the crew to mission control and the back-room support. This paper will walk through the development, from an engineer's idea brought to life by an intern to real-time mission support, and describe how this tool is evolving today along with its challenges in supporting EVAs (Extra-Vehicular Activities) and human exploration in the 21st century.

  9. Visible-regime polarimetric imager: a fully polarimetric, real-time imaging system.

    PubMed

    Barter, James D; Thompson, Harold R; Richardson, Christine L

    2003-03-20

    A fully polarimetric optical camera system has been constructed to obtain polarimetric information simultaneously from four synchronized charge-coupled device imagers at video frame rates of 60 Hz and a resolution of 640 x 480 pixels. The imagers view the same scene along the same optical axis by means of a four-way beam-splitting prism similar to ones used for multiple-imager, common-aperture color TV cameras. Appropriate polarizing filters in front of each imager provide the polarimetric information. Mueller matrix analysis of the polarimetric response of the prism, analyzing filters, and imagers is applied to the detected intensities in each imager as a function of the applied state of polarization over a wide range of linear and circular polarization combinations to obtain an average polarimetric calibration consistent to approximately 2%. Higher accuracies can be obtained by improvement of the polarimetric modeling of the splitting prism and by implementation of a pixel-by-pixel calibration.
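
    With idealized analyzers (linear at 0, 45, and 90 degrees plus one circular channel), the Stokes vector follows directly from the four registered images; the real system instead applies the Mueller-matrix calibration described above:

        import numpy as np

        def stokes_from_channels(i0, i45, i90, i_rcp):
            """Stokes images from four synchronized, co-registered imagers
            behind idealized polarization analyzers."""
            s0 = i0 + i90                  # total intensity
            s1 = i0 - i90                  # horizontal vs vertical
            s2 = 2.0 * i45 - s0            # +45 vs -45 degrees
            s3 = 2.0 * i_rcp - s0          # right vs left circular
            return np.stack([s0, s1, s2, s3])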

  10. Spatio-Temporal Video Segmentation with Shape Growth or Shrinkage Constraint

    NASA Technical Reports Server (NTRS)

    Tarabalka, Yuliya; Charpiat, Guillaume; Brucker, Ludovic; Menze, Bjoern H.

    2014-01-01

    We propose a new method for joint segmentation of monotonously growing or shrinking shapes in a time sequence of noisy images. The task of segmenting the image time series is expressed as an optimization problem using the spatio-temporal graph of pixels, in which we are able to impose the constraint of shape growth or shrinkage by introducing monodirectional infinite links connecting pixels at the same spatial locations in successive image frames. The globally optimal solution is computed with a graph cut. The performance of the proposed method is validated on three applications: segmentation of melting sea ice floes and of growing burned areas from time series of 2D satellite images, and segmentation of a growing brain tumor from sequences of 3D medical scans. In the latter application, we impose an additional inter-sequence inclusion constraint by adding directed infinite links between pixels of dependent image structures.

  11. A rain pixel recovery algorithm for videos with highly dynamic scenes.

    PubMed

    Jie Chen; Lap-Pui Chau

    2014-03-01

    Rain removal is a very useful and important technique in applications such as security surveillance and movie editing. Several rain removal algorithms have been proposed in recent years, exploiting photometric, chromatic, and probabilistic properties of rain to detect and remove its effect. Current methods generally work well with light rain and relatively static scenes; when dealing with heavier rainfall in dynamic scenes, they give very poor visual results. The proposed algorithm is based on motion segmentation of the dynamic scene. After applying photometric and chromatic constraints for rain detection, rain removal filters are applied to pixels such that their dynamic properties as well as motion occlusion cues are considered; both spatial and temporal information is then adaptively exploited during rain pixel recovery. Results show that the proposed algorithm performs much better on rainy scenes with large motion than existing algorithms.
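
    A common baseline for the recovery step is a temporal median over neighboring frames; the paper goes further by gating recovery with motion segmentation, which is omitted in this sketch:

        import numpy as np

        def recover_rain_pixels(frames, rain_masks, k=2):
            """Replace detected rain pixels with a temporal median over
            +/-k neighboring frames (static-scene assumption)."""
            out, n = [], len(frames)
            for t, (f, m) in enumerate(zip(frames, rain_masks)):
                lo, hi = max(0, t - k), min(n, t + k + 1)
                med = np.median(np.stack(frames[lo:hi]), axis=0)
                g = f.copy()
                g[m] = med[m]
                out.append(g)
            return out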

  12. Probabilistic multi-resolution human classification

    NASA Astrophysics Data System (ADS)

    Tu, Jun; Ran, H.

    2006-02-01

    Recently there has been growing interest in using infrared cameras for human detection because of their sharply decreasing prices. The training data used in our work for developing the probabilistic template consists of images known to contain humans in different poses and orientations but having the same height. Multi-resolution templates, based on contours and edges, are constructed so that the model does not learn the intensity variations among background pixels or among foreground pixels. Each template at every level is then translated so that the centroid of the non-zero pixels matches the geometrical center of the image. After this normalization step, for each pixel of the template, the probability of it being pedestrian is calculated based on how frequently it appears as 1 in the training data. We also use gait periodicity to verify the pedestrian in a Bayesian manner for the whole blob, in a probabilistic way. The videos had considerable variation in scenes, sizes of people, amount of occlusion, and clutter in the backgrounds. Preliminary experiments show the robustness of the approach.
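
    The probabilistic template reduces to a per-pixel frequency over aligned training silhouettes; a minimal sketch, with a log-likelihood score for matching a candidate window:

        import numpy as np

        def build_probabilistic_template(binary_masks):
            """Per-pixel probability of 'pedestrian': the frequency with
            which each pixel is 1 across centroid-aligned silhouettes."""
            return np.mean(np.stack(binary_masks).astype(np.float64), axis=0)

        def template_log_score(template, candidate, eps=1e-6):
            """Log-likelihood of a binary candidate window under the template."""
            p = np.clip(template, eps, 1.0 - eps)
            c = candidate.astype(bool)
            return float(np.sum(np.where(c, np.log(p), np.log(1.0 - p))))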

  13. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering

    PubMed Central

    Mars, Kamel; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro

    2017-01-01

    Raman imaging eliminates the need for staining procedures, providing label-free imaging to study biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, there has been a lack of detectors suitable for MHz-modulation-rate parallel detection, i.e., detecting multiple small SRS signals while eliminating the extremely strong offset due to direct laser light. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering that is capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The generated small SRS signal is extracted and amplified in a pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation and an in-pixel pair of a low-pass filter, a sample-and-hold circuit, and a switched-capacitor integrator using a fully differential amplifier. A prototype chip was fabricated using a 0.11 μm CMOS image sensor technology process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system. PMID:29120358

  14. Label-Free Biomedical Imaging Using High-Speed Lock-In Pixel Sensor for Stimulated Raman Scattering.

    PubMed

    Mars, Kamel; Lioe, De Xing; Kawahito, Shoji; Yasutomi, Keita; Kagawa, Keiichiro; Yamada, Takahiro; Hashimoto, Mamoru

    2017-11-09

    Raman imaging eliminates the need for staining procedures, providing label-free imaging for the study of biological samples. Recent developments in stimulated Raman scattering (SRS) have achieved fast acquisition speed and hyperspectral imaging. However, detectors suitable for parallel detection at MHz modulation rates, which must detect multiple small SRS signals while eliminating the extremely strong offset due to direct laser light, have been lacking. In this paper, we present a complementary metal-oxide semiconductor (CMOS) image sensor using high-speed lock-in pixels for stimulated Raman scattering, capable of obtaining the difference of the Stokes-on and Stokes-off signals at a modulation frequency of 20 MHz in the pixel, before readout. The small SRS signal generated is extracted and amplified in-pixel using a high-speed, large-area lateral electric field charge modulator (LEFM) employing two-step ion implantation, together with an in-pixel pair of low-pass filters, a sample-and-hold circuit, and a switched-capacitor integrator using a fully differential amplifier. A prototype chip is fabricated in a 0.11 μm CMOS image sensor process. SRS spectra and images of stearic acid and 3T3-L1 samples are successfully obtained. The outcomes suggest that hyperspectral and multi-focus SRS imaging at video rate is viable after slight modifications to the pixel architecture and the acquisition system.
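
    Reduced to its arithmetic, the in-pixel lock-in operation is an on/off synchronous difference. The toy model below only shows why that cancels the large constant laser offset while keeping the small modulated SRS term; the sampling setup is an assumption, not the sensor's circuit.

        import numpy as np

        def lockin_difference(samples, f_mod, f_samp):
            """Toy synchronous (lock-in) detection of a modulated signal.

            `samples` is a 1-D photocurrent record sampled at `f_samp` with
            the stimulated-Raman gain switched on/off at `f_mod` (20 MHz in
            the paper).  Averaging the on half-cycles minus the off
            half-cycles cancels the constant direct-laser offset and leaves
            the small SRS difference signal.
            """
            t = np.arange(samples.size) / f_samp
            on = (t * f_mod) % 1.0 < 0.5            # Stokes-on half-cycle mask
            return samples[on].mean() - samples[~on].mean()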

  15. Uplink Downlink Rate Balancing and Throughput Scaling in FDD Massive MIMO Systems

    NASA Astrophysics Data System (ADS)

    Bergel, Itsik; Perets, Yona; Shamai, Shlomo

    2016-05-01

    In this work we extend the concept of uplink-downlink rate balancing to frequency division duplex (FDD) massive MIMO systems. We consider a base station with a large number of antennas serving many single-antenna users. We first show that any unused capacity in the uplink can be traded for higher throughput in the downlink in a system that uses either dirty paper (DP) coding or linear zero-forcing (ZF) precoding. We then study the scaling of the system throughput with the number of antennas for linear beamforming (BF) precoding, ZF precoding, and DP coding. We show that the downlink throughput is proportional to the logarithm of the number of antennas. While this logarithmic scaling is lower than the linear scaling of the rate in the uplink, it can still bring significant throughput gains. For example, we demonstrate through analysis and simulation that increasing the number of antennas from 4 to 128 increases the throughput by more than a factor of 5. We also show that a logarithmic scaling of downlink throughput as a function of the number of receive antennas can be achieved even when the number of transmit antennas increases only logarithmically with the number of receive antennas.
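
    The claimed logarithmic scaling is easy to illustrate with a textbook rate formula (an assumed model for illustration, not the paper's exact expressions):

        import numpy as np

        # A single user served by an M-antenna base station with matched
        # beamforming sees a rate of roughly log2(1 + M * snr), so downlink
        # throughput grows like log(M) rather than linearly in M.
        snr = 0.1                                  # -10 dB, an arbitrary choice
        for m in (4, 16, 64, 128):
            print(f"M={m:4d}  rate ~ {np.log2(1 + m * snr):5.2f} bit/s/Hz")
        # Rises from ~0.49 (M=4) to ~3.79 (M=128): logarithmic in M, yet
        # the same order as the >5x gain reported in the paper.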

  16. Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System.

    PubMed

    Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi

    2015-01-01

    The transmission performance of a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) is described through computer simulation and a field experiment. In the computer simulation, a MU-MIMO transmission system is realized using the block diagonalization (BD) algorithm, so that each user can receive signals without interference from other users. The bit error rate (BER) performance and channel capacity were simulated for various modulation schemes and numbers of streams in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance of this downlink mobile WiMAX system in such an environment using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system deployed in Azumino City, Japan. Comparing the simulated and experimental results, the maximum downlink throughput measured in the field was almost the same as the simulated throughput. It was confirmed that the experimental mobile WiMAX system with MU-MIMO transmission successfully increased the total channel capacity of the system.

  17. Computer Simulation and Field Experiment for Downlink Multiuser MIMO in Mobile WiMAX System

    PubMed Central

    Yamaguchi, Kazuhiro; Nagahashi, Takaharu; Akiyama, Takuya; Matsue, Hideaki; Uekado, Kunio; Namera, Takakazu; Fukui, Hiroshi; Nanamatsu, Satoshi

    2015-01-01

    The transmission performance of a downlink mobile WiMAX system with multiuser multiple-input multiple-output (MU-MIMO) is described through computer simulation and a field experiment. In the computer simulation, a MU-MIMO transmission system is realized using the block diagonalization (BD) algorithm, so that each user can receive signals without interference from other users. The bit error rate (BER) performance and channel capacity were simulated for various modulation schemes and numbers of streams in a spatially correlated multipath fading environment. Furthermore, we propose a method for evaluating the transmission performance of this downlink mobile WiMAX system in such an environment using the computer simulation. In the field experiment, the received power and downlink throughput in the UDP layer were measured on an experimental mobile WiMAX system deployed in Azumino City, Japan. Comparing the simulated and experimental results, the maximum downlink throughput measured in the field was almost the same as the simulated throughput. It was confirmed that the experimental mobile WiMAX system with MU-MIMO transmission successfully increased the total channel capacity of the system. PMID:26421311
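
    The block diagonalization step can be sketched compactly: each user's precoder is drawn from the null space of every other user's stacked channel, so inter-user interference vanishes by construction. Power loading and the WiMAX framing around it are omitted; this illustrates the BD idea, not the paper's implementation.

        import numpy as np

        def block_diagonalization(H_list):
            """BD precoders: user k transmits in the null space of the others."""
            precoders = []
            for k, Hk in enumerate(H_list):
                H_others = np.vstack([H for j, H in enumerate(H_list) if j != k])
                _, s, Vh = np.linalg.svd(H_others)       # full SVD of stacked channels
                null = Vh[np.count_nonzero(s > 1e-10):].conj().T
                precoders.append(null[:, :Hk.shape[0]])  # one stream per rx antenna
            return precoders

        # Example: 4 tx antennas, two 2-antenna users.
        rng = np.random.default_rng(0)
        H = [rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
             for _ in range(2)]
        W = block_diagonalization(H)
        print(np.allclose(H[0] @ W[1], 0, atol=1e-9))    # True: no cross-user leakage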

  18. SPADER - Science Planning Analysis and Data Estimation Resource for the NASA Parker Solar Probe Mission

    NASA Astrophysics Data System (ADS)

    Rodgers, D. J.; Fox, N. J.; Kusterer, M. B.; Turner, F. S.; Woleslagle, A. B.

    2017-12-01

    Scheduled to launch in July 2018, the Parker Solar Probe (PSP) will orbit the Sun for seven years, making a total of twenty-four extended encounters inside a solar radial distance of 0.25 AU. During most orbits, there are extended periods of time where PSP-Sun-Earth geometry dramatically reduces PSP-Earth communications via the Deep Space Network (DSN); there is the possibility that multiple orbits will have little to no high-rate downlink available. Science and housekeeping data taken during an encounter may reside on the spacecraft solid state recorder (SSR) for multiple orbits, potentially running the risk of overflowing the SSR in the absence of mitigation. The Science Planning Analysis and Data Estimation Resource (SPADER) has been developed to provide the science and operations teams the ability to plan operations accounting for multiple orbits in order to mitigate the effects caused by the lack of high-rate downlink. Capabilities and visualizations of SPADER are presented; further complications associated with file downlink priority and high-speed data transfers between instrument SSRs and the spacecraft SSR are discussed, as well as the long-term consequences of variations in DSN downlink parameters on the science data downlink.
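
    The core bookkeeping such a planning tool performs can be reduced to a recorder fill-level recursion. The sketch below is a deliberately simplified stand-in with invented numbers; SPADER's actual models (file downlink priorities, instrument-to-spacecraft SSR transfers, DSN pass geometry) are far richer.

        SSR_CAPACITY_GBIT = 256.0          # assumed capacity, for illustration

        def ssr_profile(recorded_per_orbit, downlink_per_orbit):
            """Fill level (Gbit) at the end of each orbit: add what the
            encounter records, subtract what the scheduled contacts drain."""
            level, profile = 0.0, []
            for rec, dl in zip(recorded_per_orbit, downlink_per_orbit):
                level = min(max(level + rec - dl, 0.0), SSR_CAPACITY_GBIT)
                profile.append(level)
            return profile

        # Three lean-downlink orbits followed by a generous contact period:
        print(ssr_profile([80, 80, 80, 80], [20, 10, 15, 200]))
        # -> [60.0, 130.0, 195.0, 75.0]: data carries across orbits, then drains.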

  19. FAST at MACH 20: clinical ultrasound aboard the International Space Station.

    PubMed

    Sargsyan, Ashot E; Hamilton, Douglas R; Jones, Jeffrey A; Melton, Shannon; Whitson, Peggy A; Kirkpatrick, Andrew W; Martin, David; Dulchavsky, Scott A

    2005-01-01

    Focused assessment with sonography for trauma (FAST) examination has been proved accurate for diagnosing trauma when performed by nonradiologist physicians. Recent reports have suggested that nonphysicians also may be able to perform the FAST examination reliably. A multipurpose ultrasound system is installed on the International Space Station as a component of the Human Research Facility. Nonphysician crew members aboard the International Space Station receive modest training in hardware operation, sonographic techniques, and remotely guided scanning. This report documents the first FAST examination conducted in space, as part of the sustained effort to maintain the highest possible level of available medical care during long-duration space flight. An International Space Station crew member with minimal sonography training was remotely guided through a FAST examination by an ultrasound imaging expert from Mission Control Center using private real-time two-way audio and a private space-to-ground video downlink (7.5 frames/second). There was a 2-second satellite delay for both video and audio. To facilitate the real-time telemedical ultrasound examination, identical reference cards showing topologic reference points and hardware controls were available to both the crew member and the ground-based expert. A FAST examination, including four standard abdominal windows, was completed in approximately 5.5 minutes. Following commands from the Mission Control Center-based expert, the crew member acquired all target images without difficulty. The anatomic content and fidelity of the ultrasound video were excellent and would allow clinical decision making. It is possible to conduct a remotely guided FAST examination with excellent clinical results and speed, even with a significantly reduced video frame rate and a 2-second communication latency. A wider application of trauma ultrasound applications for remote medicine on earth appears to be possible and warranted.

  20. A customizable commercial miniaturized 320×256 indium gallium arsenide shortwave infrared camera

    NASA Astrophysics Data System (ADS)

    Huang, Shih-Che; O'Grady, Matthew; Groppe, Joseph V.; Ettenberg, Martin H.; Brubaker, Robert M.

    2004-10-01

    The design and performance of a commercial short-wave infrared (SWIR) InGaAs microcamera engine is presented. The 0.9-to-1.7 micron SWIR imaging system consists of a room-temperature, TEC-stabilized, 320x256 (25 μm pitch) InGaAs focal plane array (FPA) and a high-performance, highly customizable set of image-processing electronics. The detectivity, D*, of the system is greater than 10^13 cm·√Hz/W at 1.55 μm, and this sensitivity may be adjusted in real time over 100 dB. It features snapshot-mode integration with a minimum exposure time of 130 μs. The digital video processor provides real-time, pixel-by-pixel, 2-point dark-current subtraction and non-uniformity compensation along with defective-pixel substitution. Other features include automatic gain control (AGC), gamma correction, 7 preset configurations, adjustable exposure time, external triggering, and windowing. The windowing feature is highly flexible; the region of interest (ROI) may be placed anywhere on the imager and can be varied at will. Windowing allows for high-speed readout, enabling applications such as target acquisition and tracking; for example, a 32x32 ROI window may be read out at over 3500 frames per second (fps). Output video is provided as EIA170-compatible analog, or as 12-bit CameraLink-compatible digital. All the above features are accomplished in a small volume (< 28 cm³) and weight (< 70 g), with low power consumption (< 1.3 W at room temperature), using this new microcamera engine. Video processing is based on a field-programmable gate array (FPGA) platform with a soft embedded processor that allows for easy integration or addition of customer-specific algorithms, processes, or design requirements. The camera was developed with high-performance, space-restricted, power-conscious applications in mind, such as robotic or UAV deployment.

  1. Carotid and Femoral Artery Intima-Media Thickness During 6 Months of Spaceflight.

    PubMed

    Arbeille, Philippe; Provost, Romain; Zuj, Kathryn

    2016-05-01

    The objective was to determine the effects of 6 mo of microgravity exposure on conduit artery diameter and wall thickness. Diagnostic images of the common carotid artery (CC) and superficial femoral artery (FA) were obtained using echography, which astronauts performed on themselves after receiving minimal training in the use of ultrasound imaging. Echographic video was recorded using a volume capture method directed by a trained sonographer on the ground through videoconferencing. Vessel properties were later assessed by processing the downlinked video. Data were collected from 10 astronauts who performed the echographic video capture at the beginning of the spaceflight (day 15) and near the end of the spaceflight (days 115 to 165). In-flight and postflight measurements were compared to preflight assessments. No significant changes with spaceflight were found for CC and FA diameter. Intima-media thickness (IMT) of the CC was found to be significantly increased (12 ± 4%) in all astronauts during the spaceflight (early and late flight) and remained elevated 4 d after returning to Earth. Similarly, FA IMT was increased during the flight but returned to preflight levels 4 d postflight. The experiment demonstrated that, using the volume capture method of echography, minimally trained astronauts were able to capture enough echographic data to display vessel images of good quality for analysis. The increase in both CC and FA IMT during the flight suggests an adaptation to microgravity and to the confined environment of spaceflight which deserves further investigation.

  2. Background estimation and player detection in badminton video clips using histogram of pixel values along temporal dimension

    NASA Astrophysics Data System (ADS)

    Peng, Yahui; Ma, Xiao; Gao, Xinyu; Zhou, Fangxu

    2015-12-01

    Computer vision is an important tool for sports video processing, but its application to badminton match analysis has been very limited. In this study, we proposed straightforward but robust histogram-based background estimation and player detection methods for badminton video clips, and compared the results with the naive averaging method and the mixture-of-Gaussians method, respectively. The proposed method yielded better background estimation results than the naive averaging method and more accurate player detection results than the mixture-of-Gaussians player detection method. The preliminary results indicated that the proposed histogram-based method could estimate the background and extract the players accurately. We conclude that the proposed method can be used for badminton player tracking, and further studies are warranted for automated match analysis.
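
    The temporal-histogram idea reduces to taking a per-pixel mode along the time axis. A sketch under assumed shapes (grayscale frames, fixed camera); the paper's exact binning is not given in this record.

        import numpy as np

        def background_by_temporal_mode(frames, nbins=32):
            """Background = per-pixel temporal histogram mode of (T, H, W) video.

            Because players keep moving, the court dominates each pixel's
            histogram, so the most frequent bin's centre estimates the
            static background better than a plain temporal mean.
            """
            t, h, w = frames.shape
            edges = np.linspace(frames.min(), frames.max(), nbins + 1)
            idx = np.clip(np.digitize(frames, edges) - 1, 0, nbins - 1)
            counts = np.zeros((nbins, h, w), dtype=np.int32)
            for b in range(nbins):
                counts[b] = (idx == b).sum(axis=0)       # votes per bin, per pixel
            centres = (edges[:-1] + edges[1:]) / 2
            return centres[counts.argmax(axis=0)]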

  3. An approach to integrate the human vision psychology and perception knowledge into image enhancement

    NASA Astrophysics Data System (ADS)

    Wang, Hui; Huang, Xifeng; Ping, Jiang

    2009-07-01

    Image enhancement is a very important image preprocessing technology, especially when images are captured under poor imaging conditions or have high bit depth. The beneficiary of image enhancement may be either a human observer or a computer vision process performing some kind of higher-level image analysis, such as target detection or scene understanding. One of the main objects of image enhancement is to obtain a high-dynamic-range, high-contrast image for human perception or interpretation. It is therefore natural to integrate empirical or statistical knowledge of human vision psychology and perception into image enhancement. That knowledge holds that human perception of, and response to, an intensity fluctuation δu in a visual signal is weighted by the background stimulus u, rather than being uniform. Three main laws describe this phenomenon in psychology and psychophysics: Weber's law, the Weber-Fechner law, and Stevens's law. This paper integrates these three laws into a popular image enhancement algorithm, Adaptive Plateau Equalization (APE). Experiments were done on high-bit star images captured in night scenes and on infrared images, both static images and video streams. For the jitter problem in video streams, the algorithm corrects the current frame's plateau value using the difference between the current and previous frames' plateau values. To limit the impact of random noise, the pixel value mapping depends not only on the current pixel but also on the pixels in a window surrounding it, usually 3×3. The results of the improved algorithms are evaluated by entropy analysis and visual perception analysis. The experimental results showed that the improved APE algorithms improved image quality: the target and the surrounding assistant targets could be identified easily, and noise was not amplified much. For low-quality images, the improved algorithms increase the information entropy and improve the aesthetic quality of images and video streams, while for high-quality images they do not degrade quality.
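
    The plateau-equalization core that the paper builds on can be stated in a few lines. This is the generic clipped-histogram mapping only; the paper's contributions (choosing the plateau via the three perceptual laws, the inter-frame plateau correction, the 3×3 neighbourhood mapping) sit on top of it.

        import numpy as np

        def plateau_equalization(img, plateau):
            """Clipped-histogram equalization of an integer high-bit image.

            The histogram is clipped at `plateau` before the cumulative
            mapping is built, which stops large uniform backgrounds from
            dominating the contrast stretch of a high-bit-depth image.
            """
            hist = np.bincount(img.ravel(), minlength=int(img.max()) + 1)
            cdf = np.cumsum(np.minimum(hist, plateau)).astype(float)
            cdf /= cdf[-1]                               # normalise to [0, 1]
            return (cdf[img] * 255).astype(np.uint8)     # map to 8-bit display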

  4. Description and Simulation of a Fast Packet Switch Architecture for Communication Satellites

    NASA Technical Reports Server (NTRS)

    Quintana, Jorge A.; Lizanich, Paul J.

    1995-01-01

    The NASA Lewis Research Center has been developing the architecture for a multichannel communications signal processing satellite (MCSPS) as part of a flexible, low-cost meshed-VSAT (very small aperture terminal) network. The MCSPS architecture is based on a multifrequency, time-division-multiple-access (MF-TDMA) uplink and a time-division multiplex (TDM) downlink. There are eight uplink MF-TDMA beams, and eight downlink TDM beams, with eight downlink dwells per beam. The information-switching processor, which decodes, stores, and transmits each packet of user data to the appropriate downlink dwell onboard the satellite, has been fully described by using VHSIC (Very High Speed Integrated-Circuit) Hardware Description Language (VHDL). This VHDL code, which was developed in-house to simulate the information switching processor, showed that the architecture is both feasible and viable. This paper describes a shared-memory-per-beam architecture, its VHDL implementation, and the simulation efforts.

  5. Quantitative analysis of tympanic membrane perforation: a simple and reliable method.

    PubMed

    Ibekwe, T S; Adeosun, A A; Nwaorgu, O G

    2009-01-01

    Accurate assessment of the features of tympanic membrane perforation, especially size, site, duration and aetiology, is important, as it enables optimum management. To describe a simple, cheap and effective method of quantitatively analysing tympanic membrane perforations. The system described comprises a video-otoscope (capable of generating still and video images of the tympanic membrane), adapted via a universal serial bus box to a computer screen, with images analysed using the Image J geometrical analysis software package. The reproducibility of results and their correlation with conventional otoscopic methods of estimation were tested statistically with the paired t-test and correlational tests, using the Statistical Package for the Social Sciences version 11 software. The following equation was generated: percentage perforation = P/T × 100%, where P is the area (in square pixels) of the tympanic membrane perforation and T is the total area (in square pixels) of the entire tympanic membrane (including the perforation). Illustrations are shown. Comparison of blinded data on tympanic membrane perforation area obtained independently from assessments by two trained otologists, of comparable years of experience, using the video-otoscopy system described, showed similar findings, with strong correlations devoid of inter-observer error (p = 0.000, r = 1). Comparison with conventional otoscopic assessment also indicated significant correlation, comparing results for the two trained otologists, but some inter-observer variation was present (p = 0.000, r = 0.896). Correlation between the two methods for each of the otologists was also highly significant (p = 0.000). A computer-adapted video-otoscope, with images analysed by the Image J software, represents a cheap, reliable, technology-driven, clinical method of quantitative analysis of tympanic membrane perforations and injuries.
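
    The quoted equation is a pixel-counting ratio, which makes the computation trivial once the two regions are traced. A sketch assuming boolean masks drawn on the video-otoscope still image:

        import numpy as np

        def perforation_percentage(perf_mask, tm_mask):
            """Percentage perforation = P / T x 100, per the paper's equation.

            `perf_mask` marks the perforation, `tm_mask` the entire tympanic
            membrane (perforation included); areas are simply pixel counts,
            which is what the Image J tracing reports.
            """
            p = np.count_nonzero(perf_mask)
            t = np.count_nonzero(tm_mask)
            return 100.0 * p / t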

  6. Dynamic subframe allocation for mobile broadband m-health using IEEE 802.16j mobile multihop relay networks.

    PubMed

    Alinejad, Ali; Istepanian, R S H; Philip, N

    2012-01-01

    The concept of 4G health will be one of the key focus areas of future m-health research and enterprise activities in the coming years. WiMAX is one of the constituent 4G wireless technologies providing broadband wireless access (BWA). Although WiMAX can provide a high data rate over relatively large coverage areas, the technology has specific limitations: coverage, signal attenuation due to shadowing or path loss, and limited available spectrum. The IEEE 802.16j mobile multihop relay (MMR) technology is a pragmatic solution designed to overcome these limitations; its aim is to extend IEEE 802.16e's capabilities with multihop features. In particular, the uplink (UL) and downlink (DL) subframe allocation in a WiMAX network is usually fixed, whereas dynamic frame allocation is a useful mechanism to optimize the uplink and downlink subframe sizes based on traffic conditions observed through real-time traffic monitoring. This mechanism is important for future WiMAX-based m-health applications because it allows a tradeoff between the UL and DL channels. In this paper, we address the dynamic frame allocation issue in an IEEE 802.16j MMR network for m-health applications. A comparative performance analysis of the proposed approach is validated using the OPNET Modeler®. The simulation results show improved resource allocation and end-to-end delay performance for a typical medical video streaming application.
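
    The proportional rule behind dynamic UL/DL allocation can be shown with a toy allocator. The symbol budget and the backlog-proportional policy are invented for illustration; real 802.16j MAC signalling is far richer.

        def split_subframes(ul_backlog, dl_backlog, total_symbols=48, min_share=8):
            """Split a fixed frame between UL and DL by monitored backlog,
            with a floor so neither direction starves (all numbers assumed)."""
            total_backlog = max(ul_backlog + dl_backlog, 1)
            ul = round(total_symbols * ul_backlog / total_backlog)
            ul = min(max(ul, min_share), total_symbols - min_share)
            return ul, total_symbols - ul

        print(split_subframes(ul_backlog=900, dl_backlog=300))   # -> (36, 12)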

  7. Heterogeneity image patch index and its application to consumer video summarization.

    PubMed

    Dang, Chinh T; Radha, Hayder

    2014-06-01

    Automatic video summarization is indispensable for fast browsing and efficient management of large video libraries. In this paper, we introduce an image feature that we refer to as the heterogeneity image patch (HIP) index. The proposed HIP index provides a new entropy-based measure of the heterogeneity of patches within any picture. By evaluating this index for every frame in a video sequence, we generate a HIP curve for that sequence. We exploit the HIP curve in solving two categories of video summarization applications: key frame extraction and dynamic video skimming. Under the key frame extraction framework, a set of candidate key frames is selected from the abundant video frames based on the HIP curve. Then, a proposed patch-based image dissimilarity measure is used to create an affinity matrix of these candidates. Finally, a set of key frames is extracted from the affinity matrix using a min-max based algorithm. Under video skimming, we propose a method to measure the distance between a video and its skimmed representation. The video skimming problem is then mapped into an optimization framework and solved by minimizing a HIP-based distance for a set of extracted excerpts. The HIP framework is pixel-based and does not require semantic information or complex camera motion estimation. Our simulation results are based on experiments performed on consumer videos and are compared with state-of-the-art methods. It is shown that the HIP approach outperforms other leading methods, while maintaining low complexity.
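
    One way to read the HIP idea is as an entropy over patch summaries, computed per frame to form the curve. The block summary (patch means) and bin count below are assumptions made for brevity, not the paper's exact definition.

        import numpy as np

        def hip_index(frame, patch=8, nbins=16):
            """Entropy of the distribution of patch means in one grayscale frame.

            Uniform frames score low, heterogeneous frames high; evaluating
            this for every frame of a sequence yields a HIP-style curve for
            key-frame selection.
            """
            h, w = frame.shape
            blocks = frame[: h - h % patch, : w - w % patch]
            means = blocks.reshape(h // patch, patch, -1, patch).mean(axis=(1, 3))
            hist, _ = np.histogram(means, bins=nbins)
            p = hist / hist.sum()
            p = p[p > 0]
            return float(-(p * np.log2(p)).sum())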

  8. Automated detection of videotaped neonatal seizures based on motion segmentation methods.

    PubMed

    Karayiannis, Nicolaos B; Tao, Guozhi; Frost, James D; Wise, Merrill S; Hrachovy, Richard A; Mizrahi, Eli M

    2006-07-01

    This study was aimed at the development of a seizure detection system by training neural networks using quantitative motion information extracted by motion segmentation methods from short video recordings of infants monitored for seizures. The motion of the infants' body parts was quantified by temporal motion strength signals extracted from video recordings by motion segmentation methods based on optical flow computation. The area of each frame occupied by the infants' moving body parts was segmented by direct thresholding, by clustering of the pixel velocities, and by clustering the motion parameters obtained by fitting an affine model to the pixel velocities. The computational tools and procedures developed for automated seizure detection were tested and evaluated on 240 short video segments selected and labeled by physicians from a set of video recordings of 54 patients exhibiting myoclonic seizures (80 segments), focal clonic seizures (80 segments), and random infant movements (80 segments). The experimental study described in this paper provided the basis for selecting the most effective strategy for training neural networks to detect neonatal seizures as well as the decision scheme used for interpreting the responses of the trained neural networks. Depending on the decision scheme used for interpreting the responses of the trained neural networks, the best neural networks exhibited sensitivity above 90% or specificity above 90%. The best among the motion segmentation methods developed in this study produced quantitative features that constitute a reliable basis for detecting myoclonic and focal clonic neonatal seizures. The performance targets of this phase of the project may be achieved by combining the quantitative features described in this paper with those obtained by analyzing motion trajectory signals produced by motion tracking methods. A video system based upon automated analysis potentially offers a number of advantages. Infants who are at risk for seizures could be monitored continuously using relatively inexpensive and non-invasive video techniques that supplement direct observation by nursery personnel. This would represent a major advance in seizure surveillance and offers the possibility for earlier identification of potential neurological problems and subsequent intervention.

  9. Three-Dimensional Super-Resolution: Theory, Modeling, and Field Tests Results

    NASA Technical Reports Server (NTRS)

    Bulyshev, Alexander; Amzajerdian, Farzin; Roback, Vincent E.; Hines, Glenn; Pierrottet, Diego; Reisse, Robert

    2014-01-01

    Many flash lidar applications continue to demand three-dimensional image resolution beyond the current state of the art in detector arrays and their associated readout circuits. Even with the available number of focal plane pixels, the number of photons required to illuminate all the pixels may impose impractical requirements on the laser pulse energy or the receiver aperture size. Therefore, image resolution enhancement by means of a super-resolution algorithm in near real time presents a very attractive solution for a wide range of flash lidar applications. This paper describes a super-resolution technique and illustrates its performance and merits for generating three-dimensional image frames at a video rate.

  10. Efficient subtle motion detection from high-speed video for sound recovery and vibration analysis using singular value decomposition-based approach

    NASA Astrophysics Data System (ADS)

    Zhang, Dashan; Guo, Jie; Jin, Yi; Zhu, Chang'an

    2017-09-01

    High-speed cameras provide full-field measurement of structure motions and have been applied in nondestructive testing and noncontact structure monitoring. Recently, a phase-based method has been proposed to extract sound-induced vibrations from phase variations in videos, providing insights into remote sound surveillance and material analysis. Here, an efficient singular value decomposition (SVD)-based approach is introduced to detect sound-induced subtle motions from pixel intensities in silent high-speed videos. A high-speed camera first captures a video of the objects vibrating under sound stimulation. Subimages collected from a small region of the captured video are then reshaped into vectors and assembled into a matrix. Orthonormal image bases (OIBs) are obtained from the SVD of this matrix; the vibration signal can then be obtained by projecting subsequent subimages onto specific OIBs. A simulation test validates the effectiveness and efficiency of the proposed method, and two experiments demonstrate its potential applications in sound recovery and material analysis. Results show that the proposed method efficiently detects subtle motions from the video.
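
    The reshape-and-project pipeline is short enough to sketch. The basis choice (the leading component after mean removal) is an assumption made for brevity:

        import numpy as np

        def svd_motion_signal(subimages):
            """1-D vibration series from a fixed patch of high-speed video.

            `subimages` is (T, h, w): the same small region cut from every
            frame.  Patches are flattened into columns of a pixels-by-time
            matrix; after removing the static mean, the leading left
            singular vector is an orthonormal image basis (OIB) aligned
            with the dominant motion, and projecting each patch onto it
            recovers the sound-induced signal.
            """
            t = subimages.shape[0]
            M = subimages.reshape(t, -1).T.astype(float)   # pixels x time
            M -= M.mean(axis=1, keepdims=True)             # remove static scene
            U, _, _ = np.linalg.svd(M, full_matrices=False)
            return M.T @ U[:, 0]                           # per-frame projection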

  11. Detection and tracking of gas plumes in LWIR hyperspectral video sequence data

    NASA Astrophysics Data System (ADS)

    Gerhart, Torin; Sunu, Justin; Lieu, Lauren; Merkurjev, Ekaterina; Chang, Jen-Mei; Gilles, Jérôme; Bertozzi, Andrea L.

    2013-05-01

    Automated detection of chemical plumes presents a segmentation challenge. The segmentation problem for gas plumes is difficult due to the diffusive nature of the cloud. The advantage of considering hyperspectral images in the gas plume detection problem over conventional RGB imagery is the presence of non-visual data, allowing for a richer representation of information. In this paper we present an effective method of visualizing hyperspectral video sequences containing chemical plumes and investigate the effectiveness of segmentation techniques on these post-processed videos. Our approach uses a combination of dimension reduction and histogram equalization to prepare the hyperspectral videos for segmentation. First, Principal Components Analysis (PCA) is used to reduce the dimension of the entire video sequence by projecting each pixel onto the first few principal components, resulting in a type of spectral filter. Next, a Midway method for histogram equalization is used; these methods redistribute the intensity values in order to reduce flicker between frames. This properly prepares the high-dimensional video sequences for more traditional segmentation techniques. We compare the ability of various clustering techniques to properly segment the chemical plume, including K-means, spectral clustering, and the Ginzburg-Landau functional.
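
    The PCA "spectral filter" step amounts to projecting every pixel's spectrum onto a few leading components. A sketch with assumed array shapes; the paper's Midway equalization and clustering stages would follow this.

        import numpy as np

        def pca_reduce(cube, n_components=3):
            """Project each hyperspectral pixel onto leading principal components.

            `cube` is (T, H, W, B): T frames of B-band images.  Spectra from
            the whole sequence are stacked and centred; the top right
            singular vectors act as a spectral filter, turning each frame
            into an n-component image ready for equalization/segmentation.
            """
            t, h, w, b = cube.shape
            X = cube.reshape(-1, b).astype(float)
            X -= X.mean(axis=0)
            _, _, Vt = np.linalg.svd(X, full_matrices=False)   # PCA via SVD
            return (X @ Vt[:n_components].T).reshape(t, h, w, n_components)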

  12. Automatic video summarization driven by a spatio-temporal attention model

    NASA Astrophysics Data System (ADS)

    Barland, R.; Saadane, A.

    2008-02-01

    According to the literature, automatic video summarization techniques can be classified in two parts, following the output nature: "video skims", which are generated using portions of the original video and "key-frame sets", which correspond to the images, selected from the original video, having a significant semantic content. The difference between these two categories is reduced when we consider automatic procedures. Most of the published approaches are based on the image signal and use either pixel characterization or histogram techniques or image decomposition by blocks. However, few of them integrate properties of the Human Visual System (HVS). In this paper, we propose to extract keyframes for video summarization by studying the variations of salient information between two consecutive frames. For each frame, a saliency map is produced simulating the human visual attention by a bottom-up (signal-dependent) approach. This approach includes three parallel channels for processing three early visual features: intensity, color and temporal contrasts. For each channel, the variations of the salient information between two consecutive frames are computed. These outputs are then combined to produce the global saliency variation which determines the key-frames. Psychophysical experiments have been defined and conducted to analyze the relevance of the proposed key-frame extraction algorithm.

  13. 47 CFR 25.147 - Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25.147 Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz. If an NGSO...

  14. 47 CFR 25.147 - Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25.147 Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz. If an NGSO...

  15. 47 CFR 25.147 - Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25.147 Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz. If an NGSO...

  16. 47 CFR 25.147 - Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25.147 Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz. If an NGSO...

  17. 47 CFR 25.147 - Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... COMMISSION (CONTINUED) COMMON CARRIER SERVICES SATELLITE COMMUNICATIONS Applications and Licenses Space Stations § 25.147 Licensing provision for NGSO MSS feeder downlinks in the band 6700-6875 MHz. If an NGSO...

  18. Robust efficient estimation of heart rate pulse from video.

    PubMed

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-04-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change in blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal to compute the pulse rate. Experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed under normal illumination show the algorithm is comparable to pulse oximeter devices in both accuracy and sensitivity.

  19. Robust efficient estimation of heart rate pulse from video

    PubMed Central

    Xu, Shuchang; Sun, Lingyun; Rohde, Gustavo Kunde

    2014-01-01

    We describe a simple but robust algorithm for estimating the heart rate pulse from video sequences containing human skin in real time. Based on a model of light interaction with human skin, we define the change in blood concentration due to arterial pulsation as a pixel quotient in log space, and successfully use the derived signal to compute the pulse rate. Experiments with different cameras, different illumination conditions, and different skin locations were conducted to demonstrate the effectiveness and robustness of the proposed algorithm. Examples computed under normal illumination show the algorithm is comparable to pulse oximeter devices in both accuracy and sensitivity. PMID:24761294
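
    The log-space pixel quotient followed by a spectral peak search is compact enough to sketch. Channel choice (green), the physiological band, and the region averaging are assumptions, not the authors' exact pipeline:

        import numpy as np

        def pulse_bpm(frames, fps):
            """Heart rate from (T, H, W, 3) RGB video of a skin region.

            The frame-to-frame quotient of mean skin brightness in log
            space isolates the multiplicative change from arterial blood
            volume; the dominant FFT peak in a 0.7-3.5 Hz band gives the
            pulse in beats per minute.
            """
            skin = frames[..., 1].reshape(frames.shape[0], -1).mean(axis=1)
            q = np.diff(np.log(skin + 1e-9))            # log-space quotient signal
            spec = np.abs(np.fft.rfft(q - q.mean()))
            freqs = np.fft.rfftfreq(q.size, d=1.0 / fps)
            band = (freqs > 0.7) & (freqs < 3.5)        # 42-210 bpm
            return 60.0 * freqs[band][spec[band].argmax()]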

  20. High-speed reconstruction of compressed images

    NASA Astrophysics Data System (ADS)

    Cox, Jerome R., Jr.; Moore, Stephen M.

    1990-07-01

    A compression scheme is described that allows high-definition radiological images with greater than 8-bit intensity resolution to be represented by 8-bit pixels. Reconstruction of the images with their original intensity resolution can be carried out by means of a pipeline architecture suitable for compact, high-speed implementation. A reconstruction system is described that can be fabricated according to this approach and placed between an 8-bit display buffer and the display's video system, thereby allowing contrast control of images at video rates. Results for 50 CR chest images show that error-free reconstruction of the original 10-bit CR images can be achieved.
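
    The sketch below only illustrates the pipeline shape: a per-pixel encode table packs 10-bit values into 8-bit codes, and reconstruction is a single decode-table lookup, cheap enough to sit between the display buffer and the video system. The square-root companding is an invented stand-in; the paper's actual mapping is chosen so that reconstruction of its CR images is error-free.

        import numpy as np

        levels_in, levels_out = 1024, 256
        # Assumed companding: square-root spacing packs more codes at low intensity.
        encode_lut = np.round(np.sqrt(np.arange(levels_in) / (levels_in - 1))
                              * (levels_out - 1)).astype(np.uint8)
        # Decode table: a representative 10-bit value for each 8-bit code.
        decode_lut = np.array([np.nonzero(encode_lut == c)[0].mean()
                               if (encode_lut == c).any() else 0.0
                               for c in range(levels_out)])

        img10 = np.random.randint(0, levels_in, (4, 4))
        img8 = encode_lut[img10]          # what the 8-bit display buffer holds
        restored = decode_lut[img8]       # one lookup per pixel, at video rate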

  1. Comparative Evaluation of Background Subtraction Algorithms in Remote Scene Videos Captured by MWIR Sensors

    PubMed Central

    Yao, Guangle; Lei, Tao; Zhong, Jiandan; Jiang, Ping; Jia, Wenwu

    2017-01-01

    Background subtraction (BS) is one of the most commonly encountered tasks in video analysis and tracking systems. It distinguishes the foreground (moving objects) from the video sequences captured by static imaging sensors. Background subtraction in remote scene infrared (IR) video is important and common in many fields. This paper provides a Remote Scene IR Dataset captured by our designed medium-wave infrared (MWIR) sensor. Each video sequence in this dataset is identified with specific BS challenges, and the pixel-wise ground truth of the foreground (FG) is provided for each frame. A series of experiments was conducted to evaluate BS algorithms on the proposed dataset. The overall performance of the BS algorithms and their processor/memory requirements were compared, and appropriate evaluation metrics were employed to assess each algorithm's ability to handle the different BS challenges represented in the dataset. The results and conclusions in this paper provide valid references for developing new BS algorithms for remote scene IR video sequences, and some of them are not limited to remote scenes or IR video but apply to background subtraction in general. The Remote Scene IR dataset and the foreground masks detected by each evaluated BS algorithm are available online: https://github.com/JerryYaoGl/BSEvaluationRemoteSceneIR. PMID:28837112

  2. Artificial vision support system (AVS(2)) for improved prosthetic vision.

    PubMed

    Fink, Wolfgang; Tarbell, Mark A

    2014-11-01

    State-of-the-art and upcoming camera-driven, implanted artificial vision systems provide only tens to hundreds of electrodes, affording only limited visual perception for blind subjects. Therefore, real-time image processing is crucial to enhance and optimize this limited perception. Since tens or hundreds of pixels/electrodes allow only a very crude approximation of the typically megapixel optical resolution of the external camera image feed, the preservation and enhancement of contrast differences and transitions, such as edges, is especially important compared to picture details such as object texture. An Artificial Vision Support System (AVS(2)) is devised that displays the captured video stream in a pixelation conforming to the dimensions of the epi-retinal implant electrode array. Using efficient image processing modules, AVS(2) modifies the captured video stream in real time, enhancing 'present but hidden' objects to overcome inadequacies or extremes in the camera imagery. As a result, visual prosthesis carriers may be able to discern such objects in their 'field of view', enabling mobility in environments that would otherwise be too hazardous to navigate. The image processing modules can be engaged repeatedly in a user-defined order, which is a unique capability. AVS(2) is directly applicable to any artificial vision system that is based on an imaging modality (video, infrared, sound, ultrasound, microwave, radar, etc.) as the first step in the stimulation/processing cascade, such as: retinal implants (i.e., epi-retinal, sub-retinal, suprachoroidal), optic nerve implants, cortical implants, electric tongue stimulators, or tactile stimulators.

  3. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
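
    The stereometric step at the end reduces to the classic disparity relation. Numbers below are illustrative:

        def stereo_range(disparity_px, focal_px, baseline_m):
            """Range Z = f * B / d once both images are reduced to the laser spot:
            f is the focal length in pixels, B the camera baseline in metres,
            and d the horizontal disparity of the spot in pixels."""
            return focal_px * baseline_m / disparity_px

        print(stereo_range(disparity_px=24.0, focal_px=1200.0, baseline_m=0.12))
        # -> 6.0 metres to the laser-designated target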

  4. Blind identification of full-field vibration modes from video measurements with phase-based video motion magnification

    NASA Astrophysics Data System (ADS)

    Yang, Yongchao; Dorn, Charles; Mancini, Tyler; Talken, Zachary; Kenyon, Garrett; Farrar, Charles; Mascareñas, David

    2017-02-01

    Experimental or operational modal analysis traditionally requires physically attached wired or wireless sensors for vibration measurement of structures. This instrumentation can result in mass-loading on lightweight structures, and is costly and time-consuming to install and maintain on large civil structures, especially for long-term applications (e.g., structural health monitoring) that require significant maintenance for cabling (wired sensors) or periodic replacement of the energy supply (wireless sensors). Moreover, these sensors are typically placed at a limited number of discrete locations, providing low spatial sensing resolution that is hardly sufficient for modal-based damage localization, or for model correlation and updating of larger-scale structures. Non-contact measurement methods such as scanning laser vibrometers provide high-resolution sensing capacity without the mass-loading effect; however, they make sequential measurements that require considerable acquisition time. As an alternative non-contact method, digital video cameras are relatively low-cost and agile, and provide simultaneous measurements at high spatial resolution. Combined with vision-based algorithms (e.g., image correlation, optical flow), video camera based measurements have been successfully used for vibration measurement and subsequent modal analysis, using techniques such as digital image correlation (DIC) and point tracking. However, these typically require a speckle pattern or high-contrast markers to be placed on the surface of the structure, which poses challenges when the measurement area is large or inaccessible. This work explores advanced computer vision and video processing algorithms to develop a novel video measurement and vision-based operational (output-only) modal analysis method that alleviates the need for structural surface preparation associated with existing vision-based methods and can be implemented in a relatively efficient and autonomous manner with little user supervision and calibration. First, a multi-scale image processing method is applied to the frames of the video of a vibrating structure to extract the local pixel phases that encode local structural vibration, establishing a full-field spatiotemporal motion matrix. Then a high-spatial-dimensional, yet low-modal-dimensional, over-complete model is used to represent the extracted full-field motion matrix by modal superposition, which is interpreted and manipulated by a family of unsupervised learning models and techniques. The proposed method is thus able to blindly extract modal frequencies, damping ratios, and full-field (as many points as pixels in the video frame) mode shapes from line-of-sight video measurements of the structure. The method is validated by laboratory experiments on a bench-scale building structure and a cantilever beam. Its ability for output-only (video measurement) identification and visualization of the weakly excited mode is demonstrated, and several implementation issues are discussed.

  5. Technology Readiness Level (TRL) Advancement of the MSPI On-Board Processing Platform for the ACE Decadal Survey Mission

    NASA Technical Reports Server (NTRS)

    Pingree, Paula J.; Werne, Thomas A.; Bekker, Dmitriy L.; Wilson, Thor O.

    2011-01-01

    The Xilinx Virtex-5QV is a new Single-event Immune Reconfigurable FPGA (SIRF) device that is targeted as the spaceborne processor for the NASA Decadal Survey Aerosol-Cloud-Ecosystem (ACE) mission's Multiangle SpectroPolarimetric Imager (MSPI) instrument, currently under development at JPL. A key technology needed for MSPI is on-board processing (OBP) to calculate polarimetry data as imaged by each of the 9 cameras forming the instrument. With funding from NASA's ESTO AIST Program, JPL is demonstrating how signal data at 95 Mbytes/sec over 16 channels for each of the 9 multi-angle cameras can be reduced to 0.45 Mbytes/sec, thereby substantially reducing the image data volume for spacecraft downlink without loss of science information. This is done via a least-squares fitting algorithm implemented on the Virtex-5 FPGA, operating in real time on the raw video data stream.

  6. High altitude aircraft remote sensing during the 1988 Yellowstone National Park wildfires

    NASA Technical Reports Server (NTRS)

    Ambrosia, Vincent G.

    1990-01-01

    An overview is presented of the effects of the wildfires that occurred in the Yellowstone National Park during 1988 and the techniques employed to combat these fires with the use of remote sensing. The fire management team utilized King-Air and Merlin aircraft flying night missions with a thermal IR line-scanning system. NASA-Ames Research Center assisted with an ER-2 high altitude aircraft with the ability to down-link active data from the aircraft via a teledetection system. The ER-2 was equipped with a multispectral Thematic Mapper Simulator scanner and the resultant map data and video imagery was provided to the fire command personnel for field evaluation and fire suppression activities. This type of information proved very valuable to the fire control management personnel and to the continuing ecological research goals of NASA-Ames scientists analyzing the effects of burn type and severity on ecosystem recovery and development.

  7. Real Time Data/Video/Voice Uplink and Downlink for Kuiper Airborne Observatory

    NASA Technical Reports Server (NTRS)

    Harper, Doyal A.

    1997-01-01

    LFS was an educational outreach adventure which brought the excitement of astronomical exploration on NASA's Kuiper Airborne Observatory (KAO) to a nationwide audience of children and parents through live, interactive television broadcast from the KAO at an altitude of 41,000 feet during an actual scientific observing mission. The project encompassed three KAO flights during the fall of 1995: a short practice mission, a daytime observing flight from Moffett Field, California, to Houston, Texas, and a nighttime mission from Houston back to Moffett Field. The University of Chicago infrared research team participated in planning the program, developing auxiliary materials including background information and lesson plans, developing software which allowed students on the ground to control the telescope and on-board cameras via the Internet from the Adler Planetarium in Chicago, and acting as on-camera correspondents to explain and answer questions about the scientific research conducted during the flights.

  8. Design of measuring system for wire diameter based on sub-pixel edge detection algorithm

    NASA Astrophysics Data System (ADS)

    Chen, Yudong; Zhou, Wang

    2016-09-01

    The light projection method is often used in wire-diameter measuring systems; it has a relatively simple structure and low cost, but its measuring accuracy is limited by the pixel size of the CCD. Using a CCD with a smaller pixel size can improve the accuracy, but increases the cost and fabrication difficulty. In this paper, after a comparative analysis of several sub-pixel edge detection algorithms, polynomial fitting is adopted for the data processing of the wire-diameter measuring system, to improve the measuring accuracy and the noise immunity. In the system structure, the optical detection part uses the light projection method in an orthogonal arrangement, which effectively reduces the error caused by line jitter during measurement. In the electrical part, an ARM Cortex-M4 microprocessor serves as the core of the circuit module; it drives the dual-channel linear CCD and also completes the sampling, processing, and storage of the CCD video signal. In addition, the ARM microprocessor can run the whole wire-diameter measuring system at high speed without any additional chips. The experimental results show that a sub-pixel edge detection algorithm based on polynomial fitting can compensate for the limited pixel size and significantly improve the precision of the wire-diameter measuring system, without increasing the hardware complexity of the entire system.
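
    The polynomial-fitting step can be sketched on a single scan line. The quadratic fit to the derivative and the three-point window are common choices assumed here; the paper's exact fitting order is not stated in this record.

        import numpy as np

        def subpixel_edge(profile):
            """Sub-pixel edge position along a 1-D CCD scan of the wire shadow.

            A quadratic is fitted to the intensity derivative around its
            strongest sample; the parabola's vertex locates the edge to a
            fraction of a pixel.
            """
            d = np.gradient(profile.astype(float))
            k = int(np.argmax(np.abs(d)))
            k = min(max(k, 1), d.size - 2)           # keep 3-point window inside
            a, b, _ = np.polyfit([-1, 0, 1], d[k - 1:k + 2], 2)
            return k - b / (2 * a)                   # vertex of the fitted parabola

        print(subpixel_edge(np.array([10, 10, 11, 40, 120, 200, 210, 211.0])))
        # -> ~3.92: the edge sits just below pixel 4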

  9. Repurposing video recordings for structure motion estimations

    NASA Astrophysics Data System (ADS)

    Khaloo, Ali; Lattanzi, David

    2016-04-01

    Video monitoring of public spaces is becoming increasingly ubiquitous, particularly near essential structures and facilities. During any hazard event that dynamically excites a structure, such as an earthquake or hurricane, proximal video cameras may inadvertently capture the motion time-history of the structure during the event. If this dynamic time-history could be extracted from the repurposed video recording it would become a valuable forensic analysis tool for engineers performing post-disaster structural evaluations. The difficulty is that almost all potential video cameras are not installed to monitor structure motions, leading to camera perspective distortions and other associated challenges. This paper presents a method for extracting structure motions from videos using a combination of computer vision techniques. Images from a video recording are first reprojected into synthetic images that eliminate perspective distortion, using as-built knowledge of a structure for calibration. The motion of the camera itself during an event is also considered. Optical flow, a technique for tracking per-pixel motion, is then applied to these synthetic images to estimate the building motion. The developed method was validated using the experimental records of the NEESHub earthquake database. The results indicate that the technique is capable of estimating structural motions, particularly the frequency content of the response. Further work will evaluate variants and alternatives to the optical flow algorithm, as well as study the impact of video encoding artifacts on motion estimates.

  10. KSC-02pd1374

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  11. KSC-02pd1376

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  12. KSC-02pd1375

    NASA Image and Video Library

    2002-09-26

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  13. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  14. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  15. Experimental evaluation of open-loop UpLink Power Control using ACTS

    NASA Technical Reports Server (NTRS)

    Dissanayake, Asoka

    1995-01-01

    The present investigation deals with the implementation of open-loop up-link power control using a beacon signal in the down-link frequency band as the control parameter. A power control system was developed and tested using the ACTS satellite. ACTS carries beacon signals in both up- and down-link bands with which the relationship between the up- and down-link fading can be established. A power controlled carrier was transmitted to the ACTS satellite from a NASA operated ground station and the transponded signal was received at COMSAT Laboratories using a terminal that was routinely used to monitor the two ACTS beacon signals. The experiment ran for a period of approximately six months and the collected data were used to evaluate the performance of the power control system. A brief review of propagation factors involved in estimating the up-link fade using a beacon signal in the down-link band are presented. The power controller design and the experiment configuration are discussed. Results of the experiment are discussed.

  16. Scheduling Onboard Processing for the Proposed HyspIRI Mission

    NASA Technical Reports Server (NTRS)

    Chien, Steve; Mclaren, David; Rabideau, Gregg; Mandl, Daniel; Hengemihle, Jerry

    2011-01-01

    The proposed HyspIRI mission is evaluating an X-band Direct Broadcast (DB) capability that would enable data to be delivered to ground stations virtually as it is acquired. However, the HyspIRI VSWIR and TIR instruments will produce 1 Gbps of data while the DB capability is 15 Mbps, a 60x oversubscription. In order to address this data volume mismatch, a DB concept has been developed that determines which data to downlink based on both: 1. the type of surface the spacecraft is overflying, and 2. onboard processing of the data to detect events. For example, when the spacecraft is overflying polar regions it might downlink a snow/ice product. Additionally, the onboard software will search for thermal signatures indicative of a volcanic event or wildfire and downlink summary information (extent, spectra) when detected. The process of determining which products to generate when, based on request prioritization, onboard processing, and downlink constraints, is inherently a prioritized scheduling problem; we describe work to develop an automated solution to this problem.
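
    To make the scheduling idea concrete, here is a minimal sketch of a greedy selector that fills a per-pass downlink budget in priority order. All product names, priorities, and sizes are hypothetical illustrations, not the mission's actual scheduler.

```python
# Minimal sketch (hypothetical names and values): greedy selection of
# onboard products under a downlink budget, ordered by request priority.

def schedule_downlink(products, budget_bits):
    """products: list of (name, priority, size_bits); higher priority wins."""
    selected, used = [], 0
    for name, priority, size in sorted(products, key=lambda p: -p[1]):
        if used + size <= budget_bits:
            selected.append(name)
            used += size
    return selected

products = [
    ("snow_ice_summary", 5, 2e6),  # polar-overflight product
    ("thermal_alert", 9, 5e5),     # volcano/wildfire summary (extent, spectra)
    ("raw_scene_chip", 2, 8e8),    # full-rate data rarely fits the budget
]
print(schedule_downlink(products, budget_bits=15e6))  # ~15 Mbps x 1 s window
```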

  17. Converting CSV Files to RKSML Files

    NASA Technical Reports Server (NTRS)

    Trebi-Ollennu, Ashitey; Liebersbach, Robert

    2009-01-01

    A computer program converts, into a format suitable for processing on Earth, files of downlinked telemetric data pertaining to the operation of the Instrument Deployment Device (IDD), which is a robot arm on either of the Mars Exploration Rovers (MERs). The raw downlinked data files are in comma-separated-value (CSV) format. The present program converts the files into Rover Kinematics State Markup Language (RKSML), which is an Extensible Markup Language (XML) format that facilitates representation of operations of the IDD and enables analysis of the operations by means of the Rover Sequencing Validation Program (RSVP), which is used to build sequences of commanded operations for the MERs. After conversion by means of the present program, the downlinked data can be processed by RSVP, enabling the MER downlink operations team to play back the actual IDD activity represented by the telemetric data against the planned IDD activity. Thus, the present program enhances the diagnosis of anomalies that manifest themselves as differences between actual and planned IDD activities.
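
    As an illustration of this kind of conversion, a minimal sketch follows that turns CSV telemetry rows into simple XML. The RKSML schema itself is not given in the record, so the element names below ("RKSML", "State", "Joint") and the "time" column are hypothetical placeholders, not the actual format.

```python
# Illustrative CSV-to-XML conversion sketch; element and column names are
# hypothetical stand-ins for the (unpublished here) RKSML schema.
import csv
import xml.etree.ElementTree as ET

def csv_to_xml(csv_path, xml_path):
    root = ET.Element("RKSML")  # hypothetical root element
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            state = ET.SubElement(root, "State", time=row["time"])
            for name, value in row.items():
                if name != "time":
                    ET.SubElement(state, "Joint", name=name).text = value
    ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
```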

  18. Cost-effective bidirectional digitized radio-over-fiber systems employing sigma delta modulation

    NASA Astrophysics Data System (ADS)

    Lee, Kyung Woon; Jung, HyunDo; Park, Jung Ho

    2016-11-01

    We propose a cost-effective digitized radio-over-fiber (D-RoF) system employing sigma delta modulation (SDM) and a bidirectional transmission technique using a phase-modulated downlink and an intensity-modulated uplink. SDM is transparent to different radio access technologies and modulation formats, and is more suitable for the downlink of a wireless system because a digital-to-analog converter (DAC) can be avoided at the base station (BS). Also, the central station and BS share the same light source by using phase modulation for the downlink and intensity modulation for the uplink transmission. Avoiding DACs and light sources has advantages in terms of cost reduction, power consumption, and compatibility with conventional wireless network structures. We have designed a cost-effective bidirectional D-RoF system using a low-pass SDM and measured the downlink and uplink transmission performance in terms of error vector magnitude, signal spectra, and constellations, based on the 10 MHz LTE 64-QAM standard.

  19. Yankee Tank Creek Observatory Report No. 1: Forty-One Measures from 2012

    NASA Astrophysics Data System (ADS)

    Wiley, E. O.

    2014-01-01

    This report contains 41 measures of mostly STF pairs taken in 2012 and comprises those pairs not reported in other papers. All measures were taken with a 0.2-m Dall-Kirkham and a DMK21 video camera working at f/22.5. Both stacking and pixel correlation techniques were used to obtain measures using REDUC.

  20. Gaming as a Therapeutic Tool in Adolescence. Experience of Institutional Therapy of CThA, UCL, Brussels, Belgium.

    PubMed

    Descamps, Guillaume; d'Alcantara, Ann

    2016-09-01

    This work presents the experience of an emancipatory action research project conducted at the Therapeutic Center for Adolescents (CThA) at Saint Luc's Clinics (UCL). The research focuses on the effects in practice of the "Pixels" and "Passerelle" workshops at CThA, which use video games as a therapeutic tool that mobilizes the adolescent's symptomatology. The "Pixels" workshops use play in three specific forms: the paper role-playing game, the video game, and the card game. Their specificity is that the participating adult shows a regressive capacity strong enough to play with teenagers while taking care not to interpret what takes place. The "Passerelle" workshops work on the link between the teenager's mind and the use of his or her own virtual avatar, allowing an evolution from "playing together" to "talking together": a moment of symbolization and of gaining distance from one's own recreational activities. The discussion is illustrated by the clinical case of Karl, recovering from depression and dependency. This setting for speech allowed him to put his impulses into words and to reconnect emotionally.

  1. Spatiotemporal Pixelization to Increase the Recognition Score of Characters for Retinal Prostheses

    PubMed Central

    Kim, Hyun Seok; Park, Kwang Suk

    2017-01-01

    Most retinal prostheses use a head-fixed camera and a video processing unit. Some studies have proposed various image processing methods to improve visual perception for patients. However, previous studies focused only on using spatial information. The present study proposes a spatiotemporal pixelization method mimicking fixational eye movements to generate stimulation images for artificial retina arrays by combining spatial and temporal information. Input images were sampled with a resolution that was four times higher than the number of pixel arrays. We subsampled this image and generated four different phosphene images. We then evaluated the recognition scores of characters by sequentially presenting phosphene images with varying pixel array sizes (6 × 6, 8 × 8 and 10 × 10) and stimulus frame rates (10 Hz, 15 Hz, 20 Hz, 30 Hz, and 60 Hz). The proposed method showed the highest recognition score at a stimulus frame rate of approximately 20 Hz. The method also significantly improved the recognition score for complex characters. This method provides a new way to increase practical resolution over restricted spatial resolution by merging the higher-resolution image into high-frame time slots. PMID:29073735
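
    The core sampling idea can be sketched in a few lines: an image held at twice the electrode resolution per axis (four times the pixel count) is split into four phase-shifted low-resolution frames that are presented in sequence. This is an illustrative reading of the method, not the authors' code.

```python
import numpy as np

def spatiotemporal_frames(img):
    """Split an image sampled at 2x the array resolution per axis into four
    phase-shifted low-resolution frames (a sketch of the idea of mimicking
    fixational eye movements; details here are illustrative)."""
    return [img[dy::2, dx::2] for dy, dx in ((0, 0), (0, 1), (1, 0), (1, 1))]

img = np.random.rand(20, 20)          # 2x the 10 x 10 electrode array
frames = spatiotemporal_frames(img)   # four 10 x 10 phosphene images
# Cycling through the four frames at e.g. 20 Hz gives each electrode a
# time-varying value that carries the higher-resolution content.
```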

  2. Using polynomials to simplify fixed pattern noise and photometric correction of logarithmic CMOS image sensors.

    PubMed

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-10-16

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors are those where pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected due to mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient.
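
    As a rough illustration of calibrating monotonic pixel responses without a circuit-based model, the sketch below fits a per-pixel degree-1 polynomial against a frame-wise reference signal and inverts it. The paper's actual calibration procedure, reference choice, and fixed-point details differ; the spatial-mean reference here is an assumption for the sake of a runnable example.

```python
import numpy as np

def fit_fpn(frames):
    """Per-pixel linear fit y ~ a + b * ref, where ref is the spatial-mean
    response per frame (a stand-in calibration reference). frames: (T, H, W)."""
    ref = frames.mean(axis=(1, 2))               # (T,) reference signal
    T, H, W = frames.shape
    A = np.stack([np.ones_like(ref), ref], 1)    # (T, 2) design matrix
    coef, *_ = np.linalg.lstsq(A, frames.reshape(T, -1), rcond=None)
    return coef[0].reshape(H, W), coef[1].reshape(H, W)  # offset a, gain b

def correct(frame, a, b):
    # Invert the per-pixel fit; guard against zero gain.
    return (frame - a) / np.where(b == 0, 1.0, b)
```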

  3. Computational imaging with a balanced detector.

    PubMed

    Soldevila, F; Clemente, P; Tajahuerce, E; Uribe-Patarroyo, N; Andrés, P; Lancis, J

    2016-06-29

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.

  4. Computational imaging with a balanced detector

    NASA Astrophysics Data System (ADS)

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-06-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media.

  5. Evaluation of color encodings for high dynamic range pixels

    NASA Astrophysics Data System (ADS)

    Boitard, Ronan; Mantiuk, Rafal K.; Pouli, Tania

    2015-03-01

    Traditional Low Dynamic Range (LDR) color spaces encode a small fraction of the visible color gamut, which does not encompass the range of colors produced on upcoming High Dynamic Range (HDR) displays. Future imaging systems will require encoding a much wider color gamut and luminance range. Such a wide color gamut can be represented using floating point HDR pixel values, but those are inefficient to encode. They also lack the perceptual uniformity of the luminance and color distribution, which is provided (in approximation) by most LDR color spaces. Therefore, there is a need to devise an efficient, perceptually uniform and integer-valued representation for high dynamic range pixel values. In this paper we evaluate several methods for encoding colour HDR pixel values, in particular for use in image and video compression. Unlike other studies we test both luminance and color difference encoding in rigorous 4AFC threshold experiments to determine the minimum bit-depth required. Results show that the Perceptual Quantizer (PQ) encoding provides the best perceptual uniformity in the considered luminance range; however, the gain in bit-depth is rather modest. More significant differences can be observed between the color difference encoding schemes, of which YDuDv encoding seems to be the most efficient.

  6. Computational imaging with a balanced detector

    PubMed Central

    Soldevila, F.; Clemente, P.; Tajahuerce, E.; Uribe-Patarroyo, N.; Andrés, P.; Lancis, J.

    2016-01-01

    Single-pixel cameras make it possible to obtain images in a wide range of challenging scenarios, including broad regions of the electromagnetic spectrum and through scattering media. However, there still exist several drawbacks that single-pixel architectures must address, such as acquisition speed and imaging in the presence of ambient light. In this work we introduce balanced detection in combination with simultaneous complementary illumination in a single-pixel camera. This approach enables information to be acquired even when the power of the parasite signal is higher than the signal itself. Furthermore, this novel detection scheme increases both the frame rate and the signal-to-noise ratio of the system. By means of a fast digital micromirror device together with a low numerical aperture collecting system, we are able to produce a live-feed video with a resolution of 64 × 64 pixels at 5 Hz. With advanced undersampling techniques, such as compressive sensing, we can acquire information at rates of 25 Hz. By using this strategy, we foresee real-time biological imaging with large area detectors in conditions where array sensors are unable to operate properly, such as infrared imaging and dealing with objects embedded in turbid media. PMID:27353733

  7. Robust Transceiver Design for Multiuser MIMO Downlink with Channel Uncertainties

    NASA Astrophysics Data System (ADS)

    Miao, Wei; Li, Yunzhou; Chen, Xiang; Zhou, Shidong; Wang, Jing

    This letter addresses the problem of robust transceiver design for the multiuser multiple-input-multiple-output (MIMO) downlink where the channel state information at the base station (BS) is imperfect. A stochastic approach which minimizes the expectation of the total mean square error (MSE) of the downlink conditioned on the channel estimates under a total transmit power constraint is adopted. The iterative algorithm reported in [2] is improved to handle the proposed robust optimization problem. Simulation results show that our proposed robust scheme effectively reduces the performance loss due to channel uncertainties and outperforms existing methods, especially when the channel errors of the users are different.

  8. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  9. A system to geometrically rectify and map airborne scanner imagery and to estimate ground area. [by computer

    NASA Technical Reports Server (NTRS)

    Spencer, M. M.; Wolf, J. M.; Schall, M. A.

    1974-01-01

    A system of computer programs was developed that performs geometric rectification and line-by-line mapping of airborne multispectral scanner data to ground coordinates and estimates ground area. The system requires aircraft attitude and positional information furnished by ancillary aircraft equipment, as well as ground control points. The geometric correction and mapping procedure locates the scan lines, or the pixels on each line, in terms of map grid coordinates. The area estimation procedure gives ground area for each pixel or for a predesignated parcel specified in map grid coordinates. The results of exercising the system with simulated data showed both the uncorrected video and the corrected imagery, and produced area estimates accurate to better than 99.7%.

  10. Respiratory rate estimation from the built-in cameras of smartphones and tablets.

    PubMed

    Nam, Yunyoung; Lee, Jinseok; Chon, Ki H

    2014-04-01

    This paper presents a method for respiratory rate estimation using the camera of a smartphone, an MP3 player or a tablet. The iPhone 4S, iPad 2, iPod 5, and Galaxy S3 were used to estimate respiratory rates from the pulse signal derived from a finger placed on the camera lens of these devices. Prior to estimation of respiratory rates, we systematically investigated the optimal signal quality of these 4 devices by dividing the video camera's resolution into 12 different pixel regions. We also investigated the optimal signal quality among the red, green and blue color bands for each of these 12 pixel regions for all four devices. It was found that the green color band provided the best signal quality for all 4 devices and that the left half VGA pixel region was the best choice only for the iPhone 4S. For the other three devices, smaller 50 × 50 pixel regions were found to provide signal quality better than or equal to that of the larger pixel regions. Using the green signal and the optimal pixel regions derived from the four devices, we then investigated the suitability of the smartphones, the iPod 5 and the tablet for respiratory rate estimation using three different computational methods: the autoregressive (AR) model, variable-frequency complex demodulation (VFCDM), and continuous wavelet transform (CWT) approaches. Specifically, these time-varying spectral techniques were used to identify the frequency and amplitude modulations as they contain respiratory rate information. To evaluate the performance of the three computational methods and the pixel regions for the optimal signal quality, data were collected from 10 healthy subjects. It was found that the VFCDM method provided good estimates of breathing rates that were in the normal range (12-24 breaths/min). Both CWT and VFCDM methods provided reasonably good estimates for breathing rates that were higher than 26 breaths/min, but their accuracy degraded concomitantly with increased respiratory rates. Overall, the VFCDM method provided the best results for accuracy (smaller median error), consistency (smaller interquartile range of the median value), and computational efficiency (less than 0.5 s on 1 min of data using a MATLAB implementation) to extract breathing rates that varied from 12 to 36 breaths/min. The AR method provided the least accurate respiratory rate estimation among the three methods. This work illustrates that both heart rates and normal breathing rates can be accurately derived from a video signal obtained from smartphones, an MP3 player and tablets, with or without a flashlight.
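
    A highly simplified stand-in for this pipeline is sketched below: average the green channel over a small pixel region per frame, then pick the spectral peak in the breathing band. The paper's estimators (AR, VFCDM, CWT) and the amplitude/frequency-modulation analysis are considerably more sophisticated; the plain FFT here is an assumption for illustration only.

```python
import numpy as np

def breathing_rate(frames, fps, roi=50):
    """frames: (T, H, W, 3) RGB fingertip video. Averages the green channel
    over a top-left roi x roi region, then picks the spectral peak in the
    0.2-0.6 Hz band (12-36 breaths/min). Assumes a clip of ~30 s or longer
    so the band contains several FFT bins."""
    g = frames[:, :roi, :roi, 1].mean(axis=(1, 2))
    g = g - g.mean()                              # remove the DC level
    spec = np.abs(np.fft.rfft(g))
    freqs = np.fft.rfftfreq(len(g), d=1.0 / fps)
    band = (freqs >= 0.2) & (freqs <= 0.6)
    return 60.0 * freqs[band][np.argmax(spec[band])]  # breaths per minute
```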

  11. Beacon Spacecraft Operations: Lessons in Automation

    NASA Technical Reports Server (NTRS)

    Sherwood, R.; Schlutsmeyer, A.; Sue, M.; Szijjarto, J.; Wyatt, E. J.

    2000-01-01

    A new approach to mission operations has been flight validated on NASA's Deep Space One (DS1) mission that launched in October 1998. The beacon monitor operations technology is aimed at decreasing the total volume of downlinked engineering telemetry by reducing the frequency of downlink and the volume of data received per pass.

  12. Efficient Use of Video for 3d Modelling of Cultural Heritage Objects

    NASA Astrophysics Data System (ADS)

    Alsadik, B.; Gerke, M.; Vosselman, G.

    2015-03-01

    Currently, there is rapid development in the techniques of automated image-based modelling (IBM), especially in advanced structure-from-motion (SFM) and dense image matching methods, and in camera technology. One possibility is to use video imaging to create 3D reality-based models of cultural heritage architectures and monuments. Practically, video imaging is much easier to apply than still image shooting in IBM techniques because the latter needs thorough planning and proficiency. However, one is faced with three main problems when video image sequences are used for highly detailed modelling and dimensional survey of cultural heritage objects. These problems are: the low resolution of video images, the need to process a large number of short-baseline video images, and blur effects due to camera shake in a significant number of images. In this research, the feasibility of using video images for efficient 3D modelling is investigated. A method is developed to find the minimal significant number of video images in terms of object coverage and blur effect. This reduction in video images is convenient to decrease the processing time and to create a reliable textured 3D model compared with models produced by still imaging. Two experiments for modelling a building and a monument are tested using a video image resolution of 1920×1080 pixels. Internal and external validations of the produced models are applied to determine the final predicted accuracy and the model level of detail. Depending on the object complexity and video imaging resolution, the tests show an achievable average accuracy of between 1 and 5 cm when using video imaging, which is suitable for visualization, virtual museums and low-detail documentation.
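
    One simple way to thin a video into a reduced set of sharp frames is a blur score such as the variance of the Laplacian, a common sharpness heuristic. The sketch below uses that heuristic with OpenCV as a stand-in for the paper's coverage- and blur-based selection; the step size and threshold are illustrative assumptions.

```python
import cv2

def select_keyframes(video_path, step=15, blur_thresh=100.0):
    """Keep every step-th frame whose variance-of-Laplacian sharpness
    exceeds blur_thresh; returns the kept frame indices."""
    cap, kept, idx = cv2.VideoCapture(video_path), [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            if cv2.Laplacian(gray, cv2.CV_64F).var() > blur_thresh:
                kept.append(idx)   # sharp enough to feed the SFM pipeline
        idx += 1
    cap.release()
    return kept
```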

  13. Feasibility study of a real-time operating system for a multichannel MPEG-4 encoder

    NASA Astrophysics Data System (ADS)

    Lehtoranta, Olli; Hamalainen, Timo D.

    2005-03-01

    The feasibility of the DSP/BIOS real-time operating system for a multi-channel MPEG-4 encoder is studied. The performance of two MPEG-4 encoder implementations, with and without the operating system, is compared in terms of encoding frame rate and memory requirements. The effects of task switching frequency and the number of parallel video channels on the encoding frame rate are measured. The research is carried out on a 200 MHz TMS320C6201 fixed-point DSP using the QCIF (176x144 pixels) video format. Compared to a traditional DSP implementation without an operating system, inclusion of DSP/BIOS reduces total system throughput by only 1 QCIF frame/s. The operating system has a 6 KB data memory overhead and a program memory requirement of 15.7 KB. Hence, the overhead is considered low enough for resource-critical mobile video applications.

  14. Using digital images to measure and discriminate small particles in cotton

    NASA Astrophysics Data System (ADS)

    Taylor, Robert A.; Godbey, Luther C.

    1991-02-01

    Images from conventional video systems are being digitized in computers for the analysis of small trash particles in cotton. The method has been developed to automate particle counting and area measurements for bales of cotton prepared for market. Because the video output is linearly proportional to the amount of light reflected, the best spectral band for optimum particle discrimination should be centered at the wavelength of maximum difference between particles and their surroundings. However, due to the spectral distribution of the illumination energy and the detector sensitivity peak, image performance bands were altered. Reflectance from seven mechanically cleaned cotton lint samples and the trash removed from them was examined for spectral contrast in the wavelength range of camera sensitivity. Pixel intensity histograms from the video system are reported for simulated trashmeter area reference samples (painted dots on panels) and for cotton containing trash to demonstrate the particle discrimination mechanism.

  15. Background-Modeling-Based Adaptive Prediction for Surveillance Video Coding.

    PubMed

    Zhang, Xianguo; Huang, Tiejun; Tian, Yonghong; Gao, Wen

    2014-02-01

    The exponential growth of surveillance videos presents an unprecedented challenge for high-efficiency surveillance video coding technology. Compared with the existing coding standards that were basically developed for generic videos, surveillance video coding should be designed to make the best use of the special characteristics of surveillance videos (e.g., relative static background). To do so, this paper first conducts two analyses on how to improve the background and foreground prediction efficiencies in surveillance video coding. Following the analysis results, we propose a background-modeling-based adaptive prediction (BMAP) method. In this method, all blocks to be encoded are firstly classified into three categories. Then, according to the category of each block, two novel inter predictions are selectively utilized, namely, the background reference prediction (BRP) that uses the background modeled from the original input frames as the long-term reference and the background difference prediction (BDP) that predicts the current data in the background difference domain. For background blocks, the BRP can effectively improve the prediction efficiency using the higher quality background as the reference; whereas for foreground-background-hybrid blocks, the BDP can provide a better reference after subtracting its background pixels. Experimental results show that the BMAP can achieve at least twice the compression ratio on surveillance videos as AVC (MPEG-4 Advanced Video Coding) high profile, yet with a slightly additional encoding complexity. Moreover, for the foreground coding performance, which is crucial to the subjective quality of moving objects in surveillance videos, BMAP also obtains remarkable gains over several state-of-the-art methods.
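
    The background-reference idea can be illustrated with a toy model: maintain a running-average background and classify each block by its deviation from it. This is only a schematic stand-in; the paper's background modeling, block categories, and the BRP/BDP predictions are defined inside the codec, and the thresholds below are invented for illustration.

```python
import numpy as np

def update_background(bg, frame, alpha=0.02):
    """Running-average background model (a simple stand-in for the paper's
    background modeling from original input frames)."""
    return (1 - alpha) * bg + alpha * frame

def classify_block(block, bg_block, fg_thresh=12.0):
    """Rough 3-way block classification by mean absolute difference from
    the modeled background (thresholds are illustrative)."""
    diff = np.abs(block.astype(float) - bg_block).mean()
    if diff < fg_thresh / 2:
        return "background"   # candidate for background reference prediction
    elif diff < fg_thresh * 4:
        return "hybrid"       # candidate for background difference prediction
    return "foreground"
```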

  16. Radiometric resolution enhancement by lossy compression as compared to truncation followed by lossless compression

    NASA Technical Reports Server (NTRS)

    Tilton, James C.; Manohar, Mareboyana

    1994-01-01

    Recent advances in imaging technology make it possible to obtain imagery data of the Earth at high spatial, spectral and radiometric resolutions from Earth orbiting satellites. The rate at which the data is collected from these satellites can far exceed the channel capacity of the data downlink. Reducing the data rate to within the channel capacity can often require painful trade-offs in which certain scientific returns are sacrificed for the sake of others. In this paper we model the radiometric version of this trade-off by dropping a specified number of least significant bits from each data pixel and compressing the remaining bits using an appropriate lossless compression technique. We call this approach 'truncation followed by lossless compression' or TLLC. We compare the TLLC approach with applying a lossy compression technique to the data for reducing the data rate to the channel capacity, and demonstrate that each of three different lossy compression techniques (JPEG/DCT, VQ and Model-Based VQ) gives a better effective radiometric resolution than TLLC for a given channel rate.
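
    TLLC itself is easy to state in code. The sketch below drops n least-significant bits and then applies zlib as a stand-in for whatever lossless coder is appropriate; each dropped bit buys rate at the cost of radiometric resolution. The random test image is illustrative only and compresses far worse than real imagery.

```python
import numpy as np
import zlib

def tllc(image, dropped_bits):
    """Truncation followed by lossless compression (TLLC): discard the
    least-significant bits of each pixel, then compress losslessly
    (zlib here stands in for the paper's lossless coders)."""
    truncated = (image >> dropped_bits).astype(image.dtype)
    return zlib.compress(truncated.tobytes())

img = np.random.randint(0, 4096, (512, 512), dtype=np.uint16)  # 12-bit data
for n in range(5):
    print(n, len(tllc(img, n)))  # rate falls as radiometric resolution drops
```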

  17. Onboard Interferometric SAR Processor for the Ka-Band Radar Interferometer (KaRIn)

    NASA Technical Reports Server (NTRS)

    Esteban-Fernandez, Daniel; Rodriquez, Ernesto; Peral, Eva; Clark, Duane I.; Wu, Xiaoqing

    2011-01-01

    An interferometric synthetic aperture radar (SAR) onboard processor concept and algorithm has been developed for the Ka-band radar interferometer (KaRIn) instrument on the Surface Water and Ocean Topography (SWOT) mission. This is a mission-critical subsystem that will perform interferometric SAR processing and multi-look averaging over the oceans to decrease the data rate by three orders of magnitude, and therefore enable the downlink of the radar data to the ground. The onboard processor performs demodulation, range compression, coregistration, and re-sampling, and forms nine azimuth squinted beams. For each of them, an interferogram is generated, including common-band spectral filtering to improve correlation, followed by averaging to the final 1 × 1 km ground-resolution pixel. The onboard processor has been prototyped on a custom FPGA-based cPCI board, which will be part of the radar's digital subsystem. The level of complexity of this technology, dictated by the implementation of interferometric SAR processing at high resolution, the extremely tight level of accuracy required, and its implementation on FPGAs, is unprecedented at the time of this reporting for an onboard processor for flight applications.

  18. World's Most Advanced Planetarium Opens; University Partners Sought

    NASA Astrophysics Data System (ADS)

    Duncan, Douglas K.

    2015-01-01

    The 40-year-old Fiske Planetarium at the Univ. of Colorado has been remodeled as the most advanced planetarium ever built. The 20-m diameter dome features a stunning video image, 8,000 x 8,000 pixels at up to 60 frames per second, produced by 6 JVC projectors. It also features the first US installation of the Megastar IIa opto-mechanical planetarium, which projects 20 million individual stars and 170 deep-sky objects. You can use binoculars indoors and see individual Milky Way stars. The video projectors have high dynamic range, but not as great as the eye. In order to preserve the remarkable Megastar sky while still using video, each projector shines through a computer-controlled variable density filter that extends the dynamic range by about 4 magnitudes. It is therefore possible to show a Mauna Kea quality star field and also beautiful bright videos. Unlike most planetariums, the #1 audience of Fiske is college students - the more than 2,000 who take Introductory Astronomy at Colorado each year. WE ARE SEEKING OTHER UNIVERSITIES WITH FULL-DOME VIDEO PLANETARIUMS to join us in the production of college-level material. We already have a beautiful production studio funded by Hewlett Packard and an experienced full-time Video Producer for Educational Programs. Please seek out Fiske Director Dr. Doug Duncan if interested in possible collaboration.

  19. Recursive algorithms for bias and gain nonuniformity correction in infrared videos.

    PubMed

    Pipa, Daniel R; da Silva, Eduardo A B; Pagliari, Carla L; Diniz, Paulo S R

    2012-12-01

    Infrared focal-plane array (IRFPA) detectors suffer from fixed-pattern noise (FPN) that degrades image quality, which is also known as spatial nonuniformity. FPN is still a serious problem, despite recent advances in IRFPA technology. This paper proposes new scene-based correction algorithms for continuous compensation of bias and gain nonuniformity in FPA sensors. The proposed schemes use recursive least-squares and affine projection techniques that jointly compensate for both the bias and gain of each image pixel, presenting rapid convergence and robustness to noise. Experiments with synthetic and real IRFPA videos show that the proposed solutions are competitive with the state of the art in FPN reduction, presenting recovered images with higher fidelity.

  20. Potential usefulness of a video printer for producing secondary images from digitized chest radiographs

    NASA Astrophysics Data System (ADS)

    Nishikawa, Robert M.; MacMahon, Heber; Doi, Kunio; Bosworth, Eric

    1991-05-01

    Communication between radiologists and clinicians could be improved if a secondary image (copy of the original image) accompanied the radiologic report. In addition, the number of lost original radiographs could be decreased, since clinicians would have less need to borrow films. The secondary image should be simple and inexpensive to produce, while providing sufficient image quality for verification of the diagnosis. We are investigating the potential usefulness of a video printer for producing copies of radiographs, i.e. images printed on thermal paper. The video printer we examined (Seikosha model VP-3500) can provide 64 shades of gray. It is capable of recording images up to 1,280 pixels by 1,240 lines and can accept any raster-type video signal. The video printer was characterized in terms of its linearity, contrast, latitude, resolution, and noise properties. The quality of video-printer images was also evaluated in an observer study using portable chest radiographs. We found that observers could confirm up to 90% of the reported findings in the thorax using video-printer images, when the original radiographs were of high quality. The number of verified findings was diminished when high spatial resolution was required (e.g. detection of a subtle pneumothorax) or when a low-contrast finding was located in the mediastinal area or below the diaphragm (e.g. nasogastric tubes).

  1. Super-Resolution for “Jilin-1” Satellite Video Imagery via a Convolutional Network

    PubMed Central

    Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-01-01

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method’s practicality. Experimental results on “Jilin-1” satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods. PMID:29652838

  2. Super-Resolution for "Jilin-1" Satellite Video Imagery via a Convolutional Network.

    PubMed

    Xiao, Aoran; Wang, Zhongyuan; Wang, Lei; Ren, Yexian

    2018-04-13

    Super-resolution for satellite video attaches much significance to earth observation accuracy, and the special imaging and transmission conditions on the video satellite pose great challenges to this task. The existing deep convolutional neural-network-based methods require pre-processing or post-processing to be adapted to a high-resolution size or pixel format, leading to reduced performance and extra complexity. To this end, this paper proposes a five-layer end-to-end network structure without any pre-processing and post-processing, but imposes a reshape or deconvolution layer at the end of the network to retain the distribution of ground objects within the image. Meanwhile, we formulate a joint loss function by combining the output and high-dimensional features of a non-linear mapping network to precisely learn the desirable mapping relationship between low-resolution images and their high-resolution counterparts. Also, we use satellite video data itself as a training set, which favors consistency between training and testing images and promotes the method's practicality. Experimental results on "Jilin-1" satellite video imagery show that this method demonstrates a superior performance in terms of both visual effects and measure metrics over competing methods.

  3. Ultra-high resolution and high-brightness AMOLED

    NASA Astrophysics Data System (ADS)

    Wacyk, Ihor; Ghosh, Amal; Prache, Olivier; Draper, Russ; Fellowes, Dave

    2012-06-01

    As part of its continuing effort to improve both the resolution and optical performance of AMOLED microdisplays, eMagin has recently developed an SXGA (1280×3×1024) microdisplay under a US Army RDECOM CERDEC NVESD contract that combines the world's smallest OLED pixel pitch with an ultra-high brightness green OLED emitter. This development is aimed at next-generation HMD systems with "see-through" and daylight imaging requirements. The OLED pixel array is built on a 0.18-micron CMOS backplane and contains over 4 million individually addressable pixels with a pixel pitch of 2.7 × 8.1 microns, resulting in an active area of 0.52 inches diagonal. Using both spatial and temporal enhancement, the display can provide over 10 bits of gray-level control for high dynamic range applications. The new pixel design also enables the future implementation of a full-color QSXGA (2560 × RGB × 2048) microdisplay in an active area of only 1.05 inches diagonal. A low-power serialized low-voltage-differential-signaling (LVDS) interface is integrated into the display for use as a remote video link for tethered systems. The new SXGA backplane has been combined with the high-brightness green OLED device developed by eMagin under an NVESD contract. This OLED device has produced an output brightness of more than 8000 fL with all pixels on; lifetime measurements are currently underway and will be presented at the meeting. This paper will describe the operational features and first optical and electrical test results of the new SXGA demonstrator microdisplay.

  4. Joint minimization of uplink and downlink whole-body exposure dose in indoor wireless networks.

    PubMed

    Plets, D; Joseph, W; Vanhecke, K; Vermeeren, G; Wiart, J; Aerts, S; Varsier, N; Martens, L

    2015-01-01

    The total whole-body exposure dose in indoor wireless networks is minimized. For the first time, indoor wireless networks are designed and simulated for a minimal exposure dose, where both uplink and downlink are considered. The impact of the minimization is numerically assessed for four scenarios: two WiFi configurations with different throughputs, a Universal Mobile Telecommunications System (UMTS) configuration for phone call traffic, and a Long-Term Evolution (LTE) configuration with a high data rate. Also, the influence of the uplink usage on the total absorbed dose is characterized. Downlink dose reductions of at least 75% are observed when adding more base stations with a lower transmit power. Total dose reductions decrease with increasing uplink usage for WiFi due to the lack of uplink power control but are maintained for LTE and UMTS. Uplink doses become dominant over downlink doses for usages of only a few seconds for WiFi. For UMTS and LTE, an almost continuous uplink usage is required to have a significant effect on the total dose, thanks to the power control mechanism.

  5. Joint Minimization of Uplink and Downlink Whole-Body Exposure Dose in Indoor Wireless Networks

    PubMed Central

    Plets, D.; Joseph, W.; Vanhecke, K.; Vermeeren, G.; Wiart, J.; Aerts, S.; Varsier, N.; Martens, L.

    2015-01-01

    The total whole-body exposure dose in indoor wireless networks is minimized. For the first time, indoor wireless networks are designed and simulated for a minimal exposure dose, where both uplink and downlink are considered. The impact of the minimization is numerically assessed for four scenarios: two WiFi configurations with different throughputs, a Universal Mobile Telecommunications System (UMTS) configuration for phone call traffic, and a Long-Term Evolution (LTE) configuration with a high data rate. Also, the influence of the uplink usage on the total absorbed dose is characterized. Downlink dose reductions of at least 75% are observed when adding more base stations with a lower transmit power. Total dose reductions decrease with increasing uplink usage for WiFi due to the lack of uplink power control but are maintained for LTE and UMTS. Uplink doses become dominant over downlink doses for usages of only a few seconds for WiFi. For UMTS and LTE, an almost continuous uplink usage is required to have a significant effect on the total dose, thanks to the power control mechanism. PMID:25793213

  6. On the definition of adapted audio/video profiles for high-quality video calling services over LTE/4G

    NASA Astrophysics Data System (ADS)

    Ndiaye, Maty; Quinquis, Catherine; Larabi, Mohamed Chaker; Le Lay, Gwenael; Saadane, Hakim; Perrine, Clency

    2014-01-01

    During the last decade, the important advances and widespread availability of mobile technology (operating systems, GPUs, terminal resolution and so on) have encouraged the fast development of voice and video services like video-calling. While multimedia services have grown substantially on mobile devices, the resulting increase in data consumption is leading to the saturation of mobile networks. In order to provide data at high bit-rates and maintain performance as close as possible to that of traditional networks, the 3GPP (3rd Generation Partnership Project) worked on a high-performance standard for mobile called Long Term Evolution (LTE). In this paper, we aim at expressing recommendations related to audio and video media profiles (selection of audio and video codecs, bit-rates, frame-rates, audio and video formats) for a typical video-calling service held over LTE/4G mobile networks. These profiles are defined according to the targeted devices (smartphones, tablets), so as to ensure the best possible quality of experience (QoE). The obtained results indicate that for the CIF format (352 x 288 pixels), which is usually used for smartphones, the VP8 codec provides better image quality than the H.264 codec at low bitrates (from 128 to 384 kbps); however, for sequences with high motion, H.264 in slow mode is preferred. Regarding audio, better results are globally achieved using wideband codecs, which offer good quality, except for the Opus codec (at 12.2 kbps).

  7. Eulerian frequency analysis of structural vibrations from high-speed video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Venanzoni, Andrea; De Ryck, Laurent

    An approach for the analysis of the frequency content of structural vibrations from high-speed video recordings is proposed. The techniques and tools proposed rely on an Eulerian approach, that is, using the time history of pixels independently to analyse structural motion, as opposed to Lagrangian approaches, where the motion of the structure is tracked in time. The starting point is an existing Eulerian motion magnification method, which consists in decomposing the video frames into a set of spatial scales through a so-called Laplacian pyramid [1]. Each scale, or level, can be amplified independently to reconstruct a magnified motion of the observed structure. The approach proposed here provides two analysis tools or pre-amplification steps. The first tool provides a representation of the global frequency content of a video per pyramid level. This may be further enhanced by applying an angular filter in the spatial frequency domain to each frame of the video before the Laplacian pyramid decomposition, which allows for the identification of the frequency content of the structural vibrations in a particular direction of space. This proposed tool complements the existing Eulerian magnification method by amplifying selectively the levels containing relevant motion information with respect to their frequency content. This magnifies the displacement while limiting the noise contribution. The second tool is a holographic representation of the frequency content of a vibrating structure, yielding a map of the predominant frequency components across the structure. In contrast to the global frequency content representation of the video, this tool provides a local analysis of the periodic gray scale intensity changes of the frame in order to identify the vibrating parts of the structure and their main frequencies. Validation cases are provided and the advantages and limits of the approaches are discussed. The first validation case consists of the frequency content retrieval of the tip of a shaker, excited at selected fixed frequencies. The goal of this setup is to retrieve the frequencies at which the tip is excited. The second validation case consists of two thin metal beams connected to a randomly excited bar. It is shown that the holographic representation visually highlights the predominant frequency content of each pixel and locates the global frequencies of the motion, thus retrieving the natural frequencies for each beam.
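
    The per-pixel ("holographic") frequency map has a compact Eulerian core, sketched below with a plain temporal FFT of the gray-level history of each pixel. The Laplacian pyramid decomposition, angular filtering, and magnification steps of the paper are omitted here; this is an illustrative reduction, not the authors' implementation.

```python
import numpy as np

def dominant_frequency_map(frames, fps):
    """Per-pixel temporal FFT of gray-level intensity; returns a map of the
    strongest frequency at each pixel. frames: (T, H, W) grayscale video."""
    x = frames - frames.mean(axis=0)              # remove the static component
    spec = np.abs(np.fft.rfft(x, axis=0))         # (F, H, W) magnitude spectra
    freqs = np.fft.rfftfreq(frames.shape[0], d=1.0 / fps)
    return freqs[np.argmax(spec[1:], axis=0) + 1] # skip the DC bin
```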

  8. Evaluation of experimental UAV video change detection

    NASA Astrophysics Data System (ADS)

    Bartelsen, J.; Saur, G.; Teutsch, C.

    2016-10-01

    During the last ten years, the availability of images acquired from unmanned aerial vehicles (UAVs) has been continuously increasing due to the improvements and economic success of flight and sensor systems. From our point of view, reliable and automatic image-based change detection may contribute to overcoming several challenging problems in military reconnaissance, civil security, and disaster management. Changes within a scene can be caused by functional activities, e.g., footprints or skid marks, excavations, or humidity penetration; these might be recognizable in aerial images, but are almost overlooked when change detection is executed manually. With respect to the circumstances, these kinds of changes may be an indication of sabotage, terroristic activity, or threatening natural disasters. Although image-based change detection is possible from both ground and aerial perspectives, in this paper we primarily address the latter. We have applied an extended approach to change detection as described by Saur and Kruger [1] and Saur et al. [2], and have built upon the ideas of Saur and Bartelsen [3]. The commercial simulation environment Virtual Battle Space 3 (VBS3) is used to simulate aerial "before" and "after" image acquisition concerning flight path, weather conditions and objects within the scene and to obtain synthetic videos. Video frames, which depict the same part of the scene, including "before" and "after" changes and not necessarily from the same perspective, are registered pixel-wise against each other by a photogrammetric concept, which is based on a homography. The pixel-wise registration is used to apply an automatic difference analysis, which, to a limited extent, is able to suppress typical errors caused by imprecise frame registration, sensor noise, vegetation and especially parallax effects. The primary concern of this paper is to seriously evaluate the possibilities and limitations of our current approach for image-based change detection with respect to the flight path, viewpoint change and parametrization. Hence, based on synthetic "before" and "after" videos of a simulated scene, we estimated the precision and recall of automatically detected changes. In addition and based on our approach, we illustrate the results showing the change detection in short, but real video sequences. Future work will improve the photogrammetric approach for frame registration, and extensive real video material, capable of change detection, will be acquired.
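
    Pixel-wise registration through a homography can be prototyped compactly with OpenCV. The sketch below (grayscale frames; ORB features chosen here purely for illustration) registers an "after" frame onto a "before" frame with a RANSAC-fitted homography and forms a difference image; the paper's photogrammetric registration and error suppression are more involved.

```python
import cv2
import numpy as np

def register_and_diff(before, after, min_matches=10):
    """Register a grayscale 'after' frame onto 'before' via a RANSAC-fitted
    homography, then take the absolute difference."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(before, None)
    k2, d2 = orb.detectAndCompute(after, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < min_matches:
        return None                                   # registration failed
    src = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    warped = cv2.warpPerspective(after, H, before.shape[::-1])
    return cv2.absdiff(before, warped)                # raw change map
```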

  9. Video segmentation using keywords

    NASA Astrophysics Data System (ADS)

    Ton-That, Vinh; Vong, Chi-Tai; Nguyen-Dao, Xuan-Truong; Tran, Minh-Triet

    2018-04-01

    At the DAVIS-2016 Challenge, many state-of-the-art video segmentation methods achieve promising results, but they still depend heavily on annotated frames to distinguish between background and foreground. It takes a lot of time and effort to create these frames exactly. In this paper, we introduce a method to segment objects from video based on keywords given by the user. First, we use a real-time object detection system, YOLOv2, to identify regions containing objects whose labels match the given keywords in the first frame. Then, for each region identified in the previous step, we use the Pyramid Scene Parsing Network to assign each pixel as foreground or background. These frames can be used as input frames for the Object Flow algorithm to perform segmentation on the entire video. We conduct experiments on a subset of the DAVIS-2016 dataset at half its original size, which shows that our method can handle many popular classes in the PASCAL VOC 2012 dataset with acceptable accuracy, about 75.03%. We suggest wide testing in combination with other methods to improve this result in the future.

  10. A Comprehensive Motion Estimation Technique for the Improvement of EIS Methods Based on the SURF Algorithm and Kalman Filter.

    PubMed

    Cheng, Xuemin; Hao, Qun; Xie, Mengdi

    2016-04-07

    Video stabilization is an important technology for removing undesired motion in videos. This paper presents a comprehensive motion estimation method for electronic image stabilization techniques, integrating the speeded up robust features (SURF) algorithm, modified random sample consensus (RANSAC), and the Kalman filter, and also taking camera scaling and conventional camera translation and rotation into full consideration. Using SURF in sub-pixel space, feature points were located and then matched. The false matched points were removed by modified RANSAC. Global motion was estimated by using the feature points and modified cascading parameters, which reduced the accumulated errors in a series of frames and improved the peak signal to noise ratio (PSNR) by 8.2 dB. A specific Kalman filter model was established by considering the movement and scaling of scenes. Finally, video stabilization was achieved with filtered motion parameters using the modified adjacent frame compensation. The experimental results proved that the target images were stabilized even when the vibrating amplitudes of the video become increasingly large.
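
    A compact approximation of the motion-estimation core is sketched below. ORB substitutes for SURF (which is patented and lives in opencv-contrib), OpenCV's built-in RANSAC stands in for the paper's modified variant, and the Kalman trajectory smoothing and frame compensation are omitted; all of these substitutions are assumptions for the sake of a runnable example.

```python
import cv2
import numpy as np

def frame_motion(prev_gray, curr_gray):
    """Estimate inter-frame translation (dx, dy) from feature matches with a
    RANSAC-fitted partial affine model (rotation + scale + translation)."""
    orb = cv2.ORB_create(1000)
    k1, d1 = orb.detectAndCompute(prev_gray, None)
    k2, d2 = orb.detectAndCompute(curr_gray, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, _ = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    if M is None:
        return 0.0, 0.0                 # estimation failed; assume no motion
    return M[0, 2], M[1, 2]             # translation; M also encodes rot/scale
```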

  11. A Real-Time Image Acquisition And Processing System For A RISC-Based Microcomputer

    NASA Astrophysics Data System (ADS)

    Luckman, Adrian J.; Allinson, Nigel M.

    1989-03-01

    A low cost image acquisition and processing system has been developed for the Acorn Archimedes microcomputer. Using a Reduced Instruction Set Computer (RISC) architecture, the ARM (Acorn Risc Machine) processor provides instruction speeds suitable for image processing applications. The associated improvement in data transfer rate has allowed real-time video image acquisition without the need for frame-store memory external to the microcomputer. The system is comprised of real-time video digitising hardware which interfaces directly to the Archimedes memory, and software to provide an integrated image acquisition and processing environment. The hardware can digitise a video signal at up to 640 samples per video line with programmable parameters such as sampling rate and gain. Software support includes a work environment for image capture and processing with pixel, neighbourhood and global operators. A friendly user interface is provided with the help of the Archimedes Operating System WIMP (Windows, Icons, Mouse and Pointer) Manager. Windows provide a convenient way of handling images on the screen and program control is directed mostly by pop-up menus.

  12. Droplet morphometry and velocimetry (DMV): a video processing software for time-resolved, label-free tracking of droplet parameters.

    PubMed

    Basu, Amar S

    2013-05-21

    Emerging assays in droplet microfluidics require the measurement of parameters such as drop size, velocity, trajectory, shape deformation, fluorescence intensity, and others. While micro particle image velocimetry (μPIV) and related techniques are suitable for measuring flow using tracer particles, no tool exists for tracking droplets at the granularity of a single entity. This paper presents droplet morphometry and velocimetry (DMV), digital video processing software for time-resolved droplet analysis. Droplets are identified through a series of image processing steps which operate on transparent, translucent, fluorescent, or opaque droplets. The steps include background image generation, background subtraction, edge detection, small object removal, morphological close and fill, and shape discrimination. A frame correlation step then links droplets spanning multiple frames via a nearest neighbor search with user-defined matching criteria. Each step can be individually tuned for maximum compatibility. For each droplet found, DMV provides a time-history of 20 different parameters, including trajectory, velocity, area, dimensions, shape deformation, orientation, nearest neighbour spacing, and pixel statistics. The data can be reported via scatter plots, histograms, and tables at the granularity of individual droplets or by statistics accrued over the population. We present several case studies from industry and academic labs, including the measurement of 1) size distributions and flow perturbations in a drop generator, 2) size distributions and mixing rates in drop splitting/merging devices, 3) efficiency of single cell encapsulation devices, 4) position tracking in electrowetting operations, 5) chemical concentrations in a serial drop dilutor, 6) drop sorting efficiency of a tensiophoresis device, 7) plug length and orientation of nonspherical plugs in a serpentine channel, and 8) high throughput tracking of >250 drops in a reinjection system. Performance metrics show that the highest accuracy and precision are obtained when the video resolution is >300 pixels per drop. Analysis time increases proportionally with video resolution. The current version of the software provides throughputs of 2-30 fps, suggesting the potential for real time analysis.

  13. Highly Efficient Multi Channel Packet Forwarding with Round Robin Intermittent Periodic Transmit for Multihop Wireless Backhaul Networks

    PubMed Central

    Furukawa, Hiroshi

    2017-01-01

    Round Robin based Intermittent Periodic Transmit (RR-IPT) has been proposed to achieve highly efficient multi-hop relays in multi-hop wireless backhaul networks (MWBN) where relay nodes are two-dimensionally deployed. This paper investigates a new multi-channel packet scheduling and forwarding scheme for RR-IPT. Downlink traffic is forwarded by RR-IPT via one of the channels, while uplink traffic and part of the downlink traffic are accommodated in the other channel. Comparing IPT with carrier sense multiple access with collision avoidance (CSMA/CA) for the uplink/downlink packet forwarding channel, IPT is more effective in reducing the packet loss rate, whereas CSMA/CA is better in terms of system throughput and packet delay. PMID:29137164

  14. Scene-based nonuniformity correction using local constant statistics.

    PubMed

    Zhang, Chao; Zhao, Wenyi

    2008-06-01

    In scene-based nonuniformity correction, the statistical approach assumes that all possible values of the true-scene pixel are seen at each pixel location. This global-constant-statistics assumption does not distinguish fixed pattern noise from spatial variations in the average image, which often causes "ghosting" artifacts in the corrected images, since real spatial variations are treated as noise. We introduce a new statistical method to reduce these ghosting artifacts. Our method uses a local-constant-statistics assumption: the temporal signal distribution is taken to be constant within a local region around each pixel, but is allowed to vary on larger scales. Under the assumption that the fixed pattern noise is concentrated in a higher spatial-frequency domain than the distribution variation, we apply a wavelet method to the gain and offset images of the noise and separate the pattern noise from the spatial variations in the temporal distribution of the scene. We compare the results to the global-constant-statistics method using a clean sequence with large artificial pattern noise. We also apply the method to a challenging CCD video sequence and an LWIR sequence to show how effective it is in reducing noise and ghosting artifacts.
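
    One plausible reading of the wavelet step is sketched below: the detail (high-spatial-frequency) coefficients of a temporal-mean image are attributed to fixed pattern noise, while the approximation coefficients are kept as genuine scene variation. The wavelet choice, decomposition level, and synthetic data are assumptions, not the authors' settings:

```python
# Speculative sketch (assumed wavelet, level, and synthetic data): attribute
# the high-spatial-frequency wavelet detail of a temporal-mean image to fixed
# pattern noise, keeping the low-frequency part as genuine scene variation.
import numpy as np
import pywt

def split_fpn(mean_image, wavelet="db2", level=2):
    """Return (low-frequency scene estimate, high-frequency FPN estimate)."""
    coeffs = pywt.wavedec2(mean_image, wavelet, level=level)
    # Zero all detail coefficients to keep only the smooth scene component.
    smooth = [coeffs[0]] + [tuple(np.zeros_like(d) for d in det)
                            for det in coeffs[1:]]
    scene = pywt.waverec2(smooth, wavelet)[:mean_image.shape[0],
                                           :mean_image.shape[1]]
    return scene, mean_image - scene

frames = np.random.rand(100, 128, 128)       # synthetic video stack
scene, fpn = split_fpn(frames.mean(axis=0))  # temporal mean per pixel
corrected = frames - fpn                     # subtract the estimated offset FPN
```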

  15. Evaluation and display of polarimetric image data using long-wave cooled microgrid focal plane arrays

    NASA Astrophysics Data System (ADS)

    Bowers, David L.; Boger, James K.; Wellems, L. David; Black, Wiley T.; Ortega, Steve E.; Ratliff, Bradley M.; Fetrow, Matthew P.; Hubbs, John E.; Tyo, J. Scott

    2006-05-01

    Recent developments for Long Wave InfraRed (LWIR) imaging polarimeters include incorporating a microgrid polarizer array onto the focal plane array (FPA). Inherent advantages over typical polarimeters include packaging and instantaneous acquisition of thermal and polarimetric information. This allows for real-time video of thermal and polarimetric products. The microgrid approach has inherent polarization measurement error due to the spatial sampling of a non-uniform scene, residual pixel-to-pixel variations in the gain-corrected responsivity and in the noise equivalent input (NEI), and variations in the pixel-to-pixel micro-polarizer performance. The Degree of Linear Polarization (DoLP) is highly sensitive to these parameters and is consequently used as a metric to explore instrument sensitivities. Image processing and fusion techniques are used to take advantage of the inherent thermal and polarimetric sensing capability of this FPA, providing additional scene information in real time. Optimal operating conditions are employed to improve FPA uniformity and sensitivity. Data from two DRS Infrared Technologies, L.P. (DRS) microgrid polarizer HgCdTe FPAs are presented. One FPA resides in a liquid nitrogen (LN2) pour-filled dewar with an 80 K nominal operating temperature. The other FPA resides in a cryogenic (cryo) dewar with a 60 K nominal operating temperature.
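
    For reference, the DoLP metric follows from the first three Stokes parameters measured by the four micro-polarizer orientations. A minimal sketch, assuming a 2x2 super-pixel layout with 0-, 45-, 135-, and 90-degree pixels (the actual DRS FPA layout may differ):

```python
# Minimal DoLP sketch assuming a 2x2 super-pixel microgrid with polarizers at
# 0, 45, 135, and 90 degrees (the actual DRS FPA layout may differ).
import numpy as np

def dolp_from_microgrid(raw):
    """raw: 2-D array with the four polarizer orientations interleaved 2x2."""
    i0   = raw[0::2, 0::2].astype(float)   # assumed 0-deg pixels
    i45  = raw[0::2, 1::2].astype(float)   # assumed 45-deg pixels
    i135 = raw[1::2, 0::2].astype(float)   # assumed 135-deg pixels
    i90  = raw[1::2, 1::2].astype(float)   # assumed 90-deg pixels
    s0 = 0.5 * (i0 + i45 + i90 + i135)     # total intensity (Stokes S0)
    s1 = i0 - i90                          # Stokes S1
    s2 = i45 - i135                        # Stokes S2
    return np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
```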

  16. Using Polynomials to Simplify Fixed Pattern Noise and Photometric Correction of Logarithmic CMOS Image Sensors

    PubMed Central

    Li, Jing; Mahmoodi, Alireza; Joseph, Dileepan

    2015-01-01

    An important class of complementary metal-oxide-semiconductor (CMOS) image sensors is the one in which pixel responses are monotonic nonlinear functions of light stimuli. This class includes various logarithmic architectures, which are easily capable of wide dynamic range imaging, at video rates, but which are vulnerable to image quality issues. To minimize fixed pattern noise (FPN) and maximize photometric accuracy, pixel responses must be calibrated and corrected for mismatch and process variation during fabrication. Unlike literature approaches, which employ circuit-based models of varying complexity, this paper introduces a novel approach based on low-degree polynomials. Although each pixel may have a highly nonlinear response, an approximately-linear FPN calibration is possible by exploiting the monotonic nature of imaging. Moreover, FPN correction requires only arithmetic, and an optimal fixed-point implementation is readily derived, subject to a user-specified number of bits per pixel. Using a monotonic spline, involving cubic polynomials, photometric calibration is also possible without a circuit-based model, and fixed-point photometric correction requires only a look-up table. The approach is experimentally validated with a logarithmic CMOS image sensor and is compared to a leading approach from the literature. The novel approach proves effective and efficient. PMID:26501287
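
    The calibration idea, as described, reduces to fitting a low-degree polynomial per pixel that maps its monotonic response onto a common reference. The sketch below (illustrative degree, reference choice, and data, not the paper's exact procedure) shows the arithmetic-only correction this enables:

```python
# Illustrative sketch of the polynomial idea (degree and reference choice are
# assumptions): map each pixel's monotonic response onto the array-mean
# response with a low-degree polynomial, so correction is plain arithmetic.
import numpy as np

def calibrate(responses, degree=1):
    """responses: (n_exposures, n_pixels) readings under uniform stimuli.
    Returns per-pixel coefficients mapping pixel response -> reference."""
    reference = responses.mean(axis=1)        # array-mean response per exposure
    coeffs = np.empty((responses.shape[1], degree + 1))
    for p in range(responses.shape[1]):
        coeffs[p] = np.polyfit(responses[:, p], reference, degree)
    return coeffs

def correct(frame, coeffs):
    """Apply each pixel's polynomial to a flattened frame of raw readings."""
    return np.array([np.polyval(coeffs[p], v) for p, v in enumerate(frame)])

raw = np.log1p(np.linspace(1, 1e4, 12))[:, None] * np.random.uniform(
    0.9, 1.1, 200) + np.random.uniform(-0.1, 0.1, 200)  # synthetic log pixels
flat = correct(raw[0], calibrate(raw))                   # FPN-corrected frame
```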

  17. Artificial brains. A million spiking-neuron integrated circuit with a scalable communication network and interface.

    PubMed

    Merolla, Paul A; Arthur, John V; Alvarez-Icaza, Rodrigo; Cassidy, Andrew S; Sawada, Jun; Akopyan, Filipp; Jackson, Bryan L; Imam, Nabil; Guo, Chen; Nakamura, Yutaka; Brezzo, Bernard; Vo, Ivan; Esser, Steven K; Appuswamy, Rathinakumar; Taba, Brian; Amir, Arnon; Flickner, Myron D; Risk, William P; Manohar, Rajit; Modha, Dharmendra S

    2014-08-08

    Inspired by the brain's structure, we have developed an efficient, scalable, and flexible non-von Neumann architecture that leverages contemporary silicon technology. To demonstrate, we built a 5.4-billion-transistor chip with 4096 neurosynaptic cores interconnected via an intrachip network that integrates 1 million programmable spiking neurons and 256 million configurable synapses. Chips can be tiled in two dimensions via an interchip communication interface, seamlessly scaling the architecture to a cortexlike sheet of arbitrary size. The architecture is well suited to many applications that use complex neural networks in real time, for example, multiobject detection and classification. With 400-pixel-by-240-pixel video input at 30 frames per second, the chip consumes 63 milliwatts. Copyright © 2014, American Association for the Advancement of Science.

  18. Calibration method for video and radiation imagers

    DOEpatents

    Cunningham, Mark F [Oak Ridge, TN; Fabris, Lorenzo [Knoxville, TN; Gee, Timothy F [Oak Ridge, TN; Goddard, Jr., James S.; Karnowski, Thomas P [Knoxville, TN; Ziock, Klaus-peter [Clinton, TN

    2011-07-05

    The relationship between the high energy radiation imager pixel (HERIP) coordinate and the real-world x-coordinate is determined by a least squares fit between the HERIP x-coordinate and the measured real-world x-coordinates of calibration markers that emit high energy radiation and reflect visible light. Upon calibration, a high energy radiation imager pixel position may be determined based on a real-world coordinate of a moving vehicle. Further, a scale parameter for said high energy radiation imager may be determined based on the real-world coordinate. The scale parameter depends on the y-coordinate of the moving vehicle as provided by a visible light camera. The high energy radiation imager may be employed to detect radiation from moving vehicles in multiple lanes, which correspondingly have different distances to the high energy radiation imager.

  19. Remote sensing of Alaskan boreal forest fires at the pixel and sub-pixel level: multi-sensor approaches and sensitivity analysis

    NASA Astrophysics Data System (ADS)

    Waigl, C.; Stuefer, M.; Prakash, A.

    2013-12-01

    Wildfire is the main disturbance regime of the boreal forest ecosystem, a region acutely sensitive to climate change. Large fires impact the carbon cycle, permafrost, and air quality on a regional and even hemispheric scale. Because of their significance as a hazard to human health and economic activity, monitoring wildfires is relevant not only to science but also to government agencies. The goal of this study is to develop pathways towards a near real-time assessment of fire characteristics in the boreal zones of Alaska based on satellite remote sensing data. We map the location of active burn areas and derive fire parameters such as fire temperature, intensity, stage (smoldering or flaming), emission injection points, carbon consumed, and energy released. For monitoring wildfires in the sub-arctic region, we benefit from the high temporal resolution of data (as high as 8 images a day) from MODIS on the Aqua and Terra platforms and VIIRS on NPP/Suomi, downlinked and processed to level 1 by the Geographic Information Network of Alaska at the University of Alaska Fairbanks. To transcend the low spatial resolution of these sensors, a sub-pixel analysis is carried out. By applying techniques from Bayesian inverse modeling to Dozier's two-component approach, the uncertainties and the sensitivity of the retrieved fire temperatures and fractional pixel areas to background temperature and atmospheric factors are assessed. A set of test cases - large fires from the 2004 to 2013 fire seasons complemented by a selection of smaller burns at the lower end of the MODIS detection threshold - is used to evaluate the methodology. While the VIIRS principal fire detection band M13 (centered at 4.05 μm, similar to MODIS bands 21 and 22 at 3.959 μm) does not usually saturate for Alaskan wildfire areas, the thermal IR band M15 (10.763 μm, comparable to MODIS band 31 at 11.03 μm) does saturate for some, though not all, of the fire pixels of intense burns. As this limits the application of the classical version of Dozier's model for this particular band combination to lower intensity and smaller fires, or smaller fractional fire areas, other VIIRS band combinations are evaluated as well. Furthermore, the higher spatial resolution of the VIIRS sensor compared to MODIS and its constant along-scan resolution DNB (day/night band) dataset provide additional options for fire mapping, detection and quantification. Higher spatial resolution satellite-borne remote sensing data is used to validate the pixel- and sub-pixel-level analysis and to assess lower detection thresholds. For each sample fire, moderate-resolution imagery is paired with data from the ASTER instrument (simultaneous with MODIS data on the Terra platform) and/or Landsat scenes acquired in close temporal proximity. To complement the satellite-borne imagery, aerial surveys using a FLIR thermal imaging camera with a broadband TIR sensor provide additional ground truthing and a validation of fire location and background temperature.
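
    Dozier's two-component approach mentioned above models the pixel radiance in two bands as a mixture of fire and background blackbody radiance and inverts for fire temperature and fractional area. A hedged numerical sketch follows; band centers are taken from the VIIRS M13 and M15 values quoted in the abstract, while the synthetic observation and solver start point are assumptions:

```python
# Hedged numerical sketch of Dozier's two-band retrieval; band centers follow
# the VIIRS M13 / M15 values quoted above, and the synthetic observation and
# solver start point are assumptions.
import numpy as np
from scipy.optimize import fsolve

H, C, K = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann

def planck(lam, temp):
    """Blackbody spectral radiance at wavelength lam (m), temperature temp (K)."""
    return (2 * H * C**2 / lam**5) / (np.exp(H * C / (lam * K * temp)) - 1)

def dozier_retrieve(l_mir, l_tir, t_bg, lam_mir=4.05e-6, lam_tir=10.763e-6):
    """Solve L = p*B(T_fire) + (1-p)*B(T_bg) in both bands for (T_fire, p)."""
    def residuals(x):
        t_fire, p = x
        return [p * planck(lam_mir, t_fire) + (1 - p) * planck(lam_mir, t_bg) - l_mir,
                p * planck(lam_tir, t_fire) + (1 - p) * planck(lam_tir, t_bg) - l_tir]
    return fsolve(residuals, x0=[800.0, 0.01])

# Forward test: a 900 K fire covering 0.5% of a 290 K background pixel.
obs = [0.005 * planck(lam, 900.0) + 0.995 * planck(lam, 290.0)
       for lam in (4.05e-6, 10.763e-6)]
t_fire, frac = dozier_retrieve(obs[0], obs[1], t_bg=290.0)
```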

  20. Liquid crystal display (LCD) drive electronics

    NASA Astrophysics Data System (ADS)

    Loudin, Jeffrey A.; Duffey, Jason N.; Booth, Joseph J.; Jones, Brian K.

    1995-03-01

    A new drive circuit for the liquid crystal display (LCD) of the InFocus TVT-6000 video projector is currently under development at the U.S. Army Missile Command. The new circuit will allow individual pixel control of the LCD and increase the frame rate by a factor of two while yielding a major reduction in space and power requirements. This paper will discuss results of the effort to date.

  1. Optimal design and critical analysis of a high resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Bertrand, Jérôme; Schubert, Arno; Blondé, Etienne

    2011-03-01

    A plenoptic camera is a natural multi-view acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step, and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered 5 video views of 820x410 pixels. The main limitation in our prototype is view crosstalk due to optical aberrations, which reduces the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern, and analysis programs which investigate the view mapping and the amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with sub-micrometer precision and to mark the pixels of the sensor where the views do not register properly.

  2. Optimal design and critical analysis of a high-resolution video plenoptic demonstrator

    NASA Astrophysics Data System (ADS)

    Drazic, Valter; Sacré, Jean-Jacques; Schubert, Arno; Bertrand, Jérôme; Blondé, Etienne

    2012-01-01

    A plenoptic camera is a natural multiview acquisition device also capable of measuring distances by correlating a set of images acquired under different parallaxes. Its single lens and single sensor architecture have two downsides: limited resolution and limited depth sensitivity. As a first step and in order to circumvent those shortcomings, we investigated how the basic design parameters of a plenoptic camera optimize both the resolution of each view and its depth-measuring capability. In a second step, we built a prototype based on a very high resolution Red One® movie camera with an external plenoptic adapter and a relay lens. The prototype delivered five video views of 820 × 410. The main limitation in our prototype is view crosstalk due to optical aberrations that reduce the depth accuracy performance. We simulated some limiting optical aberrations and predicted their impact on the performance of the camera. In addition, we developed adjustment protocols based on a simple pattern and analysis of programs that investigated the view mapping and amount of parallax crosstalk on the sensor on a pixel basis. The results of these developments enabled us to adjust the lenslet array with a submicrometer precision and to mark the pixels of the sensor where the views do not register properly.

  3. Video-rate or high-precision: a flexible range imaging camera

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian A.; Cree, Michael J.; Carnegie, Dale A.; Payne, Andrew D.; Conroy, Richard M.; Godbaz, John P.; Jongenelen, Adrian P. P.

    2008-02-01

    A range imaging camera produces an output similar to a digital photograph, but every pixel in the image contains distance information as well as intensity. This is useful for measuring the shape, size and location of objects in a scene, hence is well suited to certain machine vision applications. Previously we demonstrated a heterodyne range imaging system operating in a relatively high-resolution (512-by-512 pixel) and high-precision (0.4 mm best case) configuration, but with a slow measurement rate (one measurement every 10 s). Although this high-precision range imaging is useful for some applications, the low acquisition speed is limiting in many situations. The system's frame rate and length of acquisition are fully configurable in software, which means the measurement rate can be increased by compromising precision and image resolution. In this paper we demonstrate the flexibility of our range imaging system by showing examples of high-precision ranging at slow acquisition speeds and video-rate ranging with reduced ranging precision and image resolution. We also show that the heterodyne approach and the use of more than four samples per beat cycle provide better linearity than the traditional homodyne quadrature detection approach. Finally, we comment on practical issues of frame rate and beat signal frequency selection.
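
    The heterodyne measurement reduces to estimating the phase of the sampled beat signal at each pixel: with N samples per beat cycle, the phase of the fundamental DFT bin gives the range. A minimal sketch, with an assumed modulation frequency and synthetic frames:

```python
# Illustrative sketch (assumed modulation frequency and synthetic frames):
# with N samples per beat cycle, the phase of the fundamental DFT bin of the
# per-pixel beat signal encodes the round-trip delay, hence the range.
import numpy as np

def range_from_samples(samples, mod_freq_hz, c=2.998e8):
    """samples: (N, rows, cols) intensity frames spanning one beat cycle."""
    n = samples.shape[0]
    k = np.arange(n).reshape(-1, 1, 1)
    bin1 = np.sum(samples * np.exp(-2j * np.pi * k / n), axis=0)  # DFT bin 1
    phase = np.mod(np.angle(bin1), 2 * np.pi)                     # beat phase
    return phase * c / (4 * np.pi * mod_freq_hz)                  # phase -> m

frames = np.random.rand(8, 64, 64)  # eight samples per beat cycle (>4 helps linearity)
distances = range_from_samples(frames, mod_freq_hz=40e6)
```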

  4. Assessing Mesoscale Volcanic Aviation Hazards using ASTER

    NASA Astrophysics Data System (ADS)

    Pieri, D.; Gubbels, T.; Hufford, G.; Olsson, P.; Realmuto, V.

    2006-12-01

    The Advanced Spaceborne Thermal Emission and Reflection (ASTER) imager onboard the NASA Terra Spacecraft is a joint project of the Japanese Ministry for Economy, Trade, and Industry (METI) and NASA. ASTER has acquired over one million multi-spectral 60 km by 60 km images of the earth over the last six years. It consists of three sub-instruments: (a) a four channel VNIR (0.52-0.86um) imager with a spatial resolution of 15m/pixel, including three nadir-viewing bands (1N, 2N, 3N) and one repeated rear-viewing band (3B) for stereo-photogrammetric terrain reconstruction (8-12m vertical resolution); (b) a SWIR (1.6-2.43um) imager with six bands at 30m/pixel; and (c) a TIR (8.125-11.65um) instrument with five bands at 90m/pixel. Returned data are processed in Japan at the Earth Remote Sensing Data Analysis Center (ERSDAC) and at the Land Processes Distributed Active Archive Center (LP DAAC), located at the USGS Center for Earth Resource Observation and Science (EROS) in Sioux Falls, South Dakota. Within the ASTER Project, the JPL Volcano Data Acquisition and Analyses System (VDAAS) houses over 60,000 ASTER volcano images of 1542 volcanoes worldwide and will be accessible for downloads by the general public and on-line image analyses by researchers in early 2007. VDAAS multi-spectral thermal infrared (TIR) de-correlation stretch products are optimized for volcanic ash detection and have a spatial resolution of 90m/pixel. Digital elevation models (DEM) stereo-photogrammetrically derived from ASTER Band 3B/3N data are also available within VDAAS at 15 and 30m/pixel horizontal resolution. Thus, ASTER visible, IR, and DEM data at 15-100m/pixel resolution within VDAAS can be combined to provide useful boundary conditions on local volcanic eruption plume location, composition, and altitude, as well as on topography of underlying terrain. During and after eruptions, low-altitude winds and ash transport can be affected by topography and by other orographic thermal and water vapor transport effects from the micro (<1 km) to the mesoscale (1-100 km). Such phenomena are thus well-observed by ASTER and pose transient and severe hazards to aircraft operating in and out of airports near volcanoes (e.g., Anchorage, AK, USA; Catania, Italy; Kagoshima City, Japan). ASTER image data and derived products provide boundary conditions for 3D mesoscale atmospheric transport and chemistry models (e.g., RAMS) for retrospective and prospective studies of volcanic aerosol transport at low altitudes in takeoff and landing corridors near active volcanoes. Putative ASTER direct downlinks in the future could provide real-time mitigation of such hazards. Some examples of mesoscale analyses for threatened airspace near US and non-US airports will be shown. This work was, in part, carried out at the Jet Propulsion Laboratory of the California Institute of Technology under contract to the NASA Earth Science Research Program and as part of ASTER Science Team activities.

  5. Experimental implant communication of high data rate video using an ultra wideband radio link.

    PubMed

    Chávez-Santiago, Raúl; Balasingham, Ilangko; Bergsland, Jacob; Zahid, Wasim; Takizawa, Kenichi; Miura, Ryu; Li, Huan-Bang

    2013-01-01

    Ultra wideband (UWB) is one of the radio technologies adopted by the IEEE 802.15.6™-2012 standard for on-body communication in body area networks (BANs). However, a number of simulation-based studies suggest the feasibility of using UWB for high data rate implant communication too. This paper presents an experimental verification of these predictions. We carried out radio transmissions of H.264 video (1280×720 pixels) at 80 Mbps through a UWB multiband orthogonal frequency division multiplexing (MB-OFDM) interface in a porcine surgical model. The results demonstrated successful transmission up to a maximum depth of 30 mm in the abdomen and 33 mm in the thorax within the 4.2-4.8 GHz frequency band.

  6. Colour based fire detection method with temporal intensity variation filtration

    NASA Astrophysics Data System (ADS)

    Trambitckii, K.; Anding, K.; Musalimov, V.; Linß, G.

    2015-02-01

    Development of video and computing technologies and of computer vision makes automatic fire detection from video information possible. Within this project, different algorithms were implemented to find a more efficient way of detecting fire. In this article a colour-based fire detection algorithm is described. Colour information alone, however, is not enough to detect fire properly, since the scene may contain many objects whose colour is similar to that of fire. The temporal intensity variation of pixels is used to separate such objects from fire; these variations are averaged over a series of several frames. The algorithm performs robustly and was implemented as a computer program using the OpenCV library.
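
    A minimal sketch of this two-stage idea combines a colour rule with a temporal intensity variation gate averaged over several frames. The classic R>G>B rule and both thresholds below are illustrative assumptions, not the paper's tuned values:

```python
# Minimal sketch of the two-stage idea; the R>G>B colour rule and both
# thresholds are illustrative assumptions, not the paper's tuned values.
import numpy as np

def fire_mask(frames_rgb, colour_thresh=0.55, var_thresh=0.03):
    """frames_rgb: (T, H, W, 3) float frames in [0, 1]; returns a fire mask."""
    last = frames_rgb[-1]
    r, g, b = last[..., 0], last[..., 1], last[..., 2]
    colour = (r > g) & (g > b) & (r > colour_thresh)  # fire-coloured pixels
    # Temporal intensity variation averaged over the series of frames:
    intensity = frames_rgb.mean(axis=3)
    variation = np.abs(np.diff(intensity, axis=0)).mean(axis=0)
    return colour & (variation > var_thresh)          # flicker separates fire

mask = fire_mask(np.random.rand(16, 120, 160, 3))
```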

  7. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1992-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. The effect of this nonlinear transformation on a variety of image-processing applications used in visual communications is described.

  8. Display nonlinearity in digital image processing for visual communications

    NASA Astrophysics Data System (ADS)

    Peli, Eli

    1991-11-01

    The luminance emitted from a cathode ray tube (CRT) display is a nonlinear function (the gamma function) of the input video signal voltage. In most analog video systems, compensation for this nonlinear transfer function is implemented in the camera amplifiers. When CRT displays are used to present psychophysical stimuli in vision research, the specific display nonlinearity usually is measured and accounted for to ensure that the luminance of each pixel in the synthetic image properly represents the intended value. However, when using digital image processing, the linear analog-to-digital converters store a digital image that is nonlinearly related to the displayed or recorded image. This paper describes the effect of this nonlinear transformation on a variety of image-processing applications used in visual communications.
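
    The practical consequence for digital processing can be shown with a short worked example: arithmetic done on gamma-encoded signal values differs from the same arithmetic done on luminances. The sketch assumes a typical CRT gamma of 2.2:

```python
# Worked example of why the display gamma matters in digital processing:
# averaging gamma-encoded signal values is not the same as averaging
# luminances. gamma = 2.2 is a typical CRT value, assumed here.
import numpy as np

def to_luminance(v, gamma=2.2, l_max=100.0):
    """Map a normalized video signal v in [0, 1] to luminance (cd/m^2)."""
    return l_max * np.power(v, gamma)

a, b = 0.2, 0.8
naive = to_luminance((a + b) / 2.0)                    # mean taken in signal space
correct = (to_luminance(a) + to_luminance(b)) / 2.0    # mean taken in luminance
# naive ~ 22 cd/m^2 vs correct ~ 32 cd/m^2: linear operations on the stored
# digital image act on gamma-encoded values unless the gamma is undone first.
```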

  9. Video bioinformatics analysis of human embryonic stem cell colony growth.

    PubMed

    Lin, Sabrina; Fonteno, Shawn; Satish, Shruthi; Bhanu, Bir; Talbot, Prue

    2010-05-20

    Because video data are complex and are comprised of many images, mining information from video material is difficult to do without the aid of computer software. Video bioinformatics is a powerful quantitative approach for extracting spatio-temporal data from video images using computer software to perform data mining and analysis. In this article, we introduce a video bioinformatics method for quantifying the growth of human embryonic stem cells (hESC) by analyzing time-lapse videos collected in a Nikon BioStation CT incubator equipped with a camera for video imaging. In our experiments, hESC colonies that were attached to Matrigel were filmed for 48 hours in the BioStation CT. To determine the rate of growth of these colonies, recipes were developed using CL-Quant software which enables users to extract various types of data from video images. To accurately evaluate colony growth, three recipes were created. The first segmented the image into the colony and background, the second enhanced the image to define colonies throughout the video sequence accurately, and the third measured the number of pixels in the colony over time. The three recipes were run in sequence on video data collected in a BioStation CT to analyze the rate of growth of individual hESC colonies over 48 hours. To verify the truthfulness of the CL-Quant recipes, the same data were analyzed manually using Adobe Photoshop software. When the data obtained using the CL-Quant recipes and Photoshop were compared, results were virtually identical, indicating the CL-Quant recipes were truthful. The method described here could be applied to any video data to measure growth rates of hESC or other cells that grow in colonies. In addition, other video bioinformatics recipes can be developed in the future for other cell processes such as migration, apoptosis, and cell adhesion.
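
    The three-recipe structure (segment, refine, count pixels over time) can be sketched with open-source tools in place of the commercial CL-Quant software. The thresholding method, minimum object size, and synthetic data below are assumptions for illustration:

```python
# Hedged sketch of the three-recipe structure using scikit-image in place of
# the commercial CL-Quant software; threshold method, minimum object size,
# and the synthetic data are illustrative assumptions.
import numpy as np
from skimage.filters import threshold_otsu
from skimage.morphology import remove_small_objects

def colony_area_over_time(frames, min_size=64):
    """frames: (T, H, W) greyscale video; returns colony pixel count per frame."""
    areas = []
    for frame in frames:
        mask = frame > threshold_otsu(frame)         # recipe 1: segmentation
        mask = remove_small_objects(mask, min_size)  # recipe 2: refine colonies
        areas.append(int(mask.sum()))                # recipe 3: count pixels
    return np.array(areas)

growth = colony_area_over_time(np.random.rand(48, 256, 256))
rate = np.polyfit(np.arange(growth.size), growth, 1)[0]  # pixels per frame
```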

  10. Development of the SEASIS instrument for SEDSAT

    NASA Technical Reports Server (NTRS)

    Maier, Mark W.

    1996-01-01

    Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.

  11. Vision Sensors and Cameras

    NASA Astrophysics Data System (ADS)

    Hoefflinger, Bernd

    Silicon charge-coupled-device (CCD) imagers have been and are a specialty market ruled by a few companies for decades. Based on CMOS technologies, active-pixel sensors (APS) began to appear in 1990 at the 1 μm technology node. These pixels allow random access and global shutters, and they are compatible with focal-plane imaging systems combining sensing and first-level image processing. The progress towards smaller features and towards ultra-low leakage currents has provided reduced dark currents and μm-size pixels. All chips offer Mega-pixel resolution, and many have very high sensitivities equivalent to ASA 12,800. As a result, HDTV video cameras will become a commodity. Because charge-integration sensors suffer from a limited dynamic range, significant processing effort is spent on multiple exposure and piece-wise analog-digital conversion to reach ranges >10,000:1. The fundamental alternative is log-converting pixels with an eye-like response. This offers a range of almost a million to 1, constant contrast sensitivity and constant colors, important features in professional, technical and medical applications. 3D retino-morphic stacking of sensing and processing on top of each other is being revisited with sub-100 nm CMOS circuits and with TSV technology. With sensor outputs directly on top of neurons, neural focal-plane processing will regain momentum, and new levels of intelligent vision will be achieved. The industry push towards thinned wafers and TSV enables backside-illuminated and other pixels with a 100% fill-factor. 3D vision, which relies on stereo or on time-of-flight, high-speed circuitry, will also benefit from scaled-down CMOS technologies, both because of their size as well as their higher speed.

  12. Two-terminal video coding.

    PubMed

    Yang, Yang; Stanković, Vladimir; Xiong, Zixiang; Zhao, Wei

    2009-03-01

    Following recent works on the rate region of the quadratic Gaussian two-terminal source coding problem and limit-approaching code designs, this paper examines multiterminal source coding of two correlated, i.e., stereo, video sequences to save the sum rate over independent coding of both sequences. Two multiterminal video coding schemes are proposed. In the first scheme, the left sequence of the stereo pair is coded by H.264/AVC and used at the joint decoder to facilitate Wyner-Ziv coding of the right video sequence. The first I-frame of the right sequence is successively coded by H.264/AVC Intracoding and Wyner-Ziv coding. An efficient stereo matching algorithm based on loopy belief propagation is then adopted at the decoder to produce pixel-level disparity maps between the corresponding frames of the two decoded video sequences on the fly. Based on the disparity maps, side information for both motion vectors and motion-compensated residual frames of the right sequence are generated at the decoder before Wyner-Ziv encoding. In the second scheme, source splitting is employed on top of classic and Wyner-Ziv coding for compression of both I-frames to allow flexible rate allocation between the two sequences. Experiments with both schemes on stereo video sequences using H.264/AVC, LDPC codes for Slepian-Wolf coding of the motion vectors, and scalar quantization in conjunction with LDPC codes for Wyner-Ziv coding of the residual coefficients give a slightly lower sum rate than separate H.264/AVC coding of both sequences at the same video quality.

  13. Applications of just-noticeable depth difference model in joint multiview video plus depth coding

    NASA Astrophysics Data System (ADS)

    Liu, Chao; An, Ping; Zuo, Yifan; Zhang, Zhaoyang

    2014-10-01

    A new multiview just-noticeable-depth-difference (MJNDD) model is presented and applied to compress joint multiview video plus depth. Many video coding algorithms remove spatial, temporal, and statistical redundancies, but they are not capable of removing perceptual redundancies. Since the final receptor of video is the human eye, we can remove perceptual redundancy to gain higher compression efficiency according to the properties of the human visual system (HVS). The traditional just-noticeable-distortion (JND) model in the pixel domain contains luminance contrast and spatial-temporal masking effects, which describe the perceptual redundancy quantitatively. Because the HVS is very sensitive to depth information, a new MJNDD model is proposed by combining the traditional JND model with a just-noticeable-depth-difference (JNDD) model. The texture video is divided into background and foreground areas using depth information, and different JND threshold values are assigned to these two parts. The MJNDD model is then utilized to encode the texture video in JMVC. When encoding the depth video, the JNDD model is applied to remove block artifacts and protect the edges. We then use VSRS3.5 (View Synthesis Reference Software) to generate the intermediate views. Experimental results show that our model can endure more noise and that the compression efficiency is improved by 25.29 percent on average, and by 54.06 percent at most, compared to JMVC, while maintaining the subjective quality. Hence it can achieve a high compression ratio at a low bit rate.

  14. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.

  15. Apollo 16/AS-511/LM-11 operational calibration curves. Volume 1: Calibration curves for command service module CSM 113

    NASA Technical Reports Server (NTRS)

    Demoss, J. F. (Compiler)

    1971-01-01

    Calibration curves for the Apollo 16 command service module pulse code modulation downlink and onboard display are presented. Subjects discussed are: (1) measurement calibration curve format, (2) measurement identification, (3) multi-mode calibration data summary, (4) pulse code modulation bilevel events listing, and (5) calibration curves for instrumentation downlink and meter link.

  16. Nanosatellite optical downlink experiment: design, simulation, and prototyping

    NASA Astrophysics Data System (ADS)

    Clements, Emily; Aniceto, Raichelle; Barnes, Derek; Caplan, David; Clark, James; Portillo, Iñigo del; Haughwout, Christian; Khatsenko, Maxim; Kingsbury, Ryan; Lee, Myron; Morgan, Rachel; Twichell, Jonathan; Riesing, Kathleen; Yoon, Hyosang; Ziegler, Caleb; Cahoy, Kerri

    2016-11-01

    The nanosatellite optical downlink experiment (NODE) implements a free-space optical communications (lasercom) capability on a CubeSat platform that can support low earth orbit (LEO) to ground downlink rates >10 Mbps. A primary goal of NODE is to leverage commercially available technologies to provide a scalable and cost-effective alternative to radio-frequency-based communications. The NODE transmitter uses a 200-mW 1550-nm master-oscillator power-amplifier design using power-efficient M-ary pulse position modulation. To facilitate pointing the 0.12-deg downlink beam, NODE augments spacecraft body pointing with a microelectromechanical fast steering mirror (FSM) and uses an 850-nm uplink beacon to an onboard CCD camera. The 30-cm aperture ground telescope uses an infrared camera and FSM for tracking to an avalanche photodiode detector-based receiver. Here, we describe our approach to transition prototype transmitter and receiver designs to a full end-to-end CubeSat-scale system. This includes link budget refinement, drive electronics miniaturization, packaging reduction, improvements to pointing and attitude estimation, implementation of modulation, coding, and interleaving, and ground station receiver design. We capture trades and technology development needs and outline plans for integrated system ground testing.
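
    As context for the modulation choice, M-ary PPM places one laser pulse in one of M slots per symbol, trading bandwidth for power efficiency. A toy encoder/decoder follows; M = 16 is an assumption, not the NODE design value:

```python
# Toy M-ary PPM encoder/decoder; M = 16 is an assumption, not the NODE value.
import numpy as np

def ppm_encode(bits, m=16):
    """bits: 1-D 0/1 array, length divisible by log2(m); one pulse per symbol."""
    k = int(np.log2(m))
    symbols = bits.reshape(-1, k) @ (1 << np.arange(k - 1, -1, -1))
    frames = np.zeros((symbols.size, m), dtype=np.uint8)
    frames[np.arange(symbols.size), symbols] = 1   # laser fires in one slot
    return frames

def ppm_decode(frames):
    """Invert ppm_encode: pulsed slot index back to bits (MSB first)."""
    k = int(np.log2(frames.shape[1]))
    symbols = frames.argmax(axis=1)
    return ((symbols[:, None] >> np.arange(k - 1, -1, -1)) & 1).ravel()

tx = np.random.randint(0, 2, 64)
assert np.array_equal(tx, ppm_decode(ppm_encode(tx)))  # round-trip check
```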

  17. Short-term sandbar variability based on video imagery: Comparison between Time-Average and Time-Variance techniques

    USGS Publications Warehouse

    Guedes, R.M.C.; Calliari, L.J.; Holland, K.T.; Plant, N.G.; Pereira, P.S.; Alves, F.N.A.

    2011-01-01

    Time-exposure intensity (averaged) images are commonly used to locate the nearshore sandbar position (xb), based on the cross-shore locations of maximum pixel intensity (xi) of the bright bands in the images. It is not known, however, how the breaking patterns seen in Variance images (i.e. those created through standard deviation of pixel intensity over time) are related to the sandbar locations. We investigated the suitability of both Time-exposure and Variance images for sandbar detection within a multiple bar system on the southern coast of Brazil, and verified the relation between wave breaking patterns, observed as bands of high intensity in these images, and cross-shore profiles of modeled wave energy dissipation (xD). Not only is the Time-exposure maximum pixel intensity location (xi-Ti) well related to xb, but also to the maximum pixel intensity location of Variance images (xi-Va), although the latter was typically located 15 m offshore of the former. In addition, xi-Va was observed to be better associated with xD even though xi-Ti is commonly assumed to mark maximum wave energy dissipation. Significant wave height (Hs) and water level (η) were observed to affect the two types of images in a similar way, with an increase in both Hs and η resulting in xi shifting offshore. This η-induced xi variability has an opposite behavior to what is described in the literature, and is likely an indirect effect of higher waves breaking farther offshore during periods of storm surges. Multiple regression models performed on xi, Hs and η allowed the reduction of the residual errors between xb and xi, yielding accurate estimates with most residuals less than 10 m. Additionally, it was found that the sandbar position was best estimated using xi-Ti (xi-Va) when xb was located shoreward (seaward) of its mean position, for both the first and the second bar. Although it is unknown whether this is an indirect hydrodynamic effect or is indeed related to the morphology, we found that this behavior can be explored to optimize sandbar estimation using video imagery, even in the absence of hydrodynamic data. © 2011 Elsevier B.V.

  18. A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

    NASA Astrophysics Data System (ADS)

    Liu, Wen P.; Mirota, Daniel J.; Uneri, Ali; Otake, Yoshito; Hager, Gregory; Reh, Douglas D.; Ishii, Masaru; Gallia, Gary L.; Siewerdsen, Jeffrey H.

    2012-02-01

    Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR)] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data in real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of (0.7 ± 0.3) pixels and mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.

  19. Onboard Algorithms for Data Prioritization and Summarization of Aerial Imagery

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Hayden, David; Thompson, David R.; Castano, Rebecca

    2013-01-01

    Many current and future NASA missions are capable of collecting enormous amounts of data, of which only a small portion can be transmitted to Earth. Communications are limited due to distance, visibility constraints, and competing mission downlinks. Long missions and high-resolution, multispectral imaging devices easily produce data exceeding the available bandwidth. To address this situation, computationally efficient algorithms were developed for analyzing science imagery onboard the spacecraft. These algorithms autonomously cluster the data into classes of similar imagery, enabling selective downlink of representatives of each class and a map classifying the imaged terrain, rather than the full dataset, reducing the volume of the downlinked data. A range of approaches was examined, including k-means clustering using image features based on color, texture, and temporal and spatial arrangement.
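
    A hedged sketch of the clustering step: cheap per-image features feed k-means, and the image nearest each centroid becomes the downlink representative for its class. The feature vector and k below are illustrative stand-ins for the color/texture/temporal/spatial features named above:

```python
# Hypothetical sketch of the clustering step; the feature vector (mean colour
# plus colour spread as a texture proxy) and k are illustrative stand-ins.
import numpy as np
from scipy.cluster.vq import kmeans2

def select_representatives(images, k=5):
    """images: (N, H, W, 3) array; returns (labels, representative indices)."""
    feats = np.stack([np.concatenate([img.mean(axis=(0, 1)),
                                      img.std(axis=(0, 1))])
                      for img in images])
    centroids, labels = kmeans2(feats, k, minit="++")
    # Downlink one representative per class: the image nearest each centroid.
    reps = [int(np.argmin(((feats - c) ** 2).sum(axis=1))) for c in centroids]
    return labels, reps

labels, reps = select_representatives(np.random.rand(40, 32, 32, 3))
```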

  20. Remote control of an impact demonstration vehicle

    NASA Technical Reports Server (NTRS)

    Harney, P. F.; Craft, J. B., Jr.; Johnson, R. G.

    1985-01-01

    Uplink and downlink telemetry systems were installed in a Boeing 720 aircraft that was remotely flown from Rogers Dry Lake at Edwards Air Force Base and impacted into a designated crash site on the lake bed. The controlled impact demonstration (CID) program was a joint venture by the National Aeronautics and Space Administration (NASA) and the Federal Aviation Administration (FAA) to test passenger survivability using antimisting kerosene (AMK) to inhibit postcrash fires, improve passenger seats and restraints, and improve fire-retardant materials. The uplink telemetry system was used to remotely control the aircraft and activate onboard systems from takeoff until after impact. Aircraft systems for remote control, aircraft structural response, passenger seat and restraint systems, and anthropomorphic dummy responses were recorded and displayed by the downlink systems. The instrumentation uplink and downlink systems are described.

  1. A Study of an Optical Lunar Surface Communications Network with High Bandwidth Direct to Earth Link

    NASA Technical Reports Server (NTRS)

    Wilson, K.; Biswas, A.; Schoolcraft, J.

    2011-01-01

    Optical direct-to-earth (DTE) and lunar relay satellite link analyses were performed: a greater than 200 Mbps downlink to a 1-m Earth receiver and a greater than 1 Mbps uplink are achievable with a mobile 5-cm lunar transceiver, while a greater than 1 Gbps downlink and a greater than 10 Mbps uplink are achievable with a 10-cm stationary lunar transceiver. The MIT Lincoln Laboratory (MITLL) 2013 Lunar Laser Communications Demonstration (LLCD) plans to demonstrate a 622 Mbps downlink with a 20 Mbps uplink between a lunar orbiter and a ground station. The top five technology challenges to deploying a lunar optical network were identified, and preliminary experiments were performed on two of them: (i) lunar dust removal and (ii) DTN (delay-tolerant networking) over an optical carrier. Opportunities are being explored to evaluate DTN over an optical link in a multi-node network, e.g., Desert RATS.

  2. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  3. Description of the TCERT Vetting Reports for Data Release 25

    NASA Technical Reports Server (NTRS)

    Van Cleve, Jeffrey E.; Caldwell, Douglas A.

    2016-01-01

    This document, the Kepler Instrument Handbook (KIH), is for Kepler and K2 observers, which includes the Kepler Science Team, Guest Observers (GOs), and astronomers doing archival research on Kepler and K2 data in NASA's Astrophysics Data Analysis Program (ADAP). The KIH provides information about the design, performance, and operational constraints of the Kepler flight hardware and software, and an overview of the pixel data sets available. The KIH is meant to be read with these companion documents: 1. Kepler Data Processing Handbook (KSCI-19081) or KDPH (Jenkins et al., 2016); the KDPH describes how pixels downlinked from the spacecraft are converted by the Kepler Data Processing Pipeline (henceforth just the pipeline) into the data products delivered to the MAST archive. 2. Kepler Archive Manual (KDMC-10008) or KAM (Thompson et al., 2016); the KAM describes the format and content of the data products, and how to search for them. 3. Kepler Data Characteristics Handbook (KSCI-19040) or KDCH (Christiansen et al., 2016); the KDCH describes recurring non-astrophysical features of the Kepler data due to instrument signatures, spacecraft events, or solar activity, and explains how these characteristics are handled by the pipeline. 4. Kepler Data Release Notes 25 (KSCI-19065) or DRN 25 (Thompson et al., 2015); DRN 25 describes signatures and events peculiar to individual quarters, and the pipeline software changes between a data release and the one preceding it. Together, these documents supply the information necessary for obtaining and understanding Kepler results, given the real properties of the hardware and the data analysis methods used, and for an independent evaluation of the methods used if so desired.

  4. From Pixels to Planets

    NASA Technical Reports Server (NTRS)

    Brownston, Lee; Jenkins, Jon M.

    2015-01-01

    The Kepler Mission was launched in 2009 as NASA's first mission capable of finding Earth-size planets in the habitable zone of Sun-like stars. Its telescope consists of a 1.5-m primary mirror and a 0.95-m aperture. The 42 charge-coupled devices in its focal plane are read out every half hour, compressed, and then downlinked monthly. After four years, the second of four reaction wheels failed, ending the original mission. Back on earth, the Science Operations Center developed the Science Pipeline to analyze about 200,000 target stars in Kepler's field of view, looking for evidence of periodic dimming suggesting that one or more planets had crossed the face of its host star. The Pipeline comprises several steps, from pixel-level calibration, through noise and artifact removal, to detection of transit-like signals and the construction of a suite of diagnostic tests to guard against false positives. The Kepler Science Pipeline consists of a pipeline infrastructure written in the Java programming language, which marshals data input to and output from MATLAB applications that are executed as external processes. The pipeline modules, which underwent continuous development and refinement even after data started arriving, employ several analytic techniques, many developed for the Kepler Project. Because of the large number of targets, the large amount of data per target, and the complexity of the pipeline algorithms, the processing demands are daunting. Some pipeline modules require days to weeks to process all of their targets, even when run on NASA's 128-node Pleiades supercomputer. The software developers are still seeking ways to increase the throughput. To date, the Kepler project has discovered more than 4000 planetary candidates, of which more than 1000 have been independently confirmed or validated to be exoplanets. Funding for this mission is provided by NASA's Science Mission Directorate.

  5. Light in flight photography and applications (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Faccio, Daniele

    2017-02-01

    The first successful attempts (Abramson) at capturing light in flight relied on the holographic interference between the "object" beam scattered from a screen and a short reference pulse propagating at an angle, acting as an ultrafast shutter. This interference pattern was recorded on a photographic plate or film and allowed the visualisation of light as it propagated through complex environments with unprecedented temporal and spatial resolution. More recently, advances in ultrafast camera technology and in particular the use of picosecond-resolution streak cameras allowed the direct digital recording of a light pulse propagating through a plastic bottle (Raskar et al.). This represented a remarkable step forward as it provided the first ever video recording (in the traditional sense with which one intends a video, i.e. something that can be played back directly on a screen and saved in digital format) of a pulse of light in flight. We will discuss a different technology that is based on an imaging camera with a pixel array in which each individual pixel is a single photon avalanche diode (SPAD). SPADs offer both sensitivity to single photons and picosecond temporal resolution of the photon arrival time (with respect to a trigger event). When adding imaging capability, SPAD arrays can deliver videos of light pulses propagating in free space, without the need for a scattering medium or diffuser as in all previous work (Gariepy et al.). This capability can then be harnessed for a variety of applications. We will discuss the details of SPAD camera detection of moving objects (e.g. human beings) that are hidden from view and then conclude with a discussion of future perspectives in the field of bio-imaging.

  6. Registration of retinal sequences from new video-ophthalmoscopic camera.

    PubMed

    Kolar, Radim; Tornow, Ralf P; Odstrcilik, Jan; Liberdova, Ivana

    2016-05-20

    Analysis of fast temporal changes on retinas has become an important part of diagnostic video-ophthalmology. It enables investigation of the hemodynamic processes in retinal tissue, e.g. blood-vessel diameter changes as a result of blood-pressure variation, spontaneous venous pulsation influenced by intracranial-intraocular pressure difference, blood-volume changes as a result of changes in light reflection from retinal tissue, and blood flow using laser speckle contrast imaging. For such applications, image registration of the recorded sequence must be performed. Here we use a new non-mydriatic video-ophthalmoscope for simple and fast acquisition of low SNR retinal sequences. We introduce a novel, two-step approach for fast image registration. The phase correlation in the first stage removes large eye movements. Lucas-Kanade tracking in the second stage removes small eye movements. We propose robust adaptive selection of the tracking points, which is the most important part of tracking-based approaches. We also describe a method for quantitative evaluation of the registration results, based on vascular tree intensity profiles. The achieved registration error evaluated on 23 sequences (5840 frames) is 0.78 ± 0.67 pixels inside the optic disc and 1.39 ± 0.63 pixels outside the optic disc. We compared the results with the commonly used approaches based on Lucas-Kanade tracking and scale-invariant feature transform, which achieved worse results. The proposed method can efficiently correct particular frames of retinal sequences for shift and rotation. The registration results for each frame (shift in X and Y direction and eye rotation) can also be used for eye-movement evaluation during single-spot fixation tasks.
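
    The two-step scheme lends itself to a compact OpenCV sketch: phase correlation removes the large shift, then Lucas-Kanade tracking of selected corner points removes the small residual motion. Parameter values are assumptions, not the paper's tuned settings:

```python
# Hedged OpenCV sketch of the two-step registration; corner count, quality,
# and distance parameters are assumptions, not the paper's tuned settings.
import cv2
import numpy as np

def register(ref, frame):
    """Align one low-SNR frame to a reference; both are 8-bit greyscale."""
    # Step 1: phase correlation removes large eye movements (coarse shift).
    (dx, dy), _ = cv2.phaseCorrelate(np.float32(ref), np.float32(frame))
    m = np.float32([[1, 0, -dx], [0, 1, -dy]])
    coarse = cv2.warpAffine(frame, m, frame.shape[::-1])
    # Step 2: Lucas-Kanade tracking of selected corners removes small motion.
    pts = cv2.goodFeaturesToTrack(ref, maxCorners=200, qualityLevel=0.01,
                                  minDistance=10)
    if pts is None:
        return coarse
    moved, status, _ = cv2.calcOpticalFlowPyrLK(ref, coarse, pts, None)
    good = status.ravel() == 1
    dx2, dy2 = (moved[good] - pts[good]).reshape(-1, 2).mean(axis=0)
    m2 = np.float32([[1, 0, -dx2], [0, 1, -dy2]])
    return cv2.warpAffine(coarse, m2, coarse.shape[::-1])
```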

  7. Qualification testing of fiber-based laser transmitters and on-orbit validation of a commercial laser system

    NASA Astrophysics Data System (ADS)

    Wright, M. W.; Wilkerson, M. W.; Tang, R. R.

    2017-11-01

    Qualification testing of fiber based laser transmitters is required for NASA's Deep Space Optical Communications program to mature the technology for space applications. In the absence of fully space qualified systems, commercial systems have been investigated in order to demonstrate the robustness of the technology. To this end, a 2.5 W fiber based laser source was developed as the transmitter for an optical communications experiment flown aboard the ISS as a part of a technology demonstration mission. The low cost system leveraged Mil Standard design principles and Telcordia certified components to the extent possible and was operated in a pressure vessel with active cooling. The laser was capable of high rate modulation but was limited by the mission requirements to 50 Mbps for downlinking stored video from the OPALS payload, externally mounted on the ISS. Environmental testing and space qualification of this unit will be discussed along with plans for a fully space qualified laser transmitter.

  8. LANDSAT-D accelerated payload correction subsystem output computer compatible tape format

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The NASA GSFC LANDSAT-D Ground Segment (GS) is developing an Accelerated Payload Correction Subsystem (APCS) to provide Thematic Mapper (TM) image correction data to be used outside the GS. This correction data is computed from a subset of the TM Payload Correction Data (PCD), which is downlinked from the spacecraft in a 32 Kbps data stream, and mirror scan correction data (MSCD), which is extracted from the wideband video data. This correction data is generated in the GS Thematic Mapper Mission Management Facility (MMF-T) and is recorded on a 9-track, 1600 bit per inch computer compatible tape (CCT). This CCT is known as an APCS Output CCT (AOT). The AOT follows standardized conventions with respect to data formats, record construction and record identification. Applicable documents are delineated; common conventions which are used in further defining the structure, format and content of the AOT are defined; and the structure and content of the AOT are described.

  9. Electric field effects on a near-critical fluid in microgravity

    NASA Technical Reports Server (NTRS)

    Zimmerli, G.; Wilkinson, R. A.; Ferrell, R. A.; Hao, H.; Moldover, M. R.

    1994-01-01

    The effects of an electric field on a sample of SF6 fluid in the vicinity of the liquid-vapor critical point are studied. The isothermal increase in the density of a near-critical sample as a function of the applied electric field was measured. In agreement with theory, this electrostriction effect diverges near the critical point as the isothermal compressibility diverges. Also as expected, turning on the electric field in the presence of density gradients can induce flow within the fluid, in a way analogous to turning on gravity. These effects were observed in a microgravity environment using the Critical Point Facility, which flew onboard the Space Shuttle Columbia in July 1994 as part of the Second International Microgravity Laboratory mission. Both visual and interferometric images of two separate sample cells were obtained by means of video downlink. The interferometric images provided quantitative information about the density distribution throughout the sample. The electric field was generated by applying 500 volts to a fine wire passing through the critical fluid.

  10. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph; Mortensen, Dale; Evans, Michael; Briones, Janette; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.
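
    The core ACM decision is easy to illustrate: given the estimated link Es/N0, pick the most spectrally efficient MODCOD whose threshold is still met with margin. The sketch below uses approximate DVB-S2 threshold and efficiency values for a handful of MODCODs; it is a schematic of the idea, not the experiment's implementation.

```python
# (name, approx. required Es/N0 in dB, approx. efficiency in bits/symbol)
MODCODS = [
    ("QPSK 1/2",    1.0, 1.0),
    ("QPSK 3/4",    4.0, 1.5),
    ("8PSK 2/3",    6.6, 2.0),
    ("8PSK 5/6",    9.4, 2.5),
    ("16APSK 3/4", 10.2, 3.0),
]

def select_modcod(esno_db, margin_db=1.0):
    """Most efficient MODCOD whose threshold is met with `margin_db` to spare."""
    feasible = [m for m in MODCODS if m[1] + margin_db <= esno_db]
    # During a deep fade nothing qualifies: fall back to the most robust mode.
    return max(feasible, key=lambda m: m[2]) if feasible else MODCODS[0]
```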

  11. TWT design requirements for 30/20 GHz digital communications satellite

    NASA Technical Reports Server (NTRS)

    Stankiewicz, N.; Anzic, G.

    1979-01-01

    The rapid growth of communication traffic (voice, data, and video) requires the development of additional frequency bands before the 1990's. The frequencies currently in use for satellite communications at 6/4 GHz are crowded, and demand for 14/12 GHz systems is increasing. Projections are that these bands will be filled to capacity by the late 1980's. The next higher frequency band allocated for satellite communications is at 30/20 GHz. For interrelated reasons of efficiency, power level, and system reliability, a candidate for the downlink amplifier in a 30/20 GHz communications satellite is a dual-mode traveling wave tube (TWT) equipped with a highly efficient depressed collector. A summary is given of the analyses that determine the TWT design requirements. The overall efficiency of such a tube is then inferred from a parametric study and from experimental data on multistage depressed collectors. The expected TWT efficiency at 4 dB below output saturation is 24 percent in the high mode and 22 percent in the low mode.

  12. DS-SS with de Bruijn sequences for secure Inter Satellite Links

    NASA Astrophysics Data System (ADS)

    Spinsante, S.; Warty, C.; Gambi, E.

    Today, both the military and commercial sectors are placing an increased emphasis on global communications. This has prompted the development of several Low Earth Orbit satellite systems that promise worldwide connectivity and real-time voice, data and video communications. Constellations that avoid repeated uplink and downlink hops by exploiting Inter Satellite Links have proved to be very economical in space routing. However, Inter Satellite Links were traditionally considered out of reach for any malicious activity, and thus little or no security was employed. This paper proposes a secured network based on Inter Satellite Links, built upon the adoption of the Direct Sequence Spread Spectrum technique, with binary de Bruijn sequences used as spreading codes. Selected sequences from the de Bruijn family may be used over directional spot beams. The main intent of the paper is to propose a secure and robust communication link for the next generation of satellite communications, relying on a classical spread spectrum approach employing innovative sequences.
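
    The paper's code-selection criteria are its own, but the underlying sequences are classical: a binary de Bruijn sequence of order n has length 2^n and contains every n-bit word exactly once. A standard generator (Lyndon-word concatenation) in Python:

```python
def de_bruijn(k, n):
    """de Bruijn sequence over k symbols of order n (list of length k**n)."""
    a = [0] * (k * n)
    seq = []

    def db(t, p):
        if t > n:
            if n % p == 0:
                seq.extend(a[1:p + 1])
        else:
            a[t] = a[t - p]
            db(t + 1, p)
            for j in range(a[t - p] + 1, k):
                a[t] = j
                db(t + 1, t)

    db(1, 1)
    return seq

chips = [2 * b - 1 for b in de_bruijn(2, 5)]  # length-32 +/-1 chip sequence
```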

  13. Adaptive Coding and Modulation Experiment With NASA's Space Communication and Navigation Testbed

    NASA Technical Reports Server (NTRS)

    Downey, Joseph A.; Mortensen, Dale J.; Evans, Michael A.; Briones, Janette C.; Tollis, Nicholas

    2016-01-01

    National Aeronautics and Space Administration (NASA)'s Space Communication and Navigation Testbed is an advanced integrated communication payload on the International Space Station. This paper presents results from an adaptive coding and modulation (ACM) experiment over S-band using a direct-to-earth link between the SCaN Testbed and the Glenn Research Center. The testing leverages the established Digital Video Broadcasting Second Generation (DVB-S2) standard to provide various modulation and coding options, and uses the Space Data Link Protocol (Consultative Committee for Space Data Systems (CCSDS) standard) for the uplink and downlink data framing. The experiment was conducted in a challenging environment due to the multipath and shadowing caused by the International Space Station structure. Several approaches for improving the ACM system are presented, including predictive and learning techniques to accommodate signal fades. Performance of the system is evaluated as a function of end-to-end system latency (round-trip delay), and compared to the capacity of the link. Finally, improvements over standard NASA waveforms are presented.

  14. A new display stream compression standard under development in VESA

    NASA Astrophysics Data System (ADS)

    Jacobson, Natan; Thirumalai, Vijayaraghavan; Joshi, Rajan; Goel, James

    2017-09-01

    The Advanced Display Stream Compression (ADSC) codec project is in development in response to a call for technologies from the Video Electronics Standards Association (VESA). This codec targets visually lossless compression of display streams at a high compression rate (typically 6 bits/pixel) for mobile/VR/HDR applications. Functionality of the ADSC codec is described in this paper, and subjective trials results are provided using the ISO 29170-2 testing protocol.

  15. The Rotated Speeded-Up Robust Features Algorithm (R-SURF)

    DTIC Science & Technology

    2014-06-01

    [Only fragmentary indexing text is available for this record: it references the RGB and YUV (one luminance, two chrominance) color models, notes that an uncompressed 256 x 256 x 3 image gives each pixel 256^3 possible values, and cites J. Sivic and A. Zisserman, "Efficient visual search of videos cast as text retrieval," IEEE Transactions on Pattern Analysis and Machine Intelligence.]

  16. Statistical Results Concerning the Precision of the Methods of Correlation and Interpolation Sub-Pixel Used in Video PIV

    DTIC Science & Technology

    1998-08-27

    [Only fragmentary indexing text is available for this record. Translated from the French, the legible portion states that the proposed tool "would complement the closed software of the commercial PIV system and would make it possible to study the precision of the processing methods"; the remaining fragments, concerning numerical simulations of per-pixel gray levels, are too garbled to reconstruct.]

  17. Musculoskeletal motion flow fields using hierarchical variable-sized block matching in ultrasonographic video sequences.

    PubMed

    Revell, J D; Mirmehdi, M; McNally, D S

    2004-04-01

    We examine tissue deformations using non-invasive dynamic musculoskeletal ultrasonography, and quantify its performance on controlled in vitro gold standard (ground-truth) sequences followed by clinical in vivo data. The proposed approach employs a two-dimensional variable-sized block matching algorithm with a hierarchical full search. We extend this process by refining displacements to sub-pixel accuracy. We show by application that this technique yields quantitatively reliable results.

  18. A hyperspectral image projector for hyperspectral imagers

    NASA Astrophysics Data System (ADS)

    Rice, Joseph P.; Brown, Steven W.; Neira, Jorge E.; Bousquet, Robert R.

    2007-04-01

    We have developed and demonstrated a Hyperspectral Image Projector (HIP) intended for system-level validation testing of hyperspectral imagers, including the instrument and any associated spectral unmixing algorithms. HIP, based on the same digital micromirror arrays used in commercial digital light processing (DLP) displays, is capable of projecting any combination of many different arbitrarily programmable basis spectra into each image pixel at up to video frame rates. We use a scheme whereby one micromirror array is used to produce light having the spectra of endmembers (i.e. vegetation, water, minerals, etc.), and a second micromirror array, optically in series with the first, projects any combination of these arbitrarily programmable spectra into the pixels of a 1024 x 768 element spatial image, thereby producing temporally integrated images having spectrally mixed pixels. HIP goes beyond conventional DLP projectors in that each spatial pixel can have an arbitrary spectrum, not just an arbitrary color. As such, the resulting spectral and spatial content of the projected image can simulate realistic scenes that a hyperspectral imager will measure during its use. Also, the spectral radiance of the projected scenes can be measured with a calibrated spectroradiometer, such that the spectral radiance projected into each pixel of the hyperspectral imager can be accurately known. Use of such projected scenes in a controlled laboratory setting would alleviate expensive field testing of instruments, allow better separation of environmental effects from instrument effects, and enable system-level performance testing and validation of hyperspectral imagers as used with analysis algorithms. For example, known mixtures of relevant endmember spectra could be projected into arbitrary spatial pixels in a hyperspectral imager, enabling tests of how well a full system, consisting of the instrument + calibration + analysis algorithm, performs in unmixing (i.e. de-convolving) the spectra in all pixels. We discuss here the performance of a visible prototype HIP. The technology is readily extendable to the ultraviolet and infrared spectral ranges, and the scenes can be static or dynamic.
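
    The optical mixing the projector performs is, mathematically, a per-pixel non-negative combination of endmember spectra. A toy NumPy rendering of that arithmetic (shapes and endmember count are illustrative):

```python
import numpy as np

wavelengths = np.linspace(400, 700, 31)           # nm, visible band
endmembers = np.random.rand(4, wavelengths.size)  # e.g. vegetation, water...
# Per-pixel abundances over a 1024 x 768 image, summing to 1 at each pixel.
abundances = np.random.dirichlet(np.ones(4), size=(768, 1024))

# Temporally integrated projected scene: (rows, cols, wavelengths).
scene = abundances @ endmembers
assert scene.shape == (768, 1024, 31)
```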

  19. Subjective quality of video sequences rendered on LCD with local backlight dimming at different lighting conditions

    NASA Astrophysics Data System (ADS)

    Mantel, Claire; Korhonen, Jari; Pedersen, Jesper M.; Bech, Søren; Andersen, Jakob Dahl; Forchhammer, Søren

    2015-01-01

    This paper focuses on the influence of ambient light on the perceived quality of videos displayed on Liquid Crystal Display (LCD) with local backlight dimming. A subjective test assessing the quality of videos with two backlight dimming methods and three lighting conditions, i.e. no light, low light level (5 lux) and higher light level (60 lux) was organized to collect subjective data. Results show that participants prefer the method exploiting local dimming possibilities to the conventional full backlight but that this preference varies depending on the ambient light level. The clear preference for one method at the low light conditions decreases at the high ambient light, confirming that the ambient light significantly attenuates the perception of the leakage defect (light leaking through dark pixels). Results are also highly dependent on the content of the sequence, which can modulate the effect of the ambient light from having an important influence on the quality grades to no influence at all.

  20. A video, text, and speech-driven realistic 3-d virtual head for human-machine interface.

    PubMed

    Yu, Jun; Wang, Zeng-Fu

    2015-05-01

    A multiple-input-driven realistic facial animation system based on a 3-D virtual head for human-machine interface is proposed. The system can be driven independently by video, text, and speech, and thus can interact with humans through diverse interfaces. The combination of a parameterized model and a muscular model is used to obtain a tradeoff between computational efficiency and high realism of 3-D facial animation. The online appearance model is used to track 3-D facial motion from video in the framework of particle filtering, and multiple measurements, i.e., the pixel color values of the input image and the Gabor wavelet coefficients of the illumination ratio image, are fused to reduce the influence of lighting and person dependence on the construction of the online appearance model. The tri-phone model is used to reduce the computational cost of visual co-articulation in speech-synchronized viseme synthesis without sacrificing performance. Objective and subjective experiments show that the system is suitable for human-machine interaction.

  1. Feathering effect detection and artifact agglomeration index-based video deinterlacing technique

    NASA Astrophysics Data System (ADS)

    Martins, André Luis; Rodrigues, Evandro Luis Linhari; de Paiva, Maria Stela Veludo

    2018-03-01

    Several video deinterlacing techniques have been developed, each performing best under certain conditions. Occasionally, even the most modern deinterlacing techniques create frames of worse quality than primitive deinterlacing processes. This paper shows that the final image quality can be improved by combining different types of deinterlacing techniques. The proposed strategy is able to select between two types of deinterlaced frames and, if necessary, make local corrections of the defects. This decision is based on an artifact agglomeration index obtained from a feathering-effect detection map. Starting from a deinterlaced frame produced by the "interfield average" method, the defective areas are identified and, if deemed appropriate, replaced by pixels generated through the "edge-based line average" method. Test results show that the proposed technique produces video frames of higher quality than any single deinterlacing technique, by combining the strengths of intra- and interfield methods.
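
    A much-simplified sketch of that combination strategy (NumPy; thresholds illustrative, and without the paper's agglomeration index): fill the missing lines from the other field, flag feathering where an inserted line disagrees with both spatial neighbours, and patch flagged pixels with an edge-based line average.

```python
import numpy as np

def deinterlace_combined(top, bottom, thresh=30.0):
    """`top`, `bottom`: the two fields, float arrays of shape (H/2, W)."""
    h2, w = top.shape
    frame = np.empty((h2 * 2, w))
    frame[0::2] = top
    frame[1::2] = bottom                      # interfield fill (weave)

    above = top
    below = np.roll(top, -1, axis=0)          # next top line (wraps at edge)
    # Feathering: inserted lines far from both neighbours betray motion.
    feather = (np.abs(bottom - above) > thresh) & \
              (np.abs(bottom - below) > thresh)

    # Edge-based line average: best of three directions (\, |, /).
    cands = np.stack([(np.roll(above, s, 1) + np.roll(below, -s, 1)) * 0.5
                      for s in (-1, 0, 1)])
    costs = np.stack([np.abs(np.roll(above, s, 1) - np.roll(below, -s, 1))
                      for s in (-1, 0, 1)])
    best = np.argmin(costs, axis=0)
    ela = np.take_along_axis(cands, best[None], axis=0)[0]
    frame[1::2][feather] = ela[feather]
    return frame
```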

  2. Novel inter and intra prediction tools under consideration for the emerging AV1 video codec

    NASA Astrophysics Data System (ADS)

    Joshi, Urvang; Mukherjee, Debargha; Han, Jingning; Chen, Yue; Parker, Sarah; Su, Hui; Chiang, Angie; Xu, Yaowu; Liu, Zoe; Wang, Yunqing; Bankoski, Jim; Wang, Chen; Keyder, Emil

    2017-09-01

    Google started the WebM Project in 2010 to develop open source, royalty-free video codecs designed specifically for media on the Web. The second-generation codec released by the WebM project, VP9, is currently served by YouTube and enjoys billions of views per day. Realizing the need for even greater compression efficiency to cope with the growing demand for video on the web, the WebM team embarked on an ambitious project to develop a next-edition codec, AV1, in a consortium of major tech companies called the Alliance for Open Media, that achieves at least a generational improvement in coding efficiency over VP9. In this paper, we focus primarily on new tools in AV1 that improve the prediction of pixel blocks before transforms, quantization and entropy coding are invoked. Specifically, we describe tools and coding modes that improve intra, inter and combined inter-intra prediction. Results are presented on standard test sets.

  3. Astronaut Andrew M. Allen, mission commander, sets up systems for a television downlink on the

    NASA Technical Reports Server (NTRS)

    1996-01-01

    STS-75 ONBOARD VIEW --- Astronaut Andrew M. Allen, mission commander, sets up systems for a television downlink on the flight deck of the Space Shuttle Columbia. Allen was joined by four other astronauts and an international payload specialist for more than 16 days of research aboard Columbia. The photograph was taken with a 70mm handheld camera.

  4. Human Flight to Lunar and Beyond - Re-Learning Operations Paradigms

    NASA Technical Reports Server (NTRS)

    Kenny, Edward (Ted); Statman, Joseph

    2016-01-01

    For the first time since the Apollo era, NASA is planning to send astronauts on flights beyond LEO. The Human Space Flight (HSF) program started with a successful initial flight in Earth orbit in December 2014. The program will continue with two Exploration Missions (EM): EM-1 will be unmanned, and EM-2, carrying astronauts, will follow. NASA established a multi-center team to address the communications and related tracking/navigation needs. This paper will focus on the lessons learned by the team designing the architecture and operations for the missions. Many of these Beyond Earth Orbit lessons had to be re-learned, as the HSF program has operated for many years in Earth orbit. Unlike the Apollo missions, which were largely tracked by a dedicated ground network, the planned HSF missions will be tracked (at distances beyond GEO) by the DSN, a network that mostly serves robotic missions. There have been surprising challenges to the DSN as unique modern human spaceflight needs stretch the experience base beyond that of tracking robotic missions in deep space. Close interaction between the DSN and the HSF community to understand the unique needs (e.g. 2-way voice) resulted in a Concept of Operations (ConOps) that leverages both the deep space robotic and the human LEO experiences. Several examples will be used to highlight the unique challenges the team faced in establishing the communications and tracking capabilities for HSF missions beyond Earth orbit, including: Navigation. At LEO, HSF missions can rely on GPS devices for orbit determination. For lunar-and-beyond HSF missions, techniques such as precision 2-way and 3-way Doppler and ranging, Delta-Difference-of-Range, and eventually possibly on-board navigation will be used. At the same time, HSF presents a challenge to navigators beyond those presented by robotic missions: navigating a dynamic, "noisy" spacecraft. Impact of latency: the delay associated with Round-Trip Light Time (RTLT). Imagine trying to hold a 2-way discussion (audio or video) with an astronaut with a 2-3 second or longer delay inserted (for lunar distances) or a 20-minute delay (for Mars distances). Balanced communications link. For robotic missions, there has been a heavy emphasis on higher downlink data rates, e.g. bringing back science data. Higher uplink data rates were of secondary importance, as the uplink was used only to send commands (and occasionally small files) to the spacecraft. The ratio of downlink-to-uplink data rates was often 10:1 or more. For HSF, a continuous forward link is established, and rates for uplink and downlink are more similar.

  5. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have enabled higher-quality cameras. State-of-the-art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial conducted in August 2015.

  6. Pixel color feature enhancement for road signs detection

    NASA Astrophysics Data System (ADS)

    Zhang, Qieshi; Kamata, Sei-ichiro

    2010-02-01

    Road signs play an important role in daily life: they guide drivers' attention to a variety of road conditions and cautions, providing important visual information that helps drivers operate their vehicles in a manner that enhances traffic safety. The occurrence of some accidents can be reduced by an automatic road-sign recognition system that alerts the driver. This research attempts to develop a warning system that alerts drivers to important road signs early enough to prevent road accidents. To this end, a non-linear, per-pixel weighted color enhancement method is presented. Owing to this enhancement, different road signs can be detected from videos effectively. With suitable coefficients and operations, the experimental results show that the proposed method is robust, accurate and powerful in road-sign detection.
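
    The paper's exact weighting is not reproduced in the record; as a hedged illustration of the same flavour of per-pixel, non-linear colour weighting, the sketch below emphasizes pixels whose red channel dominates both other channels, as red-rimmed signs do.

```python
import numpy as np

def enhance_red(rgb):
    """rgb: float array in [0, 1], shape (H, W, 3); returns a saliency map."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    s = rgb.sum(axis=-1) + 1e-6                        # brightness normaliser
    f = np.maximum(0.0, np.minimum(r - g, r - b)) / s  # red dominance
    return f ** 2 / max(f.max() ** 2, 1e-6)            # non-linear stretch
```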

  7. The CTIO Acquisition CCD-TV camera design

    NASA Astrophysics Data System (ADS)

    Schmidt, Ricardo E.

    1990-07-01

    A CCD-based Acquisition TV Camera has been developed at CTIO to replace the existing ISIT units. In a 60 second exposure, the new Camera shows a sixfold improvement in sensitivity over an ISIT used with a Leaky Memory. Integration times can be varied over a 0.5 to 64 second range. The CCD, contained in an evacuated enclosure, is operated at -45 C. Only the image section, an area of 8.5 mm x 6.4 mm, gets exposed to light. Pixel size is 22 microns and either no binning or 2 x 2 binning can be selected. The typical readout rates used vary between 3.5 and 9 microseconds/pixel. Images are stored in a PC/XT/AT, which generates RS-170 video. The contrast in the RS-170 frames is automatically enhanced by the software.

  8. Efficient Pricing Technique for Resource Allocation Problem in Downlink OFDM Cognitive Radio Networks

    NASA Astrophysics Data System (ADS)

    Abdulghafoor, O. B.; Shaat, M. M. R.; Ismail, M.; Nordin, R.; Yuwono, T.; Alwahedy, O. N. A.

    2017-05-01

    In this paper, the problem of resource allocation in OFDM-based downlink cognitive radio (CR) networks is addressed. The goal is to decrease the computational complexity of the resource-allocation algorithm for the downlink CR network while respecting the interference constraint of the primary network. This is achieved by adopting a pricing scheme to develop a power-allocation algorithm with two concerns: (i) reducing the complexity of the proposed algorithm and (ii) providing firm control of the interference introduced to primary users (PUs). The performance of the proposed algorithm is tested for OFDM-based CRNs. The simulation results show that the performance of the proposed algorithm approaches that of the optimal algorithm at a lower computational complexity, i.e., O(N log N), which makes the proposed algorithm suitable for more practical applications.
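
    The pricing iteration itself is not detailed in the abstract; as a hedged sketch of the kind of low-complexity allocation such schemes approximate, the code below water-fills a power budget across subcarriers with a per-subcarrier cap standing in for the PU interference limit (bisection on the water level; all names illustrative).

```python
import numpy as np

def waterfill(gains, p_total, p_cap, iters=60):
    """gains: per-subcarrier |h|^2 / noise; returns per-subcarrier power."""
    lo, hi = 0.0, p_total + 1.0 / gains.min()    # bracket the water level
    for _ in range(iters):
        mu = 0.5 * (lo + hi)
        p = np.clip(mu - 1.0 / gains, 0.0, p_cap)
        if p.sum() > p_total:
            hi = mu                              # too much water: lower it
        else:
            lo = mu
    return np.clip(lo - 1.0 / gains, 0.0, p_cap)
```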

  9. Performance Analysis of Power Saving Class of Type 1 with Both Downlink and Uplink Traffics in IEEE 802.16e

    NASA Astrophysics Data System (ADS)

    Baek, Sangkyu; Choi, Bong Dae

    We investigate the power consumption of a mobile station using the power saving class of type 1 in IEEE 802.16e. We deal with the stochastic behavior of the mobile station during not only the sleep-mode period but also the awake-mode period, with both downlink and uplink traffic. Our method for investigating the power saving class of type 1 is to construct the embedded Markov chain and the semi-Markov chain generated by it. To see the effect of the sleep mode, we obtain the average power consumption of a mobile station and the mean queueing delay of a message. Numerical results show that a larger sleep window reduces the power consumption of a mobile station but lengthens the queueing delay of a downlink message.
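
    The flavour of the sleep-window dynamics is easy to mimic outside the Markov-chain analysis: in the type 1 class the sleep window doubles after each cycle without traffic, up to a maximum, and resets when traffic arrives. A toy Monte Carlo sketch (parameters illustrative):

```python
import random

def mean_sleep_window(t_init, t_max, p_traffic, n_cycles=100_000):
    """Average sleep-window length under i.i.d. per-cycle traffic arrivals."""
    total, window = 0, t_init
    for _ in range(n_cycles):
        total += window
        if random.random() < p_traffic:   # traffic indication: exit sleep
            window = t_init               # the next sleep mode starts small
        else:
            window = min(2 * window, t_max)
    return total / n_cycles
```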

  10. A Direct Broadcast Operations Concept for the HyspIRI Mission

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.; Silverman, Dorothy; Rabideau, Gregg; Mandl, Daniel; Hengemihle, Jerry

    2010-01-01

    HyspIRI is evaluating an X-band Direct Broadcast (DB) capability that would enable data to be delivered to ground stations virtually as it is acquired. However, the HyspIRI VSWIR and TIR instruments will produce 1 Gbps of data while the DB capability is 15 Mbps, an approximately 60x oversubscription. In order to address this data-volume mismatch, a DB concept has been developed that determines which data to downlink based on both: 1. the type of surface the spacecraft is overflying, and 2. onboard processing of the data to detect events. For example, when the spacecraft is overflying polar regions it might downlink a snow/ice product. Additionally, the onboard software will search for thermal signatures indicative of a volcanic event or wildfire and downlink summary information (extent, spectra) when detected, thereby reducing data volume.
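
    The two-part decision rule is simple enough to schematize: an event detection overrides the default surface-type product. Product names below are illustrative placeholders, not the mission's actual product list.

```python
SURFACE_PRODUCT = {
    "polar": "snow/ice product",
    "ocean": "ocean product",
    "land":  "surface reflectance product",
}

def select_downlink(surface_type, thermal_anomaly):
    """Pick what to send over the 15 Mbps direct-broadcast link."""
    if thermal_anomaly:                    # volcano/wildfire signature found
        return "event summary (extent + spectra)"
    return SURFACE_PRODUCT.get(surface_type, "browse product")
```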

  11. Facial Video-Based Photoplethysmography to Detect HRV at Rest.

    PubMed

    Moreno, J; Ramos-Castro, J; Movellan, J; Parrado, E; Rodas, G; Capdevila, L

    2015-06-01

    Our aim is to demonstrate the usefulness of photoplethysmography (PPG) for analyzing heart rate variability (HRV) using a standard 5-min test at rest with paced breathing, comparing the results with real RR intervals and testing supine and sitting positions. Simultaneous recordings of R-R intervals were conducted with a Polar system and a non-contact PPG, based on facial video recording, on 20 individuals. Data analysis and editing were performed with individually designated software for each instrument. Agreement on HRV parameters was assessed with concordance correlations, effect size from ANOVA, and Bland and Altman plots. For the supine position, differences between the video and Polar systems showed a small effect size in most HRV parameters. For the sitting position, these differences showed a moderate effect size in most HRV parameters. A new procedure, based on the pixels that contain the most heartbeat information, is proposed for improving the signal-to-noise ratio in the PPG video signal. Results were acceptable in both positions but better in the supine position. Our approach could be relevant for applications that require monitoring of stress or cardio-respiratory health, such as effort/recuperation states in sports.

  12. Spatial Pyramid Covariance based Compact Video Code for Robust Face Retrieval in TV-series.

    PubMed

    Li, Yan; Wang, Ruiping; Cui, Zhen; Shan, Shiguang; Chen, Xilin

    2016-10-10

    We address the problem of face video retrieval in TV series, which searches video clips based on the presence of a specific character, given one face track of him or her. This is tremendously challenging because, on one hand, faces in TV series are captured in largely uncontrolled conditions with complex appearance variations, and on the other hand, the retrieval task typically needs an efficient representation with low time and space complexity. To handle this problem, we propose a compact and discriminative representation for the huge body of video data, named Compact Video Code (CVC). Our method first models the face track by its sample (i.e., frame) covariance matrix to capture the video data variations in a statistical manner. To incorporate discriminative information and obtain a more compact video signature suitable for retrieval, the high-dimensional covariance representation is further encoded as a much lower-dimensional binary vector, which finally yields the proposed CVC. Specifically, each bit of the code, i.e., each dimension of the binary vector, is produced via supervised learning in a max-margin framework, which aims to strike a balance between the discriminability and stability of the code. Besides, we further extend the descriptive granularity of the covariance matrix from the traditional pixel level to the more general patch level, and propose a novel hierarchical video representation named Spatial Pyramid Covariance (SPC) along with a fast calculation method. Face retrieval experiments on two challenging TV-series video databases, i.e., the Big Bang Theory and Prison Break, demonstrate the competitiveness of the proposed CVC over state-of-the-art retrieval methods. In addition, as a general video matching algorithm, CVC is also evaluated on the traditional video face recognition task on a standard Internet database, i.e., YouTube Celebrities, showing quite promising performance using an extremely compact code of only 128 bits.

  13. A novel method of the image processing on irregular triangular meshes

    NASA Astrophysics Data System (ADS)

    Vishnyakov, Sergey; Pekhterev, Vitaliy; Sokolova, Elizaveta

    2018-04-01

    The paper describes a novel method of image processing based on irregular triangular meshes. The triangular mesh is adaptive to the image content, and least-mean-square linear approximation is proposed for the basic interpolation within each triangle. It is proposed to use triangular numbers to simplify the use of local (barycentric) coordinates for further analysis: a triangular element of the initial irregular mesh is represented through a set of four equilateral triangles. This allows fast and simple pixel indexing in local coordinates, e.g. "for" or "while" loops for access to the pixels. Moreover, the proposed representation allows a discrete cosine transform of the simple "rectangular" symmetric form without additional pixel reordering (as is used for shape-adaptive DCT forms). Furthermore, this approach leads to a simple form of the wavelet transform on a triangular mesh. The results of applying the method are presented. It is shown that the advantage of the proposed method is the combination of the flexibility of image-adaptive irregular meshes with the simple form of pixel indexing in local triangular coordinates and the use of common forms of discrete transforms for triangular meshes. The method is proposed for image compression, pattern recognition, image quality improvement, image search and indexing. It may also be used as a part of video coding (intra-frame or inter-frame coding, motion detection).
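
    The triangular-number bookkeeping mentioned above can be made concrete: with T(r) = r(r+1)/2, a linear pixel index inside a triangular raster maps back to (row, offset) local coordinates in O(1), so plain loops can walk the pixels.

```python
import math

def tri_coords(i):
    """Row r and offset j of linear index i, with rows of 1, 2, 3, ... pixels."""
    r = (math.isqrt(8 * i + 1) - 1) // 2       # largest r with r(r+1)/2 <= i
    return r, i - r * (r + 1) // 2

assert [tri_coords(i) for i in range(4)] == [(0, 0), (1, 0), (1, 1), (2, 0)]
```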

  14. Pixel-level tunable liquid crystal lenses for auto-stereoscopic display

    NASA Astrophysics Data System (ADS)

    Li, Kun; Robertson, Brian; Pivnenko, Mike; Chu, Daping; Zhou, Jiong; Yao, Jun

    2014-02-01

    Mobile video and gaming are now widely used, and delivery of a glasses-free 3D experience is of both research and development interest. The key drawbacks of a conventional 3D display based on a static lenticular lenslet array and parallax barriers are low resolution, limited viewing angle and reduced brightness, mainly because of the need for multiple pixels per object point. This study describes the concept and performance of pixel-level cylindrical liquid crystal (LC) lenses, which are designed to steer light to the left and right eye sequentially to form stereo parallax. The width of the LC lenses can be as small as 20-30 μm, so that the associated auto-stereoscopic display has the same resolution as the 2D display panel in use. Such a thin sheet of tunable LC lens array can be applied directly on existing mobile displays, and can deliver a 3D viewing experience while maintaining 2D viewing capability. Transparent electrodes were laser-patterned to achieve single-pixel lens resolution, and a high-birefringence LC material was used to realise a large diffraction angle for a wide field of view. Simulation was carried out to model the intensity profile at the viewing plane and optimise the lens array based on the measured LC phase profile. The measured viewing angle and intensity profile were compared with the simulation results.

  15. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.

  16. Design of a High-resolution Optoelectronic Retinal Prosthesis

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel

    2005-03-01

    It has been demonstrated that electrical stimulation of the retina can produce visual percepts in blind patients suffering from macular degeneration and retinitis pigmentosa. So far retinal implants have had just a few electrodes, whereas at least several thousand pixels would be required for any functional restoration of sight. We will discuss physical limitations on the number of stimulating electrodes and on delivery of information and power to the retinal implant. Using a model of extracellular stimulation we derive the threshold values of current and voltage as a function of electrode size and distance to the target cell. Electrolysis, tissue heating, and cross-talk between neighboring electrodes depend critically on the separation between electrodes and cells, thus strongly limiting the pixel size and spacing. The minimal pixel density required for 20/80 visual acuity (2500 pixels/mm², pixel size 20 μm) cannot be achieved unless the target neurons are within 7 μm of the electrodes. At a separation of 50 μm, the density drops to 44 pixels/mm², and at 100 μm it is further reduced to 10 pixels/mm². We will present designs of subretinal implants that provide close proximity of electrodes to cells using migration of retinal cells to target areas. Two basic implant geometries will be described: perforated membranes and protruding electrode arrays. In addition, we will discuss delivery of information to the implant that allows for natural eye scanning of the scene, rather than scanning with a head-mounted camera. It operates similarly to "virtual reality" imaging devices where an image from a video camera is projected by a goggle-mounted collimated infrared LED-LCD display onto the retina, activating an array of powered photodiodes in the retinal implant. Optical delivery of visual information to the implant allows for flexible control of the image processing algorithms and stimulation parameters. In summary, we will describe solutions to some of the major problems facing the realization of a functional retinal implant: high pixel density, proximity of electrodes to target cells, natural eye scanning capability, and real-time image processing adjustable to retinal architecture.

  17. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I. and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and at 60 frames per second are double the normal video frame rate.

  18. Video and thermal imaging system for monitoring interiors of high temperature reaction vessels

    DOEpatents

    Saveliev, Alexei V [Chicago, IL; Zelepouga, Serguei A [Hoffman Estates, IL; Rue, David M [Chicago, IL

    2012-01-10

    A system and method for real-time monitoring of the interior of a combustor or gasifier wherein light emitted by the interior surface of a refractory wall of the combustor or gasifier is collected using an imaging fiber optic bundle having a light receiving end and a light output end. Color information in the light is captured with primary color (RGB) filters or complementary color (GMCY) filters placed over individual pixels of color sensors disposed within a digital color camera in a Bayer mosaic layout, producing RGB signal outputs or GMCY signal outputs. The signal outputs are processed using intensity ratios of the primary color filters or the complementary color filters, producing video images and/or thermal images of the interior of the combustor or gasifier.

  19. Fire flame detection based on GICA and target tracking

    NASA Astrophysics Data System (ADS)

    Rong, Jianzhong; Zhou, Dechuang; Yao, Wei; Gao, Wei; Chen, Juan; Wang, Jian

    2013-04-01

    To improve the video fire detection rate, a robust fire detection algorithm based on the color, motion and pattern characteristics of fire targets is proposed, which achieves a satisfactory detection rate for different fire scenes. In this algorithm: (a) a rule-based generic color model was developed from analysis of a large quantity of flame pixels; (b) from the traditional GICA (Geometrical Independent Component Analysis) model, a Cumulative Geometrical Independent Component Analysis (C-GICA) model was developed for motion detection without a static background; and (c) a BP neural network fire recognition model based on multiple features of the fire pattern was developed. Fire detection tests on benchmark fire video clips of different scenes have shown the robustness, accuracy and fast response of the algorithm.
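
    The record does not give the learned rules; as a hedged stand-in, a widely used rule-of-thumb for flame-coloured pixels of the same general kind checks that red dominates and is bright:

```python
import numpy as np

def flame_candidates(rgb, r_thresh=180):
    """rgb: uint8 array (H, W, 3); boolean mask of flame-coloured pixels."""
    r = rgb[..., 0].astype(int)
    g = rgb[..., 1].astype(int)
    b = rgb[..., 2].astype(int)
    return (r > r_thresh) & (r >= g) & (g >= b)   # R >= G >= B, R bright
```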

  20. Performance of the JPEG Estimated Spectrum Adaptive Postfilter (JPEG-ESAP) for Low Bit Rates

    NASA Technical Reports Server (NTRS)

    Linares, Irving (Inventor)

    2016-01-01

    Frequency-based, pixel-adaptive filtering using the JPEG-ESAP algorithm may allow low-bit-rate JPEG color images to be compressed further while maintaining equivalent quality at a smaller file size or bit rate. For RGB, an image is decomposed into three color bands: red, green, and blue. The JPEG-ESAP algorithm is then applied to each band (e.g., once for red, once for green, and once for blue) and the output of each application of the algorithm is rebuilt into a single color image. The ESAP algorithm may be repeatedly applied to MPEG-2 video frames to reduce their bit rate by a factor of 2 or 3, while maintaining equivalent video quality, both perceptually and objectively, as recorded in the computed PSNR values.

  1. Guidance system operations plan for manned CM earth orbital missions using program Skylark 1. Section 2: Data links

    NASA Technical Reports Server (NTRS)

    Hamilton, M. H.

    1972-01-01

    A computer program to define the digital uplink and downlink for use in manned command module orbital missions is presented. The subjects discussed are: (1) digital uplink to the command module, (2) CMC digital downlink, (3) downlist formats, (4) description of telemetered quantities, (5) flagbits, and (6) effects of Fresh Start (V36) and Hardware Restart on flagword and channel bits.

  2. The 3D Human Motion Control Through Refined Video Gesture Annotation

    NASA Astrophysics Data System (ADS)

    Jin, Yohan; Suk, Myunghoon; Prabhakaran, B.

    In the beginning of the computer and video game industry, simple game controllers consisting of buttons and joysticks were employed, but recently game consoles have been replacing joystick buttons with novel interfaces, such as the remote controllers with motion sensing technology on the Nintendo Wii [1]. In particular, video-based human-computer interaction (HCI) techniques have been applied to games, the representative example being 'Eyetoy' on the Sony PlayStation 2. Video-based HCI has the great benefit of freeing players from the intractable game controller. Moreover, for communication between humans and computers, video-based HCI is crucial since it is intuitive, easy to get, and inexpensive. On the other hand, extracting semantic low-level features from video human motion data is still a major challenge: the level of accuracy is strongly dependent on each subject's characteristics and on environmental noise. Of late, people have been using 3D motion-capture data for visualizing real human motions in 3D space (e.g., 'Tiger Woods' in EA Sports, 'Angelina Jolie' in the Beowulf movie) and analyzing motions for specific performance (e.g., 'golf swing' and 'walking'). A 3D motion-capture system ('VICON') generates a matrix for each motion clip, in which each column corresponds to a human sub-body part and each row represents a time frame of the data capture. Thus, we can extract a sub-body part's motion simply by selecting specific columns. Unlike the low-level feature values of video human motion, 3D human motion-capture data matrices are not pixel values, but are closer to the human level of semantics.
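
    The column-selection point is concrete in code: a motion clip is a (frames x channels) matrix, and a sub-body part is just a slice of columns. The channel range below is purely hypothetical.

```python
import numpy as np

clip = np.random.rand(300, 62)          # 300 frames, 62 mocap channels
RIGHT_ARM = slice(24, 31)               # hypothetical channel range
right_arm = clip[:, RIGHT_ARM]          # (300, 7) trajectory of one sub-part
```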

  3. A practical implementation of free viewpoint video system for soccer games

    NASA Astrophysics Data System (ADS)

    Suenaga, Ryo; Suzuki, Kazuyoshi; Tezuka, Tomoyuki; Panahpour Tehrani, Mehrdad; Takahashi, Keita; Fujii, Toshiaki

    2015-03-01

    In this paper, we present a free viewpoint video generation system with billboard representation for soccer games. Free viewpoint video generation is a technology that enables users to watch 3-D objects from their desired viewpoints. Practical implementation of free viewpoint video for sports events is in high demand, but a commercially acceptable system has not yet been developed. The main obstacles are insufficient user-end quality of the synthesized images and highly complex procedures that sometimes require manual operations. In this work, we aim to develop a commercially acceptable free viewpoint video system with a billboard representation. A supposed scenario is that soccer games during the day can be broadcast in 3-D in the evening of the same day. Our work is still ongoing, but we have already developed several techniques to support our goal. First, we captured an actual soccer game at an official stadium, using 20 full-HD professional cameras. Second, we implemented several tools for free viewpoint video generation, as follows. To facilitate free viewpoint video generation, all cameras must be calibrated; we calibrated all cameras using checkerboard images and feature points on the field (cross points of the soccer field lines). Each player region is extracted from the captured images manually, while the background region is estimated automatically by observing the chrominance changes of each pixel in the temporal domain. Additionally, we developed a user interface for visualizing free viewpoint video generation using a graphics library (OpenGL), which is suitable not only for commercialized TV sets but also for devices such as smartphones. However, the practical system has not yet been completed and our study is still ongoing.

  4. Implementation of an Optical Readout System for High-Sensitivity Terahertz Microelectromechanical Sensor Array

    DTIC Science & Technology

    2014-09-01

    [Only fragmentary indexing text is available for this record: as a metal rod moves about the illuminated scene, pixels in the detector start to flicker; the 'flickering' effect is due to the metal rod blocking THz radiation and is more apparent in the video. The remaining fragments concern mitigating convective heat exchange between the sensor and the ambient surroundings.]

  5. Infrared video based gas leak detection method using modified FAST features

    NASA Astrophysics Data System (ADS)

    Wang, Min; Hong, Hanyu; Huang, Likun

    2018-03-01

    Invisible leaking gas is dangerous and can easily lead to fire or explosion, so detecting leaks in time is essential; many new technologies have arisen for this in recent years, among which infrared-video-based gas leak detection is widely recognized as a viable tool. However, existing infrared-video-based methods flag every moving region of a video frame as leaking gas, without discriminating the nature of each detected region; a walking person, for example, may also be detected as gas. To solve this problem, we propose a novel infrared-video-based gas leak detection method that is able to effectively suppress strong motion disturbances. Firstly, a Gaussian mixture model (GMM) is used to establish the background model. Then, based on the observation that the shapes of gas regions differ from those of most rigid moving objects, we modify the Features from Accelerated Segment Test (FAST) algorithm and use the modified FAST (mFAST) features to describe each connected component. In view of the fact that the statistical properties of the mFAST features extracted from gas regions differ from those of other motion regions, we propose the Pixel-Per-Points (PPP) condition to further select candidate connected components. Experimental results show that the algorithm effectively suppresses most strong motion disturbances and achieves real-time leaking-gas detection.
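
    A minimal sketch of that pipeline, assuming OpenCV: GMM background subtraction, connected components, then a corner-density test with stock FAST, since smooth, diffuse gas plumes yield far fewer corners per unit area than textured rigid movers. Thresholds are illustrative, and the test is a plain points-per-area ratio rather than the paper's exact mFAST/PPP condition.

```python
import cv2
import numpy as np

subtractor = cv2.createBackgroundSubtractorMOG2()
fast = cv2.FastFeatureDetector_create(threshold=20)

def gas_like_regions(gray, min_area=200, density_thresh=0.002):
    mask = (subtractor.apply(gray) > 0).astype(np.uint8)   # motion mask
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    keypoints = fast.detect(gray, None)
    regions = []
    for i in range(1, n):                                  # 0 = background
        area = stats[i, cv2.CC_STAT_AREA]
        if area < min_area:
            continue
        pts = sum(labels[int(k.pt[1]), int(k.pt[0])] == i for k in keypoints)
        if pts / area < density_thresh:          # few corners: gas-like blob
            regions.append(stats[i])
    return regions
```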

  6. A full-duplex optical access system with hybrid 64/16/4QAM-OFDM downlink

    NASA Astrophysics Data System (ADS)

    He, Chao; Tan, Ze-fu; Shao, Yu-feng; Cai, Li; Pu, He-sheng; Zhu, Yun-le; Huang, Si-si; Liu, Yu

    2016-09-01

    A full-duplex passive optical access scheme is proposed and verified by simulation, in which a hybrid 64/16/4-quadrature amplitude modulation (64/16/4QAM) orthogonal frequency division multiplexing (OFDM) optical signal is used for downstream transmission and a non-return-to-zero (NRZ) optical signal for upstream transmission. For downlink transmission and reception, in-phase/quadrature (I/Q) modulation based on a Mach-Zehnder modulator (MZM) and homodyne coherent detection are employed, respectively. The simulation results show that a bit error ratio (BER) below the hard-decision forward error correction (HD-FEC) threshold is successfully obtained for the hybrid-modulation downlink OFDM optical signal over a transmission path of 20-km-long standard single-mode fiber (SSMF). In addition, by dividing the system bandwidth into several subchannels consisting of contiguous subcarriers, it is convenient for users to select different channels depending on their communication requirements.

  7. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media have garnered more interest from many areas of business, government and education in recent years, due primarily to improvements in the quality of cameras and the speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz with a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
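
    The 70 Hz flash enables lock-in-style detection: demodulating each photodiode's samples at the flash frequency rejects constant ambient infrared such as sunlight. A compact sketch of that filtering and the resulting centroid estimate (sample counts illustrative):

```python
import numpy as np

FS, F_FLASH = 4000, 70       # photodiode frame rate and LED flash rate (Hz)

def necklace_position(samples):
    """samples: (time, n_pixels) array covering a few flash cycles."""
    t = np.arange(samples.shape[0]) / FS
    ref = np.exp(2j * np.pi * F_FLASH * t)      # lock-in reference
    amp = np.abs(ref @ samples)                 # 70 Hz amplitude per pixel
    return (amp * np.arange(amp.size)).sum() / amp.sum()   # centroid (px)
```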

  8. Tracker-on-C for cone-beam CT-guided surgery: evaluation of geometric accuracy and clinical applications

    NASA Astrophysics Data System (ADS)

    Reaungamornrat, S.; Otake, Y.; Uneri, A.; Schafer, S.; Mirota, D. J.; Nithiananthan, S.; Stayman, J. W.; Khanna, A. J.; Reh, D. D.; Gallia, G. L.; Taylor, R. H.; Siewerdsen, J. H.

    2012-02-01

    Conventional surgical tracking configurations carry a variety of limitations in line-of-sight, geometric accuracy, and mismatch with the surgeon's perspective (for video augmentation). With increasing utilization of mobile C-arms, particularly those allowing cone-beam CT (CBCT), there is opportunity to better integrate surgical trackers at the bedside to address such limitations. This paper describes a tracker configuration in which the tracker is mounted directly on the C-arm. To maintain registration within a dynamic coordinate system, a reference marker visible across the full C-arm rotation is implemented, and the "Tracker-on-C" configuration is shown to provide improved target registration error (TRE) over a conventional in-room setup: (0.9 ± 0.4) mm vs (1.9 ± 0.7) mm, respectively. The system can also generate digitally reconstructed radiographs (DRRs) from the perspective of a tracked tool ("x-ray flashlight"), the tracker, or the C-arm ("virtual fluoroscopy"), with geometric accuracy in virtual fluoroscopy of (0.4 ± 0.2) mm. Using a video-based tracker, planning data and DRRs can be superimposed on the video scene from a natural perspective over the surgical field, with geometric accuracy of (0.8 ± 0.3) pixels for planning data overlay and (0.6 ± 0.4) pixels for DRR overlay across all C-arm angles. The field of view of fluoroscopy or CBCT can also be overlaid on real-time video ("Virtual Field Light") to assist C-arm positioning. The fixed transformation between the x-ray image and the tracker facilitated quick, accurate intraoperative registration. The workflow and precision associated with a variety of realistic surgical tasks were significantly improved using the Tracker-on-C; for example, nearly a factor of 2 reduction in the time required for C-arm positioning, reduction or elimination of dose in "hunting" for a specific fluoroscopic view, and confident placement of the x-ray FOV on the surgical target. The proposed configuration streamlines the integration of C-arm CBCT with real-time tracking and demonstrated utility in a spectrum of image-guided interventions (e.g., spine surgery) benefiting from improved accuracy, enhanced visualization, and reduced radiation exposure.

  9. Development of new family of wide-angle anamorphic lens with controlled distortion profile

    NASA Astrophysics Data System (ADS)

    Gauvin, Jonny; Doucet, Michel; Wang, Min; Thibault, Simon; Blanc, Benjamin

    2005-08-01

    It is well known that a fish-eye lens produces a circular image of the scene with a particular distortion profile. When using a fish-eye lens with a standard sensor (e.g. 1/3", 1/4", etc.), only part of the rectangular detector area is used, leaving many pixels unused. We propose a new approach to obtain enhanced resolution for panoramic imaging. In this paper, various arrangements of an innovative 180-degree anamorphic wide-angle lens design are considered. Their performance, as well as lens manufacturability, is also discussed. The concept of the design is to use anamorphic optics to produce an elliptical image that maximizes pixel resolution along both axes. Furthermore, a non-linear distortion profile is introduced to enhance spatial resolution at specific field angles. Typical applications such as panoramic photography, video conferencing, and homeland/transportation security are also presented.

  10. Actively addressed single pixel full-colour plasmonic display

    NASA Astrophysics Data System (ADS)

    Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis

    2017-05-01

    Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video.

  11. Backside-illuminated 6.6-μm pixel video-rate CCDs for scientific imaging applications

    NASA Astrophysics Data System (ADS)

    Tower, John R.; Levine, Peter A.; Hsueh, Fu-Lung; Patel, Vipulkumar; Swain, Pradyumna K.; Meray, Grazyna M.; Andrews, James T.; Dawson, Robin M.; Sudol, Thomas M.; Andreas, Robert

    2000-05-01

    A family of backside illuminated CCD imagers with 6.6 μm pixels has been developed. The imagers feature full 12 bit (> 4,000:1) dynamic range with a measured noise floor of < 10 e RMS at 5 MHz clock rates, and a measured full well capacity of > 50,000 e. The modulation transfer function performance is excellent, with measured MTF at Nyquist of 46% for 500 nm illumination. Three device types have been developed. The first device is a 1 K X 1 K full frame device with a single output port, which can be run as a 1 K X 512 frame transfer device. The second device is a 512 X 512 frame transfer device with a single output port. The third device is a 512 X 512 split frame transfer device with four output ports. All feature the high quantum efficiency afforded by backside illumination.
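
    A quick sanity check of the quoted figures: the measured full well and noise floor imply the claimed 12-bit dynamic range, as the short computation below shows.

```python
# Dynamic range implied by the quoted full well capacity and noise floor.
import math

full_well = 50_000   # e-, measured full well capacity
noise = 10           # e- RMS, measured noise floor

ratio = full_well / noise               # 5000:1, consistent with > 4,000:1
bits = math.log2(ratio)                 # ~12.3 bits
db = 20 * math.log10(ratio)             # ~74 dB

print(f"{ratio:.0f}:1 -> {bits:.1f} bits ({db:.0f} dB)")
```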

  12. Recurrent neural network based virtual detection line

    NASA Astrophysics Data System (ADS)

    Kadikis, Roberts

    2018-04-01

    The paper proposes an efficient method for detection of moving objects in the video. The objects are detected when they cross a virtual detection line. Only the pixels of the detection line are processed, which makes the method computationally efficient. A Recurrent Neural Network processes these pixels. The machine learning approach allows one to train a model that works in different and changing outdoor conditions. Also, the same network can be trained for various detection tasks, which is demonstrated by the tests on vehicle and people counting. In addition, the paper proposes a method for semi-automatic acquisition of labeled training data. The labeling method is used to create training and testing datasets, which in turn are used to train and evaluate the accuracy and efficiency of the detection method. The method shows similar accuracy as the alternative efficient methods but provides greater adaptability and usability for different tasks.
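
    A minimal sketch of the idea, under assumed shapes and sizes (this is not the author's exact network): a GRU consumes the detection-line pixels frame by frame and emits a per-frame crossing probability.

```python
# Sketch: an RNN over the pixels of a virtual detection line. Line length,
# hidden size, and output head are assumptions for illustration.
import torch
import torch.nn as nn

class DetectionLineRNN(nn.Module):
    def __init__(self, line_len=64, hidden=128):
        super().__init__()
        self.gru = nn.GRU(input_size=line_len * 3, hidden_size=hidden,
                          batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                       # x: (batch, frames, line_len, 3)
        b, t = x.shape[:2]
        out, _ = self.gru(x.reshape(b, t, -1))  # flatten the line per frame
        return torch.sigmoid(self.head(out)).squeeze(-1)  # (batch, frames)

probs = DetectionLineRNN()(torch.rand(2, 100, 64, 3))  # dummy video clip
print(probs.shape)  # torch.Size([2, 100]): crossing probability per frame
```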

  13. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of the ISS imagery for the prompt computation of the jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided within several hours of separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); then the location of the jettisoned object was calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
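
    The velocity estimation step can be illustrated as follows: given per-frame triangulated positions in the ISS frame, a least-squares linear fit over time yields the departure velocity vector. All values below are synthetic.

```python
# Hedged sketch: departure velocity from a sequence of triangulated ISS-frame
# positions via a linear least-squares fit over time. Data are made up.
import numpy as np

t = np.arange(0.0, 5.0, 0.5)                               # s, frame timestamps
pos = np.array([[0.1 * s, -0.4 * s, 1.0 * s] for s in t])  # m, fake track
pos += np.random.normal(scale=0.02, size=pos.shape)        # measurement noise

A = np.vstack([t, np.ones_like(t)]).T           # fit pos = v*t + p0 per axis
vel = np.linalg.lstsq(A, pos, rcond=None)[0][0] # slope of each axis, m/s
print("velocity vector [m/s]:", vel.round(3))   # ~[0.1, -0.4, 1.0]
```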

  14. MMIC Phased Array Demonstrations with ACTS

    NASA Technical Reports Server (NTRS)

    Raquet, Charles A. (Compiler); Martzaklis, Konstantinos (Compiler); Zakrajsek, Robert J. (Compiler); Andro, Monty (Compiler); Turtle, John P.

    1996-01-01

    Over a one year period from May 1994 to May 1995, a number of demonstrations were conducted by the NASA Lewis Research Center (LeRC) in which voice, data, and/or video links were established via NASA's advanced communications technology satellite (ACTS) between the ACTS link evaluation terminal (LET) in Cleveland, OH, and aeronautical and mobile or fixed Earth terminals having monolithic microwave integrated circuit (MMIC) phased array antenna systems. This paper describes four of these. In one, a duplex voice link between an aeronautical terminal on the LeRC Learjet and the ACTS was achieved. Two others demonstrated duplex voice (and in one case video as well) links between the ACTS and an Army vehicle. The fourth demonstrated a high data rate downlink from ACTS to a fixed terminal. Array antenna systems used in these demonstrations were developed by LeRC and featured LeRC and Air Force experimental arrays using gallium arsenide MMIC devices at each radiating element for electronic beam steering and distributed power amplification. The single 30 GHz transmit array was developed by NASA/LeRC and Texas Instruments. The three 20 GHz receive arrays were developed in a cooperative effort with the Air Force Rome Laboratory, taking advantage of existing Air Force array development contracts with Boeing and Lockheed Martin. The paper describes the four proof-of-concept arrays and the array control system. The system configured for each of the demonstrations is described, and results are discussed.

  15. Transactions of the Army Conference on Applied Mathematics and Computing (3rd) Held at Atlanta, Georgia on 13-16 May 1986

    DTIC Science & Technology

    1986-02-01

    [OCR residue from figures: "Task 1 in (x,y) plane", "Fig. 4b Task 2 in (x,y) plane", m1 = m2 = 2.0 kg] ... represented by a grid of 400x200 points, each point corresponding to a pixel of a computer video terminal. For each point A = (Re(A), Im(A)), a free critical

  16. Estimation and Mitigation of Channel Non-Reciprocity in Massive MIMO

    NASA Astrophysics Data System (ADS)

    Raeesi, Orod; Gokceoglu, Ahmet; Valkama, Mikko

    2018-05-01

    Time-division duplex (TDD) based massive MIMO systems rely on the reciprocity of the wireless propagation channels when calculating the downlink precoders based on uplink pilots. However, the effective uplink and downlink channels incorporating the analog radio front-ends of the base station (BS) and user equipments (UEs) exhibit non-reciprocity due to non-identical behavior of the individual transmit and receive chains. When the downlink precoder is not aware of such channel non-reciprocity (NRC), system performance can be significantly degraded due to NRC-induced interference terms. In this work, we consider a general TDD-based massive MIMO system where frequency-response mismatches at both the BS and UEs, as well as the mutual coupling mismatch at the BS large-array system, all coexist and induce channel NRC. Based on the NRC-impaired signal models, we first propose a novel iterative estimation method for acquiring both the BS and UE side NRC matrices and then also propose a novel NRC-aware downlink precoder design which utilizes the obtained estimates. Furthermore, an efficient pilot signaling scheme between the BS and UEs is introduced in order to facilitate executing the proposed estimation method and the NRC-aware precoding technique in practical systems. Comprehensive numerical results indicate substantially improved spectral efficiency performance when the proposed NRC estimation and NRC-aware precoding methods are adopted, compared to the existing state-of-the-art methods.
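
    The origin of the NRC can be illustrated with a toy model (an illustration only, not the paper's estimator or precoder): diagonal gain/phase mismatches in the transmit and receive chains make the effective uplink and downlink channels differ even though the propagation channel itself is reciprocal.

```python
# Toy numpy model of channel non-reciprocity. Array sizes and the 10%
# gain/phase error levels are arbitrary assumptions.
import numpy as np

rng = np.random.default_rng(0)
M, K = 8, 2                                  # BS antennas, single-antenna UEs
H = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))  # propagation

def mismatch(n):  # random gain/phase errors around unity, per RF chain
    g = 1 + 0.1 * rng.normal(size=n)
    return np.diag(g * np.exp(1j * 0.1 * rng.normal(size=n)))

T_bs, R_bs = mismatch(M), mismatch(M)        # BS transmit/receive front-ends
T_ue, R_ue = mismatch(K), mismatch(K)        # UE transmit/receive front-ends

G_ul = R_bs @ H @ T_ue                       # effective uplink channel (M x K)
G_dl = R_ue @ H.T @ T_bs                     # effective downlink channel (K x M)

nrc = np.linalg.norm(G_dl - G_ul.T) / np.linalg.norm(G_ul)
print(f"relative non-reciprocity error: {nrc:.2f}")  # 0 only if chains match
```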

  17. A 1024×768-12μm Digital ROIC for uncooled microbolometer FPAs

    NASA Astrophysics Data System (ADS)

    Eminoglu, Selim

    2017-02-01

    This paper reports the development of a new digital microbolometer Readout Integrated Circuit (D-ROIC), called MT10212BD. It has a format of 1024 × 768 (XGA) and a pixel pitch of 12μm. MT10212BD is Mikro Tasarim's second 12μm pitch microbolometer ROIC, which is developed specifically for surface-micromachined microbolometer detector arrays with small pixel pitch using high-TCR pixel materials, such as VOx and a-Si. MT10212BD has an all-digital system-on-chip architecture, which generates programmable timing and biasing, and performs 14-bit analog-to-digital conversion (ADC). The signal processing chain in the ROIC is composed of pixel bias circuitry and an integrator-based programmable gain amplifier followed by column-parallel ADC circuitry. MT10212BD has a serial programming interface that can be used to configure the programmable ROIC features and to load the Non-Uniformity-Correction (NUC) data to the ROIC. MT10212BD has a total of 8 high-speed serial digital video outputs, which can be programmed to operate in the 2, 4, and 8-output modes and can support frame rates above 60 fps. The high-speed serial digital outputs support data rates as high as 400 Mega-bits/s when operated at a 50 MHz system clock frequency. There is on-chip phase-locked-loop (PLL) based timing circuitry to generate the high-speed clocks used in the ROIC. The ROIC is designed to support pixel resistance values ranging from 30 kΩ to 90 kΩ, with a nominal value of 60 kΩ. The ROIC has a globally programmable gain in the column readout, which can be adjusted based on the detector resistance value.

  18. Real-time demonstration hardware for enhanced DPCM video compression algorithm

    NASA Technical Reports Server (NTRS)

    Bizon, Thomas P.; Whyte, Wayne A., Jr.; Marcopoli, Vincent R.

    1992-01-01

    The lack of available wideband digital links as well as the complexity of implementation of bandwidth efficient digital video CODECs (encoder/decoder) has worked to keep the cost of digital television transmission too high to compete with analog methods. Terrestrial and satellite video service providers, however, are now recognizing the potential gains that digital video compression offers and are proposing to incorporate compression systems to increase the number of available program channels. NASA is similarly recognizing the benefits of and trend toward digital video compression techniques for transmission of high quality video from space and, therefore, has developed a digital television bandwidth compression algorithm to process standard National Television Systems Committee (NTSC) composite color television signals. The algorithm is based on differential pulse code modulation (DPCM), but additionally utilizes a non-adaptive predictor, non-uniform quantizer and multilevel Huffman coder to reduce the data rate substantially below that achievable with straight DPCM. The non-adaptive predictor and multilevel Huffman coder combine to set this technique apart from other DPCM encoding algorithms. All processing is done on an intra-field basis to prevent motion degradation and minimize hardware complexity. Computer simulations have shown the algorithm will produce broadcast quality reconstructed video at an average transmission rate of 1.8 bits/pixel. Hardware implementation of the DPCM circuit, non-adaptive predictor and non-uniform quantizer has been completed, providing real-time demonstration of the image quality at full video rates. Video sampling/reconstruction circuits have also been constructed to accomplish the analog video processing necessary for the real-time demonstration. Performance results for the completed hardware compare favorably with simulation results. Hardware implementation of the multilevel Huffman encoder/decoder is currently under development along with implementation of a buffer control algorithm to accommodate the variable data rate output of the multilevel Huffman encoder. A video CODEC of this type could be used to compress NTSC color television signals where high quality reconstruction is desirable (e.g., Space Station video transmission, transmission direct-to-the-home via direct broadcast satellite systems or cable television distribution to system headends and direct-to-the-home).
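
    The core DPCM loop can be sketched as follows; the quantizer levels below are invented and the Huffman stage is omitted, so this is an illustration of the principle rather than the NASA hardware design.

```python
# Illustrative 1-D DPCM codec: previous-sample predictor plus a non-uniform
# quantizer that spends fine levels on small prediction errors and coarse
# levels on large ones. Quantizer levels are assumptions.
import numpy as np

levels = np.array([-60, -28, -12, -4, 0, 4, 12, 28, 60])  # non-uniform levels

def dpcm(signal):
    recon = np.empty_like(signal, dtype=float)
    pred = 0.0
    codes = []
    for i, s in enumerate(signal):
        err = s - pred                               # prediction error
        q = levels[np.argmin(np.abs(levels - err))]  # nearest quantizer level
        codes.append(q)                              # would be Huffman-coded
        recon[i] = pred + q                          # decoder's reconstruction
        pred = recon[i]                              # predictor tracks decoder
    return np.array(codes), recon

line = 128 + 50 * np.sin(np.linspace(0, 3, 64))      # fake video scanline
codes, recon = dpcm(line)
print("max reconstruction error:", np.abs(line - recon).max())
```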

  19. The Texas Thermal Interface: A real-time computer interface for an Inframetrics infrared camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Storek, D.J.; Gentle, K.W.

    1996-03-01

    The Texas Thermal Interface (TTI) offers an advantageous alternative to the conventional video path for computer analysis of infrared images from Inframetrics cameras. The TTI provides real-time computer data acquisition of 48 consecutive fields (version described here) with 8-bit pixels. The alternative requires time-consuming individual frame grabs from video tape with frequent loss of resolution in the D/A/D conversion. Within seconds after the event, the TTI temperature files may be viewed and processed to infer heat fluxes or other quantities as needed. The system cost is far less than commercial units which offer less capability. The system was developed for and is being used to measure heat fluxes to the plasma-facing components in a tokamak. © 1996 American Institute of Physics.

  20. Computational multispectral video imaging [Invited].

    PubMed

    Wang, Peng; Menon, Rajesh

    2018-01-01

    Multispectral imagers reveal information unperceivable to humans and conventional cameras. Here, we demonstrate a compact single-shot multispectral video-imaging camera by placing a micro-structured diffractive filter in close proximity to the image sensor. The diffractive filter converts spectral information to a spatial code on the sensor pixels. Following a calibration step, this code can be inverted via regularization-based linear algebra to compute the multispectral image. We experimentally demonstrated spectral resolution of 9.6 nm within the visible band (430-718 nm). We further show that the spatial resolution is enhanced by over 30% compared with the case without the diffractive filter. We also demonstrate Vis-IR imaging with the same sensor. Because no absorptive color filters are utilized, sensitivity is preserved as well. Finally, the diffractive filters can be easily manufactured using optical lithography and replication techniques.
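
    The inversion step described above can be sketched with generic Tikhonov-regularized least squares; the calibration matrix and spectra below are random stand-ins, not measured data.

```python
# Sketch of the recovery step under the stated model: the sensor measurement
# b is a spatial code A @ x of the multispectral image x, inverted here with
# Tikhonov regularization. A is a made-up calibration matrix.
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_bands = 64, 16
A = rng.normal(size=(n_pixels, n_bands))           # calibrated spectral code
x_true = rng.uniform(size=n_bands)                 # unknown spectrum
b = A @ x_true + 0.01 * rng.normal(size=n_pixels)  # noisy sensor reading

lam = 1e-2                                         # regularization weight
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ b)
print("relative error:",
      np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```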

  1. GoPro HERO 4 Black recording of scleral buckle placement during retinal detachment repair.

    PubMed

    Ho, Vincent Y; Shah, Vaishali G; Yates, David M; Shah, Gaurav K

    2017-08-01

    GoPro and Google Glass technology have previously been used to record procedures in ophthalmology and other medical fields. In this manuscript, GoPro's latest HERO 4 Black edition camera (GoPro Inc, San Mateo, Calif.) will be used to record the placement of a scleral buckle during retinal detachment surgery. GoPro HERO 4 Black edition camera, which records 4K-quality video with a resolution of 3840 (pixels) x 2160 (lines), was mounted on a head strap to record placement of a scleral buckle for a retinal detachment. Excellent video quality was achieved with the 4K SuperView setting. Bluetooth connection with an Apple iPad (Apple Inc, Cupertino, Calif.) provided live streaming and use of the GoPro App. Zoom, horizontal/vertical alignment, exposure, and contrast adjustments were made with postproduction editing on GoPro Studio software. Video recording with the GoPro HERO 4 Black edition camera is an excellent way to document extraocular procedures to improve medical education, self-training, or medicolegal documentation. Copyright © 2017 Canadian Ophthalmological Society. Published by Elsevier Inc. All rights reserved.

  2. Automatic vehicle counting using background subtraction method on gray scale images and morphology operation

    NASA Astrophysics Data System (ADS)

    Adi, K.; Widodo, A. P.; Widodo, C. E.; Pamungkas, A.; Putranto, A. B.

    2018-05-01

    Traffic monitoring requires counting the number of vehicles passing a road, particularly for highway transportation management. It is therefore necessary to develop a system able to count vehicles automatically, and video processing methods make such automatic counting possible. This research developed a vehicle counting system for a toll road. The system includes video acquisition, frame extraction, and image processing for each frame. Video acquisition was conducted in the morning, at noon, in the afternoon, and in the evening. The system employs background subtraction and morphology methods on gray-scale images for vehicle counting. The best results were obtained in the morning, with a counting accuracy of 86.36 %, whereas the lowest accuracy, 21.43 %, occurred in the evening. The difference between the morning and evening results is caused by the different illumination, which changes the values of the image pixels.
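
    A minimal OpenCV rendering of the described pipeline, with synthetic frames and placeholder parameter values standing in for the real footage:

```python
# Grayscale background subtraction, thresholding, morphological clean-up, and
# blob counting. Frames are synthetic; threshold/kernel sizes are assumptions.
import cv2
import numpy as np

background = np.full((240, 320), 30, np.uint8)          # fake empty road
frame = background.copy()
cv2.rectangle(frame, (100, 80), (160, 130), 200, -1)    # "vehicle" 1
cv2.rectangle(frame, (220, 60), (270, 110), 180, -1)    # "vehicle" 2

diff = cv2.absdiff(frame, background)                   # background subtraction
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)   # drop speckle noise
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, kernel)  # fill vehicle bodies

n_labels, _ = cv2.connectedComponents(mask)
print("vehicles in frame:", n_labels - 1)               # label 0 = background
```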

  3. Studying fish near ocean energy devices using underwater video

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Matzner, Shari; Hull, Ryan E.; Harker-Klimes, Genevra EL

    The effects of energy devices on fish populations are not well-understood, and studying the interactions of fish with tidal and instream turbines is challenging. To address this problem, we have evaluated algorithms to automatically detect fish in underwater video and propose a semi-automated method for ocean and river energy device ecological monitoring. The key contributions of this work are the demonstration of a background subtraction algorithm (ViBE) that detected 87% of human-identified fish events and is suitable for use in a real-time system to reduce data volume, and the demonstration of a statistical model to classify detections as fish or not fish that achieved a correct classification rate of 85% overall and 92% for detections larger than 5 pixels. Specific recommendations for underwater video acquisition to better facilitate automated processing are given. The recommendations will help energy developers put effective monitoring systems in place, and could lead to a standard approach that simplifies the monitoring effort and advances the scientific understanding of the ecological impacts of ocean and river energy devices.

  4. Network-based H.264/AVC whole frame loss visibility model and frame dropping methods.

    PubMed

    Chang, Yueh-Lun; Lin, Ting-Lan; Cosman, Pamela C

    2012-08-01

    We examine the visual effect of whole frame loss by different decoders. Whole frame losses are introduced in H.264/AVC compressed videos which are then decoded by two different decoders with different common concealment effects: frame copy and frame interpolation. The videos are seen by human observers who respond to each glitch they spot. We found that about 39% of whole frame losses of B frames are not observed by any of the subjects, and over 58% of the B frame losses are observed by 20% or fewer of the subjects. Using simple predictive features which can be calculated inside a network node with no access to the original video and no pixel level reconstruction of the frame, we developed models which can predict the visibility of whole B frame losses. The models are then used in a router to predict the visual impact of a frame loss and perform intelligent frame dropping to relieve network congestion. Dropping frames based on their visual scores proves superior to random dropping of B frames.
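
    A toy router-side policy in the spirit of the paper (scores invented): when congestion forces k frames to be dropped, drop the B frames with the lowest predicted loss visibility.

```python
# Visibility-ranked frame dropping. Frame ids and visibility scores are
# made-up stand-ins for the model's predictions.
def frames_to_drop(b_frames, k):
    """b_frames: list of (frame_id, predicted_visibility); returns k ids."""
    ranked = sorted(b_frames, key=lambda f: f[1])   # least visible first
    return [frame_id for frame_id, _ in ranked[:k]]

queue = [(101, 0.02), (102, 0.61), (103, 0.00), (104, 0.35)]
print(frames_to_drop(queue, 2))   # [103, 101]
```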

  5. A gradient method for the quantitative analysis of cell movement and tissue flow and its application to the analysis of multicellular Dictyostelium development.

    PubMed

    Siegert, F; Weijer, C J; Nomura, A; Miike, H

    1994-01-01

    We describe the application of a novel image processing method, which allows quantitative analysis of cell and tissue movement in a series of digitized video images. The result is a vector velocity field showing average direction and velocity of movement for every pixel in the frame. We apply this method to the analysis of cell movement during different stages of the Dictyostelium developmental cycle. We analysed time-lapse video recordings of cell movement in single cells, mounds and slugs. The program can correctly assess the speed and direction of movement of either unlabelled or labelled cells in a time series of video images depending on the illumination conditions. Our analysis of cell movement during multicellular development shows that the entire morphogenesis of Dictyostelium is characterized by rotational cell movement. The analysis of cell and tissue movement by the velocity field method should be applicable to the analysis of morphogenetic processes in other systems such as gastrulation and neurulation in vertebrate embryos.
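
    A per-pixel velocity field of this kind can be computed with a standard dense optical-flow routine; the sketch below uses OpenCV's Farneback flow as a stand-in for the authors' gradient method, on a synthetically shifted frame pair.

```python
# Dense per-pixel velocity field from two consecutive frames. The frames are
# synthetic (noise shifted by 3 px), and all flow parameters are generic.
import cv2
import numpy as np

prev = np.random.randint(0, 255, (120, 160)).astype(np.uint8)
curr = np.roll(prev, 3, axis=1)             # scene shifted 3 px to the right

flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                    pyr_scale=0.5, levels=3, winsize=15,
                                    iterations=3, poly_n=5, poly_sigma=1.2,
                                    flags=0)
speed = np.linalg.norm(flow, axis=2)                 # pixels/frame, per pixel
direction = np.arctan2(flow[..., 1], flow[..., 0])   # movement direction
print("mean speed:", speed.mean(), "px/frame")       # ~3
```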

  6. Object tracking mask-based NLUT on GPUs for real-time generation of holographic videos of three-dimensional scenes.

    PubMed

    Kwon, M-W; Kim, S-C; Yoon, S-E; Ho, Y-S; Kim, E-S

    2015-02-09

    A new object tracking mask-based novel-look-up-table (OTM-NLUT) method is proposed and implemented on graphics-processing-units (GPUs) for real-time generation of holographic videos of three-dimensional (3-D) scenes. Since the proposed method is designed to be matched with software and memory structures of the GPU, the number of compute-unified-device-architecture (CUDA) kernel function calls and the computer-generated hologram (CGH) buffer size of the proposed method have been significantly reduced. It therefore results in a great increase of the computational speed of the proposed method and enables real-time generation of CGH patterns of 3-D scenes. Experimental results show that the proposed method can generate 31.1 frames of Fresnel CGH patterns with 1,920 × 1,080 pixels per second, on average, for three test 3-D video scenarios with 12,666 object points on three GPU boards of NVIDIA GTX TITAN, and confirm the feasibility of the proposed method in the practical application of electro-holographic 3-D displays.

  7. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre channel is an emerging high-speed digital network technology that continues to make inroads into the avionics arena. The suitability of fibre channel for such applications is largely due to its flexibility in these key areas: Network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections. The physical layer supports either copper or fiber optic implementations with a Bit Error Rate of less than 10^-12. Multiple Classes of Service are available. Multiple Upper Level Protocols are supported. Multiple high speed data rates offer open ended growth paths providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps providing effective data rates of 100 and 200 MBps respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standardization (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a fibre channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV) developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data typically sourced by various sensors to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP) targeted for avionics systems where the functionality of a more complex ULP is not required.

  8. Facial recognition using simulated prosthetic pixelized vision.

    PubMed

    Thompson, Robert W; Barnett, G David; Humayun, Mark S; Dagnelie, Gislin

    2003-11-01

    To evaluate a model of simulated pixelized prosthetic vision using noncontiguous circular phosphenes, to test the effects of phosphene and grid parameters on facial recognition. A video headset was used to view a reference set of four faces, followed by a partially averted image of one of those faces viewed through a square pixelizing grid that contained 10x10 to 32x32 dots separated by gaps. The grid size, dot size, gap width, dot dropout rate, and gray-scale resolution were varied separately about a standard test condition, for a total of 16 conditions. All tests were first performed at 99% contrast and then repeated at 12.5% contrast. Discrimination speed and performance were influenced by all stimulus parameters. The subjects achieved highly significant facial recognition accuracy for all high-contrast tests except for grids with 70% random dot dropout and two gray levels. In low-contrast tests, significant facial recognition accuracy was achieved for all but the most adverse grid parameters: total grid area less than 17% of the target image, 70% dropout, four or fewer gray levels, and a gap of 40.5 arcmin. For difficult test conditions, a pronounced learning effect was noticed during high-contrast trials, and a more subtle practice effect on timing was evident during subsequent low-contrast trials. These findings suggest that reliable face recognition with crude pixelized grids can be learned and may be possible, even with a crude visual prosthesis.
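
    The stimulus generation can be approximated as follows (all parameters invented, not the study's exact apparatus): reduce an image to a grid of circular phosphenes with a limited number of gray levels and random dot dropout.

```python
# Rough simulation of pixelized prosthetic vision: circular "phosphene" dots
# on a grid, quantized gray levels, gaps between dots, random dropout.
import numpy as np

def pixelize(img, grid=16, gray_levels=4, dropout=0.3, seed=0):
    rng = np.random.default_rng(seed)
    cell = min(img.shape) // grid
    out = np.zeros_like(img)
    yy, xx = np.mgrid[0:cell, 0:cell]            # circular dot mask per cell
    dot = (yy - cell / 2) ** 2 + (xx - cell / 2) ** 2 <= (0.35 * cell) ** 2
    for r in range(grid):
        for c in range(grid):
            if rng.random() < dropout:
                continue                          # dropped phosphene
            patch = img[r*cell:(r+1)*cell, c*cell:(c+1)*cell]
            level = np.round(patch.mean() / 255 * (gray_levels - 1))
            out[r*cell:(r+1)*cell, c*cell:(c+1)*cell][dot] = \
                int(level * 255 / (gray_levels - 1))
    return out

face = np.random.randint(0, 256, (128, 128)).astype(np.uint8)  # stand-in image
print(pixelize(face).shape)
```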

  9. Coordinated Global Measurements of TLEs from the Space Shuttle and Ground Stations during MEIDEX

    NASA Astrophysics Data System (ADS)

    Yair, Y.; Price, C.; Levin, Z.; Israelevitch, P.; Devir, A.; Ziv, B.; Jospeh, J.; Mekler, Y.

    2001-12-01

    The Mediterranean Israeli Dust Experiment (MEIDEX) is scheduled to fly on-board the Columbia in May 2002, in a 39º inclination orbit for 16 days, passing over the major thunderstorm regions on Earth. The primary science instrument is a Xybion IMC-201 image-intensified radiometric camera with 6 narrow band filters (340nm, 380nm, 470nm, 555nm, 665nm, 860nm). A Sekai color video camera is a boresighted wide-FOV viewfinder. The cameras are mounted on a single-axis gimbal with a cross-track scan of ±22º, inside a pressurized canister sealed with a coated quartz window that is mounted in the shuttle cargo bay. Data will be recorded in 3 digital VCRs and downlinked to the ground. During the night-side of the orbit there will be dedicated observations toward the Earth's limb above areas of active thunderstorms, in an effort to image TLEs from space. While earlier shuttle flights have succeeded in recording several ionospheric discharges by using cargo bay video cameras, MEIDEX offers a unique opportunity to conduct targeted observations with a calibrated, multispectral instrument. The Xybion camera has a rectangular FOV of 14.04 (H) x 10.76 (V) degrees, which covers a volume of 466km (H) x 358km (V) at the Earth's limb, 1900km away from the shuttle. The spatial resolution is 665m (H) x 745m (V) per pixel, making it possible to resolve some structural features of TLEs. Optical observations from space will be conducted with the 665nm filter that matches the observed wide peak centered at 670nm that typifies red sprites, and also with the 380 and 470nm filters to record blue jets. Observations will consist of a continuous recording of the Earth's limb, from the direction of the dusk terminator towards the night side. Areas of high convective activity will be forecast by using global aviation SIG maps, and uplinked to the crew before the observation. The astronaut will direct the camera toward areas with lightning activity, observed visually through the windows and on monitors in the crew cabin. Simultaneously with the optical observations from space, dedicated ground measurements will be conducted on a global scale. Two field sites in the Negev Desert in Israel will be used to collect electromagnetic data in the ELF and VLF frequency range. Additional ground stations in Germany, Hungary, USA, Antarctica, Chile, South Africa, Australia, Taiwan and Japan will also record Schumann Resonance and VLF signals. The coordinated measurements from various locations on Earth and from space will enable us to triangulate the location, and determine the polarity and charge moment of the parent lightning of the optically observed TLEs. The success of the campaign will further clarify the global picture of TLE occurrence.

  10. 10 Gbps Shuttle-to-Ground Adjunct Communication Link Capability Experiment

    NASA Technical Reports Server (NTRS)

    Ceniceros, J. M.; Sandusky, J. V.; Hemmati, H.

    1999-01-01

    A 1.2 Gbps space-to-ground laser communication experiment being developed for use on an EXpedite the PRocessing of Experiments to the Space Station (EXPRESS) Pallet Adapter can be adapted to fit the Hitchhiker cross-bay-carrier pallet and upgraded to data rates exceeding 10 Gbps. So modified, this instrument would enable both real-time data delivery and increased data volume for payloads using the Space Shuttle. Applications such as synthetic aperture radar and multispectral imaging collect large data volumes at a high rate and would benefit from the capability for real-time data delivery and from increased data downlink volume. Current shuttle downlink capability is limited to 50 Mbps, forcing such instruments to store large amounts of data for later analysis. While the technology is not yet sufficiently proven to be relied on as the primary communication link, when in view of the ground station it would increase the shuttle downlink rate capability 200 times, with typical total daily downlinks of 200 GB - as much data as the shuttle could downlink if it were able to maintain its maximum data rate continuously for one day. The lasercomm experiment, the Optical Communication Demonstration and High-Rate Link Facility (OCDHRLF), is being developed by the Jet Propulsion Laboratory's (JPL) Optical Communication Group through support from the International Space Station Engineering Research and Technology Development program. It is designed to work in conjunction with the Optical Communication Telescope Laboratory (OCTL), NASA's first optical communication ground station, which is under construction at JPL's Table Mountain Facility near Wrightwood, California. This paper discusses the modifications to the preliminary design of the flight system that would be necessary to adapt it to fit the Hitchhiker Cross-Bay Carrier. It also discusses orbit geometries which are favorable to the OCTL and potential non-NASA ground stations, anticipated burst-error-rates and bit-error-rates, and requirements for data collection on the ground.

  11. Exposure assessment of one-year-old child to 3G tablet in uplink mode and to 3G femtocell in downlink mode using polynomial chaos decomposition

    NASA Astrophysics Data System (ADS)

    Liorni, I.; Parazzini, M.; Varsier, N.; Hadjem, A.; Ravazzani, P.; Wiart, J.

    2016-04-01

    So far, the assessment of the exposure of children, in the ages 0-2 years old, to relatively new radio-frequency (RF) technologies, such as tablets and femtocells, remains an open issue. This study aims to analyse the exposure of a one year-old child to these two sources, tablets and femtocells, operating in uplink (tablet) and downlink (femtocell) modes, respectively. In detail, a realistic model of an infant has been used to model separately the exposures due to (i) a 3G tablet emitting at the frequency of 1940 MHz (uplink mode) placed close to the body and (ii) a 3G femtocell emitting at 2100 MHz (downlink mode) placed at a distance of at least 1 m from the infant body. For both RF sources, the input power was set to 250 mW. The variability of the exposure due to the variation of the position of the RF sources with respect to the infant body has been studied by stochastic dosimetry, based on polynomial chaos to build surrogate models of both whole-body and tissue specific absorption rate (SAR), which makes it easy and quick to investigate the exposure in a full range of possible positions of the sources. The major outcomes of the study are: (1) the maximum values of the whole-body SAR (WB SAR) have been found to be 9.5 mW kg⁻¹ in uplink mode and 65 μW kg⁻¹ in downlink mode, i.e. within the limits of the ICNIRP 1998 Guidelines; (2) in both uplink and downlink mode the highest SAR values were approximately found in the same tissues, i.e. in the skin, eye and penis for the whole-tissue SAR and in the bone, skin and muscle for the peak SAR; (3) the change in the position of both the 3G tablet and the 3G femtocell significantly influences the infant exposure.

  12. NPOESS Field Terminal Updates

    NASA Astrophysics Data System (ADS)

    Heckmann, G.; Route, G.

    2009-12-01

    The National Oceanic and Atmospheric Administration (NOAA), Department of Defense (DoD), and National Aeronautics and Space Administration (NASA) are jointly acquiring the next-generation weather and environmental satellite system: the National Polar-orbiting Operational Environmental Satellite System (NPOESS). NPOESS replaces the current Polar-orbiting Operational Environmental Satellites (POES) managed by NOAA and the Defense Meteorological Satellite Program (DMSP) managed by the DoD. The NPOESS satellites carry a suite of sensors that collect meteorological, oceanographic, climatological, and solar-geophysical observations of the earth, atmosphere, and space. The ground data processing segment for NPOESS is the Interface Data Processing Segment (IDPS), developed by Raytheon Intelligence and Information Systems. The IDPS processes NPOESS satellite data to provide environmental data products (aka, Environmental Data Records or EDRs) to NOAA and DoD processing centers operated by the United States government. The IDPS will process EDRs beginning with the NPOESS Preparatory Project (NPP) and continuing through the lifetime of the NPOESS system. IDPS also provides the software and requirements for the Field Terminal Segment (FTS). NPOESS provides support to deployed field terminals by providing mission data in the Low Rate and High Rate downlinks (LRD/HRD), mission support data needed to generate EDRs, and decryption keys needed to decrypt mission data during Selective Data Encryption (SDE). Mission support data consist of globally relevant data, geographically constrained data, and two-line element sets. NPOESS provides these mission support data via the Internet-accessible Mission Support Data Server and the HRD/LRD downlinks. This presentation will illustrate and describe the NPOESS capabilities in support of Field Terminal users. The discussion will cover the mission support data available to Field Terminal users; the content of the direct broadcast HRD and LRD downlinks, identifying differences between them, including the variability of the LRD downlink; and NPOESS management and distribution of decryption keys to approved field terminals using a Public Key Infrastructure (PKI), the AES standard with 256-bit encryption, and elliptic-curve cryptography.

  13. Exposure assessment of one-year-old child to 3G tablet in uplink mode and to 3G femtocell in downlink mode using polynomial chaos decomposition.

    PubMed

    Liorni, I; Parazzini, M; Varsier, N; Hadjem, A; Ravazzani, P; Wiart, J

    2016-04-21

    So far, the assessment of the exposure of children, in the ages 0-2 years old, to relatively new radio-frequency (RF) technologies, such as tablets and femtocells, remains an open issue. This study aims to analyse the exposure of a one year-old child to these two sources, tablets and femtocells, operating in uplink (tablet) and downlink (femtocell) modes, respectively. In detail, a realistic model of an infant has been used to model separately the exposures due to (i) a 3G tablet emitting at the frequency of 1940 MHz (uplink mode) placed close to the body and (ii) a 3G femtocell emitting at 2100 MHz (downlink mode) placed at a distance of at least 1 m from the infant body. For both RF sources, the input power was set to 250 mW. The variability of the exposure due to the variation of the position of the RF sources with respect to the infant body has been studied by stochastic dosimetry, based on polynomial chaos to build surrogate models of both whole-body and tissue specific absorption rate (SAR), which makes it easy and quick to investigate the exposure in a full range of possible positions of the sources. The major outcomes of the study are: (1) the maximum values of the whole-body SAR (WB SAR) have been found to be 9.5 mW kg⁻¹ in uplink mode and 65 μW kg⁻¹ in downlink mode, i.e. within the limits of the ICNIRP 1998 Guidelines; (2) in both uplink and downlink mode the highest SAR values were approximately found in the same tissues, i.e. in the skin, eye and penis for the whole-tissue SAR and in the bone, skin and muscle for the peak SAR; (3) the change in the position of both the 3G tablet and the 3G femtocell significantly influences the infant exposure.

  14. Multilevel analysis of sports video sequences

    NASA Astrophysics Data System (ADS)

    Han, Jungong; Farin, Dirk; de With, Peter H. N.

    2006-01-01

    We propose a fully automatic and flexible framework for analysis and summarization of tennis broadcast video sequences, using visual features and specific game-context knowledge. Our framework can analyze a tennis video sequence at three levels, which provides a broad range of different analysis results. The proposed framework includes novel pixel-level and object-level tennis video processing algorithms, such as a moving-player detection taking both the color and the court (playing-field) information into account, and a player-position tracking algorithm based on a 3-D camera model. Additionally, we employ scene-level models for detecting events, like service, base-line rally and net-approach, based on a number of real-world visual features. The system can summarize three forms of information: (1) all court-view playing frames in a game, (2) the moving trajectory and real speed of each player, as well as the relative position between the player and the court, (3) the semantic event segments in a game. The proposed framework is flexible in choosing the level of analysis that is desired. It is effective because the framework makes use of several visual cues obtained from the real-world domain to model important events like service, thereby increasing the accuracy of the scene-level analysis. The paper presents attractive experimental results highlighting the system efficiency and analysis capabilities.

  15. Real-time rendering for multiview autostereoscopic displays

    NASA Astrophysics Data System (ADS)

    Berretty, R.-P. M.; Peters, F. J.; Volleberg, G. T. G.

    2006-02-01

    In video systems, the introduction of 3D video might be the next revolution after the introduction of color. Nowadays multiview autostereoscopic displays are in development. Such displays offer various views at the same time, and the image content observed by the viewer depends upon his position with respect to the screen. His left eye receives a signal that is different from what his right eye gets; this gives, provided the signals have been properly processed, the impression of depth. The various views produced on the display differ with respect to their associated camera positions. A possible video format that is suited for rendering from different camera positions is the usual 2D format enriched with a depth-related channel: for each pixel in the video not only its color is given, but also, e.g., its distance to a camera. In this paper we provide a theoretical framework for the parallactic transformations which relates captured and observed depths to screen and image disparities. Moreover we present an efficient real-time rendering algorithm that uses forward mapping to reduce aliasing artefacts and that deals properly with occlusions. For improved perceived resolution, we take the relative position of the color subpixels and the optics of the lenticular screen into account. Sophisticated filtering techniques result in high quality images.

  16. Precise determination of anthropometric dimensions by means of image processing methods for estimating human body segment parameter values.

    PubMed

    Baca, A

    1996-04-01

    A method has been developed for the precise determination of anthropometric dimensions from the video images of four different body configurations. High precision is achieved by incorporating techniques for finding the location of object boundaries with sub-pixel accuracy, the implementation of calibration algorithms, and by taking into account the varying distances of the body segments from the recording camera. The system allows automatic segment boundary identification from the video image, if the boundaries are marked on the subject by black ribbons. In connection with the mathematical finite-mass-element segment model of Hatze, body segment parameters (volumes, masses, the three principal moments of inertia, the three local coordinates of the segmental mass centers etc.) can be computed by using the anthropometric data determined videometrically as input data. Compared to other, recently published video-based systems for the estimation of the inertial properties of body segments, the present algorithms reduce errors originating from optical distortions, inaccurate edge-detection procedures, and user-specified upper and lower segment boundaries or threshold levels for the edge-detection. The video-based estimation of human body segment parameters is especially useful in situations where ease of application and rapid availability of comparatively precise parameter values are of importance.

  17. Depth assisted compression of full parallax light fields

    NASA Astrophysics Data System (ADS)

    Graziosi, Danillo B.; Alpaslan, Zahir Y.; El-Ghoroury, Hussein S.

    2015-03-01

    Full parallax light field displays require high pixel density and huge amounts of data. Compression is a necessary tool used by 3D display systems to cope with the high bandwidth requirements. One of the formats adopted by MPEG for 3D video coding standards is the use of multiple views with associated depth maps. Depth maps enable the coding of a reduced number of views, and are used by compression and synthesis software to reconstruct the light field. However, most of the developed coding and synthesis tools target linearly arranged cameras with small baselines. Here we propose to use the 3D video coding format for full parallax light field coding. We introduce a view selection method inspired by plenoptic sampling followed by transform-based view coding and view synthesis prediction to code residual views. We determine the minimal requirements for view sub-sampling and present the rate-distortion performance of our proposal. We also compare our method with established video compression techniques, such as H.264/AVC, H.264/MVC, and the new 3D video coding algorithm, 3DV-ATM. Our results show that our method not only has an improved rate-distortion performance, it also preserves the structure of the perceived light fields better.

  18. A natural approach to convey numerical digits using hand activity recognition based on hand shape features

    NASA Astrophysics Data System (ADS)

    Chidananda, H.; Reddy, T. Hanumantha

    2017-06-01

    This paper presents a natural way to convey numerical digits using hand activity analysis, based on the number of fingers outstretched for each digit in a sequence extracted from a video. The analysis is based on determining a set of six features from a hand image. The most important features used from each frame in a video are the first fingertip from the top, the palm-line, the palm-center, and the valley points between the fingers that exist above the palm-line. Using this work a user can convey any number of numerical digits naturally in a video, using the right hand, the left hand or both. Each numerical digit ranges from 0 to 9. The hands (right/left/both) used to convey digits can be recognized accurately using the valley points, and with this recognition whether the user is right- or left-handed in practice can be analyzed. In this work, first the hand(s) and face parts are detected by using the YCbCr color space and the face part is removed by using an ellipse-based method. Then, the hand(s) are analyzed to recognize the activity that represents a series of numerical digits in a video. This work uses a pixel continuity algorithm based on 2D coordinate geometry and does not rely on calculus, contours, convex hulls or datasets.

  19. Information Hiding In Digital Video Using DCT, DWT and CvT

    NASA Astrophysics Data System (ADS)

    Abed Shukur, Wisam; Najah Abdullah, Wathiq; Kareem Qurban, Luheb

    2018-05-01

    The type of video used in the proposed information hiding technique is .AVI; the proposed data hiding technique embeds secret information into video frames by using the Discrete Cosine Transform (DCT), Discrete Wavelet Transform (DWT) and Curvelet Transform (CvT). An individual pixel consists of three color components (RGB); the secret information is embedded in the Red (R) color channel. On the receiver side, the secret information is extracted from the received video. After extraction, the robustness of the proposed technique is measured by computing the degradation of the extracted secret information, comparing it with the original secret information via the Normalized cross Correlation (NC). The experiments show that the error ratio of the proposed technique is 8% and the accuracy ratio 92% when the Curvelet Transform (CvT) is used; with the Discrete Wavelet Transform (DWT) and Discrete Cosine Transform (DCT), the error ratios are 11% and 14%, while the accuracy ratios are 89% and 86%, respectively. The experiments also show that Poisson noise gives better results than other types of noise, while speckle noise gives the worst results. The proposed technique was implemented in the MATLAB R2016a programming language.
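
    A hedged illustration of the DCT variant and the NC fidelity measure (block positions, embedding strength, and the frame itself are made up; this is not the paper's exact scheme):

```python
# Hide one bit per 8x8 block of the red channel by forcing the sign of a
# mid-frequency DCT coefficient, then measure extraction fidelity with NC.
import numpy as np
import cv2

rng = np.random.default_rng(2)
red = rng.integers(0, 256, (64, 64)).astype(np.float32)  # fake R channel
bits = rng.integers(0, 2, (8, 8)).astype(np.float32)     # secret payload

stego = red.copy()
for r in range(8):
    for c in range(8):
        block = cv2.dct(stego[r*8:(r+1)*8, c*8:(c+1)*8])
        block[3, 4] = 30.0 if bits[r, c] else -30.0      # mid-frequency slot
        stego[r*8:(r+1)*8, c*8:(c+1)*8] = cv2.idct(block)

extracted = np.array([[cv2.dct(stego[r*8:(r+1)*8, c*8:(c+1)*8])[3, 4] > 0
                       for c in range(8)] for r in range(8)], dtype=np.float32)

nc = (bits * extracted).sum() / np.sqrt((bits**2).sum() * (extracted**2).sum())
print(f"NC = {nc:.3f}")   # 1.0 means perfect extraction
```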

  20. PixonVision real-time Deblurring Anisoplanaticism Corrector (DAC)

    NASA Astrophysics Data System (ADS)

    Hier, R. G.; Puetter, R. C.

    2007-09-01

    DigiVision, Inc. and PixonImaging LLC have teamed to develop a real-time Deblurring Anisoplanaticism Corrector (DAC) for the Army. The DAC measures the geometric image warp caused by anisoplanaticism and removes it to rectify and stabilize (dejitter) the incoming image. Each new geometrically corrected image field is combined into a running-average reference image. The image averager employs a higher-order filter that uses temporal bandpass information to help identify true motion of objects and thereby adaptively moderate the contribution of each new pixel to the reference image. This result is then passed to a real-time PixonVision video processor (see paper 6696-04; note that the DAC also first dehazes the incoming video), where additional blur from high-order seeing effects is removed, the image is spatially denoised, and contrast is adjusted in a spatially adaptive manner. We plan to implement the entire algorithm within a few large modern FPGAs on a circuit board for video use. Obvious applications are within the DOD, surveillance and intelligence, security and law enforcement communities. Prototype hardware is scheduled to be available in late 2008. To demonstrate the capabilities of the DAC, we present a software simulation of the algorithm applied to real atmosphere-corrupted video data collected by Sandia Labs.

  1. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers provide only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, and more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI, however, is dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
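
    The frame-averaging core can be sketched in a few lines; the sigma-clipping threshold and frame counts below are assumptions, not the OPSO implementation.

```python
# Sliding average over a stack of recent frames: blank out known bad pixels
# and reject transient outliers (sigma clipping) before averaging.
import numpy as np

def enhance(frames, bad_pixel_mask, n_sigma=3.0):
    """frames: (N, H, W) stack of recent TV frames; returns averaged image."""
    mu = frames.mean(axis=0)
    sigma = frames.std(axis=0) + 1e-9
    good = np.abs(frames - mu) <= n_sigma * sigma          # per-frame keep mask
    avg = np.where(good, frames, 0).sum(axis=0) / good.sum(axis=0).clip(1)
    avg[bad_pixel_mask] = 0                                # known bad pixels
    return avg

frames = np.random.normal(100, 5, (200, 32, 32))
frames[17, 10, 10] = 5000                       # cosmic-ray style outlier
bad = np.zeros((32, 32), dtype=bool); bad[0, 0] = True
print(enhance(frames, bad)[10, 10])             # ~100, outlier rejected
```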

  2. Video quality assessment based on correlation between spatiotemporal motion energies

    NASA Astrophysics Data System (ADS)

    Yan, Peng; Mou, Xuanqin

    2016-09-01

    Video quality assessment (VQA) has been a hot research topic because of the rapidly growing demand for video communications. From the earliest PSNR metric to advanced models that are perceptually aware, researchers have made great progress in this field by introducing properties of the human vision system (HVS) into VQA model design. Among various algorithms that model how the HVS perceives motion, the spatiotemporal energy model has been validated to be highly consistent with psychophysical experiments. In this paper, we take the spatiotemporal energy model into VQA model design by the following steps. 1) According to the pristine spatiotemporal energy model proposed by Adelson et al, we apply the linear filters, which are oriented in space-time and tuned in spatial frequency, to filter the reference and test videos respectively. The outputs of quadrature pairs of the above filters are then squared and summed to give two measures of motion energy, named the rightward and leftward energy responses, respectively. 2) Based on the pristine model, we calculate the summation of the rightward and leftward energy responses as spatiotemporal features to represent perceptual quality information for videos, named total spatiotemporal motion energy maps. 3) The proposed FR-VQA model, named STME, is calculated with statistics based on the pixel-wise correlation between the total spatiotemporal motion energy maps of the reference and distorted videos. The STME model was validated on the LIVE VQA Database by comparing with existing FR-VQA models. Experimental results show that STME performs with excellent prediction accuracy and stays among state-of-the-art VQA models.
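
    Step 1) can be illustrated on a one-dimensional space x time signal: quadrature pairs of space-time oriented filters are applied, squared and summed into leftward and rightward energies. The filter construction below is deliberately simplified and all sizes are arbitrary.

```python
# Adelson-Bergen style opponent motion energy on a (time, space) signal.
import numpy as np
from scipy.ndimage import convolve

t, x = np.meshgrid(np.arange(-4, 5), np.arange(-4, 5), indexing="ij")
env = np.exp(-(x**2 + t**2) / 8.0)                 # space-time envelope

def quadrature_pair(v):                            # tuned to velocity v px/frame
    phase = 2 * np.pi * (x - v * t) / 6.0
    return env * np.cos(phase), env * np.sin(phase)

def motion_energy(signal, v):                      # signal: (time, space)
    even, odd = quadrature_pair(v)
    return convolve(signal, even)**2 + convolve(signal, odd)**2

frames = np.cos(2 * np.pi * (np.arange(64)[None, :] -       # grating drifting
                             1.0 * np.arange(64)[:, None]) / 6.0)  # rightward
right = motion_energy(frames, +1.0).sum()
left = motion_energy(frames, -1.0).sum()
print("rightward/leftward energy ratio:", right / left)     # >> 1
```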

  3. Single-layer HDR video coding with SDR backward compatibility

    NASA Astrophysics Data System (ADS)

    Lasserre, S.; François, E.; Le Léannec, F.; Touzé, D.

    2016-09-01

    The migration from High Definition (HD) TV to Ultra High Definition (UHD) is already underway. In addition to an increase of picture spatial resolution, UHD will bring more color and higher contrast by introducing Wide Color Gamut (WCG) and High Dynamic Range (HDR) video. As both Standard Dynamic Range (SDR) and HDR devices will coexist in the ecosystem, the transition from SDR to HDR will require distribution solutions supporting some level of backward compatibility. This paper presents a new HDR content distribution scheme, named SL-HDR1, using a single-layer codec design and providing SDR compatibility. The solution is based on a pre-encoding HDR-to-SDR conversion, generating a backward-compatible SDR video, with side dynamic metadata. The resulting SDR video is then compressed, distributed and decoded using standard-compliant decoders (e.g. HEVC Main 10 compliant). The decoded SDR video can be directly rendered on SDR displays without adaptation. Dynamic metadata of limited size are generated by the pre-processing and used to reconstruct the HDR signal from the decoded SDR video, using a post-processing that is the functional inverse of the pre-processing. Both HDR quality and artistic intent are preserved. Pre- and post-processing are applied independently per picture, do not involve any inter-pixel dependency, and are codec agnostic. Compression performance and SDR quality are shown to be solidly improved compared to the non-backward-compatible and backward-compatible approaches that use the Perceptual Quantization (PQ) and Hybrid Log-Gamma (HLG) Opto-Electronic Transfer Functions (OETF), respectively.

  4. Adaptive nonlinear Volterra equalizer for mitigation of chirp-induced distortions in cost effective IMDD OFDM systems.

    PubMed

    André, Nuno Sequeira; Habel, Kai; Louchet, Hadrien; Richter, André

    2013-11-04

    We report experimental validations of an adaptive 2nd order Volterra equalization scheme for cost effective IMDD OFDM systems. This equalization scheme was applied to both uplink and downlink transmission. Downlink settings were optimized for maximum bitrate where we achieved 34 Gb/s over 10 km of SSMF using an EML with 10 GHz bandwidth. For the uplink, maximum reach was optimized achieving 14 Gb/s using a low-cost DML with 2.5 GHz bandwidth.
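
    A generic LMS-adapted second-order Volterra equalizer (not the authors' exact scheme) can be sketched as follows; the toy channel, memory length and step size are invented.

```python
# 2nd-order Volterra equalizer: linear FIR term plus all pairwise products of
# the samples in the memory window, both adapted sample by sample with LMS.
import numpy as np

rng = np.random.default_rng(3)
M = 5                                     # memory length (assumed)
w1 = np.zeros(M)                          # linear kernel
w2 = np.zeros((M, M))                     # quadratic kernel
mu = 5e-3                                 # LMS step size

x = rng.choice([-1.0, 1.0], size=5000)    # transmitted training symbols
rx = x + 0.2 * np.roll(x, 1) + 0.1 * x * np.roll(x, 1)  # toy nonlinear channel

errs = []
for n in range(M, len(rx)):
    u = rx[n - M + 1:n + 1][::-1]         # memory window, newest first
    y = w1 @ u + u @ w2 @ u               # Volterra output
    e = x[n] - y                          # error vs known training symbol
    w1 += mu * e * u                      # LMS updates for both kernels
    w2 += mu * e * np.outer(u, u)
    errs.append(abs(e))

print("mean |error|, last 500 symbols:", np.mean(errs[-500:]))  # shrinks
```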

  5. A Synchronous Digital Duplexing Technique for OFDMA-Based Indoor Communications

    NASA Astrophysics Data System (ADS)

    Park, Chang-Hwan; Ko, Yo-Han; Kim, Yeong-Jun; Park, Kyung-Won; Jeon, Won-Gi; Paik, Jong-Ho; Lee, Seok-Pil; Cho, Yong-Soo

    In this paper, we propose a new digital duplexing scheme, called synchronous digital duplexing (SDD), which can increase data efficiency and resource flexibility by transmitting the uplink and downlink signals simultaneously in wireless communication. In order to transmit uplink and downlink signals simultaneously, the proposed SDD obtains mutual information among subscriber stations (SSs) with a mutual ranging symbol. This information is used for selection of the transmission time, decision on cyclic suffix (CS) insertion, determination of the CS length, and re-establishment of the FFT starting point.

  6. Unsupervised motion-based object segmentation refined by color

    NASA Astrophysics Data System (ADS)

    Piek, Matthijs C.; Braspenning, Ralph; Varekamp, Chris

    2003-06-01

    For various applications, such as data compression, structure from motion, medical imaging and video enhancement, there is a need for an algorithm that divides video sequences into independently moving objects. Because our focus is on video enhancement and structure from motion for consumer electronics, we strive for a low-complexity solution. For still images, several approaches exist based on colour, but these lack in both speed and segmentation quality. For instance, colour-based watershed algorithms produce a so-called oversegmentation with many segments covering each single physical object. Other colour segmentation approaches limit the number of segments to reduce this oversegmentation problem. However, this often results in inaccurate edges or even missed objects. Most likely, colour is an inherently insufficient cue for real-world object segmentation, because real-world objects can display complex combinations of colours. For video sequences, however, an additional cue is available, namely the motion of objects. When different objects in a scene have different motion, the motion cue alone is often enough to reliably distinguish objects from one another and the background. However, because efficient motion estimators, like the 3DRS block matcher, lack sufficient resolution, the resulting segmentation is not at pixel resolution but at block resolution. Existing pixel resolution motion estimators are more sensitive to noise, suffer more from aperture problems, correspond less to the true motion of objects than block-based approaches, or are too computationally expensive. From its tendency to oversegmentation it is apparent that colour segmentation is particularly effective near edges of homogeneously coloured areas. On the other hand, block-based true motion estimation is particularly effective in heterogeneous areas, because heterogeneous areas improve the chance that a block is unique and thus decrease the chance of a wrong position producing a good match. Consequently, a number of methods exist which combine motion and colour segmentation. These methods use colour segmentation as a base for the motion segmentation and estimation, or perform an independent colour segmentation in parallel which is in some way combined with the motion segmentation. The presented method uses both techniques to complement each other by first segmenting on motion cues and then refining the segmentation with colour. To our knowledge few methods exist which adopt this approach. One example is \cite{meshrefine}. This method uses an irregular mesh, which hinders its efficient implementation in consumer electronics devices. Furthermore, the method produces a foreground/background segmentation, while our applications call for the segmentation of multiple objects.

    NEW METHOD As mentioned above, we start with motion segmentation and afterwards refine the edges of this segmentation with a pixel resolution colour segmentation method. There are several reasons for this approach:
    + Motion segmentation does not produce the oversegmentation which colour segmentation methods normally produce, because objects are more likely to have colour discontinuities than motion discontinuities. In this way, the colour segmentation only has to be done at the edges of segments, confining the colour segmentation to a smaller part of the image. In such a part, it is more likely that the colour of an object is homogeneous.
    + This approach restricts the computationally expensive pixel resolution colour segmentation to a subset of the image. Together with the very efficient 3DRS motion estimation algorithm, this helps to reduce the computational complexity.
    + The motion cue alone is often enough to reliably distinguish objects from one another and the background.
    To obtain the motion vector fields, a variant of the 3DRS block-based motion estimator which analyses three frames of input was used. The 3DRS motion estimator is known for its ability to estimate motion vectors which closely resemble the true motion.

    BLOCK-BASED MOTION SEGMENTATION As mentioned above, we start with a block-resolution segmentation based on motion vectors. The presented method is inspired by the well-known K-means segmentation method \cite{K-means}. Several other methods (e.g. \cite{kmeansc}) adapt K-means for connectedness by adding a weighted shape error. This adds the additional difficulty of finding the correct weights for the shape parameters. Also, these methods often bias one particular pre-defined shape. The presented method, which we call K-regions, encourages connectedness because only blocks at the edges of segments may be assigned to another segment. This constrains the segmentation method to such a degree that it allows the method to use least squares for the robust fitting of affine motion models for each segment. Contrary to \cite{parmkm}, the segmentation step still operates on vectors instead of model parameters. To make sure the segmentation is temporally consistent, the segmentation of the previous frame is used as initialisation for every new frame. We also present a scheme which makes the algorithm independent of the initially chosen number of segments.

    COLOUR-BASED INTRA-BLOCK SEGMENTATION The block resolution motion-based segmentation forms the starting point for the pixel resolution segmentation. The pixel resolution segmentation is obtained from the block resolution segmentation by reclassifying pixels only at the edges of clusters. We assume that an edge between two objects can be found in either one of two neighbouring blocks that belong to different clusters. This assumption allows us to do the pixel resolution segmentation on each pair of such neighbouring blocks separately. Because of the local nature of the segmentation, it largely avoids problems with heterogeneously coloured areas. Because no new segments are introduced in this step, it also does not suffer from oversegmentation problems. The presented method has no problems with bifurcations. For the pixel resolution segmentation itself, we reclassify pixels such that we optimise an error norm which favours similarly coloured regions and straight edges.

    SEGMENTATION MEASURE To assist in the evaluation of the proposed algorithm we developed a quality metric. Because the problem does not have an exact specification, we decided to define a ground truth output which we find desirable for a given input. We define the measure for the segmentation quality as how different the segmentation is from the ground truth. Our measure enables us to evaluate oversegmentation and undersegmentation separately. Also, it allows us to evaluate which parts of a frame suffer from oversegmentation or undersegmentation. The proposed algorithm has been tested on several typical sequences.

    CONCLUSIONS In this abstract we presented a new video segmentation method which performs well in segmenting multiple independently moving foreground objects from each other and from the background. It combines the strong points of both colour and motion segmentation in the way we expected. One of the weak points is that the segmentation method suffers from undersegmentation when adjacent objects display similar motion. In sequences with detailed backgrounds the segmentation will sometimes display noisy edges. Apart from these results, we think that some of the techniques, and in particular the K-regions technique, may be useful for other two-dimensional data segmentation problems.
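
    The block-level K-regions loop described above can be sketched compactly. The following is a minimal illustration under stated assumptions, not the authors' implementation: it takes a precomputed block-resolution motion vector field (e.g. from a 3DRS-like estimator) and an initial labelling, fits an affine motion model to each segment by least squares, and lets only boundary blocks switch to the neighbouring segment whose model predicts their vector best. All function names and parameters are hypothetical.

        import numpy as np

        def fit_affine(points, vectors):
            """Least-squares affine motion model: v ~ [x, y, 1] @ A, A is (3, 2)."""
            X = np.column_stack([points, np.ones(len(points))])
            A, *_ = np.linalg.lstsq(X, vectors, rcond=None)
            return A

        def model_error(A, point, vector):
            """Squared error between the model's prediction and the block's vector."""
            x = np.append(point, 1.0)
            return float(np.sum((x @ A - vector) ** 2))

        def k_regions(mv, labels, n_iter=10):
            """mv: (H, W, 2) block motion vectors; labels: (H, W) initial segments."""
            H, W, _ = mv.shape
            ys, xs = np.mgrid[0:H, 0:W]
            pts = np.column_stack([xs.ravel(), ys.ravel()]).astype(float)
            vecs = mv.reshape(-1, 2)
            labels = labels.ravel().copy()
            for _ in range(n_iter):
                # affine fit per segment (plain least squares in this sketch)
                models = {s: fit_affine(pts[labels == s], vecs[labels == s])
                          for s in np.unique(labels)}
                new = labels.copy()
                for i in range(H * W):
                    x, y = int(pts[i, 0]), int(pts[i, 1])
                    cands = {labels[i]}      # own segment plus 4-neighbour segments
                    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        nx, ny = x + dx, y + dy
                        if 0 <= nx < W and 0 <= ny < H:
                            cands.add(labels[ny * W + nx])
                    if len(cands) > 1:       # only edge blocks may be reassigned
                        new[i] = min(cands, key=lambda s: model_error(
                            models[s], pts[i], vecs[i]))
                labels = new
            return labels.reshape(H, W)

    Restricting reassignment to boundary blocks is what keeps segments connected without an explicit shape term, which matches the motivation given in the abstract.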

  7. Lane detection using Randomized Hough Transform

    NASA Astrophysics Data System (ADS)

    Mongkonyong, Peerawat; Nuthong, Chaiwat; Siddhichai, Supakorn; Yamakita, Masaki

    2018-01-01

    According to reports of the Royal Thai Police between 2006 and 2015, unintentional lane changing is one of the leading causes of accidents. Many methods have been considered to address this problem, and the Lane Departure Warning System (LDWS) is one of the potential solutions. An LDWS is a mechanism designed to warn the driver when the vehicle begins to move out of its current lane. It comprises several parts, including lane boundary detection, driver warning and lane marker tracking. This article focuses on the lane boundary detection part. The proposed lane boundary detection extracts lines from the frames of the input video and selects the lane markers of the road surface from those lines. The Standard Hough Transform (SHT) and the Randomized Hough Transform (RHT) are considered for extracting lines from an image. SHT accumulates votes from all of the edge pixels, whereas RHT votes only for the lines defined by point pairs randomly picked from the edge pixels, which reduces time and memory usage compared with SHT. Increasing the threshold value in RHT raises the vote limit for lines that are likely to be lane markers, but it also consumes more time and memory. To compare SHT with RHT at different threshold values, 500 frames of input video from a front-mounted car camera are processed. The comparison shows that the accuracy and the computational time of RHT are similar to those of SHT.
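
    The randomized Hough transform for lines is compact enough to sketch. This is a generic illustration of the technique named in the abstract, not the authors' code; the edge extraction step, sample count, accumulator resolution and vote threshold are all assumptions.

        import numpy as np

        def randomized_hough_lines(edges, n_samples=2000, threshold=20,
                                   rho_res=1.0, theta_res=np.pi / 180, rng=None):
            """edges: boolean edge map. Returns (rho, theta) lines with enough votes.

            Instead of letting every edge pixel vote over all angles (SHT),
            RHT picks random *pairs* of edge pixels; each pair defines exactly
            one line, so only one accumulator cell is incremented per sample.
            """
            rng = np.random.default_rng(rng)
            ys, xs = np.nonzero(edges)
            if len(xs) < 2:
                return []
            votes = {}
            for _ in range(n_samples):
                i, j = rng.choice(len(xs), size=2, replace=False)
                dx, dy = xs[j] - xs[i], ys[j] - ys[i]
                if dx == 0 and dy == 0:
                    continue
                theta = np.arctan2(dx, -dy) % np.pi   # angle of the line normal
                rho = xs[i] * np.cos(theta) + ys[i] * np.sin(theta)
                cell = (int(np.round(rho / rho_res)),
                        int(np.round(theta / theta_res)))
                votes[cell] = votes.get(cell, 0) + 1
            return [(r * rho_res, t * theta_res)
                    for (r, t), v in votes.items() if v >= threshold]

    The sparse dictionary accumulator is the source of the memory saving mentioned in the abstract: only cells that actually receive votes exist.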

  8. Autonomous support for microorganism research in space

    NASA Astrophysics Data System (ADS)

    Fleet, M. L.; Smith, J. D.; Klaus, D. M.; Luttges, M. W.

    1993-02-01

    A preliminary design for performing on-orbit, autonomous research on microorganisms and cultured cells/tissues is presented. The payload is designed to be compatible with the COMmercial Experiment Transporter (COMET), an orbiter middeck locker interface, and with Space Station Freedom. Uplink/downlink capabilities and sample return through controlled reentry are available for all carriers. Autonomous testing activities are preprogrammed with in-flight reprogrammability. Sensors for monitoring temperature, pH, light, gravity levels, vibrations, and radiation are provided for environmental regulation and experimental data collection. Additional data acquisition includes optical density measurement, microscopy, video, and film photography. On-board data storage capabilities are provided. A fluid transfer mechanism is utilized for inoculation, sampling, and nutrient replenishment of experiment cultures. In addition to payload design, research opportunities are explored to illustrate hardware versatility and function. The project is defined to provide biological data pertinent to extended duration crewed space flight including crew health issues and development of a Controlled Ecological Life Support System (CELSS). In addition, opportunities are opened for investigations leading to commercial applications of space, such as pharmaceutical development, modeling of terrestrial diseases, and material processing.

  9. Airborne Subscale Transport Aircraft Research Testbed: Aircraft Model Development

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas L.; Langford, William M.; Hill, Jeffrey S.

    2005-01-01

    The Airborne Subscale Transport Aircraft Research (AirSTAR) testbed being developed at NASA Langley Research Center is an experimental flight test capability for research experiments pertaining to dynamics modeling and control beyond the normal flight envelope. An integral part of that testbed is a 5.5% dynamically scaled, generic transport aircraft. This remotely piloted vehicle (RPV) is powered by twin turbine engines and includes a collection of sensors, actuators, navigation, and telemetry systems. The downlink for the plane includes over 70 data channels, plus video, at rates up to 250 Hz. Uplink commands for aircraft control include over 30 data channels. The dynamic scaling requirement, which includes dimensional, weight, inertial, actuator, and data rate scaling, presents distinctive challenges in both the mechanical and electrical design of the aircraft. A discussion of these requirements and their implications for the development of the aircraft, along with risk mitigation strategies and training exercises, is included here. Also described are the first training (non-research) flights of the airframe. Additional papers address the development of a mobile operations station and an emulation and integration laboratory.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ryan Hruska

    Currently, small Unmanned Aerial Vehicles (UAVs) are primarily used for capturing and down-linking real-time video. To date, their role as a low-cost airborne platform for capturing high-resolution, georeferenced still imagery has not been fully utilized. On-going work within the Unmanned Vehicle Systems Program at the Idaho National Laboratory (INL) is attempting to exploit this small UAV-acquired, still imagery potential. Initially, a UAV-based still imagery work flow model was developed that includes initial UAV mission planning, sensor selection, UAV/sensor integration, and imagery collection, processing, and analysis. Components to support each stage of the work flow are also being developed. Critical to the use of acquired still imagery is the ability to detect changes between images of the same area over time. To enhance the analysts' change detection ability, a UAV-specific, GIS-based change detection system called SADI or System for Analyzing Differences in Imagery is under development. This paper will discuss the associated challenges and approaches to collecting still imagery with small UAVs. Additionally, specific components of the developed work flow system will be described and graphically illustrated using varied examples of small UAV-acquired still imagery.

  11. AirSTAR: A UAV Platform for Flight Dynamics and Control System Testing

    NASA Technical Reports Server (NTRS)

    Jordan, Thomas L.; Foster, John V.; Bailey, Roger M.; Belcastro, Christine M.

    2006-01-01

    As part of the NASA Aviation Safety Program at Langley Research Center, a dynamically scaled unmanned aerial vehicle (UAV) and associated ground based control system are being developed to investigate dynamics modeling and control of large transport vehicles in upset conditions. The UAV is a 5.5% (seven foot wingspan), twin turbine, generic transport aircraft with a sophisticated instrumentation and telemetry package. A ground based, real-time control system is located inside an operations vehicle for the research pilot and associated support personnel. The telemetry system supports over 70 channels of data plus video for the downlink and 30 channels for the control uplink. Data rates are in excess of 200 Hz. Dynamic scaling of the UAV, which includes dimensional, weight, inertial, actuation, and control system scaling, is required so that the sub-scale vehicle will realistically simulate the flight characteristics of the full-scale aircraft. This testbed will be utilized to validate modeling methods, flight dynamics characteristics, and control system designs for large transport aircraft, with the end goal being the development of technologies to reduce the fatal accident rate due to loss-of-control.

  12. Transceiver optics for interplanetary communications

    NASA Astrophysics Data System (ADS)

    Roberts, W. T.; Farr, W. H.; Rider, B.; Sampath, D.

    2017-11-01

    In-situ interplanetary science missions constantly push the spacecraft communications systems to support successively higher downlink rates. However, the highly restrictive mass and power constraints placed on interplanetary spacecraft significantly limit the desired bandwidth increases in going forward with current radio frequency (RF) technology. To overcome these limitations, we have evaluated the ability of free-space optical communications systems to make substantial gains in downlink bandwidth, while holding to the mass and power limits allocated to current state-of-the-art Ka-band communications systems. A primary component of such an optical communications system is the optical assembly, comprising the optical support structure, optical elements, baffles and outer enclosure. We wish to estimate the total mass that such an optical assembly might require, and assess what form it might take. Finally, to ground this generalized study, we produce a conceptual design and use it to verify the assembly's ability to achieve the required downlink gain, estimate its specific optical and opto-mechanical requirements, and evaluate the feasibility of producing the assembly.

  13. Solving the Swath Segment Selection Problem

    NASA Technical Reports Server (NTRS)

    Knight, Russell; Smith, Benjamin

    2006-01-01

    Several artificial-intelligence search techniques have been tested as means of solving the swath segment selection problem (SSSP) -- a real-world problem that is not only of interest in its own right, but is also useful as a test bed for search techniques in general. In simplest terms, the SSSP is the problem of scheduling the observation times of an airborne or spaceborne synthetic-aperture radar (SAR) system to effect the maximum coverage of a specified area (denoted the target), given a schedule of downlinks (opportunities for radio transmission of SAR scan data to a ground station), given the limit on the quantity of SAR scan data that can be stored in an onboard memory between downlink opportunities, and given the limit on the achievable downlink data rate. The SSSP is NP complete (short for "nondeterministic polynomial time complete" -- characteristic of a class of intractable problems that can be solved only by use of computers capable of making guesses and then checking the guesses in polynomial time).
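
    To make the constraints concrete, here is a toy greedy baseline for the problem as stated; the paper itself evaluates artificial-intelligence search techniques, which this does not reproduce, and every data structure and rate below is invented for illustration.

        from dataclasses import dataclass

        @dataclass
        class Segment:
            start: float      # observation start time (s)
            duration: float   # seconds of SAR scanning
            coverage: float   # km^2 of new target area covered
            data_rate: float  # Gbit/s produced while scanning

        @dataclass
        class Downlink:
            start: float      # transmission opportunity start time (s)
            duration: float   # seconds of transmission

        def greedy_schedule(segments, downlinks, memory_limit, downlink_rate):
            """Take segments in start order; keep one only if the onboard memory
            (memory_limit, Gbit) never overflows given downlink_rate (Gbit/s)."""
            chosen, stored = [], 0.0
            pending = sorted(downlinks, key=lambda d: d.start)
            for seg in sorted(segments, key=lambda s: s.start):
                # drain memory through downlinks that complete before this segment
                while pending and pending[0].start + pending[0].duration <= seg.start:
                    stored = max(0.0, stored - downlink_rate * pending[0].duration)
                    pending.pop(0)
                produced = seg.data_rate * seg.duration
                if stored + produced <= memory_limit:
                    chosen.append(seg)
                    stored += produced
            return chosen, sum(s.coverage for s in chosen)

    A greedy pass like this finds a feasible schedule quickly but can be arbitrarily far from the maximum coverage, which is exactly why the NP-complete SSSP motivates the search techniques the paper studies.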

  14. Onboard Classifiers for Science Event Detection on a Remote Sensing Spacecraft

    NASA Technical Reports Server (NTRS)

    Castano, Rebecca; Mazzoni, Dominic; Tang, Nghia; Greeley, Ron; Doggett, Thomas; Cichy, Ben; Chien, Steve; Davies, Ashley

    2006-01-01

    Typically, data collected by a spacecraft is downlinked to Earth and pre-processed before any analysis is performed. We have developed classifiers that can be used onboard a spacecraft to identify high priority data for downlink to Earth, providing a method for maximizing the use of a potentially bandwidth limited downlink channel. Onboard analysis can also enable rapid reaction to dynamic events, such as flooding, volcanic eruptions or sea ice break-up. Four classifiers were developed to identify cryosphere events using hyperspectral images. These classifiers include a manually constructed classifier, a Support Vector Machine (SVM), a Decision Tree and a classifier derived by searching over combinations of thresholded band ratios. Each of the classifiers was designed to run in the computationally constrained operating environment of the spacecraft. A set of scenes was hand-labeled to provide training and testing data. Performance results on the test data indicate that the SVM and manual classifiers outperformed the Decision Tree and band-ratio classifiers with the SVM yielding slightly better classifications than the manual classifier.
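
    Of the four classifiers, the thresholded band-ratio family is the simplest to picture. The sketch below shows its general form under assumed band indices and thresholds; the actual bands and cut-offs used for the cryosphere classifiers are not given in the abstract.

        import numpy as np

        def band_ratio_classifier(cube, band_a, band_b, lo, hi):
            """cube: (H, W, bands) hyperspectral image.
            Label a pixel positive when band_a/band_b falls inside [lo, hi].
            Per-pixel threshold tests like this are cheap enough for the
            computationally constrained onboard environment."""
            a = cube[..., band_a].astype(float)
            b = np.maximum(cube[..., band_b].astype(float), 1e-6)  # avoid /0
            ratio = a / b
            return (ratio >= lo) & (ratio <= hi)

        # hypothetical example: flag "ice-like" pixels in a random cube
        cube = np.random.rand(64, 64, 12)
        mask = band_ratio_classifier(cube, band_a=3, band_b=7, lo=1.2, hi=3.0)

    Searching over combinations of such tests, as the abstract describes, amounts to choosing (band_a, band_b, lo, hi) tuples that best separate the hand-labeled training scenes.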

  15. An investigation of error characteristics and coding performance

    NASA Technical Reports Server (NTRS)

    Ebel, William J.; Ingels, Frank M.

    1992-01-01

    The performance of forward error correcting coding schemes on errors anticipated for the Earth Observation System (EOS) Ku-band downlink is studied. The EOS transmits picture frame data to the ground via the Tracking and Data Relay Satellite System (TDRSS) to a ground-based receiver at White Sands. Due to unintentional RF interference from other systems operating in the Ku band, the noise at the receiver is non-Gaussian, which may result in non-random errors output by the demodulator. That is, the downlink channel cannot be modeled by a simple memoryless Gaussian-noise channel. From previous experience, it is believed that these errors are bursty. The research proceeded by developing a computer-based simulation, called Communication Link Error ANalysis (CLEAN), to model the downlink errors, forward error correcting schemes, and interleavers used with TDRSS. To date, the bulk of CLEAN has been written, documented, debugged, and verified. The procedures for utilizing CLEAN to investigate code performance were established and are discussed.

  16. Computationally efficient target classification in multispectral image data with Deep Neural Networks

    NASA Astrophysics Data System (ADS)

    Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca

    2016-10-01

    Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and also perform exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting classes, such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
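
    In practice, extending a ConvNet to multispectral input mostly means widening the first convolution. A minimal PyTorch sketch for per-pixel scene labeling with fused RGB + 25-channel VIS-NIR input (28 channels in, 8 classes out); the layer sizes are illustrative and are not those of the paper's networks.

        import torch
        import torch.nn as nn

        class MultispectralSegNet(nn.Module):
            """Tiny fully convolutional net: 28 input channels (3 RGB + 25
            VIS-NIR), one logit map per class at input resolution."""
            def __init__(self, in_ch=28, n_classes=8):
                super().__init__()
                self.net = nn.Sequential(
                    nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
                    nn.Conv2d(64, n_classes, 1),  # 1x1 conv -> per-pixel logits
                )

            def forward(self, x):
                return self.net(x)

        x = torch.randn(1, 28, 128, 128)   # fused multispectral frame
        logits = MultispectralSegNet()(x)  # (1, 8, 128, 128)
        labels = logits.argmax(dim=1)      # per-pixel class map

    The accuracy/computation trade-off the authors report comes from choosing how much of the channel and layer budget to spend; the extra spectral channels add cost only in the first layer.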

  17. A novel method to reduce time investment when processing videos from camera trap studies.

    PubMed

    Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movements than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs.
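
    The filter rests on one quantity: how much the pixel values change from frame to frame. A minimal version of such a motion score using OpenCV is sketched below; the sampling step and decision threshold are placeholders, and the paper's two specific filter methods are not reproduced here.

        import cv2
        import numpy as np

        def motion_score(path, step=5):
            """Mean absolute frame-to-frame pixel change, sampled every
            `step` frames of the recording at `path`."""
            cap = cv2.VideoCapture(path)
            ok, prev = cap.read()
            if not ok:
                return 0.0
            prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            diffs, i = [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                i += 1
                if i % step:
                    continue
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                diffs.append(cv2.absdiff(gray, prev).mean())
                prev = gray
            cap.release()
            return float(np.mean(diffs)) if diffs else 0.0

        # keep recordings whose score suggests a large animal crossed the scene;
        # the threshold must be tuned per site, trading lost target videos
        # against discarded non-target ones (hypothetical value)
        KEEP_THRESHOLD = 2.0

    A lower threshold loses fewer beaver recordings but discards fewer empty ones, which is exactly the 5-20% loss versus 53-76% saving trade-off the abstract quantifies.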

  18. Virtual 3D bladder reconstruction for augmented medical records from white light cystoscopy (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Lurie, Kristen L.; Zlatev, Dimitar V.; Angst, Roland; Liao, Joseph C.; Ellerbee, Audrey K.

    2016-02-01

    Bladder cancer has a high recurrence rate that necessitates lifelong surveillance to detect mucosal lesions. Examination with white light cystoscopy (WLC), the standard of care, is inherently subjective, and data storage is limited to clinical notes, diagrams, and still images. A visual history of the bladder wall can enhance clinical and surgical management. To address this clinical need, we developed a tool to transform in vivo WLC videos into virtual 3-dimensional (3D) bladder models using advanced computer vision techniques. WLC videos from rigid cystoscopies (1280 x 720 pixels) were recorded at 30 Hz, followed by immediate camera calibration to control for image distortions. Video data were fed into an automated structure-from-motion algorithm that generated a 3D point cloud, followed by a 3D mesh to approximate the bladder surface. The highest quality cystoscopic images were projected onto the approximated bladder surface to generate a virtual 3D bladder reconstruction. In intraoperative WLC videos from 36 patients undergoing transurethral resection of suspected bladder tumors, optimal reconstruction was achieved from frames depicting well-focused vasculature, when the bladder was maintained at constant volume with minimal debris, and when regions of the bladder wall were imaged multiple times. A significant innovation of this work is the ability to perform the reconstruction using video from a clinical procedure collected with standard equipment, thereby facilitating rapid clinical translation, application to other forms of endoscopy and new opportunities for longitudinal studies of cancer recurrence.

  19. Downlink Probability Density Functions for EOS-McMurdo Sound

    NASA Technical Reports Server (NTRS)

    Christopher, P.; Jackson, A. H.

    1996-01-01

    The visibility times and communication link dynamics for the Earth Observations Satellite (EOS)-McMurdo Sound direct downlinks have been studied. The 16 day EOS periodicity may be shown with the Goddard Trajectory Determination System (GTDS), and the entire 16 day period should be simulated for representative link statistics. We desire many attributes of the downlink, however, and a faster orbital determination method is desirable. We use the method of osculating elements for speed and accuracy in simulating the EOS orbit. The accuracy of the method of osculating elements is demonstrated by closely reproducing the observed 16 day Landsat periodicity. An autocorrelation function method is used to show the correlation spike at 16 days. The entire 16 day record of passes over McMurdo Sound is then used to generate statistics for innage time, outage time, elevation angle, antenna angle rates, and propagation loss. The elevation angle probability density function is compared with a 1967 analytic approximation which has been used for medium- to high-altitude satellites. One practical result of this comparison is seen to be the rare occurrence of zenith passes. The new result is functionally different from the earlier result, with a heavy emphasis on low elevation angles. EOS is one of a large class of sun synchronous satellites which may be downlinked to McMurdo Sound. We examine delay statistics for an entire group of sun synchronous satellites ranging from 400 km to 1000 km altitude. Outage probability density function results are presented three-dimensionally.
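
    The autocorrelation step is easy to reproduce in outline: given a daily record of total in-view ("innage") minutes, the 16 day repeat shows up as a spike in the autocorrelation at lag 16. A sketch with synthetic data follows; the real study used GTDS and osculating-element simulations, not this toy series.

        import numpy as np

        def autocorr(x):
            """Normalized autocorrelation of a zero-mean series, lags 0..len-1."""
            x = np.asarray(x, float) - np.mean(x)
            full = np.correlate(x, x, mode="full")
            ac = full[len(x) - 1:]        # keep non-negative lags only
            return ac / ac[0]

        # synthetic daily innage minutes with a 16-day repeat plus noise
        days = np.arange(160)
        innage = 40 + 10 * np.sin(2 * np.pi * days / 16) + np.random.randn(160)
        ac = autocorr(innage)
        print("lag of first major peak:", 1 + np.argmax(ac[1:40]))  # expect ~16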

  20. Real time three-dimensional space video rate sensors for millimeter waves imaging based very inexpensive plasma LED lamps

    NASA Astrophysics Data System (ADS)

    Levanon, Assaf; Yitzhaky, Yitzhak; Kopeika, Natan S.; Rozban, Daniel; Abramovich, Amir

    2014-10-01

    In recent years, much effort has been invested in developing inexpensive but sensitive Millimeter Wave (MMW) detectors that can be used in focal plane arrays (FPAs), in order to implement real-time MMW imaging. Real-time MMW imaging systems are required for many varied applications in fields such as homeland security, medicine, communications, military products and space technology, mainly because this radiation has high penetration and good navigability through dust storms, fog, heavy rain, dielectric materials, biological tissue, and diverse materials. Moreover, the atmospheric attenuation in this range of the spectrum is relatively low and the scattering is also low compared to NIR and VIS. The lack of inexpensive room temperature imaging systems makes it difficult to provide a suitable MMW system for many of the above applications. In the last few years we have advanced the research and development of sensors using very inexpensive (30-50 cents) Glow Discharge Detector (GDD) plasma indicator lamps as MMW detectors. This paper presents three kinds of GDD-based focal plane arrays (FPAs), which differ in the number of detectors, the scanning operation, and the detection method. The 1st and 2nd generations are an 8 × 8 pixel array and an 18 × 2 mono-rail scanner array, respectively; both use direct detection and are limited to fixed imaging. The latest sensor is a multiplexed 16 × 16 GDD FPA. It permits real-time video-rate imaging at 30 frames/sec and comprehensive 3D MMW imaging. The principle of detection in this sensor is frequency modulated continuous wave (FMCW) operation, with all 16 GDD pixel lines sampled simultaneously. Direct detection is also possible and can be done through a user-friendly interface. This FPA is built from 256 commercial GDD lamps (International Light, Inc., Peabody, MA, model 527 Ne indicator lamps, 3 mm diameter) as pixel detectors. All three sensors are fully supported by a software graphical user interface (GUI). They were tested and characterized with different kinds of optical systems for imaging applications, super resolution, and calibration methods. The 16 × 16 sensor can employ a chirp-radar-like method to produce depth and reflectance information in the image. This enables 3-D MMW imaging in real time at video frame rate. In this work we demonstrate different kinds of optical imaging systems, capable of 3-D imaging at short range and at longer distances of at least 10-20 meters.
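
    The FMCW principle behind the 16 × 16 sensor's depth imaging can be summarized in a few lines: the echo is mixed with the transmitted chirp, and the resulting beat frequency f_b maps to range as R = c f_b T / (2B) for chirp bandwidth B and sweep time T. A toy numeric check follows; the parameters are invented and are not those of the MMW system described.

        import numpy as np

        c = 3e8          # m/s
        B = 2e9          # chirp bandwidth (Hz), hypothetical
        T = 1e-3         # sweep time (s), hypothetical
        fs = 2e6         # sample rate of the mixed (beat) signal
        R_true = 12.0    # target range (m)

        f_beat = 2 * R_true * B / (c * T)      # expected beat frequency
        t = np.arange(0, T, 1 / fs)
        beat = np.cos(2 * np.pi * f_beat * t)  # idealized mixer output

        spec = np.abs(np.fft.rfft(beat))
        f_est = np.fft.rfftfreq(len(t), 1 / fs)[np.argmax(spec)]
        print("estimated range:", c * f_est * T / (2 * B), "m")  # ~12 m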

  1. [Online-conference using JGN.].

    PubMed

    Nakayama, Kazuya; Kojima, Kazuhiko; Suzuki, Masayuki; Kikuchi, Yuzo; Iwahara, Masayoshi; Matsui, Osamu; Noguchi, Masato

    2004-01-01

    Telemedicine and online conference systems offer benefits such as equalizing the level of medical care, improving the efficiency of medical care, and improving service for patients. They make it possible to give advice, to support medical projects at other facilities, and to provide the same quality of treatment for patients. In this paper, we set up an experimental teleconference network using JGN (Japan Gigabit Network) and held case-study discussions between Kanazawa University and Fukui Red Cross Hospital, 70 km apart. The JGN used in this study is an ultra-high-speed network for research and development. Kanazawa University and Fukui Red Cross Hospital are connected by a 10 Mbps communication link of the JGN. We held an online conference on the experimental network using a video chat system. As a result, using the video chat system, the average transmission rate of MRI images (256 × 256 pixels, 16 bit) was 0.2 s/frame.

  2. Real-time filtering and detection of dynamics for compression of HDTV

    NASA Technical Reports Server (NTRS)

    Sauer, Ken D.; Bauer, Peter

    1991-01-01

    The preprocessing of video sequences for data compression is discussed. The end goal is a compression system for HDTV capable of transmitting perceptually lossless sequences at under one bit per pixel. Two subtopics were emphasized to prepare the video signal for more efficient coding: (1) nonlinear filtering to remove noise and shape the signal spectrum to take advantage of insensitivities of human viewers; and (2) segmentation of each frame into temporally dynamic/static regions for conditional frame replenishment. The latter technique operates best under the assumption that the sequence can be modelled as a superposition of active foreground and static background. The considerations were restricted to monochrome data, since the standard luminance/chrominance decomposition, which concentrates most of the bandwidth requirements in the luminance, was expected to be used. Similar methods may be applied to the two chrominance signals.

  3. Region-Based Prediction for Image Compression in the Cloud.

    PubMed

    Begaint, Jean; Thoreau, Dominique; Guillotel, Philippe; Guillemot, Christine

    2018-04-01

    Thanks to the increasing number of images stored in the cloud, external image similarities can be leveraged to efficiently compress images by exploiting inter-image correlations. In this paper, we propose a novel image prediction scheme for cloud storage. Unlike current state-of-the-art methods, we use a semi-local approach to exploit inter-image correlation. The reference image is first segmented into multiple planar regions determined from matched local features and super-pixels. The geometric and photometric disparities between the matched regions of the reference image and the current image are then compensated. Finally, multiple references are generated from the estimated compensation models and organized in a pseudo-sequence to differentially encode the input image using classical video coding tools. Experimental results demonstrate that the proposed approach yields significant rate-distortion performance improvements compared with current image inter-coding solutions such as High Efficiency Video Coding.
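
    The geometric part of such compensation can be sketched with standard OpenCV calls: match local features between the reference and current images, estimate a planar transform, and warp the reference to serve as a prediction. The sketch below uses a single global homography for brevity; the paper's semi-local multi-region and photometric models are not reproduced.

        import cv2
        import numpy as np

        def predict_from_reference(ref, cur):
            """Warp `ref` toward `cur` using ORB matches + RANSAC homography.
            Both inputs are 8-bit grayscale images; assumes enough matches."""
            orb = cv2.ORB_create(2000)
            k1, d1 = orb.detectAndCompute(ref, None)
            k2, d2 = orb.detectAndCompute(cur, None)
            matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
            matches = sorted(matcher.match(d1, d2), key=lambda m: m.distance)[:500]
            src = np.float32([k1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
            dst = np.float32([k2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
            H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
            pred = cv2.warpPerspective(ref, H, (cur.shape[1], cur.shape[0]))
            return pred  # the residual cur - pred is what the codec would encode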

  4. A Three-Line Stereo Camera Concept for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon

    1997-01-01

    This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Some of the main features of the camera include: focal length: 90 mm; FOV: 18.5 deg; IFOV: 78 µrad; convergence angles: ±10 deg; radiometric dynamics: 14 bit; weight: 2 kg; and power consumption: 12.5 W. From an orbit altitude of 250 km the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data is buffered in the camera internal mass memory of 1 Gbit. After performing radiometric correction and application-dependent preprocessing, the data is compressed and ready for downlink. Due to the aggressive application of advanced technologies in the area of microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved, while still maintaining high performance. The design of the proposed light-weight camera is also general purpose enough to be applicable to other planetary missions such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
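
    The quoted ground pixel size and swath follow directly from the optics: GSD ≈ IFOV × altitude, and swath ≈ 2h·tan(FOV/2). A quick check of the numbers in the abstract:

        import math

        h = 250e3            # orbit altitude (m)
        ifov = 78e-6         # instantaneous field of view (rad)
        fov = math.radians(18.5)

        gsd = ifov * h                       # ~19.5 m, quoted as 20 m
        swath = 2 * h * math.tan(fov / 2)    # ~81.4 km, quoted as 82 km
        print(round(gsd, 1), "m;", round(swath / 1e3, 1), "km")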

  5. Outer planet Pioneer imaging communications system study. [data compression

    NASA Technical Reports Server (NTRS)

    1974-01-01

    The effects of different types of imaging data compression on the elements of the Pioneer end-to-end data system were studied for three imaging transmission methods: no data compression, moderate data compression, and the advanced imaging communications system. It is concluded that: (1) the value of data compression is inversely related to the downlink telemetry bit rate; (2) the rolling characteristics of the spacecraft limit the selection of data compression ratios; and (3) data compression might be used to perform an acceptable outer planet mission at reduced downlink telemetry bit rates.

  6. Satellite diversity and its implications on the RAKE receiver architecture for CDMA-based S-PCN's

    NASA Technical Reports Server (NTRS)

    Taaghol, P.; Sammut, A.; Tafazolli, R.; Evans, B. G.

    1995-01-01

    In this paper we examine the applicability of RAKE receivers in a mobile LEO satellite channel and identify the potential problem areas. We then proceed to investigate the possibility of a coherent combining architecture (downlink) in the presence of satellite diversity. We closely examine the path delay difference statistics of a diversity channel and propose a delay compensation scheme for the downlink in order to reduce the complexity of the user terminal. Finally, the required modifications to the conventional RAKE receiver are proposed and discussed.

  7. Information Switching Processor (ISP) contention analysis and control

    NASA Technical Reports Server (NTRS)

    Shyy, D.; Inukai, T.

    1993-01-01

    Future satellite communications, as a viable means of communications and an alternative to terrestrial networks, demand flexibility and low end-user cost. On-board switching/processing satellites potentially provide these features, allowing flexible interconnection among multiple spot beams, direct to the user communications services using very small aperture terminals (VSAT's), independent uplink and downlink access/transmission system designs optimized to user's traffic requirements, efficient TDM downlink transmission, and better link performance. A flexible switching system on the satellite in conjunction with low-cost user terminals will likely benefit future satellite network users.

  8. [Design and Implementation of Image Interpolation and Color Correction for Ultra-thin Electronic Endoscope on FPGA].

    PubMed

    Luo, Qiang; Yan, Zhuangzhi; Gu, Dongxing; Cao, Lei

    This paper proposes an image interpolation algorithm based on bilinear interpolation and a color correction algorithm based on polynomial regression, implemented on an FPGA, to address the limited number of imaging pixels and the color distortion of the ultra-thin electronic endoscope. Simulation results showed that the proposed algorithms realize real-time display of 1280 x 720@60Hz HD video and, using the X-rite color checker as the standard colors, reduce the average color difference by about 30% compared with that before color correction.
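
    Bilinear interpolation itself is compact enough to state exactly. A numpy version for upscaling a single-channel image is sketched below; an FPGA design pipelines the same per-output-pixel arithmetic, but this software form is only an illustration of the math, not the paper's implementation.

        import numpy as np

        def bilinear_resize(img, out_h, out_w):
            """Upscale `img` (H, W) to (out_h, out_w) by bilinear interpolation."""
            h, w = img.shape
            ys = np.linspace(0, h - 1, out_h)
            xs = np.linspace(0, w - 1, out_w)
            y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
            x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
            wy = (ys - y0)[:, None]        # vertical fractional weights
            wx = (xs - x0)[None, :]        # horizontal fractional weights
            img = img.astype(float)
            top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
            bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
            return top * (1 - wy) + bot * wy

    Each output pixel needs only four neighbours, two multiplies per axis and an add, which is why the operation maps well onto fixed-point FPGA pipelines.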

  9. An Optical System for Body Imaging from a Distance Using Near-TeraHertz Frequencies

    NASA Astrophysics Data System (ADS)

    Duncan, W. D.; Schwall, R. E.; Irwin, K. D.; Beall, J. A.; Reintsema, C. D.; Doriese, William; Cho, Hsiao-Mei; Estey, Brian; Chattopadhyay, Goutam; Ade, Peter; Tucker, Carole

    2008-05-01

    We present the outline of the optical design of a TeraHertz (THz) imager for the detection of shrapnel-loaded improvised explosive devices (IEDs) at “stand-off” distances of 14-26 meters. The system will use 4 antenna-coupled TES detector arrays of 16 by 16 pixels, cooled in a cryogen-free system with microwave readout, to see beneath clothing at non-lethal detonation distances. A spatial resolution of ~10 mm and close-to-video frame rates are anticipated.

  10. Operations of a spaceflight experiment to investigate plant tropisms

    NASA Astrophysics Data System (ADS)

    Kiss, John Z.; Kumar, Prem; Millar, Katherine D. L.; Edelmann, Richard E.; Correll, Melanie J.

    2009-10-01

    Plants will be an important component in bioregenerative systems for long-term missions to the Moon and Mars. Since gravity is reduced both on the Moon and Mars, studies that identify the basic mechanisms of plant growth and development in altered gravity are required to ensure successful plant production on these space colonization missions. To address these issues, we have developed a project on the International Space Station (ISS) to study the interaction between gravitropism and phototropism in Arabidopsis thaliana. These experiments were termed TROPI (for tropisms) and were performed on the European Modular Cultivation System (EMCS) in 2006. In this paper, we provide an operational summary of TROPI and preliminary results on studies of tropistic curvature of seedlings grown in space. Seed germination in TROPI was lower compared to previous space experiments, likely due to extended storage in the hardware for up to 8 months. Video downlinks provided an important quality check on the automated experimental timeline, which was also monitored with telemetry. Good quality images of seedlings were obtained, but the use of analog video tapes resulted in delays in image processing and analysis procedures. Seedlings that germinated exhibited robust phototropic curvature. Frozen plant samples were returned on three space shuttle missions, and improvements in cold stowage and handling procedures in the second and third missions resulted in quality RNA extracted from the seedlings that was used in subsequent microarray analyses. While the TROPI experiment had technical and logistical difficulties, most of the procedures worked well due to refinement during the project.

  11. Adaptive foveated single-pixel imaging with dynamic supersampling

    PubMed Central

    Phillips, David B.; Sun, Ming-Jie; Taylor, Jonathan M.; Edgar, Matthew P.; Barnett, Stephen M.; Gibson, Graham M.; Padgett, Miles J.

    2017-01-01

    In contrast to conventional multipixel cameras, single-pixel cameras capture images using a single detector that measures the correlations between the scene and a set of patterns. However, these systems typically exhibit low frame rates, because to fully sample a scene in this way requires at least the same number of correlation measurements as the number of pixels in the reconstructed image. To mitigate this, a range of compressive sensing techniques have been developed which use a priori knowledge to reconstruct images from an undersampled measurement set. Here, we take a different approach and adopt a strategy inspired by the foveated vision found in the animal kingdom—a framework that exploits the spatiotemporal redundancy of many dynamic scenes. In our system, a high-resolution foveal region tracks motion within the scene, yet unlike a simple zoom, every frame delivers new spatial information from across the entire field of view. This strategy rapidly records the detail of quickly changing features in the scene while simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This architecture provides video streams in which both the resolution and exposure time spatially vary and adapt dynamically in response to the evolution of the scene. The degree of local frame rate enhancement is scene-dependent, but here, we demonstrate a factor of 4, thereby helping to mitigate one of the main drawbacks of single-pixel imaging techniques. The methods described here complement existing compressive sensing approaches and may be applied to enhance computational imagers that rely on sequential correlation measurements. PMID:28439538
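
    The correlation-measurement principle behind single-pixel imaging, before any foveation or compressive tricks, fits in a few lines: project a set of orthogonal patterns, record one detector value per pattern, and reconstruct as a pattern-weighted sum. A minimal sketch with Hadamard patterns follows; the paper's adaptive foveated sampling is not reproduced.

        import numpy as np
        from scipy.linalg import hadamard

        n = 32                                 # image is n x n, n**2 patterns
        H = hadamard(n * n)                    # rows are +/-1 sampling patterns
        scene = np.random.rand(n, n).ravel()   # stand-in for the optical scene

        # one single-pixel measurement per pattern: correlation of scene and
        # pattern (physically, +/-1 patterns are realized differentially from
        # two binary 0/1 masks)
        measurements = H @ scene

        # reconstruction: pattern-weighted sum (H is orthogonal, H @ H.T = N*I)
        recon = (H.T @ measurements) / (n * n)
        assert np.allclose(recon, scene)

    Fully sampling the scene costs n² measurements, which is the frame-rate bottleneck the foveated architecture is designed to mitigate.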

  12. Photovoltaic restoration of sight in rodents with retinal degeneration (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Palanker, Daniel V.

    2017-02-01

    To restore vision in patients who lost their photoreceptors due to retinal degeneration, we developed a photovoltaic subretinal prosthesis which converts light into pulsed electric current, stimulating the nearby inner retinal neurons. Visual information is projected onto the retina by video goggles using pulsed near-infrared (~900 nm) light. This design avoids the use of bulky electronics and wiring, thereby greatly reducing the surgical complexity. Optical activation of the photovoltaic pixels allows scaling the implants to thousands of electrodes, and multiple modules can be tiled under the retina to expand the visual field. We found that, similarly to normal vision, retinal response to prosthetic stimulation exhibits flicker fusion at high frequencies (>20 Hz), adaptation to static images, and non-linear summation of subunits in the receptive fields. Photovoltaic arrays with 70 µm pixels restored visual acuity up to a single pixel pitch, which is only two times lower than natural acuity in rats. If these results translate to the human retina, such implants could restore visual acuity up to 20/250. With eye scanning and perceptual learning, human patients might even cross the 20/200 threshold of legal blindness. In collaboration with Pixium Vision, we are preparing this system (PRIMA) for a clinical trial. To further improve visual acuity, we are developing smaller pixels, down to 40 µm, and a 3-D interface to improve proximity to the target neurons. Scalability, ease of implantation and tiling of these wireless modules to cover a large visual field, combined with high resolution, open the door to highly functional restoration of sight.

  13. Determination of high temperature strains using a PC based vision system

    NASA Astrophysics Data System (ADS)

    McNeill, Stephen R.; Sutton, Michael A.; Russell, Samuel S.

    1992-09-01

    With the widespread availability of video digitizers and cheap personal computers, the use of computer vision as an experimental tool is becoming commonplace. These systems are being used to make a wide variety of measurements that range from simple surface characterization to velocity profiles. The Sub-Pixel Digital Image Correlation technique has been developed to measure full-field displacement and gradients of the surface of an object subjected to a driving force. The technique has shown its utility by measuring the deformation and movement of objects that range from simple translation to fluid velocity profiles to crack tip deformation of solid rocket fuel. This technique has recently been improved and used to measure the surface displacement field of an object at high temperature. The development of a PC-based Sub-Pixel Digital Image Correlation system has yielded an accurate and easy-to-use system for measuring surface displacements and gradients. Experiments have been performed to show the system is viable for measuring thermal strain.
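
    The core of digital image correlation is template matching with subpixel peak refinement. A common textbook version, not the authors' exact algorithm, uses normalized cross-correlation and a parabola fit through the correlation peak:

        import cv2
        import numpy as np

        def subpixel_displacement(ref, deformed, x, y, half=15):
            """Track the subset centered at (x, y) from `ref` into `deformed`.
            Assumes 8-bit grayscale frames and an NCC peak not on the border."""
            tmpl = ref[y - half:y + half + 1, x - half:x + half + 1]
            ncc = cv2.matchTemplate(deformed, tmpl, cv2.TM_CCOEFF_NORMED)
            py, px = np.unravel_index(np.argmax(ncc), ncc.shape)

            def parabola_peak(c_m, c_0, c_p):
                # vertex offset of the parabola through (-1,c_m),(0,c_0),(1,c_p)
                d = c_m - 2 * c_0 + c_p
                return 0.0 if d == 0 else 0.5 * (c_m - c_p) / d

            dx = parabola_peak(ncc[py, px - 1], ncc[py, px], ncc[py, px + 1])
            dy = parabola_peak(ncc[py - 1, px], ncc[py, px], ncc[py + 1, px])
            # displacement of the subset center, in pixels (can be fractional)
            return (px + dx + half) - x, (py + dy + half) - y

    Fitting the correlation surface around the integer peak is what pushes the resolution below one pixel; strain then follows from the gradients of many such displacement measurements.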

  14. Vertex shading of the three-dimensional model based on ray-tracing algorithm

    NASA Astrophysics Data System (ADS)

    Hu, Xiaoming; Sang, Xinzhu; Xing, Shujun; Yan, Binbin; Wang, Kuiru; Dou, Wenhua; Xiao, Liquan

    2016-10-01

    The ray tracing algorithm is one of the research hotspots in photorealistic graphics. It is an important light-and-shadow technology in many industries that work with three-dimensional (3D) structure, such as aerospace, games and video. Unlike the traditional method of pixel shading based on ray tracing, a novel ray tracing algorithm is presented to color and render the vertices of the 3D model directly. Rendering results depend on the degree of subdivision of the 3D model. A good light-and-shade effect is achieved by using a quad-tree data structure to adaptively subdivide each triangle according to the brightness difference of its vertices. The uniform grid algorithm is adopted to improve rendering efficiency. Moreover, the rendering time is independent of the screen resolution. In theory, as long as the subdivision of a model is adequate, effects equal to those of pixel shading will be obtained. In practice, a compromise can be made between efficiency and effectiveness.

  15. Depth-Based Selective Blurring in Stereo Images Using Accelerated Framework

    NASA Astrophysics Data System (ADS)

    Mukherjee, Subhayan; Guddeti, Ram Mohana Reddy

    2014-09-01

    We propose a hybrid method for stereo disparity estimation by combining block- and region-based stereo matching approaches. It generates dense depth maps from disparity measurements of only 18% of image pixels (left or right). The methodology involves segmenting pixel lightness values using a fast K-Means implementation, refining segment boundaries using morphological filtering and connected components analysis, and then determining the boundaries' disparities using a sum of absolute differences (SAD) cost function. Complete disparity maps are reconstructed from the boundaries' disparities. We consider an application of our method to depth-based selective blurring of non-interest regions of stereo images, using Gaussian blur to de-focus users' non-interest regions. Experiments on the Middlebury dataset demonstrate that our method outperforms traditional disparity estimation approaches using SAD and normalized cross-correlation by up to 33.6% and some recent methods by up to 6.1%. Further, our method is highly parallelizable using a CPU-GPU framework based on Java Thread Pool and APARAPI, with a speed-up of 5.8 for 250 stereo video frames (4,096 × 2,304).
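
    The SAD cost used for the boundary disparities is the standard block-matching primitive. A plain numpy sketch for one pixel's disparity along a rectified scanline follows; the paper's segment-boundary selection and reconstruction steps are omitted.

        import numpy as np

        def sad_disparity(left, right, x, y, half=4, max_disp=64):
            """Disparity of pixel (x, y): horizontal shift of the best-matching
            block in the rectified `right` image. Assumes the window fits."""
            patch = left[y - half:y + half + 1,
                         x - half:x + half + 1].astype(float)
            best_d, best_cost = 0, np.inf
            for d in range(min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(float)
                cost = np.abs(patch - cand).sum()  # sum of absolute differences
                if cost < best_cost:
                    best_d, best_cost = d, cost
            return best_d

    Evaluating this only at segment boundaries, as the abstract describes, is what cuts the measurements to roughly 18% of pixels before the dense map is reconstructed.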

  16. Actively addressed single pixel full-colour plasmonic display

    PubMed Central

    Franklin, Daniel; Frank, Russell; Wu, Shin-Tson; Chanda, Debashis

    2017-01-01

    Dynamic, colour-changing surfaces have many applications including displays, wearables and active camouflage. Plasmonic nanostructures can fill this role by having the advantages of ultra-small pixels, high reflectivity and post-fabrication tuning through control of the surrounding media. However, previous reports of post-fabrication tuning have yet to cover a full red-green-blue (RGB) colour basis set with a single nanostructure of singular dimensions. Here, we report a method which greatly advances this tuning and demonstrates a liquid crystal-plasmonic system that covers the full RGB colour basis set, only as a function of voltage. This is accomplished through a surface morphology-induced, polarization-dependent plasmonic resonance and a combination of bulk and surface liquid crystal effects that manifest at different voltages. We further demonstrate the system's compatibility with existing LCD technology by integrating it with a commercially available thin-film-transistor array. The imprinted surface interfaces readily with computers to display images as well as video. PMID:28488671

  17. An array of antenna-coupled superconducting microbolometers for passive indoors real-time THz imaging

    NASA Astrophysics Data System (ADS)

    Luukanen, A.; Grönberg, L.; Helistö, P.; Penttilä, J. S.; Seppä, H.; Sipola, H.; Dietlein, C. R.; Grossman, E. N.

    2006-05-01

    The temperature resolving power (NETD) of millimeter wave imagers based on InP HEMT MMIC radiometers is typically about 1 K (30 ms), but the MMIC technology is limited to operating frequencies below ~150 GHz. In this paper we report the first results from a pixel developed for an eight-pixel sub-array of superconducting antenna-coupled microbolometers, a first step towards a real-time imaging system with frequency coverage of 0.2-3.6 THz. These detectors have demonstrated video-rate NETDs in the millikelvin range, close to the fundamental photon noise limit, when operated at a bath temperature of ~4 K. The detectors will be operated within a turn-key cryogen-free pulse tube refrigerator, which allows for continuous operation without the need for liquid cryogens. The outstanding frequency agility of bolometric detectors allows for multi-frequency imaging, which greatly enhances the discrimination of, e.g., explosives against innocuous items concealed underneath clothing.

  18. Real time pipelined system for forming the sum of products in the processing of video data

    NASA Technical Reports Server (NTRS)

    Wilcox, Brian (Inventor)

    1988-01-01

    A 3-by-3 convolver utilizes 9 binary arithmetic units connected in cascade for multiplying 12-bit binary pixel values P_i, which are positive or two's complement binary numbers, by 5-bit magnitude (plus sign) weights W_i, which may be positive or negative. The weights are stored in registers including the sign bits. For a negative weight, the one's complement of the pixel value to be multiplied is formed at each unit by a bank of 17 exclusive-OR gates G_i under control of the sign of the corresponding weight W_i, and a correction is made by adding the sum of the absolute values of all the negative weights for each 3-by-3 kernel. Since this correction value remains constant as long as the weights are constant, it can be precomputed and stored in a register as a value to be added to the product PW of the first arithmetic unit.
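
    The negative-weight trick rests on the two's complement identity -P = ~P + 1: each product P·(-|W|) is computed as the 17-bit one's complement of P times |W|, and the missing "+|W|" terms are restored in one shot by adding the precomputed sum of |W| over the negative weights. A quick software check of the identity modulo the datapath width (the bit widths follow the abstract; everything else is illustrative):

        import random

        BITS = 17                      # datapath width (17 XOR gates)
        MOD = 1 << BITS
        MASK = MOD - 1

        P = [random.randrange(-2048, 2048) for _ in range(9)]  # 12-bit pixels
        W = [random.randrange(-15, 16) for _ in range(9)]      # 5-bit mag + sign

        direct = sum(p * w for p, w in zip(P, W)) % MOD

        acc = 0
        for p, w in zip(P, W):
            p17 = p & MASK                    # pixel as 17-bit two's complement
            if w >= 0:
                acc += p17 * w
            else:
                acc += (p17 ^ MASK) * (-w)    # one's complement times |W|
        correction = sum(-w for w in W if w < 0)  # constant while weights fixed
        acc = (acc + correction) % MOD

        assert acc == direct                  # -P = ~P + 1 (mod 2**17)

    Deferring the "+1" of each two's complement into a single precomputed constant is what lets the hardware get by with XOR gates instead of full negation logic in every unit.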

  19. Radiofrequency radiation at Stockholm Central Railway Station in Sweden and some medical aspects on public exposure to RF fields.

    PubMed

    Hardell, Lennart; Koppel, Tarmo; Carlberg, Michael; Ahonen, Mikko; Hedendahl, Lena

    2016-10-01

    The Stockholm Central Railway Station in Sweden was investigated for public radiofrequency (RF) radiation exposure. The exposimeter EME Spy 200 was used to collect the RF exposure data across the railway station. The exposimeter covers 20 different radiofrequency bands from 88 to 5,850 MHz. In total 1,669 data points were recorded. The median value for total exposure was 921 µW/m² (or 0.092 µW/cm²; 1 µW/m²=0.0001 µW/cm²) with some outliers over 95,544 µW/m² (6 V/m, upper detection limit). The mean total RF radiation level varied between 2,817 and 4,891 µW/m² for each walking round. High mean measurements were obtained for the GSM + UMTS 900 downlink, varying between 1,165 and 2,075 µW/m². High levels were also obtained for the UMTS 2100 downlink: 442 to 1,632 µW/m². The LTE 800 downlink, GSM 1800 downlink, and LTE 2600 downlink were also in the higher range of measurements. Hot spots were identified, for example close to a wall-mounted base station yielding over 95,544 µW/m² and thus exceeding the exposimeter's detection limit. Almost all of the total measured levels were above the precautionary target level of 3-6 µW/m² as proposed by the BioInitiative Working Group in 2012. That target level was one-tenth of the scientific benchmark, providing a safety margin either for children or for chronic exposure conditions. We compare the levels of RF radiation exposure identified in the present study to published scientific results reporting adverse biological effects and health harm at levels equivalent to, or below, those measured in this Stockholm Central Railway Station project. It should be noted that these RF radiation levels give transient exposure, since people are generally passing through the areas tested, except for subsets of people who are there for hours each day of work.
