Hubble Space Telescope: Faint object camera instrument handbook. Version 2.0
NASA Technical Reports Server (NTRS)
Paresce, Francesco (Editor)
1990-01-01
The Faint Object Camera (FOC) is a long focal ratio, photon counting device designed to take high resolution two dimensional images of areas of the sky up to 44 by 44 arcseconds squared in size, with pixel dimensions as small as 0.007 by 0.007 arcseconds squared in the 1150 to 6500 A wavelength range. The basic aim of the handbook is to make relevant information about the FOC available to a wide range of astronomers, many of whom may wish to apply for HST observing time. The FOC, as presently configured, is briefly described, and some basic performance parameters are summarized. Also included are detailed performance parameters and instructions on how to derive approximate FOC exposure times for the proposed targets.
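As a rough illustration of the kind of exposure-time estimate the handbook refers to (not the handbook's own procedure), the sketch below assumes a simple photon-counting model in which the detected count rate is the product of source flux, filter bandpass, end-to-end throughput, and collecting area, with signal-to-noise growing as the square root of the accumulated counts; every numerical value in the example is a placeholder assumption.

```python
def foc_exposure_time(flux_phot_cm2_s_A, bandpass_A, throughput,
                      collecting_area_cm2, target_snr):
    """Rough photon-counting exposure-time estimate (illustrative only).

    flux_phot_cm2_s_A   : source flux in photons / cm^2 / s / Angstrom (assumed)
    bandpass_A          : effective filter width in Angstroms (assumed)
    throughput          : end-to-end system efficiency, 0-1 (assumed)
    collecting_area_cm2 : telescope collecting area in cm^2 (assumed)
    target_snr          : desired signal-to-noise ratio
    """
    # Detected count rate (counts/s) in the source-dominated case: background,
    # dark counts and detector dead-time effects are all neglected here.
    rate = flux_phot_cm2_s_A * bandpass_A * throughput * collecting_area_cm2
    # For pure Poisson statistics SNR = sqrt(rate * t), so t = SNR^2 / rate.
    return target_snr**2 / rate

# Example with made-up numbers: a faint UV source, a hypothetical 200 A wide
# filter, 2% assumed throughput, and a ~2.4 m unobstructed aperture (~4.5e4 cm^2).
t = foc_exposure_time(1e-5, 200.0, 0.02, 4.5e4, target_snr=10.0)
print(f"approximate exposure time: {t:.0f} s")
```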
Hubble Space Telescope faint object camera instrument handbook (Post-COSTAR), version 5.0
NASA Technical Reports Server (NTRS)
Nota, A. (Editor); Jedrzejewski, R. (Editor); Greenfield, P. (Editor); Hack, W. (Editor)
1994-01-01
The faint object camera (FOC) is a long-focal-ratio, photon-counting device capable of taking high-resolution two-dimensional images of the sky up to 14 by 14 arc seconds squared in size with pixel dimensions as small as 0.014 by 0.014 arc seconds squared in the 1150 to 6500 A wavelength range. Its performance approaches that of an ideal imaging system at low light levels. The FOC is the only instrument on board the Hubble Space Telescope (HST) to fully use the spatial resolution capabilities of the optical telescope assembly (OTA) and is one of the European Space Agency's contributions to the HST program.
Hubble Space Telescope, Faint Object Camera
NASA Technical Reports Server (NTRS)
1981-01-01
This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Camera (FOC). The FOC reflects light down one of two optical pathways. The light enters a detector after passing through filters or through devices that can block out light from bright objects. Light from bright objects is blocked out to enable the FOC to see background images. The detector intensifies the image, then records it much like a television camera. For faint objects, images can be built up over long exposure times. The total image is translated into digital data, transmitted to Earth, and then reconstructed. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data free of the distortions of the Earth's atmosphere. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.
NASA Technical Reports Server (NTRS)
Albrecht, R.; Barbieri, C.; Adorf, H.-M.; Corrain, G.; Gemmo, A.; Greenfield, P.; Hainaut, O.; Hook, R. N.; Tholen, D. J.; Blades, J. C.
1994-01-01
Images of the Pluto-Charon system were obtained with the Faint Object Camera (FOC) of the Hubble Space Telescope (HST) after the refurbishment of the telescope. The images are of superb quality, allowing the determination of radii, fluxes, and albedos. Attempts were made to improve the resolution of the already diffraction limited images by image restoration. These yielded indications of surface albedo distributions qualitatively consistent with models derived from observations of Pluto-Charon mutual eclipses.
Faint Object Camera observations of M87 - The jet and nucleus
NASA Technical Reports Server (NTRS)
Boksenberg, A.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.
1992-01-01
UV and optical images of the central region and jet of the nearby elliptical galaxy M87 have been obtained with about 0.1 arcsec resolution in several spectral bands with the Faint Object Camera (FOC) on the HST, including polarization images. Deconvolution enhances the contrast of the complex structure and filamentary patterns in the jet already evident in the aberrated images. Morphologically there is close similarity between the FOC images of the extended jet and the best 2-cm radio maps obtained at similar resolution, and the magnetic field vectors from the UV and radio polarimetric data also correspond well. We observe structure in the inner jet within a few tenths of an arcsecond of the nucleus, a region that has also been well studied at radio wavelengths. Our UV and optical photometry of regions along the jet shows little variation in spectral index from the value 1.0 between markedly different regions and no trend toward a steepening spectrum with distance along the jet.
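For readers unfamiliar with the convention, the spectral index quoted above can be read as the exponent of a power-law fit to the two-band photometry; a minimal statement of one common convention (sign conventions vary between papers, so this is an assumption rather than the authors' exact definition):

```latex
% Assumed convention: flux density F_nu proportional to nu^(-alpha).
F_\nu \propto \nu^{-\alpha}
\qquad\Longrightarrow\qquad
\alpha \;=\; -\,\frac{\log\!\left(F_{\nu_1}/F_{\nu_2}\right)}{\log\!\left(\nu_1/\nu_2\right)}
```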
HST PSF simulation using Tiny Tim
NASA Technical Reports Server (NTRS)
Krist, J. E.
1992-01-01
Tiny Tim is a program that simulates Hubble Space Telescope imaging camera PSFs. It is portable (written and distributed in C) and is reasonably fast. It can model the WFPC, WFPC2, FOC, and COSTAR-corrected FOC cameras. In addition to aberrations such as defocus and spherical aberration, it also includes WFPC obscuration shifting, mirror zonal error maps, and jitter. The program has been used at a number of sites for deconvolving HST images. Tiny Tim is available via anonymous ftp on stsci.edu in the directory software/tinytim.
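Tiny Tim supplies the model PSF; the deconvolution step mentioned above is usually carried out separately with an iterative scheme such as Richardson-Lucy. The sketch below is a generic, minimal Richardson-Lucy loop with a stand-in Gaussian PSF; it is not Tiny Tim output, its command-line interface, or any STScI pipeline code.

```python
import numpy as np
from scipy.signal import fftconvolve

def gaussian_psf(size=25, sigma=2.0):
    """Stand-in PSF; a real analysis would substitute a Tiny Tim model image."""
    y, x = np.mgrid[:size, :size] - size // 2
    psf = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return psf / psf.sum()

def richardson_lucy(image, psf, n_iter=30):
    """Minimal Richardson-Lucy deconvolution (no regularization)."""
    estimate = np.full_like(image, image.mean(), dtype=float)
    psf_flipped = psf[::-1, ::-1]
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same")
        ratio = image / np.maximum(blurred, 1e-12)       # guard against /0
        estimate *= fftconvolve(ratio, psf_flipped, mode="same")
    return estimate

# Toy usage: blur a synthetic point source with the stand-in PSF, then restore it.
truth = np.zeros((64, 64))
truth[32, 32] = 1000.0
psf = gaussian_psf()
observed = fftconvolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```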
Near-ultraviolet imaging of Jupiter's satellite Io with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Paresce, F.; Sartoretti, P.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.
1992-01-01
The surface of Jupiter's Galilean satellite Io has been resolved for the first time in the near ultraviolet at 2850 A by the Faint Object Camera (FOC) on the Hubble Space Telescope (HST). The restored images reveal significant surface structure down to the resolution limit of the optical system corresponding to approximately 250 km at the sub-earth point.
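The quoted ~250 km surface scale follows from the small-angle relation; a rough consistency check, assuming a restored angular resolution of order 0.08 arcsec and Jupiter near opposition at roughly 4.2 AU (about 6.3 x 10^8 km) from Earth, neither of which is stated explicitly in the abstract:

```latex
% Small-angle estimate with assumed values (0.08 arcsec, 4.2 AU).
d \;\approx\; \theta_{\mathrm{rad}}\, D
  \;=\; \frac{0.08''}{206265''/\mathrm{rad}} \times 6.3\times10^{8}\ \mathrm{km}
  \;\approx\; 2.4\times10^{2}\ \mathrm{km}
```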
Space telescope scientific instruments
NASA Technical Reports Server (NTRS)
Leckrone, D. S.
1979-01-01
The paper describes the Space Telescope (ST) observatory, the design concepts of the five scientific instruments which will conduct the initial observatory observations, and summarizes their astronomical capabilities. The instruments are the wide-field and planetary camera (WFPC) which will receive the highest quality images, the faint-object camera (FOC) which will penetrate to the faintest limiting magnitudes and achieve the finest angular resolution possible, and the faint-object spectrograph (FOS), which will perform photon noise-limited spectroscopy and spectropolarimetry on objects substantially fainter than those accessible to ground-based spectrographs. In addition, the high resolution spectrograph (HRS) will provide higher spectral resolution with greater photometric accuracy than previously possible in ultraviolet astronomical spectroscopy, and the high-speed photometer will achieve precise time-resolved photometric observations of rapidly varying astronomical sources on short time scales.
2002-03-01
Carrying the STS-109 crew of seven, the Space Shuttle Orbiter Columbia blasted off from its launch pad as it began its 27th flight, the 108th flight overall in NASA's Space Shuttle Program. Launched March 1, 2002, the goal of the mission was the maintenance and upgrade of the Hubble Space Telescope (HST), which was developed, designed, and constructed by the Marshall Space Flight Center. Captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, the HST received the following upgrades: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Four of the crewmembers performed 5 space walks in the 10 days, 22 hours, and 11 minutes of the STS-109 mission.
HST image of Gravitational Lens G2237 + 305 or 'Einstein Cross'
NASA Technical Reports Server (NTRS)
1990-01-01
This European Space Agency (ESA) Faint Object Camera (FOC) science image of the gravitational lens G2237 + 305, the 'Einstein Cross', was taken with the Hubble Space Telescope (HST). The gravitational lens G2237 + 305, or 'Einstein Cross', shows four images of a very distant quasar that has been multiply imaged by a relatively nearby galaxy acting as a gravitational lens. The angular separation between the upper and lower images is 1.6 arc seconds. The photo was released by Goddard Space Flight Center (GSFC) on 09-12-90.
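For context, in a quadruply imaged system like this the image separation is roughly twice the Einstein radius of the lensing galaxy; the standard point-lens relation, written in general form with no distances assumed, is:

```latex
% D_L, D_S, D_LS are angular-diameter distances to the lens, to the source,
% and from lens to source; M is the lens mass enclosed within theta_E.
\theta_E \;=\; \sqrt{\frac{4GM}{c^{2}}\,\frac{D_{LS}}{D_{L}\,D_{S}}}
\qquad\Longleftrightarrow\qquad
M \;=\; \frac{c^{2}\,\theta_E^{2}}{4G}\,\frac{D_{L}\,D_{S}}{D_{LS}}
```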
2002-03-08
After five days of service and upgrade work on the Hubble Space Telescope (HST), the STS-109 crew photographed the giant telescope in the shuttle's cargo bay. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-07
Inside the Space Shuttle Columbia's cabin, astronaut Nancy J. Currie, mission specialist, controlled the Remote Manipulator System (RMS) on the crew cabin's aft flight deck to assist fellow astronauts during the STS-109 mission's extravehicular activities (EVAs). The RMS was used to capture the telescope and secure it into Columbia's cargo bay. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-09
After five days of service and upgrade work on the Hubble Space Telescope (HST), the STS-109 crew photographed the giant telescope returning to its normal routine. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-03
The Hubble Space Telescope (HST), with its normal routine temporarily interrupted, is about to be captured by the Space Shuttle Columbia prior to a week of servicing and upgrading by the STS-109 crew. The telescope was captured by the shuttle's Remote Manipulator System (RMS) robotic arm and secured on a work stand in Columbia's payload bay where 4 of the 7-member crew performed 5 space walks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
STS-109 Astronaut Michael J. Massimino Peers Into Window of Shuttle During EVA
NASA Technical Reports Server (NTRS)
2002-01-01
STS-109 Astronauts Michael J. Massimino and James H. Newman were making their second extravehicular activity (EVA) of their mission when astronaut Massimino, mission specialist, peered into Columbia's crew cabin during a brief break from work on the Hubble Space Telescope (HST). The HST is latched down just a few feet behind him in Columbia's cargo bay. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.
Nieminen, Katri; Wijma, Klaas; Johansson, Sanna; Kinberger, Emelie K; Ryding, Elsa-Lena; Andersson, Gerhard; Bernfort, Lars; Wijma, Barbro
2017-04-01
The objective of this study was to calculate costs associated with severe fear of childbirth (FOC) during pregnancy and peripartum by comparing two groups of women expecting their first child and attending an ordinary antenatal program; one with low FOC and one with severe FOC. In a prospective case-control cohort study one group with low FOC [Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ) sum score ≤60, n = 107] and one with severe FOC (W-DEQ ≥85, n = 43) were followed up till 3 months postpartum and included in the analysis. Medical records were assessed and medical parameters were mapped. Mean costs for healthcare consumption and sick leave during pregnancy were calculated and compared. When means were compared between the groups, the group with severe FOC had more visits for psychosocial reasons (p = 0.001) and more hours on sick leave (p = 0.03) during pregnancy, and stayed longer at the maternity ward (p = 0.04). They also less often had normal spontaneous deliveries (p = 0.03), and more often had an elective cesarean section on maternal request (p = 0.02). Postpartum, they more often than the group with low FOC paid visits to the maternity clinic because of complications (p = 0.001) and to the antenatal unit because of adverse childbirth experiences (p = 0.001). The costs for handling women with severe FOC were 38% higher than those for women with low FOC. Women with severe FOC generate considerably higher perinatal costs than women with low FOC when handled in care as usual. © 2017 Nordic Federation of Societies of Obstetrics and Gynecology.
Hubble Space Telescope Imaging of the Mass-losing Supergiant VY Canis Majoris
NASA Astrophysics Data System (ADS)
Kastner, Joel H.; Weintraub, David A.
1998-04-01
The highly luminous M supergiant VY CMa is a massive star that appears to be in its final death throes, losing mass at a high rate en route to exploding as a supernova. Subarcsecond-resolution optical images of VY CMa, obtained with the Faint Object Camera (FOC) aboard the Hubble Space Telescope, vividly demonstrate that mass loss from VY CMa is highly anisotropic. In the FOC images, the optical 'star' VY CMa constitutes the bright, well-resolved core of an elongated reflection nebula. The imaged nebula is ~3" (~4500 AU) in extent and is clumpy and highly asymmetric. The images indicate that the bright core, which lies near one edge of the nebula, is pure scattered starlight. We conclude that at optical wavelengths VY CMa is obscured from view along our line of sight by its own dusty envelope. The presence of the extended reflection nebula then suggests that this envelope is highly flattened and/or that the star is surrounded by a massive circumstellar disk. Such axisymmetric circumstellar density structure should have profound effects on post-red supergiant mass loss from VY CMa and, ultimately, on the shaping of the remnant of the supernova that will terminate its post-main-sequence evolution.
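The quoted conversion from ~3 arcsec to ~4500 AU is just the small-angle relation, which implies an adopted distance to VY CMa of roughly 1.5 kpc (the abstract does not state the distance; it is inferred here from the two quoted numbers):

```latex
% s in AU, theta in arcsec, d in pc (exact by the definition of the parsec).
s\,[\mathrm{AU}] \;=\; \theta\,[\mathrm{arcsec}] \times d\,[\mathrm{pc}]
  \;=\; 3 \times 1500 \;\approx\; 4500\ \mathrm{AU}
```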
NASA Technical Reports Server (NTRS)
King, I. R.; Deharveng, J. M.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.
1992-01-01
A 5161 s exposure was taken with the FOC on the central 44 arcsec of M31, through a filter centered at 1750 A. Much of the light is redleak from visible wavelengths, but nearly half of it is genuine UV. The image shows the same central peak found earlier by Stratoscope, with a somewhat steeper dropoff outside that peak. More than 100 individual objects are seen, some pointlike and some slightly extended. We identify them as post-asymptotic giant branch stars, some of them surrounded by a contribution from their accompanying planetary nebulae. These objects contribute almost a fifth of the total UV light, but fall far short of accounting for all of it. We suggest that the remainder may result from the corresponding evolutionary tracks in a population more metal-rich than solar.
STS-109 Onboard Photo of Extra-Vehicular Activity (EVA)
NASA Technical Reports Server (NTRS)
2002-01-01
This is an onboard photo of the Hubble Space Telescope (HST) power control unit (PCU), the heart of the HST's power system. STS-109 payload commander John M. Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its PCU while participating in the third of five spacewalks dedicated to servicing and upgrading the HST. Other upgrades performed were: replacement of the solar array panels; replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed the system upgrades. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
10. VIEW TOWARD PORT BOW IN THE FOC'S'LE OF THE ...
10. VIEW TOWARD PORT BOW IN THE FOC'S'LE OF THE EVELINA M. GOULART. OBJECT IN THE FOREGROUND IS A FOLDING MESS TABLE LOCATED BETWEEN THE TIERS OF BUNKS. - Auxiliary Fishing Schooner "Evelina M. Goulart", Essex Shipbuilding Museum, 66 Main Street, Essex, Essex County, MA
Fear of childbirth in urban and rural regions of Turkey: Comparison of two resident populations
Okumus, Filiz; Sahin, Nevin
2017-01-01
OBJECTIVE: Childbirth is a natural physiological event experienced by many women; however, it is frequently also a source of fear in women. Rates of cesarean sections in Turkey are higher in the urban areas than in the rural areas. We hypothesized that lower fear of childbirth (FOC) rates would be observed in the city having the lowest cesarean section rates in Turkey. This study aimed to compare FOC in women in two resident populations: one in a rural area and the other in an urban area. METHODS: This study was conducted on 253 pregnant women in Istanbul, a large urban municipality, and Siirt, a city in rural Turkey. A descriptive information form and the A version of the Wijma Delivery Expectancy/Experience Questionnaire (W-DEQ) were used. RESULTS: Severe FOC levels were recorded in women in the Istanbul sample; moreover, these levels were higher than those recorded in women in the Siirt sample. In addition, women in the Istanbul sample preferred vaginal birth to cesarean section and had greater FOC, a finding which demonstrates that women prefer vaginal birth even though they have a higher FOC level and live in a city with high cesarean section rates. Where women live (rural versus urban areas) affects their perception of birth and consequently, their FOC levels. CONCLUSION: The results of this study suggest that further cross-cultural and regional research is needed for better understanding FOC and factors associated with elevated FOC levels within each cultural setting. PMID:29270574
Kuang, Ruibin; Yang, Qiaosong; Hu, Chunhua; Sheng, Ou; Zhang, Sheng; Ma, Lijun; Wei, Yuerong; Yang, Jing; Liu, Siwen; Biswas, Manosh Kumar; Viljoen, Altus; Yi, Ganjun
2013-01-01
Background Fusarium wilt, caused by the fungal pathogen Fusarium oxysporum f. sp. cubense (Foc), is one of the most destructive diseases of banana. Toxins produced by Foc have been proposed to play an important role during the pathogenic process. The objectives of this study were to investigate the contamination of banana with toxins produced by Foc, and to elucidate their role in pathogenesis. Methodology/Principal Findings Twenty isolates of Foc representing races 1 and 4 were isolated from diseased bananas in five Chinese provinces. Two toxins were consistently associated with Foc, fusaric acid (FA) and beauvericin (BEA). Cytotoxicity of the two toxins on banana protoplast was determined using the Alamar Blue assay. The virulence of 20 Foc isolates was further tested by inoculating tissue culture banana plantlets, and the contents of toxins determined in banana roots, pseudostems and leaves. Virulence of Foc isolates correlated well with toxin deposition in the host plant. To determine the natural occurrence of the two toxins in banana plants with Fusarium wilt symptoms, samples were collected before harvest from the pseudostems, fruit and leaves from 10 Pisang Awak ‘Guangfen #1’ and 10 Cavendish ‘Brazilian’ plants. Fusaric acid and BEA were detected in all the tissues, including the fruits. Conclusions/Significance The current study provides the first investigation of toxins produced by Foc in banana. The toxins produced by Foc, and their levels of contamination of banana fruits, however, were too low to be of concern to human and animal health. Rather, these toxins appear to contribute to the pathogenicity of the fungus during infection of banana plants. PMID:23922960
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buie, Marc W.; Young, Eliot F.; Young, Leslie A.
We present new imaging of the surface of Pluto and Charon obtained during 2002-2003 with the Hubble Space Telescope (HST) Advanced Camera for Surveys (ACS) instrument. Using these data, we construct two-color albedo maps for the surfaces of both Pluto and Charon. Similar mapping techniques are used to re-process HST/Faint Object Camera (FOC) images taken in 1994. The FOC data provide information in the ultraviolet and blue wavelengths that show a marked trend of UV-bright material toward the sunlit pole. The ACS data are taken at two optical wavelengths and show widespread albedo and color variegation on the surface of Pluto and hint at a latitudinal albedo trend on Charon. The ACS data also provide evidence for a decreasing albedo for Pluto at blue (435 nm) wavelengths, while the green (555 nm) data are consistent with a static surface over the one-year period of data collection. We use the two maps to synthesize a true visual color map of Pluto's surface and investigate trends in color. The mid- to high-latitude region on the sunlit pole is, on average, more neutral in color and generally higher albedo than the rest of the surface. Brighter surfaces also tend to be more neutral in color and show minimal color variations. The darker regions show considerable color diversity arguing that there must be a range of compositional units in the dark regions. Color variations are weak when sorted by longitude. These data are also used to constrain astrometric corrections that enable more accurate orbit fitting, both for the heliocentric orbit of the barycenter and the orbit of Pluto and Charon about their barycenter.
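A note on the 'two-color' terminology: combining the 435 nm and 555 nm ACS images amounts to forming a flux ratio pixel by pixel, i.e. an instrumental color index of the generic form below; the authors' actual photometric calibration and map projection are more involved and are not reproduced here.

```latex
% Generic instrumental color from two filter fluxes; the zero point is arbitrary.
c_{435-555} \;=\; -2.5\,\log_{10}\!\left(\frac{F_{435}}{F_{555}}\right) + \mathrm{const.}
```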
Further detection of the optical low frequency QPO in the black hole transient MAXI J1820+070
NASA Astrophysics Data System (ADS)
Yu, Wenfei; Lin, Jie; Mao, Dongming; Zhang, Jujia; Yan, Zhen; Bai, Jinming
2018-05-01
We report on the optical photometric observation of MAXI J1820+070 with the 2.4m telescope at Lijiang Gaomeigu Station of Yunnan observatories with our Fast Optical Camera (FOC) on April 22, 2018, following the detection of low frequency QPO in the optical band (ATEL #11510).
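A low-frequency QPO such as the one reported here is normally identified as a peak in the power spectrum of the optical light curve. The sketch below is a minimal, generic periodogram for an evenly sampled photometric series; it is not the instrument team's pipeline, and the sampling and signal values are placeholders.

```python
import numpy as np

def periodogram(flux, dt):
    """Simple FFT power spectrum of an evenly sampled light curve.

    flux : 1-D array of photometric measurements
    dt   : constant sampling interval in seconds (assumed)
    """
    flux = np.asarray(flux, dtype=float)
    flux = flux - flux.mean()                  # remove the DC level
    power = np.abs(np.fft.rfft(flux))**2       # un-normalized power
    freq = np.fft.rfftfreq(flux.size, d=dt)    # frequencies in Hz
    return freq, power

# Toy usage: a noisy modulation at an assumed 0.05 Hz, sampled every 0.1 s,
# mimicking a low-frequency QPO-like signal in an optical light curve.
rng = np.random.default_rng(0)
t = np.arange(0.0, 2000.0, 0.1)
flux = 100.0 + 2.0 * np.sin(2 * np.pi * 0.05 * t) + rng.normal(0.0, 1.0, t.size)
freq, power = periodogram(flux, dt=0.1)
print("strongest frequency ~", freq[np.argmax(power[1:]) + 1], "Hz")
```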
2001-08-01
This is the insignia of the STS-109 Space Shuttle mission. Carrying a crew of seven, the Space Shuttle Orbiter Columbia was launched with goals of maintenance and upgrades to the Hubble Space Telescope (HST). The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. During the STS-109 mission, the telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where four members of the crew performed five spacewalks completing system upgrades to the HST. Included in those upgrades were: the replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 27th flight of the Orbiter Columbia and the 108th flight overall in NASA's Space Shuttle Program.
2002-03-03
This is a photo of the Hubble Space Telescope (HST), in its original configuration, berthed in the cargo bay of the Space Shuttle Columbia during the STS-109 mission, silhouetted against the airglow of the Earth's horizon. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where 4 of the 7-member crew performed 5 spacewalks completing system upgrades to the HST. Included in those upgrades were: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-05
Astronaut James H. Newman, mission specialist, floats about in the Space Shuttle Columbia's cargo bay while working in tandem with astronaut Michael J. Massimino (out of frame), mission specialist, during the STS-109 mission's second day of extravehicular activity (EVA). Inside Columbia's cabin, astronaut Nancy J. Currie, mission specialist, controlled the Remote Manipulator System (RMS) to assist the two in their work on the Hubble Space Telescope (HST). The RMS was used to capture the telescope and secure it into Columbia's cargo bay. Part of the giant telescope's base, latched down in the payload bay, can be seen behind Newman. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the HST. The Marshall Space Flight Center in Huntsville, Alabama had responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. STS-109 upgrades to the HST included: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-07
STS-109 Astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, is working at the stowage area for the Hubble Space Telescope's port side solar array. Working in tandem with James H. Newman, Massimino removed the old port solar array and stored it in Columbia's payload bay for return to Earth. The two went on to install a third generation solar array and its associated electrical components. Two crew mates had accomplished the same feat with the starboard array on the previous day. In addition to the replacement of the solar arrays, the STS-109 crew also installed the experimental cooling system for the Hubble's Near-Infrared Camera (NICMOS), replaced the power control unit (PCU), and replaced the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS). The 108th flight overall in NASA's Space Shuttle Program, the Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 for 10 days, 22 hours, and 11 minutes. Five space walks were conducted to complete the HST upgrades. The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built.
2002-03-07
STS-109 Astronaut Michael J. Massimino, mission specialist, perched on the Shuttle's robotic arm, is preparing to install the Electronic Support Module (ESM) in the aft shroud of the Hubble Space Telescope (HST), with the assistance of astronaut James H. Newman (out of frame). The module will support a new experimental cooling system to be installed during the next day's fifth and final space walk of the mission. That cooling system is designed to bring the telescope's Near-Infrared Camera and Multi-Object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out, back to life. The Space Shuttle Columbia STS-109 mission lifted off March 1, 2002 with goals of repairing and upgrading the Hubble Space Telescope (HST). The Marshall Space Flight Center in Huntsville, Alabama had the responsibility for the design, development, and construction of the HST, which is the most powerful and sophisticated telescope ever built. In addition to the installation of the experimental cooling system for NICMOS, STS-109 upgrades to the HST included replacement of the solar array panels, replacement of the power control unit (PCU), and replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS). Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.
STS-109 Onboard Photo of Extra-Vehicular Activity (EVA)
NASA Technical Reports Server (NTRS)
2002-01-01
This is an onboard photo of Astronaut John M. Grunsfeld, STS-109 payload commander, participating in the third of five spacewalks to perform work on the Hubble Space Telescope (HST). On this particular walk, Grunsfeld, joined by Astronaut Richard M. Linnehan, turned off the telescope in order to replace its power control unit (PCU), the heart of the HST's power system. The telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm, where crew members completed system upgrades to the HST. Included in those upgrades were: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. Launched March 1, 2002, the STS-109 HST servicing mission lasted 10 days, 22 hours, and 11 minutes. It was the 108th flight overall in NASA's Space Shuttle Program.
2002-03-01
Carrying a crew of seven, the Space Shuttle Orbiter Columbia soared through some pre-dawn clouds into the sky as it began its 27th flight, STS-109. Launched March 1, 2002, the goal of the mission was the maintenance and upgrade of the Hubble Space Telescope (HST). The Marshall Space Flight Center had the responsibility for the design, development, and construction of the HST, which is the most complex and sensitive optical telescope ever made, to study the cosmos from a low-Earth orbit. The HST detects objects 25 times fainter than the dimmest objects seen from Earth and provides astronomers with an observable universe 250 times larger than is visible from ground-based telescopes, perhaps as far away as 14 billion light-years. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. During the STS-109 mission, the telescope was captured and secured on a work stand in Columbia's payload bay using Columbia's robotic arm. Here four members of the crew performed five spacewalks completing system upgrades to the HST. Included in those upgrades were: replacement of the solar array panels; replacement of the power control unit (PCU); replacement of the Faint Object Camera (FOC) with the new Advanced Camera for Surveys (ACS); and installation of the experimental cooling system for the Hubble's Near-Infrared Camera and Multi-object Spectrometer (NICMOS), which had been dormant since January 1999 when its original coolant ran out. Lasting 10 days, 22 hours, and 11 minutes, the STS-109 mission was the 108th flight overall in NASA's Space Shuttle Program.
Hubble Space Telescope: The Telescope, the Observations & the Servicing Mission
NASA Astrophysics Data System (ADS)
1999-11-01
Today the HST Archives contain more than 260 000 astronomical observations. More than 13 000 astronomical objects have been observed by hundreds of different groups of scientists. Direct proof of the scientific significance of this project is the record-breaking number of papers published: over 2400 to date. Some of HST's most memorable achievements are:
* the discovery of myriads of very faint galaxies in the early Universe,
* unprecedented, accurate measurements of distances to the farthest galaxies,
* significant improvement in the determination of the Hubble constant and thus the age of the Universe,
* confirmation of the existence of black holes,
* a far better understanding of the birth, life and death of stars,
* a very detailed look at the secrets of the process by which planets are created.
Europe and HST
ESA's contribution to HST represents a nominal investment of 15%. ESA provided one of the two imaging instruments - the Faint Object Camera (FOC) - and the solar panels. It also has 15 scientists and computer staff working at the Space Telescope Science Institute in Baltimore (Maryland). In Europe the astronomical community receives observational assistance from the Space Telescope European Coordinating Facility (ST-ECF) located in Garching, Munich. In return for ESA's investment, European astronomers have access to approximately 15% of the observing time. In reality the actual observing time competitively allocated to European astronomers is closer to 20%. Looking back at almost ten years of operation, the head of ST-ECF, European HST Project Scientist Piero Benvenuti states: "Hubble has been of paramount importance to European astronomy, much more than the mere 20% of observing time. It has given the opportunity for European scientists to use a top class instrument that Europe alone would not be able to build and operate. In specific areas of research they have now, mainly due to HST, achieved international leadership." One of the major reasons for Hubble's success is the advantage of being in orbit, beyond the Earth's atmosphere. From there it enjoys a crystal-clear view of the universe - without clouds and atmospheric disturbances to blur its vision. European astronomer Guido De Marchi from ESO in Munich has been using Hubble since the early days of the project. He explains: "HST can see the faintest and smallest details and lets us study the stars with great accuracy, even where they are packed together - just as with those in the centre of our Galaxy". Dieter Reimers from Hamburg Observatory adds: "HST has capabilities to see ultraviolet light, which is not possible from the ground due to the blocking effect of the atmosphere. And this is really vital to our work, the main aim of which is to discover the chemical composition of the Universe."
The Servicing Missions
In the early plans for telescope operations, maintenance visits were to have been made every 2.5 years. And every five years HST should have been transported back to the ground for thorough overhaul. This plan has changed somewhat over time and a servicing scheme, which includes Space Shuttle Servicing Missions every three years, was decided upon. The first two Servicing Missions, in December 1993 (STS-61) and February 1997 (STS-82) respectively, were very successful. In the first three years of operations HST did not meet expectations because its primary mirror was 2 microns too flat at the edge.
The first Servicing Mission in 1993 (on which the European astronaut Claude Nicollier flew) dealt with this problem by installing a new instrument with corrective optics (COSTAR - Corrective Optics Space Telescope Axial Replacement). With this pair of "glasses" HST's golden age began. The images were as sharp as originally hoped and astonishing new results started to emerge on a regular basis. The first Servicing Mission also replaced the solar panels and installed a new camera (Wide Field and Planetary Camera 2 - WFPC2). The High-Speed Photometer (HSP) was replaced by COSTAR. During the second Servicing Mission instruments and other equipment were repaired and updated. The Space Telescope Imaging Spectrograph (STIS) replaced the Goddard High Resolution Spectrograph (GHRS) and the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS) replaced the Faint Object Spectrograph (FOS).
Servicing Mission 3A
The original Servicing Mission 3 (initially planned for June 2000) has been split into two missions - SM3A and SM3B - due in part to its complexity, and in part to the urgent need to replace the failed gyroscopes on board. Three gyroscopes must function to meet the telescope's very precise pointing requirements. With only two now operational, observations have had to be suspended, but the telescope will remain safely in orbit until the servicing crew arrives. During this servicing mission:
* all six gyroscopes will be replaced,
* a Fine Guidance Sensor will be replaced,
* the spacecraft's computer will be replaced by a new one which will reduce the burden of flight software maintenance and significantly lower costs,
* six voltage/temperature kits will be installed to protect spacecraft batteries from overcharging and overheating if the spacecraft enters safe mode,
* a new S-Band Single Access Transmitter will replace a failed spare currently aboard the spacecraft,
* a solid-state recorder will be installed to replace the tape recorder,
* degraded telescope thermal insulation will be replaced if time allows; this insulation is necessary to control the internal temperature on HST.
For the mission to be fully successful the gyroscopes, the Fine Guidance Sensor, the computer and the voltage/temperature kits must be installed. The minimum mission success criterion is that HST will have 5 operational gyros after the mission, 4 of them newly installed.
The Future
During SM3B (presently scheduled for 2001) the astronauts will replace the Faint Object Camera with the Advanced Camera for Surveys (ACS), install a cooling system for NICMOS enabling it to resume operation, and install a new set of solar panels. Replacement of the thermal insulation will continue and the telescope will be reboosted to a higher orbit. The plans for the fourth Servicing Mission are preliminary at this time, but two new science instruments are being developed for that mission: Cosmic Origins Spectrograph (COS), which will replace COSTAR, and Wide Field Camera 3 (WFC3), which will replace WFPC2. It is planned to retrieve Hubble at the end of its life (around 2010) and bring it back to Earth. In the future ESA may have the opportunity to continue its collaboration with NASA on the Next Generation Space Telescope (NGST), which in many ways can be seen as Hubble's successor. The plan is to launch NGST in 2008, and ESA is currently considering a possible role in the project.
Piero Benvenuti concludes: "The European Space Agency, in deciding to join NASA on the HST Project, made a very successful investment on behalf of European science. Today, NASA would not consider proceeding alone on the continued operation of HST or on the design of NGST. Not just because of the benefit of shared cost, but mainly because of the intellectual contribution by the European astronomers, who have made such effective scientific use of HST."
Hubble Space Telescope - Fact sheet
Description: The Hubble Space Telescope (HST) is a co-operation between ESA and NASA. It is a long-term space-based observatory. Its observations are carried out in visible, infrared and ultraviolet light. HST has in many ways revolutionised modern astronomy, being a highly efficient tool for making new discoveries, but also by driving astronomical research in general.
Objective: HST was designed to take advantage of being above the Earth's disturbing atmosphere, and thereby providing astronomers with observations of very high resolution - opening new windows on planets, stars and galaxies. HST was designed as a flagship mission of the highest standard, and has served to pave the way for other space-based observatories.
How the mission was named: Hubble Space Telescope is named after Edwin Powell Hubble (1889-1953), who was one of the great pioneers of modern astronomy.
Industrial Involvement: The ESA contribution to HST included the Solar Panels and the Faint Object Camera (FOC). Prime contractors for the FOC were Dornier (now DaimlerChrysler Aerospace, Germany), and Matra (France); for the Solar Panels British Aerospace (UK).
Launch date: April 25, 1990
Launcher: Space Shuttle Discovery (STS-31)
Launch mass: 11 110 kg
Dimensions: Length: 15.9 m, diameter: 4.2 m. In addition two solar panels, each 2.4 x 12.1 m.
Payload (current): A 2.4 m f/24 Ritchey-Chretien telescope with four main instruments, currently WFPC2, STIS, NICMOS and FOC. In addition the three fine-guidance sensors are used for astrometric observations (positional astronomy).
WFPC2 - Wide Field/Planetary Camera 2 is an electronic camera working at two magnifications. It has four CCD detectors with 800 x 800 pixels. One of these (called Planetary Camera) has a higher resolution (<0.1 arcsecond).
STIS - Space Telescope Imaging Spectrograph uses so-called MAMAs and CCDs to provide images and spectra. It is sensitive to a wide range of light from UV to Infrared.
NICMOS - Near-Infrared Camera and Multi-Object Spectrometer provides images and spectra in the infrared. NICMOS uses cooled HgCdTe detectors. Currently NICMOS is dormant and awaits a new cooler to be provided during Servicing Mission 3B.
FOC - Faint Object Camera - a very high resolution camera built by ESA. FOC is no longer in use and will be replaced by the new Advanced Camera for Surveys (ACS) during Servicing Mission 3B.
Orbit: Circular, 593 km with a 28.5 degree inclination.
Operations: Science operations are co-ordinated and conducted by the Space Telescope Science Institute (STScI) in Baltimore. Overall management of daily on-orbit operations is carried out by NASA's Goddard Space Flight Center (GSFC) in Greenbelt.
Ground stations: The data from HST are transmitted to the Tracking and Data Relay Satellite System (TDRSS). From TDRSS they are sent to the TDRSS ground stations and on to Goddard Space Flight Center, from where the science data are sent to STScI.
Foreseen operational lifetime: 20 years
Costs: ESA's financial contribution to the Hubble Space Telescope amounts to EUR 593m at 1999 economic conditions (including development of the Faint Object Camera and the Solar Arrays, participation in operations and in servicing missions).
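A worked number implied by the fact sheet's optical parameters: a 2.4 m, f/24 Ritchey-Chretien has a 57.6 m effective focal length, which sets the plate scale at the OTA focal plane as below (the FOC's own relay optics then work at longer focal ratios, which is why its pixels subtend so little sky):

```latex
% Derived from the stated 2.4 m aperture and f/24 focal ratio only.
f \;=\; 2.4\ \mathrm{m}\times 24 \;=\; 57.6\ \mathrm{m},
\qquad
\mathrm{plate\ scale} \;=\; \frac{206265''}{57\,600\ \mathrm{mm}} \;\approx\; 3.6''\ \mathrm{mm}^{-1}
```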
Zeng, Huicai; Fan, Dingding; Zhu, Yabin; Feng, Yue; Wang, Guofen; Peng, Chunfang; Jiang, Xuanting; Zhou, Dajie; Ni, Peixiang; Liang, Changcong; Liu, Lei; Wang, Jun; Mao, Chao
2014-01-01
Background The asexual fungus Fusarium oxysporum f. sp. cubense (Foc) causing vascular wilt disease is one of the most devastating pathogens of banana (Musa spp.). To understand the molecular underpinning of pathogenicity in Foc, the genomes and transcriptomes of two Foc isolates were sequenced. Methodology/Principal Findings Genome analysis revealed that the genome structures of race 1 and race 4 isolates were highly syntenic with those of F. oxysporum f. sp. lycopersici strain Fol4287. A large number of putative virulence associated genes were identified in both Foc genomes, including genes putatively involved in root attachment, cell degradation, detoxification of toxin, transport, secondary metabolites biosynthesis and signal transductions. Importantly, relative to the Foc race 1 isolate (Foc1), the Foc race 4 isolate (Foc4) has evolved with some expanded gene families of transporters and transcription factors for transport of toxins and nutrients that may facilitate its ability to adapt to host environments and contribute to pathogenicity to banana. Transcriptome analysis disclosed a significant difference in transcriptional responses between Foc1 and Foc4 at 48 h post inoculation to the banana ‘Brazil’ in comparison with the vegetative growth stage. Of particular note, more virulence-associated genes were upregulated in Foc4 than in Foc1. Several signaling pathways like the mitogen-activated protein kinase Fmk1 mediated invasion growth pathway, the FGA1-mediated G protein signaling pathway and a pathogenicity associated two-component system were activated in Foc4 rather than in Foc1. Together, these differences in gene content and transcription response between Foc1 and Foc4 might account for variation in their virulence during infection of the banana variety ‘Brazil’. Conclusions/Significance The Foc genome sequences will help us identify the pathogenicity mechanisms involved in the development of banana vascular wilt disease, and will thus advance the development of effective methods for managing the disease, including improvement of disease resistance in banana. PMID:24743270
Hubble gets revitalised in new Servicing Mission for more and better science!
NASA Astrophysics Data System (ADS)
2002-02-01
As a unique collaboration between the European Space Agency (ESA) and NASA, Hubble has had a phenomenal scientific impact. The unsurpassed sharp images from this space observatory have penetrated into the hidden depths of space and revealed breathtaking phenomena. But Hubble's important contributions to science have only been possible through a carefully planned strategy to service and upgrade Hubble every two or three years. ESA, the European Space Agency, has a particular role to play in this Servicing Mission. One of the most exciting events of this mission will come when the ESA-built solar panels are replaced by newer and more powerful ones. The new panels, developed in the US, are equipped with ESA-developed drive mechanisms and were tested at the facilities at ESA's European Space Research and Technology Centre (ESTEC) in the Netherlands. This facility is the only place in the world where such tests can be performed. According to Ton Linssen, HST Project Manager at ESA, who supervised all ESA involvement in the new solar panel development, including the test campaign at ESTEC - "a particularly tense moment occurs when the present solar panels have to be rolled up to fit into the Shuttle's cargo bay. The hard environment of space has taken its toll on the panels and it will be a very delicate operation to roll them up. Our team will be waiting and watching with bated breath. If the panels can't be rolled up they will possibly have to be left in space." "With this Servicing Mission Hubble is once again going to be brought back to the frontline of scientific technology", says Piero Benvenuti, Hubble Project Scientist at ESA. "New super-advanced instrumentation will revitalise the observatory. For example, Hubble's new digital camera - the new Advanced Camera for Surveys, or ACS - can take images of twice the area of the sky and with five times the sensitivity of Hubble's previous instruments, therefore increasing Hubble's discovery capability by ten times! The European astronomers look forward to using the new camera and performing new science, building on the great breakthroughs they have already achieved." ACS is going to replace the Faint Object Camera, or FOC, built by ESA. The FOC, which has functioned perfectly since the beginning, has been a key instrument in getting the best out of the unprecedented imaging capability of Hubble. The FOC was a "state-of-the-art" instrument in the 80s, but the field of digital imaging has progressed so much in the past 20 years that, having fulfilled its scientific goals, this ESA flagship on Hubble is chivalrously giving way to newer technology. However, the story of the FOC is not over yet: experts will still learn from it, as it will be brought back to Earth and inspected, to study the effects on the hardware of the long-duration exposure in space. Hubble is expected to continue to explore the sky during the next decade, after which its work will be taken over by its successor, the powerful ESA/NASA/CSA(*) Next Generation Space Telescope. NGST's main focus will be observations of the faint infrared light from the first stars and galaxies in the Universe. Notes for editors: The Hubble Space Telescope is a project of international co-operation between ESA and NASA. It was launched in 1990. The partnership agreement between ESA and NASA was signed on 7 October 1977; as a result of this agreement European astronomers have guaranteed access to more than 20% of Hubble's observing time.
Astronauts have already paid visits to Hubble in 1993, '97, '99 and now, in the spring of 2002, it is time for the fourth Servicing Mission (named Servicing Mission 3B), planned for launch on 28th February. Originally planned as one mission, the third Servicing Mission was split into two parts (Servicing Mission 3A and 3B) because of the sheer number of tasks to be carried out and the urgency with which Hubble's gyroscopes had to be replaced in late '99. In addition to the new solar panels and the ACS camera, astronauts will install a very high-tech cooling system for Hubble's infrared camera, NICMOS. NICMOS has been dormant since 1999 when it ran out of coolant. The new cooling system is a mechanical cooler, and works like an advanced refrigerator. Servicing Mission 3B will also include other maintenance tasks. Altogether five extensive space walks are planned.
Ultraviolet observations of the Saturnian north aurora and polar haze distribution with the HST-FOC
NASA Technical Reports Server (NTRS)
Gerard, J. C.; Dols, V.; Grodent, D.; Waite, J. H.; Gladstone, G. R.; Prange, R.
1995-01-01
Near-simultaneous observations of the Saturnian H2 north ultraviolet aurora and the polar haze were made at 153 nm and 210 nm, respectively, with the Faint Object Camera on board the Hubble Space Telescope. The auroral observations cover a complete rotation of the planet and, when co-added, reveal the presence of an auroral emission near 80 deg N with a peak brightness of about 150 kR of total H2 emission. The maximum optical depth of the polar haze layer is found to be located approximately 5 deg equatorward of the auroral emission zone. The haze particles are presumably hydrocarbon aerosols whose formation is initiated by auroral H2+ production. In this case, the observed haze optical depth requires an aerosol formation efficiency of about 6 percent, indicating that auroral production of hydrocarbon aerosols is a viable source of high-latitude haze.
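For readers unfamiliar with the Rayleigh unit used above, the quoted brightness can be converted to a photon column emission rate with the standard definition (general background, not a number taken from the paper):

```latex
1\,\mathrm{R} = 10^{6}\ \mathrm{photons\ cm^{-2}\,s^{-1}}
\quad\Rightarrow\quad
150\,\mathrm{kR} = 1.5\times10^{11}\ \mathrm{photons\ cm^{-2}\,s^{-1}},
```

i.e. the apparent column emission rate integrated along the line of sight, assuming isotropic emission into 4π steradians.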
HUBBLE FINDS A BARE BLACK HOLE POURING OUT LIGHT
NASA Technical Reports Server (NTRS)
2002-01-01
NASA's Hubble Space Telescope has provided a never-before-seen view of a warped disk flooded with a torrent of ultraviolet light from hot gas trapped around a suspected massive black hole. [Right] This composite image of the core of the galaxy was constructed by combining a visible light image taken with Hubble's Wide Field Planetary Camera 2 (WFPC2), with a separate image taken in ultraviolet light with the Faint Object Camera (FOC). While the visible light image shows a dark dust disk, the ultraviolet image (color-coded blue) shows a bright feature along one side of the disk. Because Hubble sees ultraviolet light reflected from only one side of the disk, astronomers conclude the disk must be warped like the brim of a hat. The bright white spot at the image's center is light from the vicinity of the black hole which is illuminating the disk. [Left] A ground-based telescopic view of the core of the elliptical galaxy NGC 6251. The inset box shows Hubble Space Telescope's field of view. The galaxy is 300 million light-years away in the constellation Ursa Minor. Photo Credit: Philippe Crane (European Southern Observatory), and NASA
STS-109 Mission Highlights Resource Tape
NASA Astrophysics Data System (ADS)
2002-05-01
This video, Part 3 of 4, shows the activities of the STS-109 crew (Scott Altman, Commander; Duane Carey, Pilot; John Grunsfeld, Payload Commander; Nancy Currie, James Newman, Richard Linnehan, Michael Massimino, Mission Specialists) during flight days 6 and 7. The activities from other flight days can be seen on 'STS-109 Mission Highlights Resource Tape' Part 1 of 4 (internal ID 2002139471), 'STS-109 Mission Highlights Resource Tape' Part 2 of 4 (internal ID 2002137664), and 'STS-109 Mission Highlights Resource Tape' Part 4 of 4 (internal ID 2002137577). Flight day 6 features a very complicated EVA (extravehicular activity) to service the HST (Hubble Space Telescope). Astronauts Grunsfeld and Linnehan replace the HST's power control unit, disconnecting and reconnecting 36 tiny connectors. The procedure includes the HST's first ever power down. The cleanup of spilled water from the coolant system in Grunsfeld's suit is shown. The pistol grip tool and two other space tools are also shown. On flight day 7, Newman and Massimino conduct an EVA. They replace the HST's FOC (Faint Object Camera) with the ACS (Advanced Camera for Surveys). The video ends with crew members playing in the shuttle's cabin with a model of the HST.
Having influence: faculty of color having influence in schools of nursing.
Hassouneh, Dena; Lutz, Kristin F
2013-01-01
Faculty of color (FOC) play an important role in mentoring students and other FOC in schools of nursing. However, the unique nature of the mentoring that FOC provide, which includes transmission of expert knowledge of the operations of racism in nursing academe, is not well understood. Furthermore, the influence FOC have on school cultures has not been well documented. To address this gap in knowledge we conducted a critical grounded theory study with 23 FOC in predominately Euro-American schools of nursing. Findings indicate that FOC Having Influence is a key process that explicates the influence FOC wield, exposing their work, which is often taken for granted, hidden, and unacknowledged. FOC Having Influence occurred in two areas: 1) the survival and success of students and FOC and 2) shaping practices in schools of nursing and impacting health in communities. Implications for educational practice and future research are presented, based on study findings. Copyright © 2013 Elsevier Inc. All rights reserved.
Effects of continuous midwifery labour support for women with severe fear of childbirth.
Sydsjö, Gunilla; Blomberg, Marie; Palmquist, Sofie; Angerbjörn, Louise; Bladh, Marie; Josefsson, Ann
2015-05-15
Continuous support by a midwife during childbirth has shown positive effects on the duration of active labour, use of pain relief and frequency of caesarean section (CS) in women without fear of childbirth (FOC). We have evaluated how continuous support by a specially assigned midwife during childbirth affects birth outcome and the subjective experience of women with severe FOC. A case-control pilot study with an index group of 14 women with severe FOC and a reference group of 28 women without FOC giving birth. In this study the index group received continuous support during childbirth. The women with severe FOC more often had an induction of labour. The parous women with severe FOC had a shorter duration of active labour compared to the parous reference women (p = 0.047). There was no difference in caesarean section frequency between the two groups. Women with severe FOC experienced a very high anxiety level during childbirth (OR = 20.000, 95% CI: 3.036-131.731). Women with severe FOC might benefit from continuous support by a midwife during childbirth. Midwives should acknowledge the importance of continuous support in order to enhance the experience of childbirth in women with severe FOC.
Changes in the Proteome of Xylem Sap in Brassica oleracea in Response to Fusarium oxysporum Stress
Pu, Zijing; Ino, Yoko; Kimura, Yayoi; Tago, Asumi; Shimizu, Motoki; Natsume, Satoshi; Sano, Yoshitaka; Fujimoto, Ryo; Kaneko, Kentaro; Shea, Daniel J.; Fukai, Eigo; Fuji, Shin-Ichi; Hirano, Hisashi; Okazaki, Keiichi
2016-01-01
Fusarium oxysporum f. sp. conglutinans (Foc) is a serious root-invading and xylem-colonizing fungus that causes yellowing in Brassica oleracea. To comprehensively understand the interaction between F. oxysporum and B. oleracea, the composition of the xylem sap proteome of non-infected and Foc-infected plants was investigated in both resistant and susceptible cultivars using liquid chromatography-tandem mass spectrometry (LC-MS/MS) after in-solution digestion of xylem sap proteins. Whole genome sequencing of Foc was carried out and generated a predicted Foc protein database. The predicted Foc protein database was then combined with the public B. oleracea and B. rapa protein databases downloaded from Uniprot and used for protein identification. About 200 plant proteins were identified in the xylem sap of susceptible and resistant plants. Comparison between the non-infected and Foc-infected samples revealed that Foc infection changes the protein composition of B. oleracea xylem sap, with repressed proteins accounting for a greater proportion than induced proteins in both the susceptible and resistant reactions. Analysis of the proteins with a concentration change of at least 2-fold indicated that a large portion of the up- and down-regulated proteins act on carbohydrates. Proteins with leucine-rich repeats and legume lectin domains were mainly induced in both the resistant and susceptible systems, as was the case for thaumatins. Twenty-five Foc proteins were identified in the infected xylem sap and 10 of them were cysteine-containing secreted small proteins that are good candidates for virulence and/or avirulence effectors. The observed differential responses of protein content in the xylem sap between the non-infected and Foc-infected samples, as well as the candidate Foc effectors secreted in the xylem, provide valuable insights into B. oleracea-Foc interactions. PMID:26870056
Qin, Shiwen; Ji, Chunyan; Li, Yunfeng; Wang, Zhenzhong
2017-01-01
The fungal pathogen Fusarium oxysporum f. sp. cubense causes Fusarium wilt, one of the most destructive diseases in banana and plantain cultivars. Pathogenic race 1 attacks the “Gros Michel” banana cultivar, and race 4 is pathogenic to the Cavendish banana cultivar and those cultivars that are susceptible to Foc1. To understand the divergence in gene expression modules between the two races during degradation of the host cell wall, we performed RNA sequencing to compare the genome-wide transcriptional profiles of the two races grown in media containing banana cell wall, pectin, or glucose as the sole carbon source. Overall, the gene expression profiles of Foc1 and Foc4 in response to host cell wall or pectin appeared remarkably different. When grown with host cell wall, a much larger number of genes showed altered levels of expression in Foc4 in comparison with Foc1, including genes encoding carbohydrate-active enzymes (CAZymes) and other virulence-related genes. Additionally, the levels of gene expression were higher in Foc4 than in Foc1 when grown with host cell wall or pectin. Furthermore, a great majority of genes were differentially expressed in a variety-specific manner when induced by host cell wall or pectin. More specific CAZymes and other pathogenesis-related genes were expressed in Foc4 than in Foc1 when grown with host cell wall. The first transcriptome profiles obtained for Foc during degradation of the host cell wall may provide new insights into the mechanism of banana cell wall polysaccharide decomposition and the genetic basis of Foc host specificity. PMID:28468818
NASA Astrophysics Data System (ADS)
Nunez, Jorge; Llacer, Jorge
1993-10-01
This paper describes a general Bayesian iterative algorithm with entropy prior for image reconstruction. It solves the cases of both pure Poisson data and Poisson data with Gaussian readout noise. The algorithm maintains positivity of the solution; it includes case-specific prior information (default map) and flatfield corrections; it removes background and can be accelerated to be faster than the Richardson-Lucy algorithm. In order to determine the hyperparameter that balances the entropy and likelihood terms in the Bayesian approach, we have used a likelihood cross-validation technique. Cross-validation is more robust than other methods because it is less demanding in terms of the knowledge of exact data characteristics and of the point-spread function. We have used the algorithm to reconstruct successfully images obtained in different space- and ground-based imaging situations. It has been possible to recover most of the original intended capabilities of the Hubble Space Telescope (HST) wide field and planetary camera (WFPC) and faint object camera (FOC) from images obtained in their present state. Semi-real simulations for the future wide field planetary camera 2 show that even after the repair of the spherical aberration problem, image reconstruction can play a key role in improving the resolution of the cameras, well beyond the design of the Hubble instruments. We also show that ground-based images can be reconstructed successfully with the algorithm. A technique which consists of dividing the CCD observations into two frames, with one-half the exposure time each, emerges as a recommended procedure for the utilization of the described algorithms. We have compared our technique with two commonly used reconstruction algorithms: the Richardson-Lucy and the Cambridge maximum entropy algorithms.
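For readers unfamiliar with the baseline this algorithm is compared against, here is a minimal sketch of the classical Richardson-Lucy update for pure Poisson data; this is standard textbook material, not the authors' code, and it omits the entropy prior, background term, flatfield correction, and acceleration described above. The NumPy/SciPy usage is an illustrative assumption.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(image, psf, n_iter=50, eps=1e-12):
    """Classical Richardson-Lucy deconvolution for pure Poisson data."""
    psf = psf / psf.sum()                      # normalize the point-spread function
    psf_mirror = psf[::-1, ::-1]               # adjoint (flipped) PSF
    estimate = np.full(image.shape, image.mean(), dtype=float)  # flat starting image
    for _ in range(n_iter):
        blurred = fftconvolve(estimate, psf, mode="same") + eps  # current model
        ratio = image / blurred                # data-to-model comparison
        estimate *= fftconvolve(ratio, psf_mirror, mode="same")  # multiplicative update
    return estimate
```

The multiplicative update is what preserves positivity, the property highlighted in the abstract; the Bayesian variant described above modifies this iteration with an entropy prior and chooses the entropy-likelihood trade-off by likelihood cross-validation.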
Baas, M. A. M.; Stramrood, C. A. I.; Dijksman, L. M.; de Jongh, A.; van Pampus, M. G.
2017-01-01
Background: Approximately 3% of women develop posttraumatic stress disorder (PTSD) after giving birth, and 7.5% of pregnant women show a pathological fear of childbirth (FoC). FoC or childbirth-related PTSD during (a subsequent) pregnancy can lead to a request for an elective caesarean section as well as adverse obstetrical and neonatal outcomes. For PTSD in general, and several subtypes of specific phobia, eye movement desensitization and reprocessing (EMDR) therapy has been proven effective, but little is known about the effects of applying EMDR during pregnancy. Objective: To describe the protocol of the OptiMUM-study. The main aim of the study is to determine whether EMDR therapy is an effective and safe treatment for pregnant women with childbirth-related PTSD or FoC. In addition, the cost-effectiveness of this approach will be analysed. Method: The single-blind OptiMUM-study consists of two two-armed randomized controlled trials (RCTs) with overlapping design. In several hospitals and community midwifery practices in Amsterdam, the Netherlands, all eligible pregnant women with a gestational age between eight and 20 weeks will be administered the Wijma delivery expectations questionnaire (WDEQ) to assess FoC. Multiparous women will also receive the PTSD checklist for DSM-5 (PCL-5) to screen for possible PTSD. The clinician-administered PTSD scale (CAPS-5) will be used for assessing PTSD according to DSM-5 in women scoring above the PCL-5 cut-off value. Fifty women with childbirth-related PTSD and 120 women with FoC will be randomly allocated to either EMDR therapy carried out by a psychologist or care-as-usual. Women currently undergoing psychological treatment or women younger than 18 years will not be included. Primary outcome measures are severity of childbirth-related PTSD or FoC symptoms. Secondary outcomes are percentage of PTSD diagnoses, percentage of caesarean sections, subjective childbirth experience, obstetrical and neonatal complications, and health care costs. Results: The results are meant to provide more insight into the safety and possible effectiveness of EMDR therapy during pregnancy for women with PTSD or FoC. Conclusion: This study is the first RCT studying the efficacy and safety of EMDR in pregnant women with PTSD after childbirth or fear of childbirth. PMID:28348720
Sharma, Mamta; Nagavardhini, Avuthu; Thudi, Mahendar; Ghosh, Raju; Pande, Suresh; Varshney, Rajeev K
2014-06-10
Fusarium oxysporum f. sp. ciceris (Foc), the causal agent of Fusarium wilt of chickpea, is highly variable, and the frequent recurrence of virulent forms has affected chickpea production and exhausted valuable genetic resources. The severity and yield losses of Fusarium wilt differ from place to place owing to the existence of physiological races among isolates. Diversity study of the fungal population associated with a disease plays a major role in understanding and devising better disease control strategies. The advantages of using molecular markers to understand the distribution of genetic diversity in Foc populations are well understood. The recent development of Diversity Arrays Technology (DArT) offers new possibilities to study the diversity in pathogen populations. In this study, we developed DArT markers for the Foc population, analysed the genetic diversity existing within and among Foc isolates, compared the genotypic and phenotypic diversity, and inferred the race scenario of Foc in India. We report the successful development of DArT markers for Foc and their utility in genotyping of Foc collections representing five chickpea-growing agro-ecological zones of India. The DArT arrays revealed a total of 1,813 polymorphic markers with an average genotyping call rate of 91.16% and a scoring reproducibility of 100%. Cluster analysis, principal coordinate analysis and population structure indicated that the different isolates of Foc were partially classified based on geographical source. Diversity in the Foc population was compared with the phenotypic variability, and it was found that the DArT markers were able to group the isolates consistently with their virulence groups. A number of race-specific unique and rare alleles were also detected. The present study generated significant information on the pathogenic and genetic diversity of Foc, which could be used further for the development and deployment of region-specific resistant cultivars of chickpea. The DArT markers proved to be a powerful diagnostic tool to study the genotypic diversity in Foc. The high number of DArT markers allowed a greater resolution of genetic differences among isolates and enabled us to examine the extent of diversity in the Foc population present in India, as well as providing support for understanding the changing race scenario in the Foc population.
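As an illustration of the kind of cluster analysis mentioned above, here is a generic sketch on a toy presence/absence matrix; it is not the authors' pipeline, and Jaccard distance with UPGMA (average) linkage is simply one common choice for dominant markers such as DArT.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Toy presence/absence matrix: rows = isolates, columns = DArT markers.
markers = np.array([
    [1, 0, 1, 1, 0, 1],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 1, 0],
    [0, 1, 0, 1, 1, 1],
])

dist = pdist(markers, metric="jaccard")           # pairwise dissimilarity between isolates
tree = linkage(dist, method="average")            # UPGMA hierarchical clustering
print(fcluster(tree, t=2, criterion="maxclust"))  # assign isolates to two groups
```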
Dong, Zhangyong; Wang, Zhenzhong
2015-04-03
Fusarium wilt (Panama disease) caused by Fusarium oxysporum f. sp. cubense (FOC) represents a significant threat to banana (Musa spp.) production. Musa AAB is susceptible to Race 1 (FOC1) and Race 4 (FOC4), while Cavendish Musa AAA is resistant to FOC1 but still susceptible to Race 4. A polygalacturonase (PGC3) was purified from the culture supernatant of Fusarium oxysporum f. sp. cubense race 4 (FOC4). PGC3 had an apparent molecular weight of 45 kDa according to SDS-PAGE. The enzyme hydrolyzed polygalacturonic acid in an exo-manner, as demonstrated by analysis of the degradation products. The Km and Vmax values of PGC3 from FOC4 were determined to be 0.70 mg·mL-1 and 101.01 Units·mg protein-1·min-1, respectively. Two pgc3 genes encoding PGC3 were identified in FOC4 and FOC1; both genes are 1368 bp in length and encode 456 amino-acid residues with a predicted signal peptide of 21 amino acids. There are 16 nucleotide differences between FOC4-pgc3 and FOC1-pgc3, leading to only four amino-acid differences. In order to obtain adequate amounts of protein for functional studies, the two genes were cloned into the expression vector pPICZaA and then expressed in Pichia pastoris strain SMD1168. The recombinant PGC3 proteins, r-FOC1-PGC3 and r-FOC4-PGC3, were expressed and purified as active proteins. The optimal PGC3 activity was observed at 50 °C and pH 4.5. Both recombinant PGC3 proteins retained >40% activity at pH 3-7 and >50% activity at 10-50 °C. Both recombinant PGC3 proteins could induce a response, with different levels of tissue maceration and necrosis, in banana plants. In sum, our results indicate that PGC3 is an exo-PG and can be produced with full function in P. pastoris.
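For context, the reported kinetic constants slot into the standard Michaelis-Menten rate law; this is textbook enzyme kinetics, not an equation given in the abstract, and the sketch simply checks the half-saturation behaviour.

```python
def michaelis_menten(s, vmax=101.01, km=0.70):
    """Michaelis-Menten rate (Units per mg protein per min) at substrate
    concentration s (mg/mL), using the Km and Vmax reported for PGC3."""
    return vmax * s / (km + s)

# At a substrate concentration equal to Km, the rate is half of Vmax:
print(michaelis_menten(0.70))  # ~50.5
```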
Beyer, Lydia; Doberenz, Claudia; Falke, Dörte; Hunger, Doreen; Suppmann, Bernhard
2013-01-01
Enterobacteria such as Escherichia coli generate formate, lactate, acetate, and succinate as major acidic fermentation products. Accumulation of these products in the cytoplasm would lead to uncoupling of the membrane potential, and therefore they must be either metabolized rapidly or exported from the cell. E. coli has three membrane-localized formate dehydrogenases (FDHs) that oxidize formate. Two of these have their respective active sites facing the periplasm, and the other is in the cytoplasm. The bidirectional FocA channel translocates formate across the membrane delivering substrate to these FDHs. FocA synthesis is tightly coupled to synthesis of pyruvate formate-lyase (PflB), which generates formate. In this study, we analyze the consequences on the fermentation product spectrum of altering FocA levels, uncoupling FocA from PflB synthesis or blocking formate metabolism. Changing the focA translation initiation codon from GUG to AUG resulted in a 20-fold increase in FocA during fermentation and an ∼3-fold increase in PflB. Nevertheless, the fermentation product spectrum throughout the growth phase remained similar to that of the wild type. Formate, acetate, and succinate were exported, but only formate was reimported by these cells. Lactate accumulated in the growth medium only in mutants lacking FocA, despite retaining active PflB, or when formate could not be metabolized intracellularly. Together, these results indicate that FocA has a strong preference for formate as a substrate in vivo and not other acidic fermentation products. The tight coupling between FocA and PflB synthesis ensures adequate substrate delivery to the appropriate FDH. PMID:23335413
Wang, Wei; Hu, Yulin; Sun, Dequan; Staehelin, Christian; Xin, Dawei; Xie, Jianghui
2012-01-01
Fusarium wilt caused by the fungus Fusarium oxysporum f. sp. cubense race 4 (FOC4) results in vascular tissue damage and ultimately death of banana (Musa spp.) plants. Somaclonal variants of in vitro micropropagated banana can hamper success in propagation of genotypes resistant to FOC4. Early identification of FOC4 resistance in micropropagated banana plantlets is difficult, however. In this study, we identified sequence-characterized amplified region (SCAR) markers of banana associated with resistance to FOC4. Using pooled DNA from resistant or susceptible genotypes and 500 arbitrary 10-mer oligonucleotide primers, 24 random amplified polymorphic DNA (RAPD) products were identified. Two of these RAPD markers were successfully converted to SCAR markers, called ScaU1001 (GenBank accession number HQ613949) and ScaS0901 (GenBank accession number HQ613950). ScaS0901 and ScaU1001 could be amplified in FOC4-resistant banana genotypes ("Williams 8818-1" and Goldfinger), but not in five tested banana cultivars susceptible to FOC4. The two SCAR markers were then used to identify a somaclonal variant of the genotype "Williams 8818-1", which lost resistance to FOC4. Hence, the identified SCAR markers can be applied for a rapid quality control of FOC4-resistant banana plantlets immediately after the in vitro micropropagation stage. Furthermore, ScaU1001 and ScaS0901 will facilitate marker-assisted selection of new banana cultivars resistant to FOC4.
Doberenz, Claudia; Zorn, Michael; Falke, Dörte; Nannemann, David; Hunger, Doreen; Beyer, Lydia; Ihling, Christian H; Meiler, Jens; Sinz, Andrea; Sawers, R Gary
2014-07-29
The FNT (formate-nitrite transporters) form a superfamily of pentameric membrane channels that translocate monovalent anions across biological membranes. FocA (formate channel A) translocates formate bidirectionally but the mechanism underlying how translocation of formate is controlled and what governs substrate specificity remains unclear. Here we demonstrate that the normally soluble dimeric enzyme pyruvate formate-lyase (PflB), which is responsible for intracellular formate generation in enterobacteria and other microbes, interacts specifically with FocA. Association of PflB with the cytoplasmic membrane was shown to be FocA dependent and purified, Strep-tagged FocA specifically retrieved PflB from Escherichia coli crude extracts. Using a bacterial two-hybrid system, it could be shown that the N-terminus of FocA and the central domain of PflB were involved in the interaction. This finding was confirmed by chemical cross-linking experiments. Using constraints imposed by the amino acid residues identified in the cross-linking study, we provide for the first time a model for the FocA-PflB complex. The model suggests that the N-terminus of FocA is important for interaction with PflB. An in vivo assay developed to monitor changes in formate levels in the cytoplasm revealed the importance of the interaction with PflB for optimal translocation of formate by FocA. This system represents a paradigm for the control of activity of FNT channel proteins. Copyright © 2014 Elsevier Ltd. All rights reserved.
HST observations of globular clusters in M 31. 1: Surface photometry of 13 objects
NASA Technical Reports Server (NTRS)
Pecci, F. Fusi; Battistini, P.; Bendinelli, O.; Bonoli, F.; Cacciari, C.; Djorgovski, S.; Federici, L.; Ferraro, F. R.; Parmeggiani, G.; Weir, N.
1994-01-01
We present the initial results of a study of globular clusters in M 31, using the Faint Object Camera (FOC) on the Hubble Space Telescope (HST). The sample consists of 13 clusters spanning a range of properties. Three independent image deconvolution techniques were used in order to compensate for the optical problems of the HST, leading to mutually fully consistent results. We present detailed tests and comparisons to determine the reliability and limits of these deconvolution methods, and conclude that high-quality surface photometry of M 31 globulars is possible with the HST data. Surface brightness profiles have been extracted, and core radii, half-light radii, and central surface brightness values have been measured for all of the clusters in the sample. Their comparison with the values from ground-based observations indicates the latter to be systematically and strongly biased by seeing effects, as may be expected. A comparison of the structural parameters with those of the Galactic globulars shows that the structural properties of the M 31 globulars are very similar to those of their Galactic counterparts. A candidate post-core-collapse cluster, Bo 343 = G 105, has already been identified from these data; this is the first such detection in the M 31 globular cluster system.
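As a rough illustration of one of the quantities measured above, a half-light radius can be read off a tabulated circular surface-brightness profile by integrating the enclosed flux. This is a generic sketch, not the authors' procedure; it assumes a circular profile, ignores light interior to the first measured radius, and truncates at the outermost one.

```python
import numpy as np

def half_light_radius(r, mu):
    """Half-light radius from a circular surface-brightness profile.

    r  : profile radii (arcsec), strictly increasing
    mu : surface brightness at each radius (mag / arcsec^2)
    """
    intensity = 10 ** (-0.4 * mu)                       # magnitudes -> linear intensity
    ring_flux = np.pi * (r[1:] ** 2 - r[:-1] ** 2) * 0.5 * (intensity[1:] + intensity[:-1])
    enclosed = np.concatenate(([0.0], np.cumsum(ring_flux)))  # flux enclosed within each r
    return np.interp(0.5 * enclosed[-1], enclosed, r)   # radius enclosing half the light
```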
Zhang, Fengge; Yang, Xingming; Ran, Wei; Shen, Qirong
2014-10-01
Trichoderma species have been used widely as biocontrol agents for the suppression of soil-borne pathogens. However, some antagonistic mechanisms of Trichoderma are not well characterized. In this study, a series of laboratory experiments was designed to characterize the importance of mycoparasitism, exoenzymes, and volatile organic compounds (VOCs) produced by Trichoderma harzianum T-E5 for the control of Fusarium oxysporum f. sp. cucumerinum (FOC). We further tested whether these mechanisms were inducible and upregulated in the presence of FOC. The results were as follows: T-E5 heavily parasitized FOC by coiling and twisting around the entire mycelium of the pathogen in dual cultures. T-E5 growing medium conditioned with deactivated FOC (T2) contained more proteins and showed higher cell wall-degrading enzyme activities than T1, suggesting that FOC could induce the upregulation of exoenzymes. The presence of deactivated FOC (T2') also resulted in the upregulation of VOCs: five and eight different types of T-E5-derived VOCs were identified from T1' and T2', respectively. Furthermore, the VOCs excreted in T2' showed significantly higher antifungal activity against FOC than those in T1'. In conclusion, mycoparasitism of T-E5 against FOC involved mycelium contact and the production of complex extracellular substances. Together, these data provide clues to help further clarify the interactions between these fungi. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
Chand, Subodh K; Nanda, Satyabrata; Joshi, Raj K
2016-01-01
MicroRNAs (miRNAs) are a class of post-transcriptional regulators that negatively regulate gene expression through target mRNA cleavage or translational inhibition and play important roles in plant development and stress response. In the present study, six conserved miRNAs from garlic (Allium sativum L.) were analyzed to identify differentially expressed miRNAs in response to Fusarium oxysporum f. sp. cepae (FOC) infection. Stem-loop RT-PCR revealed that miR394 is significantly induced in garlic seedlings after 72 h of treatment with FOC. The induction of miR394 expression during FOC infection was restricted to the basal stem plate tissue, the primary site of infection. Garlic miR394 was also upregulated by exogenous application of jasmonic acid. Two putative targets of miR394, encoding F-box domain and cytochrome P450 (CYP450) family proteins, were predicted and verified using a 5' RLM-RACE (RNA ligase mediated rapid amplification of cDNA ends) assay. Quantitative RT-PCR showed that the transcript levels of the predicted targets were significantly reduced in garlic plants exposed to FOC. When garlic cultivars with variable sensitivity to FOC were exposed to the pathogen, an upregulation of miR394 and downregulation of the targets were observed in both varieties. However, the expression pattern was delayed in the resistant genotypes. These results suggest that miR394 functions in negative modulation of FOC resistance, and the differences in the timing and levels of expression between genotypes could be examined as markers for selection of FOC-resistant garlic cultivars.
First-order and higher order sequence learning in specific language impairment.
Clark, Gillian M; Lum, Jarrad A G
2017-02-01
A core claim of the procedural deficit hypothesis of specific language impairment (SLI) is that the disorder is associated with poor implicit sequence learning. This study investigated whether implicit sequence learning problems in SLI are present for first-order conditional (FOC) and higher order conditional (HOC) sequences. Twenty-five children with SLI and 27 age-matched, nonlanguage-impaired children completed 2 serial reaction time tasks. On 1 version, the sequence to be implicitly learnt comprised a FOC sequence and on the other a HOC sequence. Results showed that the SLI group learned the HOC sequence (ηp² = .285, p = .005) but not the FOC sequence (ηp² = .099, p = .118). The control group learned both sequences (FOC ηp² = .497, HOC ηp² = .465, ps < .001). The SLI group's difficulty learning the FOC sequence is consistent with the procedural deficit hypothesis. However, the study provides new evidence that multiple mechanisms may underpin the learning of FOC and HOC sequences. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
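To make the FOC/HOC distinction concrete, here is a small illustrative sketch, not the sequences used in the study: in a first-order conditional sequence the next stimulus location is fully determined by the current location, whereas in a (second-order) higher-order conditional sequence it is determined only by the previous two locations. The transition tables below are illustrative assumptions.

```python
# Four stimulus locations, numbered 1-4.

# FOC: the next location depends only on the current location.
FOC_RULE = {1: 3, 3: 2, 2: 4, 4: 1}

# HOC (second-order): the next location depends on the previous two locations;
# the current location alone is ambiguous (e.g. a 1 can be followed by 2, 3 or 4
# depending on what preceded it).
HOC_RULE = {
    (1, 2): 1, (2, 1): 3, (1, 3): 4, (3, 4): 2, (4, 2): 3, (2, 3): 1,
    (3, 1): 4, (1, 4): 3, (4, 3): 2, (3, 2): 4, (2, 4): 1, (4, 1): 2,
}

def foc_sequence(length, start=1):
    seq = [start]
    while len(seq) < length:
        seq.append(FOC_RULE[seq[-1]])
    return seq

def hoc_sequence(length, start=(1, 2)):
    seq = list(start)
    while len(seq) < length:
        seq.append(HOC_RULE[(seq[-2], seq[-1])])
    return seq

print(foc_sequence(12))  # [1, 3, 2, 4, 1, 3, 2, 4, ...]
print(hoc_sequence(12))  # [1, 2, 1, 3, 4, 2, 3, 1, 4, 3, 2, 4]
```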
Imaging the Surfaces of Stars from Space
NASA Astrophysics Data System (ADS)
Carpenter, Kenneth; Rau, Gioia
2018-04-01
Imaging of stellar surfaces has been dominated to date by ground-based observations, but space-based facilities offer tremendous potential for extending the wavelength coverage and ultimately the resolution of such efforts. We review the imaging accomplished so far from space and then discuss exciting future prospects. The earliest attempts from space indirectly produced surface maps via the Doppler imaging technique, using UV spectra obtained with the International Ultraviolet Explorer (IUE). Later, the first direct UV images, of Mira and Betelgeuse, were obtained with the Hubble Space Telescope (HST) using the Faint Object Camera (FOC). We will show this work and then investigate prospects for IR imaging with the James Webb Space Telescope (JWST). The real potential of space-based imaging of stellar surfaces, however, lies in the future, when large-baseline Fizeau interferometers, such as the UV-optical Stellar Imager (SI) Vision Mission, with a 30-element array and 500 m maximum baseline, are flown. We describe SI and its science goals, which include 0.1 milli-arcsec spectral imaging of stellar surfaces and the probing of internal structure and flows via asteroseismology.
Nilsson, Christina; Lundgren, Ingela; Karlström, Annika; Hildingsson, Ingegerd
2012-09-01
To explore fear of childbirth (FOC) during pregnancy and one year after birth and its association with birth experience and mode of delivery. A longitudinal population-based study. Pregnant women who were listed for a routine ultrasound at three hospitals in the middle-north part of Sweden. Differences between women who reported FOC and those who did not were calculated using risk ratios with 95% confidence intervals. In order to identify which factors were most strongly associated with suffering from FOC during pregnancy and one year after childbirth, multivariate logistic regression analyses were used. FOC during pregnancy in multiparous women was associated with a previous negative birth experience (RR 5.1, CI 2.5-10.4) and a previous emergency caesarean section (RR 2.5, CI 1.2-5.4). Associated factors for FOC one year after childbirth were: a negative birth experience (RR 10.3, CI 5.1-20.7), fear of childbirth during pregnancy (RR 7.1, CI 4.4-11.7), emergency caesarean section (RR 2.4, CI 1.2-4.5) and primiparity (RR 1.9, CI 1.2-3.1). FOC was associated with negative birth experiences. Women still perceived the birth experience as negative a year after the event. Women's perception of the overall birth experience as negative seems to be more important in explaining subsequent FOC than mode of delivery. Maternity care should focus on women's experiences of childbirth. Staff at antenatal clinics should ask multiparous women about their previous experience of childbirth. To minimize FOC, research on factors that create a positive birth experience for women is required. Copyright © 2011 Australian College of Midwives. Published by Elsevier Ltd. All rights reserved.
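As a minimal illustration of the risk-ratio estimates quoted above, a risk ratio and its log-normal 95% confidence interval can be computed from a 2x2 table as follows; the counts in the example are hypothetical, not data from the study.

```python
import math

def risk_ratio_ci(a, b, c, d, z=1.96):
    """Risk ratio and 95% CI from a 2x2 table.

    a, b : exposed group (e.g. previous negative birth experience), with / without FOC
    c, d : unexposed group, with / without FOC
    """
    rr = (a / (a + b)) / (c / (c + d))
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, (lower, upper)

# Hypothetical counts, purely for illustration:
print(risk_ratio_ci(30, 70, 10, 190))  # -> (6.0, (approx. 3.1, 11.8))
```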
Sericea lespedeza as an aid in the control of Eimeria spp. in lambs.
Burke, J M; Miller, J E; Terrill, T H; Orlik, S T; Acharya, M; Garza, J J; Mosjidis, J A
2013-03-31
The objective was to examine the effects of feeding sericea lespedeza leaf meal (SL) on control of coccidiosis in lambs. In Exp. 1, naturally infected lambs (n=76) were weaned (102.7±1.4 d of age) in May (spring) and randomly assigned in a 2×2 factorial design to receive 2% of BW/d of alfalfa pellets (control) or SL, with or without amprolium added to drinking water (n=38/level or 19/treatment). Fecal oocyst counts (FOC), egg counts (FEC), and fecal score (1=solid pellets; 5=slurry) were determined every 7 d between weaning and 21 d post-weaning. In Exp. 2, twin-rearing ewes were randomly assigned to two groups, and their naturally infected lambs were fed a control creep supplement (16% CP; n=40) or SL pellets (14% CP; n=32) 30 d before weaning. Intake of SL was initially low (100 g/lamb daily) and increased to 454 g/lamb daily after weaning. Lambs were weaned at 103.6±0.9 d of age and moved to semi-confinement. The FEC, FOC, packed cell volume (PCV), fecal score, and dag score (soiling around rear of lamb; 1=no soiling; 5=heavy soiling) were determined at d -14, 0 (weaning), 7, 14, and 21. In Exp. 3, lambs were randomly assigned to a control or SL diet (n=12/diet) fed at 1.4 kg/d for 22 d and inoculated with 50,000 sporulated oocysts on d 8, 11, and 13. The FEC, FOC, and fecal score were determined every 2 to 3 d between d 1 and 29 (d 0=first day of dietary treatment). Data from all experiments were analyzed using mixed models. The FOC and FEC data were log-transformed. Chi-squared analysis was used to determine differences in the incidence of treatment (sulfadimethoxine) for coccidiosis in Exp. 1 and 2. In Exp. 1, FOC and FEC were similar between dietary groups, and FOC declined more rapidly in amprolium-treated lambs following weaning (P<0.001). Fecal score was higher in the control compared with the SL-fed lambs (P=0.05), suggesting more signs of coccidiosis in control lambs. In Exp. 2, FOC was similar initially but was reduced in SL-fed lambs by weaning and remained lower thereafter (P=0.004). Dag (P=0.01) and fecal (P=0.001) scores were similar before weaning, but lower in SL-fed lambs by weaning and remained lower thereafter. No SL lambs required treatment for coccidiosis, whereas 33% of control lambs required treatment (P<0.001). Fecal egg counts were similar before weaning but were reduced in SL compared with control-fed lambs after weaning (P<0.001). In Exp. 3, FOC (P<0.001) and FEC (P<0.001) were reduced in SL compared with control-fed lambs. Sericea lespedeza was effective in the prevention and control of coccidiosis as well as in reducing GIN infection. Use of SL could reduce lamb loss post-weaning, reduce the need to treat for coccidiosis, and create a significant economic benefit for livestock producers. Published by Elsevier B.V.
ERIC Educational Resources Information Center
Lutz, Kristin F.; Hassouneh, Dena; Akeroyd, Jen; Beckett, Ann K.
2013-01-01
This report of findings from a grounded theory study conducted with 23 faculty of color (FOC) in predominately Euro American schools of nursing presents the central process used by FOC as they navigate academic careers as persons of color. As FOC struggled to progress in their careers and influence their academic environments they engaged in a…
NASA Astrophysics Data System (ADS)
Tremoulet, P. C.
The author describes a number of maintenance improvements in the Fiber Optic Cable System (FOCS). They were achieved during a production phase pilot concurrent engineering program. Listed in order of importance (saved maintenance time and material) by maintenance level, they are: (1) organizational level: improved fiber optic converter (FOC) BITE; (2) Intermediate level: reduced FOC adjustments from 20 to 2; partitioned FOC into electrical and optical parts; developed cost-effective fault isolation test points and test using standard test equipment; improved FOC chassis to have lower mean time to repair; and (3) depot level: revised test requirements documents (TRDs) for common automatic test equipment and incorporated ATE testability into circuit and assemblies and application-specific integrated circuits. These improvements met this contract's tailored logistics MIL-STD 1388-1A requirements of monitoring the design for supportability and determining the most effective support equipment. Important logistics lessons learned while accomplishing these maintainability and supportability improvements on the pilot concurrent engineering program are also discussed.
Zhou, Jinyan; Wang, Min; Sun, Yuming; Gu, Zechen; Wang, Ruirui; Saydin, Asanjan; Shen, Qirong; Guo, Shiwei
2017-03-11
Cucumber Fusarium wilt, induced by Fusarium oxysporum f. sp. cucumerinum (FOC), causes severe losses in cucumber yield and quality. Nitrogen (N), as the most important mineral nutrient for plants, plays a critical role in plant-pathogen interactions. Hydroponic assays were conducted to investigate the effects of different N forms (NH₄⁺ vs. NO₃⁻) and supply levels (low, 1 mM; high, 5 mM) on cucumber Fusarium wilt. The NO₃⁻-fed cucumber plants were more tolerant to Fusarium wilt compared with NH₄⁺-fed plants, and accompanied by lower leaf temperature after FOC infection. The disease index decreased as the NO₃⁻ supply increased but increased with the NH₄⁺ level supplied. Although the FOC grew better under high NO₃⁻ in vitro, FOC colonization and fusaric acid (FA) production decreased in cucumber plants under high NO₃⁻ supply, associated with lower leaf membrane injury. There was a positive correlation between the FA content and the FOC number or relative membrane injury. After the exogenous application of FA, less FA accumulated in the leaves under NO₃⁻ feeding, accompanied with a lower leaf membrane injury. In conclusion, higher NO₃⁻ supply protected cucumber plants against Fusarium wilt by suppressing FOC colonization and FA production in plants, and increasing the plant tolerance to FA.
Kumar, Yashwant; Zhang, Limin; Panigrahi, Priyabrata; Dholakia, Bhushan B; Dewangan, Veena; Chavan, Sachin G; Kunjir, Shrikant M; Wu, Xiangyu; Li, Ning; Rajmohanan, Pattuparambil R; Kadoo, Narendra Y; Giri, Ashok P; Tang, Huiru; Gupta, Vidya S
2016-07-01
Molecular changes elicited by plants in response to fungal attack, and how these affect plant-pathogen interactions, including susceptibility or resistance, remain elusive. We studied the dynamics of root metabolism during compatible and incompatible interactions between chickpea and Fusarium oxysporum f. sp. ciceri (Foc), using quantitative label-free proteomics and NMR-based metabolomics. Results demonstrated differential expression of proteins and metabolites upon Foc inoculation in the resistant plants compared with the susceptible ones. Additionally, expression analysis of candidate genes supported the proteomic and metabolic variations in the chickpea roots upon Foc inoculation. In particular, we found that the resistant plants showed a significant increase in carbon and nitrogen metabolism, generation of reactive oxygen species (ROS), lignification and phytoalexins. The levels of some of the pathogenesis-related proteins were significantly higher upon Foc inoculation in the resistant plants. Interestingly, the results also revealed the crucial role of an altered Yang cycle, which contributed to different methylation reactions and the unfolded protein response in the chickpea roots challenged with Foc. Overall, the observed modulations in metabolic flux, as the outcome of several orchestrated molecular events, are determinants of the plant's role in chickpea-Foc interactions. © 2016 The Authors. Plant Biotechnology Journal published by Society for Experimental Biology and The Association of Applied Biologists and John Wiley & Sons Ltd.
Treatment of nulliparous women with severe fear of childbirth via the Internet: a feasibility study.
Nieminen, Katri; Andersson, Gerhard; Wijma, Barbro; Ryding, Elsa-Lena; Wijma, Klaas
2016-01-01
The aim of the present study was to test the feasibility of Internet interventions among nulliparous women suffering from severe fear of childbirth (FOC) by means of an Internet-delivered, therapist-supported self-help program based on cognitive behavioral therapy (ICBT). Prospective, longitudinal cohort study. A feasibility study of an ICBT program for the treatment of severe FOC in pregnant women. Twenty-eight Swedish-speaking nulliparous women with severe FOC were recruited via a project home page from January 2012 to December 2013. The main components of the ICBT program for the treatment of severe FOC comprised psycho-education, breathing retraining, cognitive restructuring, imaginary exposure, in vivo exposure and relapse prevention. The study participants were anonymously self-recruited over the Internet, interviewed by telephone and then enrolled. All participants were offered 8 weeks of treatment via the Internet. Participants reported their homework weekly, submitted measurements of their fear and received feedback from a therapist via a secure online contact management system. Level of FOC was measured with the Wijma Delivery Expectancy/Experience Questionnaire during screening at enrollment and weekly during the treatment (W-DEQ version A), and after the delivery (W-DEQ version B). There was a statistically significant (p < 0.0005) decrease in FOC: the W-DEQ sum score decreased from pre- to post-therapy, with a large effect size (Cohen's d = 0.95). The results of this feasibility study suggest that ICBT has potential in the treatment of severe FOC during pregnancy in motivated nulliparous women. The results need to be confirmed by randomized controlled studies.
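For reference, an effect size of the kind reported above can be computed as in the sketch below, which uses the pooled-standard-deviation convention for a pre/post comparison; the scores are made up for illustration, and the exact convention used in the study is not stated in the abstract.

```python
import statistics

def cohens_d(pre, post):
    """Cohen's d for a pre/post comparison, using the pooled SD of the two
    score distributions (one of several common conventions)."""
    m_pre, m_post = statistics.mean(pre), statistics.mean(post)
    s_pre, s_post = statistics.stdev(pre), statistics.stdev(post)
    pooled_sd = ((s_pre ** 2 + s_post ** 2) / 2) ** 0.5
    return (m_pre - m_post) / pooled_sd

# Made-up W-DEQ-style sum scores, purely illustrative:
pre = [102, 95, 110, 99, 105, 92, 108]
post = [80, 78, 95, 82, 88, 75, 90]
print(round(cohens_d(pre, post), 2))
```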
Zhang, Xin; Zhang, He; Pu, Jinji; Qi, Yanxiang; Yu, Qunfang; Xie, Yixian; Peng, Jun
2013-01-01
Fusarium oxysporum f. sp. cubense (Foc), the causal agent of Fusarium wilt (Panama disease), is responsible for one of the most devastating diseases of banana (Musa spp.). Foc tropical race 4 (TR4) is currently a major concern for global banana production. No effective resistance to Foc is known in Musa, and no effective measures are in place for controlling Foc once banana plants have been infected. Early and accurate detection of Foc TR4 is therefore essential to protect the banana industry and guide banana planting. A real-time fluorescence loop-mediated isothermal amplification assay (RealAmp) was developed for the rapid and quantitative detection of Foc TR4 in soil. The detection limit of the RealAmp assay was approximately 0.4 pg/µl plasmid DNA when mixed with extracted soil DNA, or 10^3 spores/g of artificially infested soil, and no cross-reaction with other related pathogens was observed. The RealAmp assay for quantifying genomic DNA of TR4 was confirmed by testing both artificially and naturally infested samples. Quantification of the soil-borne pathogen DNA of Foc TR4 in naturally infested samples showed no significant difference compared with classic real-time PCR (P>0.05). Additionally, the RealAmp assay allows closed-tube visual detection: SYBR Green I fluorescent dye is added to the inside of the lid prior to amplification, which avoids the inhibitory effects of the stain on DNA amplification and makes the assay more convenient in the field. The RealAmp assay could thus become a simple, rapid and effective alternative tool for the detection and monitoring of Foc TR4 in the field and for routine DNA-based testing of this soil-borne pathogen in South China.
[Survey of occupational health practices of foreign-owned companies].
Nakamura, Saki; Maruyama, Takashi; Hasegawa, Kumi; Nagata, Tomohisa; Mori, Koji
2011-01-01
We conducted a survey to clarify the present state of the occupational health practices (OHPs) of foreign-owned companies (FOCs) in Japan. The results reveal that FOCs located in Japan as local subsidiaries pursue relatively strategic OHPs. Furthermore, the results should contribute to smoother global development of OHPs for international corporations with headquarters (HQs) in Japan. A total of 1,220 FOCs in Japan with at least 50 employees that are listed in Gaishikeikigyo-Soran (Overview of FOCs) 2009, published by Toyo Keizai, Inc., were targeted in our survey. A questionnaire with items concerning (1) the present situation of global and local OHP standards, (2) relationships with the overseas HQ, and (3) impressions regarding daily OHPs was sent to a high-ranking person engaged in OHPs at each FOC. We asked about renkei-kan (a sense of cooperation with the overseas HQ), a positive Japanese word, in order to evaluate preferable relationships between FOCs and their HQs. There were 123 valid responses. Of these, only 50 indicated the implementation of global standards (GS). Of the OHPs that were mentioned in GS, responses mainly included risk management for occupational diseases. With respect to local standards (LS), responses indicated that individual approaches toward each worker were an area of particular focus. Satisfaction with staff numbers and budget was high, although HQ involvement in staff numbers and budget control was low. Furthermore, 71.5% of respondents had low renkei-kan. We also found correlations among renkei-kan, GS availability, frequency of reporting to overseas superiors, audit interval, and understanding of the OHP organization at HQs. We found that FOCs established OHPs independently of their HQs and that they were satisfied with the present situation. On the other hand, many respondents do not have positive feelings, renkei-kan, toward their relationships with HQs. OHP staff of FOCs can enhance renkei-kan by making use of GS, identifying key HQ personnel, and building understanding of them and their organization through daily reports and regular audits.
Tunable fractional-order capacitor using layered ferroelectric polymers
NASA Astrophysics Data System (ADS)
Agambayev, Agamyrat; Patole, Shashikant; Bagci, Hakan; Salama, Khaled N.
2017-09-01
Pairs of various polyvinylidene fluoride P(VDF)-based polymers are used for fabricating bilayer fractional-order capacitors (FOCs). The polymer layers are constructed using a simple drop-casting approach. The resulting FOC has two advantages: it can be easily integrated with printed circuit boards, and its constant phase angle (CPA) can be tuned by changing the thickness ratio of the layers. Indeed, our experiments show that the CPA of the fabricated FOCs can be tuned within the range from -83° to -65° over the frequency band from 150 kHz to 10 MHz. Additionally, we provide an empirical formula describing the relationship between the thickness ratio and the CPA, which is highly useful for designing FOCs with the desired CPA.
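For context (standard fractional-order circuit theory, not a formula taken from the paper), an ideal FOC of order α and pseudo-capacitance C_α has the impedance

```latex
Z(\omega) = \frac{1}{C_\alpha\,(j\omega)^{\alpha}}, \qquad \arg Z(\omega) = -\alpha\,\frac{\pi}{2},
```

so the phase is constant across frequency, and a CPA between -83° and -65° corresponds to an order α between roughly 0.92 and 0.72; tuning the layer-thickness ratio is thus effectively tuning α.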
Nitrate Protects Cucumber Plants Against Fusarium oxysporum by Regulating Citrate Exudation.
Wang, Min; Sun, Yuming; Gu, Zechen; Wang, Ruirui; Sun, Guomei; Zhu, Chen; Guo, Shiwei; Shen, Qirong
2016-09-01
Fusarium wilt causes severe yield losses in cash crops. Nitrogen plays a critical role in the management of plant disease; however, the regulating mechanism is poorly understood. Using biochemical, physiological, bioinformatic and transcriptome approaches, we analyzed how nitrogen forms regulate the interactions between cucumber plants and Fusarium oxysporum f. sp. cucumerinum (FOC). Nitrate significantly suppressed Fusarium wilt compared with ammonium in both pot and hydroponic experiments. Fewer FOC colonized the roots and stems under nitrate compared with ammonium supply. Cucumber grown with nitrate accumulated less fusaric acid (FA) after FOC infection and exhibited increased tolerance to chemical FA by decreasing FA absorption and transportation in shoots. A lower citrate concentration was observed in nitrate-grown cucumbers, which was associated with lower MATE (multidrug and toxin compound extrusion) family gene and citrate synthase (CS) gene expression, as well as lower CS activity. Citrate enhanced FOC spore germination and infection, and increased disease incidence and the FOC population in ammonium-treated plants. Our study provides evidence that nitrate protects cucumber plants against F. oxysporum by decreasing root citrate exudation and FOC infection. Citrate exudation is essential for regulating disease development of Fusarium wilt in cucumber plants. © The Author 2016. Published by Oxford University Press on behalf of Japanese Society of Plant Physiologists. All rights reserved. For permissions, please email: journals.permissions@oup.com.
Veringa, Irena K; de Bruin, Esther I; Bardacke, Nancy; Duncan, Larissa G; van Steensel, Francisca J A; Dirksen, Carmen D; Bögels, Susan M
2016-11-07
Approximately 25 % of pregnant women suffer from a high level of Fear of Childbirth (FoC), as assessed by the Wijma Delivery Expectancy Questionnaire (W-DEQ-A, score ≥66). FoC negatively affects pregnant women's mental health and adaptation to the perinatal period. Mindfulness-Based Childbirth and Parenting (MBCP) seems to be potentially effective in decreasing pregnancy-related anxiety and stress. We propose a theoretical model of Avoidance and Participation in Pregnancy, Birth and the Postpartum Period in order to explore FoC and to evaluate the underlying mechanisms of change of MBCP. The 'I've Changed My Mind' study is a quasi-experimental controlled trial among 128 pregnant women (week 16-26) with a high level of FoC, and their partners. Women will be allocated to MBCP (intervention group) or to Fear of Childbirth Consultation (FoCC; comparison group). Primary outcomes are FoC, labour pain, and willingness to accept obstetrical interventions. Secondary outcomes are anxiety, depression, general stress, parental stress, quality of life, sleep quality, fatigue, satisfaction with childbirth, birth outcome, breastfeeding self-efficacy and cost-effectiveness. The total study duration for women is six months with four assessment waves: pre- and post-intervention, following the birth and closing the maternity leave period. Given the high prevalence and severe negative impact of FoC this study can be of major importance if statistically and clinically meaningful benefits are found. Among the strengths of this study are the clinical-based experimental design, the extensive cognitive-emotional and behavioural measurements in pregnant women and their partners during the entire perinatal period, and the representativeness of study sample as well as generalizability of the study's results. The complex and innovative measurements of FoC in this study are an important strength in clinical research on FoC not only in pregnant women but also in their partners. Dutch Trial Register (NTR): NTR4302 , registration date the 3rd of December 2013.
Bai, Ting-Ting; Xie, Wan-Bin; Zhou, Ping-Ping; Wu, Zi-Lin; Xiao, Wen-Chao; Zhou, Ling; Sun, Jie; Ruan, Xiao-Lei; Li, Hua-Ping
2013-01-01
Banana wilt disease, caused by the fungal pathogen Fusarium oxysporum f. sp. cubense 4 (Foc4), is regarded as one of the most devastating diseases worldwide. Cavendish cultivar ‘Yueyoukang 1’ was shown to have significantly lower disease severity and incidence compared with the susceptible cultivar ‘Brazilian’ in greenhouse and field trials. De novo sequencing technology was previously used to investigate the defense mechanism in the moderately resistant ‘Nongke No 1’ banana, but not in the highly resistant cultivar ‘Yueyoukang 1’. To gain more insights into the resistance mechanism in banana against Foc4, Illumina Solexa sequencing technology was utilized to perform transcriptome sequencing of ‘Yueyoukang 1’ and ‘Brazilian’ and characterize gene expression profile changes in both cultivars at days 0.5, 1, 3, 5 and 10 after infection with Foc4. The results showed that more massive transcriptional reprogramming occurred after Foc4 treatment in ‘Yueyoukang 1’ than in ‘Brazilian’, especially at the first three time points, which suggested that ‘Yueyoukang 1’ had a much faster defense response against Foc4 infection than ‘Brazilian’. Expression patterns of genes involved in ‘Plant-pathogen interaction’ and ‘Plant hormone signal transduction’ pathways were analyzed and compared between the two cultivars. Defense genes associated with CEBiP, BAK1, NB-LRR proteins, PR proteins, transcription factors and cell wall lignification were expressed more strongly in ‘Yueyoukang 1’ than in ‘Brazilian’, indicating that these genes play important roles in banana defense against Foc4 infection. However, genes related to hypersensitive reaction (HR) and senescence were up-regulated in ‘Brazilian’ but down-regulated in ‘Yueyoukang 1’, which suggested that HR and senescence may contribute to Foc4 infection. In addition, the resistance mechanism in the highly resistant ‘Yueyoukang 1’ was found to differ from that in the moderately resistant ‘Nongke No 1’ banana. These results explain the resistance in the highly resistant cultivar and provide more insight into the compatible and incompatible interactions between banana and Foc4. PMID:24086302
Abdelrahman, Mostafa; Abdel-Motaal, Fatma; El-Sayed, Magdi; Jogaiah, Sudisha; Shigyo, Masayoshi; Ito, Shin-Ichi; Tran, Lam-Son Phan
2016-05-01
Trichoderma spp. are versatile opportunistic plant symbionts that can cause substantial changes in the metabolism of host plants, thereby increasing plant growth and activating plant defense to various diseases. Target metabolite profiling approach was selected to demonstrate that Trichoderma longibrachiatum isolated from desert soil can confer beneficial agronomic traits to onion and induce defense mechanism against Fusarium oxysporum f. sp. cepa (FOC), through triggering a number of primary and secondary metabolite pathways. Onion seeds primed with Trichoderma T1 strain displayed early seedling emergence and enhanced growth compared with Trichoderma T2-treatment and untreated control. Therefore, T1 was selected for further investigations under greenhouse conditions, which revealed remarkable improvement in the onion bulb growth parameters and resistance against FOC. The metabolite platform of T1-primed onion (T1) and T1-primed onion challenged with FOC (T1+FOC) displayed significant accumulation of 25 abiotic and biotic stress-responsive metabolites, representing carbohydrate, phenylpropanoid and sulfur assimilation metabolic pathways. In addition, T1- and T1+FOC-treated onion plants showed discrete antioxidant capacity against 1,1-diphenyl-2-picrylhydrazyl (DPPH) compared with control. Our findings demonstrated the contribution of T. longibrachiatum to the accumulation of key metabolites, which subsequently leads to the improvement of onion growth, as well as its resistance to oxidative stress and FOC. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Du, Nanshan; Shi, Lu; Yuan, Yinghui; Li, Bin; Shu, Sheng; Sun, Jin; Guo, Shirong
2016-01-01
Plant-growth-promoting rhizobacteria (PGPR) can both improve plant growth and enhance plant resistance against a variety of environmental stresses. To investigate the mechanisms that PGPR use to protect plants under pathogenic attack, transmission electron microscopy analysis and a proteomic approach were designed to test the effects of the new potential PGPR strain Paenibacillus polymyxa NSY50 on cucumber seedling roots after they were inoculated with the destructive phytopathogen Fusarium oxysporum f. sp. cucumerinum (FOC). NSY50 could apparently mitigate the injury caused by the FOC infection and maintain the stability of cell structures. The two-dimensional electrophoresis (2-DE) approach in conjunction with MALDI-TOF/TOF analysis revealed a total of 56 proteins that were differentially expressed in response to NSY50 and/or FOC. The application of NSY50 up-regulated most of the identified proteins that were involved in carbohydrate metabolism and amino acid metabolism under normal conditions, which implied that both energy generation and the production of amino acids were enhanced, thereby ensuring an adequate supply of amino acids for the synthesis of new proteins in cucumber seedlings to promote plant growth. Inoculation with FOC inhibited most of the proteins related to carbohydrate and energy metabolism and to protein metabolism. The combined inoculation treatment (NSY50+FOC) accumulated abundant proteins involved in defense mechanisms against oxidation and detoxification as well as carbohydrate metabolism, which might play important roles in preventing pathogens from attacking. Meanwhile, western blotting was used to analyze the accumulation of enolase (ENO) and S-adenosylmethionine synthase (SAMs). NSY50 further increased the expression of ENO and SAMs under FOC stress. In addition, NSY50 adjusted the transcription levels of genes related to those proteins. Taken together, these results suggest that P. polymyxa NSY50 may promote plant growth and alleviate FOC-induced damage by improving the metabolism and activation of defense-related proteins in cucumber roots. PMID:28018395
Chand, Subodh Kumar; Nanda, Satyabrata; Mishra, Rukmini; Joshi, Raj Kumar
2017-04-01
The basal plate rot fungus, Fusarium oxysporum f. sp. cepae (FOC), is the most devastating pathogen posing a serious threat to garlic (Allium sativum L.) production worldwide. MicroRNAs (miRNAs) are key modulators of gene expression related to development and defense responses in eukaryotes. However, the miRNA species associated with garlic immunity against FOC are yet to be explored. In the present study, a small RNA library developed from an FOC-infected resistant garlic line was sequenced to identify immune-responsive miRNAs. Forty-five miRNAs representing 39 conserved and six novel sequences responsive to FOC were detected. qRT-PCR analyses further classified them into three classes based on their expression patterns in the susceptible line CBT-As11 and in the resistant line CBT-As153. Northern blot analyses of six selected miRNAs confirmed the qRT-PCR results. Expression studies on a selected set of target genes revealed a negative correlation with the complementary miRNAs. Furthermore, transgenic garlic plants overexpressing miR164a, miR168a and miR393 showed enhanced resistance to FOC, as revealed by decreased fungal growth and up-regulated expression of defense-responsive genes. These results indicate that multiple miRNAs are involved in garlic immunity against FOC and that the overexpression of miR164a, miR168a and miR393 can augment garlic resistance to Fusarium basal rot infection. Copyright © 2017 Elsevier B.V. All rights reserved.
Mostert, Diane; Molina, Agustin B; Daniells, Jeff; Fourie, Gerda; Hermanto, Catur; Chao, Chih-Ping; Fabregar, Emily; Sinohin, Vida G; Masdek, Nik; Thangavelu, Raman; Li, Chunyu; Yi, Ganyun; Mostert, Lizel; Viljoen, Altus
2017-01-01
Fusarium oxysporum forma specialis cubense (Foc) is a soil-borne fungus that causes Fusarium wilt, which is considered to be the most destructive disease of bananas. The fungus is believed to have evolved with its host in the Indo-Malayan region, and from there it was spread to other banana-growing areas with infected planting material. The diversity and distribution of Foc in Asia were investigated. A total of 594 F. oxysporum isolates collected in ten Asian countries were identified by vegetative compatibility group (VCG) analysis. To simplify the identification process, the isolates were first divided into DNA lineages using PCR-RFLP analysis. Six lineages and 14 VCGs, representing three Foc races, were identified in this study. The VCG complex 0124/5 was most common in the Indian subcontinent, Vietnam and Cambodia, whereas the VCG complex 01213/16 dominated in the rest of Asia. Sixty-nine F. oxysporum isolates in this study did not match any of the known VCG tester strains. In this study, Foc VCG diversity in Bangladesh, Cambodia and Sri Lanka was determined for the first time, and VCGs 01221 and 01222 were first reported from Cambodia and Vietnam. New associations of Foc VCGs and banana cultivars were recorded in all the countries where the fungus was collected. Information obtained in this study could help Asian countries to develop and implement regulatory measures to prevent the incursion of Foc into areas where it does not yet occur. It could also facilitate the deployment of disease-resistant banana varieties in infested areas.
NASA Technical Reports Server (NTRS)
1986-01-01
Lockheed Missiles and Space Company's conceptual designs and programmatics for a Space Station Nonhuman Life Sciences Research Facility (LSRF) are presented. Conceptual designs and programmatics encompass an Initial Orbital Capability (IOC) LSRF, a growth or Follow-on Orbital Capability (FOC) LSRF, and the transitional process required to modify the IOC LSRF to the FOC LSRF. The IOC and FOC LSRFs correspond to missions SAAX0307 and SAAX0302 of the Space Station Mission Requirements Database, respectively.
Sparse Method for Direction of Arrival Estimation Using Denoised Fourth-Order Cumulants Vector.
Fan, Yangyu; Wang, Jianshu; Du, Rui; Lv, Guoyun
2018-06-04
Fourth-order cumulants (FOCs) vector-based direction of arrival (DOA) estimation methods for non-Gaussian sources may suffer from poor performance for limited snapshots or difficulty in setting parameters. In this paper, a novel FOCs vector-based sparse DOA estimation method is proposed. Firstly, by utilizing the concept of a fourth-order difference co-array (FODCA), an advanced FOCs vector denoising or dimension reduction procedure is presented for arbitrary array geometries. Then, a novel single measurement vector (SMV) model is established from the denoised FOCs vector and efficiently solved by an off-grid sparse Bayesian inference (OGSBI) method. The estimation errors of the FOCs are integrated into the SMV model and are approximately estimated in a simple way. A necessary condition on the number of identifiable sources is presented: in order to uniquely identify all sources, the number of sources K must satisfy K ≤ (M^4 - 2M^3 + 7M^2 - 6M)/8. The proposed method suits any array geometry, does not need prior knowledge of the number of sources, is insensitive to the associated parameters, and has maximum identifiability O(M^4), where M is the number of sensors in the array. Numerical simulations illustrate the superior performance of the proposed method.
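For reference, the identifiability bound quoted above is straightforward to evaluate. The helper below is an illustrative sketch (not code from the paper) that returns the maximum number of uniquely identifiable sources for an array of M sensors.

    def max_identifiable_sources(M: int) -> int:
        """Upper bound on the number of sources K that can be uniquely
        identified with M sensors: K <= (M^4 - 2M^3 + 7M^2 - 6M) / 8."""
        return (M**4 - 2 * M**3 + 7 * M**2 - 6 * M) // 8

    # Example: a 6-sensor array
    print(max_identifiable_sources(6))  # -> 135

The bound grows as O(M^4), which matches the maximum identifiability stated in the abstract.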
Niu, Yuqing; Hu, Bei; Li, Xiaoquan; Chen, Houbin; Šamaj, Jozef; Xu, Chunxiang
2018-01-01
Banana Fusarium wilt caused by Fusarium oxysporum f. sp. cubense (Foc) is one of the most destructive soil-borne diseases. In this study, young tissue-cultured plantlets of banana (Musa spp. AAA) cultivars differing in Foc susceptibility were used to reveal their differential responses to this pathogen using digital gene expression (DGE). Data were evaluated by various bioinformatic tools (Venn diagrams, gene ontology (GO) annotation and Kyoto encyclopedia of genes and genomes (KEGG) pathway analyses) and immunofluorescence labelling method to support the identification of gene candidates determining the resistance of banana against Foc. Interestingly, we have identified MaWRKY50 as an important gene involved in both constitutive and induced resistance. We also identified new genes involved in the resistance of banana to Foc, including several other transcription factors (TFs), pathogenesis-related (PR) genes and some genes related to the plant cell wall biosynthesis or degradation (e.g., pectinesterases, β-glucosidases, xyloglucan endotransglucosylase/hydrolase and endoglucanase). The resistant banana cultivar shows activation of PR-3 and PR-4 genes as well as formation of different constitutive cell barriers to restrict spreading of the pathogen. These data suggest new mechanisms of banana resistance to Foc. PMID:29364855
Galileo satellite antenna modeling
NASA Astrophysics Data System (ADS)
Steigenberger, Peter; Dach, Rolf; Prange, Lars; Montenbruck, Oliver
2015-04-01
The space segment of the European satellite navigation system Galileo currently consists of six satellites. Four of them belong to the first generation of In-Orbit Validation (IOV) satellites whereas the other two are Full Operational Capability (FOC) satellites. High-precision geodetic applications require detailed knowledge about the actual phase center of the satellite and receiver antenna. The deviation of this actual phase center from a well-defined reference point is described by phase center offsets (PCOs) and phase center variations (PCVs). Unfortunately, no public information is available about the Galileo satellite antenna PCOs and PCVs, neither for the IOV nor the FOC satellites. Therefore, conventional values for the IOV satellite antenna PCOs have been adopted for the Multi-GNSS experiment (MGEX) of the International GNSS Service (IGS). The effect of the PCVs is currently neglected and no PCOs for the FOC satellites are available yet. To overcome this deficiency in GNSS observation modeling, satellite antenna PCOs and PCVs are estimated for the Galileo IOV satellites based on global GNSS tracking data of the MGEX network and additional stations of the legacy IGS network. Two completely independent solutions are computed with the Bernese and Napeos software packages. The PCO and PCV values of the individual satellites are analyzed and the availability of two different solutions allows for an accuracy assessment. The FOC satellites are built by a different manufacturer and are also equipped with another type of antenna panel compared to the IOV satellites. Signal transmission of the first FOC satellite started in December 2014, and activation of the second satellite is expected for early 2015. Based on the available observations, PCO estimates and, optionally, PCVs of the FOC satellites will be presented as well. Finally, the impact of the new antenna model on the precision and accuracy of the Galileo orbit determination is analyzed.
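To illustrate where these antenna offsets enter the observation model, the sketch below (a simplified, generic illustration, not the Bernese or Napeos implementation) rotates a body-frame PCO vector into the Earth-fixed frame under an assumed nominal yaw-steering attitude and adds it to the satellite centre-of-mass position; the function and variable names are hypothetical.

    import numpy as np

    def antenna_phase_center(r_sat, r_sun, pco_body):
        """Approximate antenna phase-center position in the Earth-fixed frame.

        r_sat    : satellite centre-of-mass position (ECEF, metres)
        r_sun    : Sun position (ECEF, metres)
        pco_body : phase-center offset in the satellite body frame (metres)

        Assumes a nominal yaw-steering attitude: body +z towards the Earth's
        centre, body y perpendicular to the Sun direction.
        """
        r_sat = np.asarray(r_sat, dtype=float)
        r_sun = np.asarray(r_sun, dtype=float)
        ez = -r_sat / np.linalg.norm(r_sat)                 # +z: towards Earth centre
        e_sun = (r_sun - r_sat) / np.linalg.norm(r_sun - r_sat)
        ey = np.cross(ez, e_sun)
        ey /= np.linalg.norm(ey)                            # y: along solar panel axis
        ex = np.cross(ey, ez)                               # x: completes the frame
        R = np.column_stack((ex, ey, ez))                   # body -> ECEF rotation
        return r_sat + R @ np.asarray(pco_body, dtype=float)

Phase center variations would then be applied as a direction-dependent range correction on top of this constant offset.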
Wang, Zhuo; Jia, Caihong; Li, Jingyang; Huang, Suzhen; Xu, Biyu; Jin, Zhiqiang
2015-01-01
Fusarium wilt caused by the fungus Fusarium oxysporum f. sp. cubense (Foc) is the most serious disease that attacks banana plants. Salicylic acid (SA) can play a key role in plant-microbe interactions. Our study is the first to examine the role of SA in conferring resistance to Foc TR4 in banana (Musa acuminata L. AAA group, cv. Cavendish), which is the most commercially important cultivar of Musa. We used quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) to analyze the expression profiles of 45 genes related to SA biosynthesis and downstream signaling pathways in a susceptible banana cultivar (cv. Cavendish) and a resistant banana cultivar (cv. Nongke No. 1) inoculated with Foc TR4. The expression of genes involved in SA biosynthesis and downstream signaling pathways was suppressed in the susceptible cultivar and activated in the resistant cultivar. The SA levels in each treatment arm were measured using high-performance liquid chromatography. SA levels were decreased in the susceptible cultivar and increased in the resistant cultivar. Finally, we examined the contribution of exogenous SA to Foc TR4 resistance in susceptible banana plants. The expression of genes involved in SA biosynthesis and signal transduction pathways as well as SA levels were significantly increased. The results suggest that one reason for banana susceptibility to Foc TR4 is that the expression of genes involved in SA biosynthesis and the SA levels are suppressed, and that the induced resistance observed in banana against Foc TR4 might be a case of salicylic acid-dependent systemic acquired resistance.
Samuel, Nir; Hirschhorn, Gil; Chen, Jacob; Steiner, Ivan P; Shavit, Itai
2013-03-01
In Israel, the Airborne Rescue and Evacuation Unit (AREU) provides prehospital trauma care in times of peace and during times of armed conflict. In peacetime, the AREU transports children who were involved in motor vehicle collisions (MVC) and those who fall off cliffs (FOC). During armed conflict, the AREU evacuates children who sustain firearm injuries (FI) from the fighting zones. To report on prehospital injury severity of children who were evacuated by the AREU from combat zones. A retrospective comparative analysis was conducted on indicators of prehospital injury severity for patients who had MVC, FOC, and FI. It included the National Advisory Committee for Aeronautics (NACA) score, the Glasgow Coma Scale (GCS) score on scene, and the number of procedures performed by emergency medical personnel and by the AREU air-crew. From January 2003 to December 2009, 36 MVC, 25 FOC, and 17 FI children were transported from the scene by the AREU. Five patients were dead at the scene: 1 (2.8%) MVC, 1 (4%) FOC, and 3 (17.6%) FI. Two (11.7%) FI patients were dead on arrival at the hospital. MVC, FOC, and FI patients had mean (±SD) NACA scores of 4.4 ± 1.2, 3.6 ± 1.2, and 5 ± 0.7, respectively. Mean (±SD) GCS scores were 8.9 ± 5.6, 13.6 ± 4, and 6.9 ± 5.3, respectively. Life support interventions were required by 29 (80.6%) MVC, 3 (12%) FOC, and 15 (88.2%) FI patients. In the prehospital setting, children evacuated from combat zones were more severely injured than children who were transported from the scene during peacetime. Copyright © 2013 Elsevier Inc. All rights reserved.
Aguayo, Jaime; Mostert, Diane; Fourrier-Jeandel, Céline; Cerf-Wendling, Isabelle; Hostachy, Bruno; Viljoen, Altus; Ioos, Renaud
2017-01-01
Fusarium oxysporum f. sp. cubense (Foc) is one of the most important threats to global banana production. Strategies to control the pathogen are lacking, with plant resistance offering the only long-term solution, if sources of resistance are available. Prevention of introduction of Foc into disease-free areas thus remains a key strategy to continue sustainable banana production. In recent years, strains of Foc affecting Cavendish bananas have destroyed plantations in a number of countries in Asia and in the Middle East, and one African country. One vegetative compatibility group (VCG), 01213/16, is considered the major threat to bananas in tropical and subtropical climatic conditions. However, other genetically related VCGs, such as 0121, may potentially jeopardize banana cultivation if introduced into disease-free areas. To prevent the introduction of these VCGs into disease-free Cavendish banana-growing countries, a real-time PCR test was developed to accurately detect both VCGs. A previously described putative virulence gene was used to develop a specific combination of hydrolysis probe/primers for the detection of tropical Foc race 4 strains. The real-time PCR parameters were optimized by following a statistical approach relying on orthogonal arrays and the Taguchi method in an attempt to enhance sensitivity and ensure high specificity of the assay. This study also assessed critical performance criteria, such as repeatability, reproducibility, robustness, and specificity, with a large set of 136 F. oxysporum isolates, including 73 pathogenic Foc strains representing 24 VCGs. The validation data demonstrated that the new assay could be used for regulatory testing applications on banana plant material and can contribute to preventing the introduction and spread of Foc strains affecting Cavendish bananas in the tropics.
Exploring the pH-Dependent Substrate Transport Mechanism of FocA Using Molecular Dynamics Simulation
Lv, Xiaoying; Liu, Huihui; Ke, Meng; Gong, Haipeng
2013-01-01
FocA belongs to the formate-nitrate transporter family and plays an essential role in the export and uptake of formate in organisms. According to the available crystal structures, the N-terminal residues of FocA are structurally featureless under physiological conditions but at reduced pH form helices that harbor the cytoplasmic entrance of the substrate permeation pathway, which apparently explains the cessation of the electrical signal observed in electrophysiological experiments. In this work, we found by structural analysis and molecular dynamics simulations that those N-terminal helices cannot effectively preclude substrate permeation. Equilibrium simulations and thermodynamic calculations suggest that FocA is permeable to both formate and formic acid, the latter of which is transparent to electrophysiological studies as an electrically neutral species. Hence, the cessation of the electrical current at acidic pH may be caused by a change of the transported substrate from formate to formic acid. In addition, the mechanism of formate export at physiological pH is discussed. PMID:24359743
First uncertainty evaluation of the FoCS-2 primary frequency standard
NASA Astrophysics Data System (ADS)
Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L. G.; Schenker, D.; Thomann, P.; Südmeyer, T.
2018-06-01
We report the uncertainty evaluation of the Swiss continuous primary frequency standard FoCS-2 (Fontaine Continue Suisse). Unlike other primary frequency standards which are working with clouds of cold atoms, this fountain uses a continuous beam of cold caesium atoms bringing a series of metrological advantages and specific techniques for the evaluation of the uncertainty budget. Recent improvements of FoCS-2 have made possible the evaluation of the frequency shifts and of their uncertainties in the order of . When operating in an optimal regime a relative frequency instability of is obtained. The relative standard uncertainty reported in this article, , is strongly dominated by the statistics of the frequency measurements.
A Vision for Future Virtual Training
2006-06-15
Future Virtual Training. In Virtual Media for Military Applications (pp. KN2-1 – KN2-12). Meeting Proceedings RTO-MP-HFM-136, Keynote 2. Neuilly-sur...Spin Out. By 2017, the FCS program will meet Full Operational Capability (FOC). The force structure of the Army at this time will include two BCTs...training environment, allowing them to meet preparatory training proficiency objectives virtually while minimizing the use of costly live ammunition. In
Fundamentals of Orthodox Culture (FOC): A New Subject in Russia's Schools
ERIC Educational Resources Information Center
Willems, Joachim
2007-01-01
The question of religious education is one of the most controversial questions in the current discussions on religion and politics in Russia. Most notably a new subject, Fundamentals of Orthodox Culture (FOC), is of interest because it differs markedly from Western European approaches to religious education. Referring to "Culturology"…
"Foundations of Orthodox Culture" in Russia: Confessional or Nonconfessional Religious Education?
ERIC Educational Resources Information Center
Willems, Joachim
2012-01-01
In April 2010 a new school subject group called "Foundations of Religious Cultures and Secular Ethics" (FRCSE) was introduced as an experiment in selected regions of Russia. It consists of six subjects, or "modules." One module is "Foundations of Orthodox Culture" (FOC). This article examines FOC within the context of…
Time-Domain Evaluation of Fractional Order Controllers’ Direct Discretization Methods
NASA Astrophysics Data System (ADS)
Ma, Chengbin; Hori, Yoichi
Fractional Order Control (FOC), in which the controlled systems and/or controllers are described by fractional order differential equations, has been applied to various control problems. Though it is not difficult to understand FOC’s theoretical superiority, the realization issue remains somewhat problematic. Since fractional order systems have an infinite dimension, proper approximation by a finite difference equation is needed to realize the designed fractional order controllers. In this paper, the existing direct discretization methods are evaluated by their convergence and by time-domain comparison with a baseline case. The proposed sampling time scaling property is used to calculate the baseline case with full memory length. This novel discretization method is based on the classical trapezoidal rule but with scaled sampling time. Comparative studies show that good performance and a simple algorithm make the Short Memory Principle method the most attractive in practice. FOC research is still at an early stage, but its applications in modeling and its robustness against non-linearities are promising. Parallel to the development of FOC theories, applying FOC to various control problems is also crucially important and is one of the top priority issues.
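As a concrete illustration of the realization issue discussed above, the sketch below approximates a fractional-order derivative with the Grünwald-Letnikov difference and truncates the history according to the Short Memory Principle. It is a generic textbook-style sketch under the stated assumptions, not the evaluation code used in the paper.

    import numpy as np

    def gl_fractional_derivative(x, alpha, dt, memory=None):
        """Grunwald-Letnikov approximation of the alpha-order derivative of
        the sampled signal x (sampling period dt). If `memory` is given, only
        the most recent `memory` samples are kept (Short Memory Principle)."""
        N = len(x)
        L = N if memory is None else min(memory, N)
        w = np.empty(L)                      # w[k] = (-1)^k * binom(alpha, k)
        w[0] = 1.0
        for k in range(1, L):
            w[k] = w[k - 1] * (1.0 - (alpha + 1.0) / k)
        y = np.zeros(N)
        for n in range(N):
            kmax = min(n + 1, L)
            y[n] = np.dot(w[:kmax], x[n::-1][:kmax]) / dt**alpha
        return y

    # Half-order derivative of a ramp, with and without memory truncation;
    # the difference is the error introduced by the Short Memory Principle.
    t = np.arange(0.0, 5.0, 0.01)
    full = gl_fractional_derivative(t, 0.5, 0.01)
    short = gl_fractional_derivative(t, 0.5, 0.01, memory=200)
    print(float(np.max(np.abs(full - short))))

The trade-off illustrated here is the one at the heart of the Short Memory Principle method: a shorter memory lowers the cost of the finite difference approximation at the price of a truncation error.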
Studies of Solar EUV Irradiance from SOHO
NASA Technical Reports Server (NTRS)
Floyd, Linton
2002-01-01
The Extreme Ultraviolet (EUV) irradiance central and first order channel time series (COC and FOC) from the Solar EUV Monitor aboard the Solar and Heliospheric Observatory (SOHO), issued in early 2002 and covering the time period 1/1/96-31/12/01, were analyzed in terms of other solar measurements and indices. A significant solar proton effect in the first order irradiance was found and characterized. When this effect is removed, the two irradiance time series are almost perfectly correlated. Earlier studies have shown good correlation between the FOC and the Hall core-to-wing ratio and, likewise, it was the strongest component of the COC. Analysis of the FOC showed dependence on the F10.7 radio flux. Analysis of the COC signals showed additional dependences on F10.7 and the GOES x-ray fluxes. The SEM FOC was also well correlated with the 30.4 nm channel of the SOHO EUV Imaging Telescope (EIT). The irradiance derived from all four EIT channels (30.4 nm, 17.1 nm, 28.4 nm, and 19.5 nm) showed better correlation with MgII than F10.7.
Wei, Yunxie; Liu, Wen; Hu, Wei; Liu, Guoyin; Wu, Chunjie; Liu, Wei; Zeng, Hongqiu; He, Chaozu; Shi, Haitao
2017-08-01
MaATG8s play important roles in hypersensitive-like cell death and immune response, and autophagy is essential for disease resistance against Foc in banana. Autophagy is responsible for the degradation of damaged cytoplasmic constituents in the lysosomes or vacuoles. Although the effects of autophagy have been extensively revealed in model plants, the possible roles of autophagy-related genes in banana remain unknown. In this study, 32 MaATGs were identified in the draft genome, and the profiles of several MaATGs in response to the fungal pathogen Fusarium oxysporum f. sp. cubense (Foc) were also revealed. We found that seven MaATG8s were commonly regulated by Foc. Through transient expression in Nicotiana benthamiana leaves, we highlight the novel roles of MaATG8s in conferring hypersensitive-like cell death, and MaATG8s-mediated hypersensitive response-like cell death is dependent on autophagy. Notably, treatment with the autophagy inhibitor 3-methyladenine (3-MA) resulted in decreased disease resistance in response to Foc4, and the effect of 3-MA treatment could be rescued by exogenous salicylic acid, jasmonic acid and ethylene, indicating the involvement of autophagy-mediated plant hormones in banana resistance to Fusarium wilt. Taken together, this study may extend our understanding of the putative role of MaATG8s in hypersensitive-like cell death and the essential role of autophagy in the immune response against Foc in banana.
Fermentation of Foc TR4-infected bananas and Trichoderma spp.
Yang, J; Li, B; Liu, S W; Biswas, M K; Liu, S; Wei, Y R; Zuo, C W; Deng, G M; Kuang, R B; Hu, C H; Yi, G J; Li, C Y
2016-10-17
Fusarium wilt (also known as Panama disease) is one of the most destructive banana diseases, and greatly hampers the global production of bananas. Consequently, it has been very detrimental to the Chinese banana industry. An infected plant is one of the major causes of the spread of Fusarium wilt to nearby regions. It is essential to develop an efficient and environmentally sustainable disease control method to restrict the spread of Fusarium wilt. We isolated Trichoderma spp from the rhizosphere soil, roots, and pseudostems of banana plants that showed Fusarium wilt symptoms in the infected areas. Their cellulase activities were measured by endoglucanase activity, β-glucosidase activity, and filter paper activity assays. Safety analyses of the Trichoderma isolates were conducted by inoculating them into banana plantlets. The antagonistic effects of the Trichoderma spp on the Fusarium pathogen Foc tropical Race 4 (Foc TR4) were tested by the dual culture technique. Four isolates that had high cellulase activity, no observable pathogenicity to banana plants, and high antagonistic capability were identified. The isolates were used to biodegrade diseased banana plants infected with GFP-tagged Foc TR4, and the compost was tested for biological control of the infectious agent; the results showed that the fermentation suppressed the incidence of wilt and killed the pathogen. This study indicates that Trichoderma isolates have the potential to eliminate the transmission of Foc TR4, and may be developed into an environmentally sustainable treatment for controlling Fusarium wilt in banana plants.
Shi, Lu; Du, Nanshan; Yuan, Yinghui; Shu, Sheng; Sun, Jin; Guo, Shirong
2016-09-01
Fusarium wilt caused by the fungus Fusarium oxysporum f. sp. cucumerinum (FOC) is the most severe soil-borne disease attacking cucumber. To assess the positive effects of vinegar residue substrate (VRS) on the growth and incidence of Fusarium wilt on cucumber, we determined the cucumber growth parameters, disease severity, defense-related enzyme and pathogenesis-related (PR) protein activities, and stress-related gene expression levels. In in vitro and pot experiments, we demonstrated the following results: (i) the VRS extract exhibited a higher biocontrol activity than that of peat against FOC, and significantly improved the growth inhibition of FOC, with values of 48.3 %; (ii) in response to a FOC challenge, antioxidant enzymes and the key enzymes of phenylpropanoid metabolic activities, as well as the PR protein activities in the roots of cucumber, were significantly increased. Moreover, the activities of these proteins were higher in VRS than in peat; (iii) the expression levels of stress-related genes (including glu, pal, and ethylene receptor) elicited responses to the pathogens inoculated in cucumber leaves; and (iv) the FOC treatment significantly inhibited the growth of cucumber seedlings. Moreover, all of the growth indices of plants grown in VRS were significantly higher than those grown in peat. These results offer a new strategy to control cucumber Fusarium wilt, by upregulating the activity levels of defense-related enzymes and PR proteins and adjusting gene expression levels. They also provide a theoretical basis for VRS applications.
Zheng, Si-Jun; García-Bastidas, Fernando A; Li, Xundong; Zeng, Li; Bai, Tingting; Xu, Shengtao; Yin, Kesuo; Li, Hongxiang; Fu, Gang; Yu, Yanchun; Yang, Liu; Nguyen, Huy Chung; Douangboupha, Bounneuang; Khaing, Aye Aye; Drenth, Andre; Seidl, Michael F; Meijer, Harold J G; Kema, Gert H J
2018-01-01
Banana is the most popular and most exported fruit and also a major food crop for millions of people around the world. Despite its importance and the presence of serious disease threats, research into this crop is limited. One of those is Panama disease or Fusarium wilt. In the previous century Fusarium wilt wiped out the "Gros Michel" based banana industry in Central America. The epidemic was eventually quenched by planting "Cavendish" bananas. However, 50 years ago the disease recurred, but now on "Cavendish" bananas. Since then the disease has spread across South-East Asia, to the Middle-East and the Indian subcontinent and leaped into Africa. Here, we report the presence of Fusarium oxysporum f.sp. cubense Tropical Race 4 (Foc TR4) in "Cavendish" plantations in Laos, Myanmar, and Vietnam. A combination of classical morphology, DNA sequencing, and phenotyping assays revealed a very close relationship between the Foc TR4 strains in the entire Greater Mekong Subregion (GMS), which is increasingly prone to intensive banana production. Analyses of single-nucleotide polymorphisms enabled us to initiate a phylogeography of Foc TR4 across three geographical areas-GMS, Indian subcontinent, and the Middle East revealing three distinct Foc TR4 sub-lineages. Collectively, our data place these new incursions in a broader agroecological context and underscore the need for awareness campaigns and the implementation of validated quarantine measures to prevent further international dissemination of Foc TR4.
First report of Fusarium redolens causing Fusarium yellowing and wilt of chickpea in Tunisia
USDA-ARS?s Scientific Manuscript database
Chickpea plants showing wilt symptoms in Tunisia have been attributed solely to race 0 of Fusarium oxysporum f. sp. ciceris (Foc) in the past. However, chickpea cultivars known to be resistant to race 0 of Foc recently also showed the wilting symptoms. To ascertain the race or species identities re...
VizieR Online Data Catalog: Solar neighborhood. XXXVII. RVs for M dwarfs (Benedict+, 2016)
NASA Astrophysics Data System (ADS)
Benedict, G. F.; Henry, T. J.; Franz, O. G.; McArthur, B. E.; Wasserman, L. H.; Jao, W.-C.; Cargile, P. A.; Dieterich, S. B.; Bradley, A. J.; Nelan, E. P.; Whipple, A. L.
2017-05-01
During this project we observed with two Fine Guidance Sensor (FGS) units: FGS 3 from 1992 to 2000, and FGS 1r from 2000 to 2009. FGS 1r replaced the original FGS 1 during Hubble Space Telescope (HST) Servicing Mission 3A in late 1999. We included visual, photographic, and CCD observations of separations and position angles from Geyer et al. 1988AJ.....95.1841G for our analysis of GJ 65 AB. We include a single observation of G 193-027 AB from Beuzit et al. 2004A&A...425..997B, who used the Adaptive Optics Bonnette system on the Canada-France-Hawaii Telescope (CFHT). For GJ 65 AB we include five Very Large Telescope/NAos-COnica (VLT/NACO) measures of position angle and separation (Kervella et al. 2016A&A...593A.127K). For our analysis of GJ 623 AB, we included astrometric observations (Martinache et al. 2007ApJ...661..496M) performed with the Palomar High Angular Resolution Observer (PHARO) instrument on the Palomar 200in (5m) telescope and with the Near InfraRed Camera 2 (NIRC2) instrument on the Keck II telescope. Separations have typical errors of 2mas. Position angle errors average 0.5°. Measurements are included for GJ 22 AC from McCarthy et al. 1991AJ....101..214M and for GJ 473 AB from Henry et al. 1992AJ....103.1369H and Torres et al. 1999AJ....117..562T, who used a two-dimensional infrared speckle camera containing a 58*62 pixel InSb array on the Steward Observatory 90in telescope. We also include infrared speckle observations by Woitas et al. 2003A&A...406..293W, who obtained fourteen separation and position angle measurements for GJ 22 AC with the near-infrared cameras MAGIC and OMEGA Cass at the 3.5m telescope on Calar Alto. We also include a few speckle observations at optical wavelengths from the Special Astrophysical Observatory 6m Bolshoi Azimuth Telescope (BTA) and 1m Zeiss (Balega et al. 1994, Cat. J/A+AS/105/503), from the CFHT (Blazit et al. 1987) and from the Differential Speckle Survey Instrument (DSSI) on the Wisconsin, Indiana, Yale, National optical astronomy observatory (WIYN) 3.5m (Horch et al. 2012, Cat. J/AJ/143/10). Where available, we use astrometric observations from HST instruments other than the FGSs, including the Faint Object Camera (FOC; Barbieri et al. 1996A&A...315..418B), the Faint Object Spectrograph (FOS; Schultz et al. 1998PASP..110...31S), the Near-Infrared Camera and Multi-Object Spectrometer (NICMOS; Golimowski et al. 2004AJ....128.1733G), and the Wide-Field Planetary Camera 2 (WFPC2; Schroeder et al. 2000AJ....119..906S; Dieterich et al. 2012, Cat. J/AJ/144/64). Our radial velocity measurements, listed in table3, are from two sources. We obtained most radial velocity data with the McDonald 2.1m Struve telescope and the Sandiford Cassegrain Echelle spectrograph, hereafter CE. The CE delivers a dispersion equivalent to 2.5km/s/pix (R=λ/Δλ=60000) with a wavelength range of 5500{<=}λ{<=}6700Å spread across 26 orders (apertures). The McDonald data were collected during 33 observing runs from 1995 to 2009. Some GJ 623 AB velocities came from the Hobby-Eberly Telescope (HET) using the Tull Spectrograph. (3 data files).
Detection of an ultraviolet and visible counterpart of the NGC 6624 X-ray burster
NASA Technical Reports Server (NTRS)
King, I. R.; Stanford, S. A.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Disney, M. J.; Deharveng, J. M.; Jakobsen, P.
1993-01-01
We have detected, in images taken with the HST FOC, the UV and optical counterpart of the X-ray source 4U 1820-30 in the globular cluster NGC 6624. Astrometric measurements place this object 2 sigma from the X-ray position of 4U 1820-30. The source dominates a far-UV FOC image and has the same flux at 1400 A as was seen through the large IUE aperture by Rich et al. (1993). It has a B magnitude of 18.7 but is not detected in V. It is 0.66 arcsec from the center of NGC 6624, a fact that may change the interpretation of the P-average of the 11 minute binary orbit. The flux drops between 1400 and 4300 A at a rate that is nearly as steep as that of a Rayleigh-Jeans curve. The flux is far too large to come from the neutron star directly but could accord with radiation from a heated accretion disk and/or the heated side of the companion star.
A Helicopter submarine Search Game
1988-09-01
active sonar dips within the expanding farthest-on-circle (FOC). The term FOC is a commonly used search term which describes all the possible positions... time taken for the sonar device to be winched in and out of the water and time for the signal
8. VIEW FORWARD IN CREW'S QUARTERS (FOC'S'LE) SHOWING DOUBLE TIER ...
8. VIEW FORWARD IN CREW'S QUARTERS (FOC'S'LE) SHOWING DOUBLE TIER OF BUNKS IN THE EVELINA M. GOULART. KINGPOST IS AT CENTER OF PHOTOGRAPH WITH FORE PEAK IN BACKGROUND. A FOLDING MESS TABLE IS AT LOWER LEFT OF PHOTOGRAPH. NOTE BENCH SEAT BELOW LOWEST TIER OF BUNKS. - Auxiliary Fishing Schooner "Evelina M. Goulart", Essex Shipbuilding Museum, 66 Main Street, Essex, Essex County, MA
Sun, Yuming; Wang, Min; Li, Yingrui; Gu, Zechen; Ling, Ning; Shen, Qirong; Guo, Shiwei
2017-09-01
Fusarium wilt is primarily a soil-borne disease and results in yield loss and quality decline in cucumber (Cucumis sativus). The main symptom of fusarium wilt is the wilting of entire plant, which could be caused by a fungal toxin(s) or blockage of water transport. To investigate whether this wilt arises from water shortage, the physiological responses of hydroponically grown cucumber plants subjected to water stress using polyethylene glycol (PEG, 6000) were compared with those of plants infected with Fusarium oxysporum f. sp. cucumerinum (FOC). Parameters reflecting plant water status were measured 8d after the start of treatment. Leaf gas exchange parameters and temperature were measured with a LI-COR portable open photosynthesis system and by thermal imaging. Chlorophyll fluorescence and chloroplast structures were assessed by imaging pulse amplitude-modulated fluorometry and transmission electron microscopy, respectively. Cucumber water balance was altered after FOC infection, with decreased water absorption and hydraulic conductivity. However, the responses of cucumber leaves to FOC and PEG differed in leaf regions. Under water stress, measures of lipid peroxidation (malondialdehyde) and chlorophyll fluorescence indicated that the leaf edge was more seriously injured, with a higher leaf temperature and disrupted leaf water status compared with the centre. Here, abscisic acid (ABA) and proline were negatively correlated with water potential. In contrast, under FOC infection, membrane damage and a higher temperature were observed in the leaf centre while ABA and proline did not vary with water potential. Cytologically, FOC-infected cucumber leaves exhibited circular chloroplasts and swelled starch grains in the leaf centre, in which they again differed from PEG-stressed cucumber leaves. This study illustrates the non-causal relationship between fusarium wilt and water transport blockage. Although leaf wilt occurred in both water stress and FOC infection, the physiological responses were different, especially in leaf spatial distribution. © The Author 2017. Published by Oxford University Press on behalf of the Annals of Botany Company. All rights reserved. For permissions, please email: journals.permissions@oup.com
Morphology and time variation of the Jovian Far UV aurora: Hubble Space Telescope observations
NASA Technical Reports Server (NTRS)
Gerard, Jean-Claude; Dols, Vincent; Paresce, Francesco; Prange, Renee
1993-01-01
High spatial resolution images of the north polar region of Jupiter have been obtained with the Faint Object Camera (FOC) on board the Hubble Space Telescope (HST). The first set of two images collected 87 min apart in February 1992 shows a bright (approximately 180 kR) emission superimposed on the background in rotation with the planet. Both Ly alpha images show common regions of enhanced emission but differences are also observed, possibly due to temporal variations. The second group of images obtained on June 23 and 26, 1992 isolates a spectral region near 153 nm dominated by the H2 Lyman bands and continuum. Both pictures exhibit a narrow arc structure fitting the L = 30 magnetotail field line footprint in the morning sector and a broader diffuse aurora in the afternoon. They show no indication of an evening twilight enhancement. Although the central meridian longitudes were similar, significant differences are seen in the two exposures, especially in the region of diffuse emission, and interpreted as signatures of temporal variations. The total power radiated in the H2 bands is approximately 2 x 10^12 W, in agreement with previous UV spectrometer observations. The high local H2 emission rates (approximately 450 kR) imply a particle precipitation carrying an energy flux of about 5 x 10^-2 W/m^2.
HUBBLE SEES CHANGES IN GAS SHELL AROUND NOVA CYGNI 1992
NASA Technical Reports Server (NTRS)
2002-01-01
The European Space Agency's (ESA) Faint Object Camera, utilizing the corrective optics provided by NASA's COSTAR (Corrective Optics Space Telescope Axial Replacement), has given astronomers their best look yet at a rapidly ballooning bubble of gas blasted off a star. The shell surrounds Nova Cygni 1992, which erupted on February 19, 1992. A nova is a thermonuclear explosion that occurs on the surface of a white dwarf star in a double star system. The new HST image [right] reveals an elliptical and slightly lumpy ring-like structure. The ring is the edge of a bubble of hot gas blasted into space by the nova. The shell is so thin that the FOC does not resolve its true thickness, even with HST's restored vision. An HST image taken on May 31, 1993 [left], 467 days after the explosion, provided the first glimpse of the ring and a mysterious bar-like structure. But the image interpretation was severely hampered by HST's optical aberration, which scattered light from the central star and contaminated the ring's image. A comparison of the pre- and post-COSTAR FOC images reveals that the ring has evolved in the seven months that elapsed between the two observations. The ring has expanded from a diameter of approximately 74 to 96 billion miles. The bar-like structure seen in the earlier HST image has disappeared. These changes might confirm theories that the bar was produced by a dense layer of gas thrown off in the orbital plane of the double star system. The gas has subsequently grown more tenuous and so the bar has faded. The ring has also grown noticeably more oblong since the earlier image. This suggests the hot gas is escaping more rapidly above and below the system's orbital plane. As the gas continues escaping, the ring should grow increasingly egg-shaped in the coming years. HST's newly improved sensitivity and high resolution provide a unique opportunity to understand novae by resolving the effects of the explosion long before they can be resolved with ground-based telescopes. Nova Cygni is 10,430 light-years away (as measured directly from the ring's diameter) and located in the summer constellation Cygnus the Swan. Credit: F. Paresce, R. Jedrzejewski (STScI) NASA/ESA PHOTO RELEASE NO.: STScI-PR94-06
Miyaji, Naomi; Shimizu, Motoki; Miyazaki, Junji; Osabe, Kenji; Sato, Maho; Ebe, Yusuke; Takada, Satoko; Kaji, Makoto; Dennis, Elizabeth S; Fujimoto, Ryo; Okazaki, Keiichi
2017-12-01
Resistant and susceptible lines in Brassica rapa have different immune responses against Fusarium oxysporum inoculation. Fusarium yellows caused by Fusarium oxysporum f. sp. conglutinans (Foc) is an important disease of Brassicaceae; however, the mechanism of how host plants respond to Foc is still unknown. By comparing with and without Foc inoculation in both resistant and susceptible lines of Chinese cabbage (Brassica rapa var. pekinensis), we identified differentially expressed genes (DEGs) between the bulked inoculated (6, 12, 24, and 72 h after inoculation (HAI)) and non-inoculated samples. Most of the DEGs were up-regulated by Foc inoculation. Quantitative real-time RT-PCR showed that most up-regulated genes increased their expression levels from 24 HAI. An independent transcriptome analysis at 24 and 72 HAI was performed in resistant and susceptible lines. GO analysis using up-regulated genes at 24 HAI indicated that Foc inoculation activated systemic acquired resistance (SAR) in resistant lines and tryptophan biosynthetic process and responses to chitin and ethylene in susceptible lines. By contrast, GO analysis using up-regulated genes at 72 HAI showed the overrepresentation of some categories for the defense response in susceptible lines but not in the resistant lines. We also compared DEGs between B. rapa and Arabidopsis thaliana after F. oxysporum inoculation at the same time point, and identified genes related to defense response that were up-regulated in the resistant lines of Chinese cabbage and A. thaliana. Particular genes that changed expression levels overlapped between the two species, suggesting that they are candidates for genes involved in the resistance mechanisms against F. oxysporum.
Wei, Yunxie; Hu, Wei; Wang, Qiannan; Zeng, Hongqiu; Li, Xiaolin; Yan, Yu; Reiter, Russel J; He, Chaozu; Shi, Haitao
2017-01-01
As a popular fresh fruit, banana (Musa acuminata) is cultivated in the world's subtropical and tropical areas. In recent years, the pathogen Fusarium oxysporum f. sp. cubense (Foc) has spread widely and rapidly to banana-growing areas, causing substantial yield loss. However, the molecular mechanism of the banana response to Foc remains unclear, and functional identification of disease-related genes is also very limited. In this study, nine 90 kDa heat-shock proteins (HSP90s) were identified genome-wide. Moreover, their expression profiles in different organs and developmental stages, and in response to abiotic stress and the fungal pathogen Foc, were systematically analyzed. Notably, we found that the transcripts of 9 MaHSP90s were commonly regulated by melatonin (N-acetyl-5-methoxytryptamine) and Foc infection. Further studies showed that exogenous application of melatonin improved banana resistance to Fusarium wilt, but the effect was lost when plants were cotreated with the HSP90 inhibitor geldanamycin (GDA). Moreover, melatonin and GDA had opposite effects on the auxin level in response to Foc4, while cotreatment with melatonin and GDA had no significant effect, suggesting the involvement of MaHSP90s in the cross talk of melatonin and auxin in response to fungal infection. Taken together, this study demonstrated that MaHSP90s are essential for the melatonin-mediated plant response to Fusarium wilt, which extends our understanding of the putative roles of MaHSP90s as well as melatonin in the biological control of banana Fusarium wilt. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Identification of pathogenicity-related genes in Fusarium oxysporum f. sp. cepae.
Taylor, Andrew; Vágány, Viktória; Jackson, Alison C; Harrison, Richard J; Rainoni, Alessandro; Clarkson, John P
2016-09-01
Pathogenic isolates of Fusarium oxysporum, distinguished as formae speciales (f. spp.) on the basis of their host specificity, cause crown rots, root rots and vascular wilts on many important crops worldwide. Fusarium oxysporum f. sp. cepae (FOC) is particularly problematic to onion growers worldwide and is increasing in prevalence in the UK. We characterized 31 F. oxysporum isolates collected from UK onions using pathogenicity tests, sequencing of housekeeping genes and identification of effectors. In onion seedling and bulb tests, 21 isolates were pathogenic and 10 were non-pathogenic. The molecular characterization of these isolates, and 21 additional isolates comprising other f. spp. and different Fusarium species, was carried out by sequencing three housekeeping genes. A concatenated tree separated the F. oxysporum isolates into six clades, but did not distinguish between pathogenic and non-pathogenic isolates. Ten putative effectors were identified within FOC, including seven Secreted In Xylem (SIX) genes first reported in F. oxysporum f. sp. lycopersici. Two highly homologous proteins with signal peptides and RxLR motifs (CRX1/CRX2) and a gene with no previously characterized domains (C5) were also identified. The presence/absence of nine of these genes was strongly related to pathogenicity against onion and all were shown to be expressed in planta. Different SIX gene complements were identified in other f. spp., but none were identified in three other Fusarium species from onion. Although the FOC SIX genes had a high level of homology with other f. spp., there were clear differences in sequences which were unique to FOC, whereas CRX1 and C5 genes appear to be largely FOC specific. © 2015 The Authors Molecular Plant Pathology Published by British Society for Plant Pathology and John Wiley & Sons Ltd.
Direct Torque Control of a Three-Phase Voltage Source Inverter-Fed Induction Machine
2013-12-01
FOC acquires all the advantages of DC machine control and frees itself from the mechanical commutation drawbacks. Furthermore, FOC leads to high... Direct flux control is possible through the constant magnetic field orientation achieved through commutator action. These two primary factors...
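Because the excerpt above is fragmentary, a minimal sketch of the coordinate transforms at the heart of field-oriented control (FOC) may help: the Clarke and Park transforms map the three measured phase currents into a rotor-flux-oriented d-q frame, where the flux- and torque-producing components can be regulated independently, much as in a DC machine. The rotor-flux angle theta is assumed to be known; this is a generic illustration, not code from the referenced work.

    import numpy as np

    def clarke(i_a, i_b, i_c):
        """Amplitude-invariant Clarke transform: three phase currents -> alpha/beta frame."""
        i_alpha = (2.0 / 3.0) * (i_a - 0.5 * i_b - 0.5 * i_c)
        i_beta = (2.0 / 3.0) * (np.sqrt(3.0) / 2.0) * (i_b - i_c)
        return i_alpha, i_beta

    def park(i_alpha, i_beta, theta):
        """Park transform: rotate the alpha/beta frame into the rotor-flux-oriented d-q frame."""
        i_d = i_alpha * np.cos(theta) + i_beta * np.sin(theta)
        i_q = -i_alpha * np.sin(theta) + i_beta * np.cos(theta)
        return i_d, i_q

    # Balanced three-phase currents sampled at rotor-flux angle theta
    theta = 0.7
    i_a = np.cos(theta)
    i_b = np.cos(theta - 2.0 * np.pi / 3.0)
    i_c = np.cos(theta + 2.0 * np.pi / 3.0)
    i_d, i_q = park(*clarke(i_a, i_b, i_c), theta)
    print(i_d, i_q)   # ~(1.0, 0.0): flux and torque components are decoupled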
The Fundamentals of Care Framework as a Point-of-Care Nursing Theory.
Kitson, Alison L
Nursing theories have attempted to shape the everyday practice of clinical nurses and patient care. However, many theories, because of their level of abstraction and distance from everyday caring activity, have failed to help nurses undertake the routine practical aspects of nursing care in a theoretically informed way. The purpose of the paper is to present a point-of-care theoretical framework, called the fundamentals of care (FOC) framework, which explains, guides, and potentially predicts the quality of care nurses provide to patients, their carers, and family members. In the framework, person-centered fundamental care (PCFC), the outcome for the patient and the nurse and the goal of the FOC framework, is achieved through the active management of the practice process, which involves the nurse and the patient working together to integrate three core dimensions: establishing the nurse-patient relationship, integrating the FOC into the patient's care plan, and ensuring that the setting or context where care is transacted and coordinated is conducive to achieving PCFC outcomes. Each dimension has multiple elements and subelements, which require unique assessment for each nurse-patient encounter. The FOC framework is presented along with two scenarios to demonstrate its usefulness. The dimensions, elements, and subelements are described, and next steps in its development are articulated.
Venkatesh; Krishna, V; Kumar, K Girish; Pradeepa, K; Kumar, S R Santosh; Kumar, R Shashi
2013-07-01
An efficient protocol was standardized for screening of Panama wilt-resistant Musa paradisiaca cv. Puttabale clones, an endemic cultivar of Karnataka, India. The synergistic effect of 6-benzylaminopurine (2 to 6 mg/L) and thidiazuron (0.1 to 0.5 mg/L) on MS medium induced multiple shoot formation from excised meristems. An average of 30.10 +/- 5.95 shoots was produced per propagule at 4 mg/L 6-benzylaminopurine and 0.3 mg/L thidiazuron. Shoot elongation was observed on medium augmented with 5 mg/L BAP, with a mean length of 8.38 +/- 0.30 per propagule. For screening of disease-resistant clones, multiple shoot buds were mutated with 0.4% ethyl methanesulfonate and cultured on MS medium supplemented with Fusarium oxysporum f. sp. cubense (FOC) culture filtrate (5-15%). Two-month-old co-cultivated secondary hardened plants were screened for disease resistance against FOC by determining biochemical markers such as total phenol, phenylalanine ammonia lyase, oxidative enzymes (peroxidase, polyphenol oxidase, catalase) and PR proteins (chitinase, beta-1,3-glucanase). The mutated clones of M. paradisiaca cv. Puttabale cultured on FOC culture filtrate showed a significant increase in the levels of these biochemical markers, indicating acquisition of disease-resistance characteristics against FOC wilt.
11. VIEW FROM JUST AFT OF THE KING POST IN ...
11. VIEW FROM JUST AFT OF THE KING POST IN THE FOC'S'LE OF THE EVELINA M. GOULART. FIRE EXTINGUISHER IS MOUNTED ON STUB OF FOREMAST. OBJECT AT LOWER LEFT IS A FOLDING MESS TABLE. LADDER LEADS TO DECK. CABINET AT RIGHT CENTER HOUSED SINK FOR CLEAN-UP AND COOKING. A SMALL CHINA SINK AT RIGHT CENTER SERVED FOR PERSONAL CLEAN-UP AND SHAVING. - Auxiliary Fishing Schooner "Evelina M. Goulart", Essex Shipbuilding Museum, 66 Main Street, Essex, Essex County, MA
NASA Astrophysics Data System (ADS)
Wolterbeek, Tim; van Noort, Reinier; Spiers, Chris
2017-04-01
When chemical reactions that involve an increase in solid volume proceed in a confined space, this may under certain conditions lead to the development of a so-called force of crystallisation (FoC). In other words, reaction can result in stress being exerted on the confining boundaries of the system. In principle, any thermodynamic driving force that is able to produce a supersaturation with respect to a solid product can generate a FoC, as long as precipitation can occur under confined conditions, i.e. within load-bearing grain contacts. Well-known examples of such reactions include salt damage, where supersaturation is caused by evaporation and surface curvature effects, and a wide range of mineral reactions where the solid products comprise a larger volume than the solid reactants. Frost heave, where crystallisation is driven by fluid under-cooling, i.e. temperature change, is a similar process. In a geological context, FoC-development is widely considered to play an important role in pseudomorphic replacement, vein formation, and reaction-driven fracturing. Chemical reactions capable of producing a FoC such as the hydration of CaO (lime), which is thermodynamically capable of producing stresses in the GPa range, also offer obvious engineering potential. Despite this, relatively few studies have been conducted where the magnitude of the FoC is determined directly. Indeed, the maximum stress obtainable by CaO hydration has not been validated or determined experimentally. Here we report uni-axial compaction/expansion experiments performed in an oedometer-type apparatus on pre-compacted CaO powder, at 65 °C and at atmospheric pore fluid pressure. Using this set-up, the FoC generated during CaO hydration could be measured directly. Our results show FoC-induced stresses reaching up to 153 MPa, with the hydration reaction stopping or slowing down significantly before completion. Failure to achieve the GPa stresses predicted by thermodynamic theory is attributed to competition between FoC development and its inhibiting effect on reaction progress. Our microstructural observations indicate that hydration-induced stresses caused the shut-down of pathways for water into the sample, thereby hampering ongoing reaction and limiting the magnitude of stress build-up to the values observed.
2012-01-01
Background Fusarium wilt, caused by the fungal pathogen Fusarium oxysporum f. sp. cubense tropical race 4 (Foc TR4), is considered the most lethal disease of Cavendish bananas in the world. The disease can be managed in the field by planting resistant Cavendish plants generated by somaclonal variation. However, little information is available on the genetic basis of plant resistance to Foc TR4. To better understand the defense response of resistant banana plants to the Fusarium wilt pathogen, the transcriptome profiles in roots of resistant and susceptible Cavendish banana challenged with Foc TR4 were compared. Results RNA-seq analysis generated more than 103 million 90-bp clean paired-end (PE) reads, which were assembled into 88,161 unigenes (mean size = 554 bp). Based on sequence similarity searches, 61,706 (69.99%) genes were identified, among which 21,273 and 50,410 unigenes were assigned to gene ontology (GO) categories and clusters of orthologous groups (COG), respectively. Searches in the Kyoto Encyclopedia of Genes and Genomes Pathway database (KEGG) mapped 33,243 (37.71%) unigenes to 119 KEGG pathways. A total of 5,008 genes were assigned to plant-pathogen interactions, including disease defense and signal transduction. Digital gene expression (DGE) analysis revealed large differences in the transcriptome profiles of the Foc TR4-resistant somaclonal variant and its susceptible wild-type. Expression patterns of genes involved in pathogen-associated molecular pattern (PAMP) recognition, activation of effector-triggered immunity (ETI), ion influx, and biosynthesis of hormones as well as pathogenesis-related (PR) genes, transcription factors, signaling/regulatory genes, cell wall modification genes and genes with other functions were analyzed and compared. The results indicated that basal defense mechanisms are involved in the recognition of PAMPs, and that high levels of defense-related transcripts may contribute to Foc TR4 resistance in banana. Conclusions This study generated a substantial amount of banana transcript sequences and compared the defense responses against Foc TR4 between resistant and susceptible Cavendish bananas. The results contribute to the identification of candidate genes related to plant resistance in a non-model organism, banana, and help to improve the current understanding of host-pathogen interactions. PMID:22863187
Sluijs, Anne-Marie; Cleiren, Marc P H D; Scherjon, Sicco A; Wijma, Klaas
2015-12-01
It is a generally accepted idea that women who give birth at home are less fearful of giving birth than women who give birth in a hospital. We explored fear of childbirth (FOC) in relation to preferred and actual place of birth. Since the Netherlands has a long history of home birthing, we also examined how the place where a pregnant woman's mother or sisters gave birth related to the preferred place of birth. A prospective cohort study. Five midwifery practices in the region Leiden/Haarlem, the Netherlands. 104 low risk nulliparous and parous women. Questionnaires were completed in gestation week 30 (T1) and six weeks post partum (T2). No significant differences were found in antepartum FOC between those who preferred a home or a hospital birth. Women with a strong preference for either home or hospital had lower FOC (mean W-DEQ=60.3) than those with a weak preference (mean W-DEQ=71.0), t (102)=-2.60, p=0.01. The place of birth of close family members predicted a higher chance (OR 3.8) of the same place being preferred by the pregnant woman. FOC increased from pre- to post partum in women who preferred a home birth but gave birth in hospital. The idea that FOC is related to the choice of place of birth was not true for this low risk cohort. Women in both preference groups (home and hospital) made their decisions based on negative and positive motivations. Mentally adjusting to a different environment than that preferred, apart from the medical complications, can cause more FOC post partum. The decreasing number of home births in the Netherlands will probably be a self-reinforcing effect, so in future, pregnant women will be less likely to feel supported by their family or society to give birth at home. Special attention should be given to the psychological condition of women who were referred to a place of birth and caregiver they did not prefer, by means of evaluation of the delivery and being alert to anxiety or other stress symptoms after childbirth. These women have a higher chance of fear post partum, which is related to a higher risk of psychiatric problems. Copyright © 2015 Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
1986-01-01
The conceptual designs and programmatics for a Space Station Nonhuman Life Sciences Research Facility (LSRF) are highlighted. Conceptual designs and programmatics encompass an Initial Orbital Capability (IOC) LSRF, a growth or Follow-on Orbital Capability (FOC), and the transitional process required to modify the IOC LSRF to the FOC LSRF.
FOC Imaging of the Dusty Envelopes of Mass-Losing Supergiants
NASA Astrophysics Data System (ADS)
Kastner, Joel
1996-07-01
Stars more massive than 10 solar masses are destined to explode as supernovae (SN). Pre-SN mass loss can prolong core buildup, and the rate and duration of mass loss therefore largely determine a massive star's post-main sequence evolution and its position in the H-R diagram prior to SN detonation. The envelope ejected by a mass-losing supergiant also plays an important role in the formation and evolution of a SN remnant. We propose to investigate these processes with HST. We will use the FOC to image two massive stars that are in different stages of post-main sequence evolution: VY CMa, the prototype for a class of heavily mass-losing OH/IR supergiants, and HD 179821, a post-red supergiant that is likely in transition to the Wolf-Rayet phase. Both are known to possess compact reflection nebulae, but ground-based techniques are unable to separate the inner nebulosities from the PSF of the central stars. We will use the unparalleled resolution of the FOC to probe the structure of these nebulae at subarcsecond scales. These data will yield the mass loss histories of the central stars and will demonstrate the presence or absence of axisymmetric mass loss and circumstellar disks. In so doing, our HST/FOC program will define the role of mass loss in determining the fates of SN progenitors and SN remnants.
Ginsbach, Jake W; Killops, Kato L; Olsen, Robert M; Peterson, Brittney; Dunnivant, Frank M
2010-05-01
The resuspension of large volumes of sediments that are contaminated with chlorinated pollutants continues to threaten environmental quality and human health. Whereas kinetic models are more accurate for estimating the environmental impact of these events, their widespread use is substantially hampered by the need for costly, time-consuming, site-specific kinetics experiments. The present study investigated the development of a predictive model for desorption rates from easily measurable sorbent and pollutant properties by examining the relationship between the fraction of organic carbon (fOC) and labile release rates. Duplicate desorption measurements were performed on 46 unique combinations of pollutants and sorbents with fOC values ranging from 0.001 to 0.150. Labile desorption rate constants indicate that release rates predominantly depend upon the fOC in the geosorbent. Previous theoretical models, such as the macro-mesopore and organic matter (MOM) diffusion model, have predicted such a relationship but could not accurately predict the experimental rate constants collected in the present study. An empirical model was successfully developed to correlate the labile desorption rate constant (krap) to the fraction of organic material where log(krap) = 0.291 - 0.785 · log(fOC). These results provide the first experimental evidence that kinetic pollution releases during resuspension events are governed by the fOC content in natural geosorbents. Copyright (c) 2010 SETAC.
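As a quick numerical illustration of the empirical correlation quoted above, the sketch below evaluates the labile rate constant across the reported fOC range. Base-10 logarithms are assumed, and no units are implied beyond those used in the paper.

    import numpy as np

    def labile_rate_constant(f_oc):
        """Empirical correlation from the abstract: log10(k_rap) = 0.291 - 0.785 * log10(f_oc)."""
        return 10.0 ** (0.291 - 0.785 * np.log10(f_oc))

    for f_oc in (0.001, 0.010, 0.150):
        print(f"fOC = {f_oc:5.3f}  ->  k_rap ~ {labile_rate_constant(f_oc):8.2f}")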
NASA Astrophysics Data System (ADS)
Agambayev, Agamyrat; Rajab, Karam H.; Hassan, Ali H.; Farhat, Mohamed; Bagci, Hakan; Salama, Khaled N.
2018-02-01
In this study, multi-walled carbon nanotube (MWCNT) filled poly(vinylidene fluoride-trifluoroethylene-chlorofluoroethylene) composites are used to realize fractional-order capacitors (FOCs). A solution-mixing and drop-casting approach is used to fabricate the composite. Due to the high aspect ratio of MWCNTs, the percolation regime starts at a small weight percentage (wt%) of 1.00%. The distributed MWCNTs inside the polymer act as an electrical network of micro-capacitors and micro-resistors, which, in effect, behaves like a FOC. The resulting FOCs' constant phase angle (CPA) can be tuned from -65° to -7° by changing the wt% of the MWCNTs. This is the largest dynamic range reported so far for an FOC in the frequency range from 150 kHz to 2 MHz. Furthermore, the CPA and pseudo-capacitance are shown to be practically stable (with less than 1% variation) when the applied voltage is changed between 500 µV and 5 V. For a fixed value of CPA, the pseudo-capacitance can be tuned by changing the thickness of the composite, which can be done in a straightforward manner via the solution-mixing and drop-casting fabrication approach. Finally, it is shown that the frequency of a Hartley oscillator built using an FOC is almost 15 times higher than that of a Hartley oscillator built using a conventional capacitor.
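For orientation, a fractional-order capacitor is commonly modelled as a constant-phase element with impedance Z = 1 / (C_alpha (j omega)^alpha), whose phase angle is -alpha · 90 degrees. The sketch below converts the reported CPA range into the implied order alpha and evaluates the impedance at 1 MHz; the pseudo-capacitance value is an arbitrary placeholder, and this standard constant-phase-element model is an assumption rather than the paper's exact parameterisation.

    import numpy as np

    def order_from_cpa(cpa_deg):
        """Fractional order alpha implied by a constant phase angle of -alpha * 90 degrees."""
        return -cpa_deg / 90.0

    def foc_impedance(omega, c_alpha, alpha):
        """Constant-phase-element impedance Z = 1 / (C_alpha * (j*omega)**alpha)."""
        return 1.0 / (c_alpha * (1j * omega) ** alpha)

    for cpa in (-65.0, -7.0):                             # CPA range reported in the abstract
        alpha = order_from_cpa(cpa)
        z = foc_impedance(2 * np.pi * 1e6, 1e-9, alpha)   # 1 MHz; placeholder pseudo-capacitance
        print(f"CPA {cpa:6.1f} deg -> alpha = {alpha:.3f}, "
              f"|Z| = {abs(z):.3g} ohm, phase = {np.degrees(np.angle(z)):.1f} deg")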
Liu, Yunpeng; Zhang, Nan; Qiu, Meihua; Feng, Haichao; Vivanco, Jorge M; Shen, Qirong; Zhang, Ruifu
2014-04-01
Root exudates play important roles in root-soil microorganism interactions and can mediate tripartite interactions of beneficial microorganisms-plant-pathogen in the rhizosphere. However, the roles of organic acid components in this process have not been well studied. In this study the colonization of a plant growth-promoting rhizobacterium, Bacillus amyloliquefaciens SQR9, on cucumber root infected by Fusarium oxysporum f. sp. cucumerinum J. H. Owen (FOC) was investigated. Chemotaxis and biofilm formation response of SQR9 to root exudates and their organic acid components were analysed. Infection of FOC on cucumber had a positive effect (3.30-fold increase) on the root colonization of SQR9 compared with controls. Root secretion of citric acid (2.3 ± 0.2 μM) and fumaric acid (5.7 ± 0.5 μM) was enhanced in FOC-infected cucumber plants. Bacillus amyloliquefaciens SQR9 exhibited enhanced chemotaxis to root exudates of FOC-infected cucumber seedlings. Further experiments demonstrated that citric acid acts as a chemoattractant and fumaric acid as a stimulator of biofilm formation in this process. These results suggest that root exudates mediate the interaction of cucumber root and rhizosphere strain B. amyloliquefaciens SQR9 and enhance its root colonization. © 2014 Federation of European Microbiological Societies. Published by John Wiley & Sons Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sośnica, Krzysztof; Prange, Lars; Kaźmierski, Kamil; Bury, Grzegorz; Drożdżewski, Mateusz; Zajdel, Radosław; Hadas, Tomasz
2018-02-01
The space segment of the European Global Navigation Satellite System (GNSS) Galileo consists of In-Orbit Validation (IOV) and Full Operational Capability (FOC) spacecraft. The first pair of FOC satellites was launched into an incorrect, highly eccentric orbital plane with a lower than nominal inclination angle. All Galileo satellites are equipped with satellite laser ranging (SLR) retroreflectors which allow, for example, for the assessment of the orbit quality or for the SLR-GNSS co-location in space. The number of SLR observations to Galileo satellites has been continuously increasing thanks to a series of intensive campaigns devoted to SLR tracking of GNSS satellites initiated by the International Laser Ranging Service. This paper assesses systematic effects and quality of Galileo orbits using SLR data with a main focus on Galileo satellites launched into incorrect orbits. We compare the SLR observations with respect to microwave-based Galileo orbits generated by the Center for Orbit Determination in Europe (CODE) in the framework of the International GNSS Service Multi-GNSS Experiment for the period 2014.0-2016.5. We analyze the SLR signature effect, which is characterized by the dependency of SLR residuals with respect to various incidence angles of laser beams for stations equipped with single-photon and multi-photon detectors. Surprisingly, the CODE orbit quality of satellites in the incorrect orbital planes is not worse than that of nominal FOC and IOV orbits. The RMS of SLR residuals is even lower by 5.0 and 1.5 mm for satellites in the incorrect orbital planes than for FOC and IOV satellites, respectively. The mean SLR offsets equal -44.9, -35.0, and -22.4 mm for IOV, FOC, and satellites in the incorrect orbital plane. Finally, we found that the empirical orbit models, which were originally designed for precise orbit determination of GNSS satellites in circular orbits, provide fully appropriate results also for highly eccentric orbits with variable linear and angular velocities.
Lemoine, S; Zhu, L; Buléon, C; Massetti, M; Gérard, J-L; Galera, P; Hanouz, J-L
2011-10-01
Desflurane triggers post-conditioning in the diabetic human myocardium. We determined whether protein kinase C (PKC), mitochondrial adenosine triphosphate-sensitive potassium (mitoK(ATP)) channels, Akt, and glycogen synthase kinase-3β (GSK-3β) were involved in the in vitro desflurane-induced post-conditioning of human myocardium from patients with type 2 diabetes. The isometric force of contraction (FoC) of human right atrial trabeculae obtained from patients with type 2 diabetes was recorded during 30 min of hypoxia followed by 60 min of reoxygenation. Desflurane (6%) was administered during the first 5 min of reoxygenation either alone or in the presence of calphostin C (PKC inhibitor) or 5-hydroxydecanoate (5-HD) (mitoK(ATP) channel antagonist). Phorbol 12-myristate 13-acetate (PKC activator) and diazoxide (a mitoK(ATP) channel opener) were superfused during early reoxygenation. The FoC at the end of the 60 min reoxygenation period was compared among treatment groups (FoC(60); mean and sd). The phosphorylation of Akt and GSK-3β was studied using western blotting. Desflurane enhanced the recovery of force [FoC(60): 79 (3)% of baseline] after 60 min of reoxygenation when compared with the control group (P<0.0001). Calphostin C and 5-HD abolished the beneficial effect of desflurane-induced post-conditioning (both P<0.0001). Phorbol 12-myristate 13-acetate and diazoxide enhanced the FoC(60) when compared with the control group (both P<0.0001). Desflurane increased the level of phosphorylation of Akt and GSK-3β (P<0.0001). Desflurane-induced post-conditioning in human myocardium from patients with type 2 diabetes was mediated by the activation of PKC, the opening of the mitoK(ATP) channels, and the phosphorylation of Akt and GSK-3β.
Inotropic effects of diadenosine tetraphosphate (AP4A) in human and animal cardiac preparations.
Vahlensieck, U; Bokník, P; Gombosová, I; Huke, S; Knapp, J; Linck, B; Lüss, H; Müller, F U; Neumann, J; Deng, M C; Scheld, H H; Jankowski, H; Schlüter, H; Zidek, W; Zimmermann, N; Schmitz, W
1999-02-01
Diadenosine tetraphosphate (AP4A) is an endogenous compound and exerts diverse physiological effects in animal systems. However, the effects of AP4A on inotropy in ventricular cardiac preparations have not yet been studied. The effects of AP4A on force of contraction (FOC) were studied in isolated electrically driven guinea pig and human cardiac preparations. Furthermore, the effects of AP4A on L-type calcium current and [Ca]i were studied in isolated guinea pig ventricular myocytes. In guinea pig left atria, AP4A (0.1-100 microM) reduced FOC maximally by 36.5 +/- 4.3%. In guinea pig papillary muscles, AP4A (100 microM) alone was ineffective, but reduced isoproterenol-stimulated FOC maximally by 29.3 +/- 3.4%. The negative inotropic effects of AP4A in atria and papillary muscles were abolished by the A1-adenosine receptor antagonist 1,3-dipropyl-cyclopentylxanthine. In guinea pig ventricular myocytes, AP4A (100 microM) attenuated isoproterenol-stimulated L-type calcium current and [Ca]i. In human atrial and ventricular preparations, AP4A (100 microM) alone increased FOC to 158.3 +/- 12.4% and 167.5 +/- 25.1%, respectively. These positive inotropic effects were abolished by the P2-purinoceptor antagonist suramin. On the other hand, AP4A (100 microM) reduced FOC by 27.2 +/- 7.4% in isoproterenol-stimulated human ventricular trabeculae. The latter effect was abolished by 1,3-dipropyl-cyclopentylxanthine. In summary, after beta adrenergic stimulation AP4A exerts negative inotropic effects in animal and human ventricular preparations via stimulation of A1-adenosine receptors. In contrast, AP4A alone can exert positive inotropic effects via P2-purinoceptors in human ventricular myocardium. Thus, P2-purinoceptor stimulation might be a new positive inotropic principle in the human myocardium.
DISCOVERY OF A DARK AURORAL OVAL ON SATURN
NASA Technical Reports Server (NTRS)
2002-01-01
The ultraviolet image was obtained by the NASA/ESA Hubble Space Telescope with the European Faint Object Camera (FOC) in June 1992. It represents the sunlight reflected by the planet in the near UV (220 nm). * The image reveals a dark oval encircling the north magnetic pole of Saturn. This auroral oval is the first ever observed for Saturn, and its darkness is unique in the solar system (L. Ben-Jaffel, V. Leers, B. Sandel, Science, Vol. 269, p. 951, August 18, 1995). The structure represents an excess of absorption of the sunlight at 220 nm by atmospheric particles that are the product of the auroral activity itself. The large tilt of the northern pole of Saturn at the time of observation, and the almost perfect symmetry of the planet's magnetic field, made this observation unique as even the far side of the dark oval across the pole is visible! * Auroral activity is usually characterized by light emitted around the poles. The dark oval observed for Saturn is a STUNNING VISUAL PROOF that transport of energy and charged particles from the magnetosphere to the atmosphere of the planet at high latitudes induces an auroral activity that not only produces auroral LIGHT but also UV-DARK material near the poles: auroral electrons are probably initiating hydrocarbon polymer formation in these regions. Credits: L. Ben Jaffel, Institut d'Astrophysique de Paris-CNRS, France, B. Sandel (Univ. of Arizona), NASA/ESA, and Science (magazine).
Status and prospect of the Swiss continuous Cs fountain FoCS-2
NASA Astrophysics Data System (ADS)
Jallageas, A.; Devenoges, L.; Petersen, M.; Morel, J.; Bernier, L.-G.; Thomann, P.; Südmeyer, T.
2016-06-01
The continuous cesium fountain clock FoCS-2 at METAS presents many unique characteristics and challenges in comparison with standard pulsed fountain clocks. For several years FoCS-2 was limited by an unexplained frequency sensitivity to the velocity of the atoms, in the range of 140 × 10^-15. Recent experiments allowed us to identify the origin of this problem as undesirable microwave surface currents circulating on the shield of the coaxial cables that feed the microwave cavity. A strong reduction of this effect was obtained by adding microwave absorbing coatings on the coaxial cables and absorbers inside of the vacuum chamber. This breakthrough opens the door to a true metrological validation of the fountain. A series of simulation tools have already been developed and proved their efficiency in the evaluation of some of the uncertainties of the continuous fountain. With these recent improvements, we are confident in the future demonstration of an uncertainty budget at the 10^-15 level and below.
Lucas, Aaron S; Swecker, William S; Lindsay, David S; Scaglia, Guillermo; Neel, James P S; Elvinger, Francois C; Zajac, Anne M
2014-05-28
There is little information available on the species dynamics of eimerian parasites in grazing cattle in the central Appalachian region of the United States. Therefore, the objective of this study was to describe the level of infection and species dynamics of Eimeria spp. in grazing beef cattle of various age groups over the course of a year in the central Appalachian region. Rectal fecal samples were collected from male and female calves (n=72) monthly from May through October 2005, heifers only (n=36) monthly from November 2005 to April 2006, and cows (n=72) in May, July, and September, 2005. Eimeria spp. oocysts were seen in 399 of 414 (96%) fecal samples collected from the calves from May through October. Fecal oocyst counts (FOC) in the calves were lower (P<0.05) in May than in all other months and no significant differences were detected from June through September. Eimeria spp. oocysts were detected in 198 of 213 (92%) fecal samples collected from the 36 replacement heifers monthly from November to April and monthly mean FOC did not differ during this time period. The prevalence of oocyst shedding increased to 100% in calves in September and remained near 100% in the replacement heifers during the sampling period. Eimeria spp. oocysts were also detected in 150 of 200 (75%) samples collected in May, July, and September from the cows and mean FOC did not differ significantly over the sampling period. Eimeria spp. composition was dominated by Eimeria bovis in fecal samples collected from calves, replacement heifers and cows. Mixed Eimeria spp. infections were, however, common in all groups and 13 Eimeria spp. oocysts were identified throughout the sampling period. Copyright © 2014 Elsevier B.V. All rights reserved.
Ramu, Venkatesh; Venkatarangaiah, Krishna; Krishnappa, Pradeepa; Shimoga Rajanna, Santosh Kumar; Deeplanaik, Nagaraja; Chandra Pal, Anup; Kini, Kukkundoor Ramachandra
2016-02-24
Panama wilt caused by Fusarium oxysporum f. sp. cubense (Foc) is one of the major disease constraints of banana production. Previously, we reported disease-resistant Musa paradisiaca cv. Puttabale clones developed using ethyl methanesulfonate and Foc culture filtrate selection against Foc inoculation. Here, the same resistant clones and susceptible clones were used to study protein accumulation after Foc inoculation by two-dimensional gel electrophoresis (2-DE), together with expression analysis and an in silico approach. Mass spectrometry identified 16 proteins that were over-accumulated and 5 proteins that were under-accumulated compared with the control. The polyphosphoinositide binding protein ssh2p (PBPssh2p) and an indoleacetic acid-induced-like (IAA) protein showed significant up-regulation and down-regulation, respectively. Docking of the pathogenesis-related protein (PR) with the fungal protein endopolygalacturonase (PG) revealed three ionic interactions and seven hydrophobic residues that contribute to good interaction at the active site of PG, with a free energy of assembly dissociation of 1.5 kcal/mol. Protein-ligand docking of the peptide methionine sulfoxide reductase chloroplastic-like protein (PMSRc) with the ligand β-1,3-glucan showed a minimum binding energy of -6.48 kcal/mol and a docking energy of -8.2 kcal/mol, with an interaction of nine amino-acid residues. These findings should accelerate the design of host-pathogen interaction studies for better disease management.
Barlas, Zeynep; Hockley, William E; Obhi, Sukhvinder S
2017-10-01
Previous research showed that increasing the number of action alternatives enhances the sense of agency (SoA). Here, we investigated whether choice space could affect subjective judgments of mental effort experienced during action selection and examined the link between subjective effort and the SoA. Participants performed freely selected (among two, three, or four options) and instructed actions that produced pleasant or unpleasant tones. We obtained action-effect interval estimates to quantify intentional binding (the perceived compression of the interval between actions and outcomes), together with feeling of control (FoC) ratings. Additionally, participants reported the degree of mental effort they experienced during action selection. We found that both binding and FoC were systematically enhanced with increasing choice-level. Outcome valence did not influence binding, while FoC was stronger for pleasant than unpleasant outcomes. Finally, freely chosen actions were associated with low subjective effort and slow responses (i.e., higher reaction times), and instructed actions were associated with high effort and fast responses. Although the conditions that yielded the greatest and least subjective effort also yielded the greatest and least binding and FoC, there was no significant correlation between subjective effort and SoA measures. Overall, our results raise interesting questions about how agency may be influenced by response selection demands (i.e., indexed by speed of responding) and subjective mental effort. Our work also highlights the importance of understanding how subjective mental effort and response speed are related to popular notions of fluency in response selection. Copyright © 2017 Elsevier B.V. All rights reserved.
Measuring Positions of Objects using Two or More Cameras
NASA Technical Reports Server (NTRS)
Klinko, Steve; Lane, John; Nelson, Christopher
2008-01-01
An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
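To make the affine approximation concrete, here is a minimal numpy sketch (not the authors' code) that recovers a 3D position from two affine cameras by stacking the linear projection equations and solving them in a least-squares sense. The camera matrices and the 3D point are invented for illustration; in the method described above, the corresponding parameters come from matching CAD models to the digitized images.

    import numpy as np

    # Assumed affine cameras: image point x = M @ X + t, with a 2x3 matrix M and 2-vector t each.
    M1 = np.array([[800.0, 0.0, 0.0],
                   [0.0, 800.0, 0.0]])
    t1 = np.array([320.0, 240.0])
    M2 = np.array([[0.0, 800.0, 0.0],
                   [0.0, 0.0, 800.0]])
    t2 = np.array([320.0, 240.0])

    X_true = np.array([0.10, -0.05, 0.20])   # hypothetical 3D position of a debris centroid
    x1 = M1 @ X_true + t1                    # its image in camera 1
    x2 = M2 @ X_true + t2                    # its image in camera 2

    # Stack the four linear equations from both cameras and solve for X by least squares.
    A = np.vstack([M1, M2])
    b = np.concatenate([x1 - t1, x2 - t2])
    X_est, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(X_est)                             # recovers X_true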
HUBBLE UNVEILS A GALAXY IN LIVING COLOR
NASA Technical Reports Server (NTRS)
2002-01-01
In this view of the center of the magnificent barred spiral galaxy NGC 1512, NASA Hubble Space Telescope's broad spectral vision reveals the galaxy at all wavelengths from ultraviolet to infrared. The colors (which indicate differences in light intensity) map where newly born star clusters exist in both 'dusty' and 'clean' regions of the galaxy. This color-composite image was created from seven images taken with three different Hubble cameras: the Faint Object Camera (FOC), the Wide Field and Planetary Camera 2 (WFPC2), and the Near Infrared Camera and Multi-Object Spectrometer (NICMOS). NGC 1512 is a barred spiral galaxy in the southern constellation of Horologium. Located 30 million light-years away, relatively 'nearby' as galaxies go, it is bright enough to be seen with amateur telescopes. The galaxy spans 70,000 light-years, nearly as much as our own Milky Way galaxy. The galaxy's core is unique for its stunning 2,400 light-year-wide circle of infant star clusters, called a 'circumnuclear' starburst ring. Starbursts are episodes of vigorous formation of new stars and are found in a variety of galaxy environments. Taking advantage of Hubble's sharp vision, as well as its unique wavelength coverage, a team of Israeli and American astronomers performed one of the broadest and most detailed studies ever of such star-forming regions. The results, which will be published in the June issue of the Astronomical Journal, show that in NGC 1512 newly born star clusters exist in both dusty and clean environments. The clean clusters are readily seen in ultraviolet and visible light, appearing as bright, blue clumps in the image. However, the dusty clusters are revealed only by the glow of the gas clouds in which they are hidden, as detected in red and infrared wavelengths by the Hubble cameras. This glow can be seen as red light permeating the dark, dusty lanes in the ring. 'The dust obscuration of clusters appears to be an on-off phenomenon,' says Dan Maoz, who headed the collaboration. 'The clusters are either completely hidden, enshrouded in their birth clouds, or almost completely exposed.' The scientists believe that stellar winds and powerful radiation from the bright, newly born stars have cleared away the original natal dust cloud in a fast and efficient 'cleansing' process. Aaron Barth, a co-investigator on the team, adds: 'It is remarkable how similar the properties of this starburst are to those of other nearby starbursts that have been studied in detail with Hubble.' This similarity gives the astronomers the hope that, by understanding the processes occurring in nearby galaxies, they can better interpret observations of very distant and faint starburst galaxies. Such distant galaxies formed the first generations of stars, when the universe was a fraction of its current age. Circumnuclear star-forming rings are common in the universe. Such rings within barred spiral galaxies may in fact comprise the most numerous class of nearby starburst regions. Astronomers generally believe that the giant bar funnels the gas to the inner ring, where stars are formed within numerous star clusters. Studies like this one emphasize the need to observe at many different wavelengths to get the full picture of the processes taking place.
NASA Astrophysics Data System (ADS)
Gauthier, L. R.; Jansen, M. E.; Meyer, J. R.
2014-09-01
Camera motion is a potential problem when a video camera is used to perform dynamic displacement measurements. If the scene camera moves at the wrong time, the apparent motion of the object under study can easily be confused with the real motion of the object. In some cases, it is practically impossible to prevent camera motion, as for instance, when a camera is used outdoors in windy conditions. A method to address this challenge is described that provides an objective means to measure the displacement of an object of interest in the scene, even when the camera itself is moving in an unpredictable fashion at the same time. The main idea is to synchronously measure the motion of the camera and to use those data ex post facto to subtract out the apparent motion in the scene that is caused by the camera motion. The motion of the scene camera is measured by using a reference camera that is rigidly attached to the scene camera and oriented towards a stationary reference object. For instance, this reference object may be on the ground, which is known to be stationary. It is necessary to calibrate the reference camera by simultaneously measuring the scene images and the reference images at times when it is known that the scene object is stationary and the camera is moving. These data are used to map camera movement data to apparent scene movement data in pixel space and subsequently used to remove the camera movement from the scene measurements.
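A small numerical sketch of the subtraction idea described above (not the authors' implementation): during a calibration interval in which the scene object is known to be stationary, a linear map from reference-camera pixel motion to apparent scene motion is fitted by least squares; it is then used to remove the camera-induced component from later scene measurements. All displacement values below are synthetic.

    import numpy as np

    rng = np.random.default_rng(0)

    # Calibration phase: the scene object is stationary, so all apparent scene motion is camera-induced.
    ref_cal = rng.normal(size=(200, 2))                     # reference-camera pixel displacements
    A_true = np.array([[1.8, 0.1], [-0.05, 2.1]])           # unknown mapping to be recovered
    scene_cal = ref_cal @ A_true.T + rng.normal(scale=0.02, size=(200, 2))

    # Fit the linear map from reference motion to apparent scene motion.
    A_fit, *_ = np.linalg.lstsq(ref_cal, scene_cal, rcond=None)

    # Measurement phase: subtract the predicted camera-induced motion from the raw scene track.
    scene_meas = np.array([[3.0, -1.0]])                    # raw displacement of the object of interest
    ref_meas = np.array([[1.0, 0.5]])                       # simultaneous reference-camera displacement
    corrected = scene_meas - ref_meas @ A_fit
    print(corrected)                                        # displacement attributable to the object itself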
Object tracking using multiple camera video streams
NASA Astrophysics Data System (ADS)
Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford
2010-05-01
Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system involves overcoming effects of occlusions that could result in an object in partial or full view in one camera, when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time in each and the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.
Sun, Xuepiao; Zheng, Peng; Zhang, Jiaming
2015-01-01
Banana Fusarium wilt (also known as Panama disease) is one of the most disastrous plant diseases. Effective control methods are still being explored. The endophytic bacterial strain ITBB B5-1 was isolated from the rubber tree, and identified as Serratia marcescens by morphological, biochemical, and phylogenetic analyses. This strain exhibited a high potential for biological control against the banana Fusarium disease. Visual agar plate assay showed that ITBB B5-1 restricted the mycelial growth of the pathogenic fungus Fusarium oxysporum f. sp. cubense race 4 (FOC4). Microscopic observation revealed that the cell wall of the FOC4 mycelium close to the co-cultured bacterium was partially decomposed, and conidial formation was prohibited. The inhibition ratio of the culture fluid of ITBB B5-1 against the pathogenic fungus was 95.4% as estimated by tip culture assay. Chitinase and glucanase activity was detected in the culture fluid, and the highest activity was obtained at Day 2 and Day 3 of incubation for chitinase and glucanase, respectively. The filtered cell-free culture fluid degraded the cell wall of FOC4 mycelium. These results indicated that chitinase and glucanase were involved in the antifungal mechanism of ITBB B5-1. The potted banana plants that were inoculated with ITBB B5-1 before infection with FOC4 showed a 78.7% reduction in the disease severity index in greenhouse experiments. In the field trials, ITBB B5-1 showed a control effect of approximately 70.0% against the disease. Therefore, the endophytic bacterial strain ITBB B5-1 could be applied in the biological control of banana Fusarium wilt. PMID:26133557
The partitioning and modelling of pesticide parathion in a surfactant-assisted soil-washing system.
Chu, W; Chan, K H; Choy, W K
2006-07-01
Soil sorption of organic pollutants has long been problematic in the soil washing process because of their persistence and low water solubility. This paper discusses the soil washing phenomena over a wide range of parathion concentrations and several soil samples at various fraction of organic content (foc) levels. When the parathion dosage is set below the water solubility, washing performance is stable for surfactant concentrations above the critical micelle concentration (cmc), and it is observed that more than 90% of the parathion can be washed out when the dosage is five times lower than the solubility limit. However, such trends change when non-aqueous phase liquid (NAPL) is present in the system. Parathion extraction depends very much on the surfactant dosage but is not affected by the level of foc in the system. Between the extreme parathion dosages, a two-stage pattern is observed in these boundary regions. Washing performance first increases with additional surfactant, but the increase gradually slows down since the sorption sites are believed to be saturated by the large amount of surfactant in the system. A mathematical model incorporating foc was developed to demonstrate this behavior and can be used to predict extraction.
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
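As a toy illustration of the normalization step (a sketch under assumed per-camera scale factors, not the paper's algorithm), the snippet below converts camera-specific foot-to-head pixel heights into a camera-independent metric height, so that detections of the same person from heterogeneous cameras yield comparable metadata.

    from dataclasses import dataclass

    @dataclass
    class Detection:
        camera_id: str
        foot_px: tuple     # image coordinates of the foot point
        head_px: tuple     # image coordinates of the head point

    # Assumed metres-per-pixel along the foot-to-head direction at the object's location,
    # as would be produced by the automatic scene calibration step.
    SCALE_M_PER_PX = {"cam_A": 0.0105, "cam_B": 0.0230}

    def normalized_height(det: Detection) -> float:
        """Convert a camera-specific pixel height into a camera-independent metric height."""
        dx = det.head_px[0] - det.foot_px[0]
        dy = det.head_px[1] - det.foot_px[1]
        pixel_height = (dx * dx + dy * dy) ** 0.5
        return pixel_height * SCALE_M_PER_PX[det.camera_id]

    d1 = Detection("cam_A", (410, 620), (405, 455))
    d2 = Detection("cam_B", (120, 300), (118, 225))
    print(round(normalized_height(d1), 2), round(normalized_height(d2), 2))   # comparable heights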
Software-as-a-Service Vendors: Are They Ready to Successfully Deliver?
NASA Astrophysics Data System (ADS)
Heart, Tsipi; Tsur, Noa Shamir; Pliskin, Nava
Software as a service (SaaS) is a software sourcing option that allows organizations to remotely access enterprise applications, without having to install the application in-house. In this work we study vendors' readiness to deliver SaaS, a topic scarcely studied before. Innovation classification (evolutionary vs. revolutionary) and a new Seven Fundamental Organizational Capabilities (FOCs) model are used as the theoretical frameworks. The Seven FOCs model suggests a generic yet comprehensive set of capabilities that are required for organizational success: 1) sensing the stakeholders, 2) sensing the business environment, 3) sensing the knowledge environment, 4) process control, 5) process improvement, 6) new process development, and 7) appropriate resolution.
Object recognition through turbulence with a modified plenoptic camera
NASA Astrophysics Data System (ADS)
Wu, Chensheng; Ko, Jonathan; Davis, Christopher
2015-03-01
Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
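For context, a common building block in such pipelines is "lucky image" selection, i.e. picking the least turbulence-degraded frame according to a sharpness metric. The sketch below uses the variance of a discrete Laplacian on synthetic frames; it is a generic illustration only, not the plenoptic reconstruction described above.

    import numpy as np

    def sharpness(frame):
        """Variance of a discrete Laplacian: higher values indicate a less blurred frame."""
        lap = (-4.0 * frame[1:-1, 1:-1]
               + frame[:-2, 1:-1] + frame[2:, 1:-1]
               + frame[1:-1, :-2] + frame[1:-1, 2:])
        return lap.var()

    def pick_lucky(frames):
        """Return the sharpest frame from a stack of registered frames or sub-images."""
        return max(frames, key=sharpness)

    rng = np.random.default_rng(1)
    stack = [rng.random((64, 64)) for _ in range(10)]   # stand-in for registered sub-aperture images
    best = pick_lucky(stack)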
Li, Erfeng; Ling, Jian; Wang, Gang; Xiao, Jiling; Yang, Yuhong; Mao, Zhenchuan; Wang, Xuchu; Xie, Bingyan
2015-01-01
Fusarium oxysporum is a soil-inhabiting fungus that induces vascular wilt and root rot in a variety of plants. F. oxysporum f. sp. conglutinans (Foc), which comprises two races, can cause wilt disease in cabbage. Compared with race 1 (52557-TM, R1), race 2 (58385-TM, R2) exhibits much stronger pathogenicity. Here, we provide the first proteome reference maps for Foc mycelium and conidia and identify 145 proteins with different abundances between the two races. Of these, most of the high-abundance proteins in the R2 mycelium and conidia are involved in carbohydrate, amino acid and ion metabolism, which indicates that these proteins may play important roles in isolate R2's stronger pathogenicity. The expression levels of 20 representative genes showed altered patterns consistent with the proteomic analysis. The protein glucanosyltransferase, which is involved in carbohydrate metabolism, was selected for further study. We knocked out the corresponding gene (gas1) and found that Foc-Δgas1 showed a significantly reduced growth rate and virulence compared with wild-type isolates. These results deepen our understanding of the proteins related to F. oxysporum pathogenicity in cabbage Fusarium wilt and provide new opportunities to control this disease. PMID:26333982
Jung, Jaehoon; Yoon, Inhye; Paik, Joonki
2016-01-01
This paper presents an object occlusion detection algorithm using object depth information that is estimated by automatic camera calibration. The object occlusion problem is a major factor degrading the performance of object tracking and recognition. To detect an object occlusion, the proposed algorithm consists of three steps: (i) automatic camera calibration using both moving objects and a background structure; (ii) object depth estimation; and (iii) detection of occluded regions. The proposed algorithm estimates the depth of the object without extra sensors, using only a generic red, green and blue (RGB) camera. As a result, the proposed algorithm can be applied to improve the performance of object tracking and object recognition algorithms for video surveillance systems. PMID:27347978
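As a rough illustration of step (iii) above: once per-object depths are available (however estimated), occluded regions can be flagged wherever a nearer object's bounding box overlaps a farther one. This is only a hedged sketch of that idea; the bounding-box representation and the occluded_regions helper are assumptions, not the paper's algorithm.

```python
def occluded_regions(detections):
    """detections: list of (bbox, depth_m) with bbox = (x1, y1, x2, y2).
    Returns, for each object, the image regions where a nearer object overlaps it."""
    def overlap(a, b):
        x1, y1 = max(a[0], b[0]), max(a[1], b[1])
        x2, y2 = min(a[2], b[2]), min(a[3], b[3])
        return (x1, y1, x2, y2) if x1 < x2 and y1 < y2 else None

    result = []
    for i, (box_i, depth_i) in enumerate(detections):
        occluders = []
        for j, (box_j, depth_j) in enumerate(detections):
            if j != i and depth_j < depth_i:        # j is closer to the camera
                region = overlap(box_i, box_j)
                if region is not None:
                    occluders.append(region)
        result.append(occluders)
    return result
```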
Imaging the nuclear environment of NGC 1365 with the Hubble Space Telescope
NASA Astrophysics Data System (ADS)
Kristen, Helmuth; Jorsater, Steven; Lindblad, Per Olof; Boksenberg, Alec
1997-12-01
The region surrounding the active nucleus of the barred spiral galaxy NGC 1365 is observed in the [O III] λ5007 line and neighbouring continuum using the Faint Object Camera (FOC) aboard the Hubble Space Telescope (HST). In the continuum light numerous bright "super star clusters" (SSCs) are seen in the nuclear region. They tend to fall on an elongated ring around the nucleus and contribute about 20% of the total continuum flux in this wavelength regime. Without applying any extinction correction the brightest SSCs have an absolute luminosity M_B = -14.1 ± 0.3 mag and are very compact with radii R ≲ 3 pc. Complementary ground-based spectroscopy gives an extinction estimate A_B = 2.5 ± 0.5 mag towards these regions, indicating a true luminosity M_B = -16.6 ± 0.6 mag. The bright compact radio source NGC 1365:A is found to coincide spatially with one of the SSCs. We conclude that it is a "radio supernova". The HST observations resolve the inner structure of the conical outflow previously seen in the [O III] λ5007 line in ground-based observations, and reveal a complicated structure of individual emission-line clouds, some of which gather in larger agglomerations. The total luminosity in the [O III] line amounts to L_[O III] ≈ 3.7 × 10^40 erg s^-1, of which about 40% is emitted by the clouds. Based on observations with the NASA/ESA Hubble Space Telescope, obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Inc., under NASA contract NAS5-26555, and observations at the European Southern Observatory (ESO), La Silla, Chile.
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application with a pair of low-cost portable cameras with different parameters, such as those found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1990-01-01
Improved techniques are provided for alignment of two objects. The present invention is particularly suited for three-dimensional translation and three-dimensional rotational alignment of objects in outer space. A camera 18 is fixedly mounted to one object, such as a remote manipulator arm 10 of the spacecraft, while the planar reflective surface 30 is fixed to the other object, such as a grapple fixture 20. A monitor 50 displays in real-time images from the camera, such that the monitor displays both the reflected image of the camera and visible markings on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm 10 manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
Improved docking alignment system
NASA Technical Reports Server (NTRS)
Monford, Leo G. (Inventor)
1988-01-01
Improved techniques are provided for the alignment of two objects. The present invention is particularly suited for 3-D translation and 3-D rotational alignment of objects in outer space. A camera is affixed to one object, such as a remote manipulator arm of the spacecraft, while the planar reflective surface is affixed to the other object, such as a grapple fixture. A monitor displays in real-time images from the camera such that the monitor displays both the reflected image of the camera and visible marking on the planar reflective surface when the objects are in proper alignment. The monitor may thus be viewed by the operator and the arm manipulated so that the reflective surface is perpendicular to the optical axis of the camera, the roll of the reflective surface is at a selected angle with respect to the camera, and the camera is spaced a pre-selected distance from the reflective surface.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) the optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan/tilt angles of capture, face visibility and others. Such an objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
Image system for three dimensional, 360{degree}, time sequence surface mapping of moving objects
Lu, S.Y.
1998-12-22
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
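The reconstruction step described in the patent abstract above ultimately reduces to triangulating matched line/epipolar-row intersections from two calibrated views. The snippet below is a generic linear (DLT) triangulation sketch for one such correspondence, assuming known 3x4 projection matrices; it is not the patented implementation.

```python
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3-D point from two calibrated views.

    P1, P2 : 3x4 camera projection matrices (from the pre-calibration).
    x1, x2 : (u, v) pixel coordinates of the same stripe/epipolar-row
             intersection seen in each camera.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)     # homogeneous least-squares solution
    X = vt[-1]
    return X[:3] / X[3]
```

In structured-light setups more generally, a calibrated projector can play the role of one of the two views with the same algebra; the abstract's two-camera arrangement simply uses two real cameras.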
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yokoyama, Hideshi, E-mail: h-yokoya@u-shizuoka-ken.ac.jp; Tsuruta, Osamu; Akao, Naoya
2012-06-15
Highlights: • Structures of a metal-bound Helicobacter pylori neutrophil-activating protein were determined. • Two zinc ions were tetrahedrally coordinated by ferroxidase center (FOC) residues. • Two cadmium ions were coordinated in a trigonal-bipyramidal and octahedral manner. • The second metal ion was more weakly coordinated than the first at the FOC. • A zinc ion was found in one negatively-charged pore suitable as an ion path. -- Abstract: Helicobacter pylori neutrophil-activating protein (HP-NAP) is a Dps-like iron storage protein forming a dodecameric shell, and promotes adhesion of neutrophils to endothelial cells. The crystal structure of HP-NAP in a Zn(2+)- or Cd(2+)-bound form reveals the binding of two zinc or two cadmium ions and their bridged water molecule at the ferroxidase center (FOC). The two zinc ions are coordinated in a tetrahedral manner to the conserved residues among HP-NAP and Dps proteins. The two cadmium ions are coordinated in a trigonal-bipyramidal and distorted octahedral manner. In both structures, the second ion is more weakly coordinated than the first. Another zinc ion is found inside of the negatively-charged threefold-related pore, which is suitable for metal ions to pass through.
Optimization of preparation of NDV F Gene encapsulated in N-2-HACC-CMC nanoparticles
NASA Astrophysics Data System (ADS)
Li, S. S.; Zhang, Y.; Zhao, K.; Wang, X. H.
2018-01-01
In this study, the biodegradable materials N-2-hydroxypropyl trimethyl ammonium chloride chitosan (N-2-HACC) and N,O-carboxymethyl chitosan (CMC) are used as the delivery carrier for pVAX I-F(o)-C3d6. The optimal preparation condition is as follows: the concentration of N-2-HACC is 1.0 mg/ml, the concentration of CMC is 0.85 mg/ml, and the concentration of pVAX I-F(o)-C3d6 is 100 μg/ml. The results show that the prepared N-2-HACC-CMC/pFDNA NPs have a regular round shape, smooth surface and good dispersion; the particle size is 310 nm, the Zeta potential is 50 mV, the entrapment efficiency is 92%, and the loading capacity is 51% (n=3).
Image system for three dimensional, 360 DEGREE, time sequence surface mapping of moving objects
Lu, Shin-Yee
1998-01-01
A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding have been widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease with which aspherical surfaces can be exploited. For digital camera and camcorder objectives, it is desirable that there is no image point variation with temperature change despite the use of several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially. Therefore, for mobile phone cameras, the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, utilizing the advantage of a plastic lens whose outer flange part can be given a mechanically functional shape. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by high-precision optical elements. Therefore, this camera module is manufactured without optical adjustment on an automatic assembly line, and achieves both high productivity and high performance. Reported here are the constructions and the technical topics of the image-taking objectives described above.
Moving Object Detection on a Vehicle Mounted Back-Up Camera
Kim, Dong-Sun; Kwon, Jinsan
2015-01-01
In the detection of moving objects from vision sources one usually assumes that the scene has been captured by stationary cameras. In the case of backing up a vehicle, however, the camera mounted on the vehicle moves according to the vehicle's movement, resulting in ego-motion in the background. This results in mixed motion in the scene, and makes it difficult to distinguish between the target objects and background motions. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will lead to many false-positive detection results. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction, by introducing additional steps before and after the detection. We also describe the implementation on an FPGA platform along with the algorithm. The target application of this suggestion is a road vehicle's rear-view camera system. PMID:26712761
Non-contact measurement of rotation angle with solo camera
NASA Astrophysics Data System (ADS)
Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun
2015-02-01
For the purpose of measuring the rotation angle of an object around its axis, a non-contact rotation angle measurement method based on a single camera is proposed. The intrinsic parameters of the camera were calibrated using a chessboard, following plane-based calibration theory. The translation matrix and rotation matrix between the object coordinate frame and the camera coordinate frame were calculated according to the relationship between the corners' positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera can then be resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed +/- 0.01 degree.
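A minimal sketch of the single-camera pose pipeline described above, using OpenCV: detect chessboard corners, solve for the object-to-camera rotation with solvePnP, and read an angle off the rotation matrix. The board size, square size, and the choice of the camera z-axis as the rotation axis are assumptions for illustration, not the authors' exact setup.

```python
import cv2
import numpy as np

# Hypothetical board geometry; the paper does not specify these values.
PATTERN = (9, 6)        # inner corners per row/column
SQUARE_MM = 10.0        # chessboard square size

def board_object_points():
    objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM
    return objp

def rotation_angle_deg(image, K, dist):
    """Pose of a chessboard fixed to the object; returns its in-plane rotation
    about the camera z-axis in degrees, or None if the board is not found."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        return None
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    ok, rvec, tvec = cv2.solvePnP(board_object_points(), corners, K, dist)
    if not ok:
        return None
    R, _ = cv2.Rodrigues(rvec)                  # rotation matrix, object -> camera
    return np.degrees(np.arctan2(R[1, 0], R[0, 0]))
```

The measured rotation of the object between two frames would then be the difference of the returned angles, which is the quantity the PADT reference verifies.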
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2014-06-01
As is well known, the passive THz camera is a very promising tool for security problems: it allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. We demonstrate a new possibility of using the passive THz camera to observe a temperature difference on the human skin when this difference is caused by different temperatures inside the body. We discuss physical experiments in which a person drinks hot, warm, and cold water, and eats. After computer processing of images captured by the passive THz camera TS4, a pronounced temperature trace can be seen on the skin of the human body. To verify this claim, we performed a similar physical experiment using an IR camera. Our investigation broadens the field of application of the passive THz camera to the detection of objects concealed inside the human body, because the temperature difference between the object and the surrounding parts of the body is reflected on the skin. However, modern passive THz cameras do not have sufficient temperature resolution to see this difference; we therefore use computer processing to enhance the camera's effective resolution for this application. We consider images produced by passive THz cameras manufactured by Microsemi Corp. and ThruVision Corp.
Miniature self-contained vacuum compatible electronic imaging microscope
Naulleau, Patrick P.; Batson, Phillip J.; Denham, Paul E.; Jones, Michael S.
2001-01-01
A vacuum compatible CCD-based microscopic camera with an integrated illuminator. The camera can provide a video or still feed from the microscope contained within a vacuum chamber. Activation of an optional integral illuminator can provide light to illuminate the microscope subject. The microscope camera comprises a housing with an objective port, a modified objective, a beam-splitter, a CCD camera, and an LED illuminator.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2017-05-01
One of the urgent security problems is the detection of objects placed inside the human body. Obviously, for safety reasons X-rays cannot be used widely and often for such object detection. For this purpose, we propose to use a THz camera and an IR camera. Here we continue to investigate the possibility of using an IR camera to detect a temperature trace on a human body. In contrast to a passive THz camera, the IR camera does not allow the object under clothing to be seen very distinctly, which is a significant disadvantage for security applications based on IR imaging. To find ways to overcome this disadvantage, we performed experiments with an IR camera produced by FLIR and developed a novel approach for computer processing of the captured images. It allows us to increase the effective temperature resolution of the IR camera as well as the effective sensitivity of the human eye. As a consequence, it becomes possible to see changes in human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and we follow the temperature trace on the skin caused by temperature changes inside the body. Some experiments were made observing the temperature trace from objects placed behind thick overalls. The demonstrated results are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular viewing and a dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular viewing, and obtain the positional relation of prism, camera, and object that gives the best stereo display effect. Finally, using the active shutter stereo glasses of NVIDIA Company, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the method proposed in this paper can faithfully restore the 3-D shape of the object being photographed.
Bottenus, Nick; D’hooge, Jan; Trahey, Gregg E.
2017-01-01
The transverse oscillation (TO) technique can improve the estimation of tissue motion perpendicular to the ultrasound beam direction. TOs can be introduced using plane wave (PW) insonification and bi-lobed Gaussian apodisation (BA) on receive (abbreviated as PWTO). Furthermore, the TO frequency can be doubled after a heterodyning demodulation process is performed (abbreviated as PWTO*). This study is concerned with identifying the limitations of the PWTO technique in the specific context of myocardial deformation imaging with phased arrays and investigating the conditions in which it remains advantageous over traditional focused (FOC) beamforming. For this purpose, several tissue phantoms were simulated using Field II, undergoing a wide range of displacement magnitudes and modes (lateral, axial and rotational motion). The Cramer-Rao lower bound (CRLB) was used to optimize TO beamforming parameters and theoretically predict the fundamental tracking performance limits associated with the FOC, PWTO and PWTO* beamforming scenarios. This framework was extended to also predict performance for BA functions which are windowed by the physical aperture of the transducer, leading to higher lateral oscillations. It was found that windowed BA functions resulted in lower jitter errors compared to traditional BA functions. PWTO* outperformed FOC at all investigated SNR levels but only up to a certain displacement, with the advantage rapidly decreasing when SNR increased. These results suggest that PWTO* improves lateral tracking performance, but only when inter-frame displacements remain relatively low. The study concludes by translating these findings to a clinical environment by suggesting optimal scanner settings. PMID:27810806
Li, Min-Hui; Xie, Xiao-Ling; Lin, Xian-Feng; Shi, Jin-Xiu; Ding, Zhao-Jian; Ling, Jin-Feng; Xi, Ping-Gen; Zhou, Jia-Nuan; Leng, Yueqiang; Zhong, Shaobin; Jiang, Zi-De
2014-04-01
Fusarium oxysporum f. sp. cubense (FOC) is the causal agent of banana Fusarium wilt and has become one of the most destructive pathogens threatening banana production worldwide. However, few genes related to morphogenesis and pathogenicity of this fungal pathogen have been functionally characterized. In this study, we identified and characterized the disrupted gene in a T-DNA insertional mutant (L953) of FOC with significantly reduced virulence on banana plants. The gene disrupted by T-DNA insertion in L953 harbors an open reading frame, which encodes a protein with homology to α-1,6-mannosyltransferase (OCH1) in fungi. The deletion mutants (ΔFoOCH1) of the OCH1 orthologue (FoOCH1) in FOC were impaired in fungal growth, exhibited brighter staining with fluorescein isothiocyanate (FITC)-Concanavalin A, had fewer cell wall proteins and secreted more proteins into liquid media than the wild type. Furthermore, the mutation or deletion of FoOCH1 led to a loss of the ability to penetrate cellophane membranes and a decline in hyphal attachment and colonization as well as virulence to the banana host. The mutant phenotypes were fully restored by complementation with the wild-type FoOCH1 gene. Our data provide the first evidence for the critical role of FoOCH1 in maintenance of cell wall integrity and virulence of F. oxysporum f. sp. cubense. Copyright © 2014 Elsevier Inc. All rights reserved.
Holographic motion picture camera with Doppler shift compensation
NASA Technical Reports Server (NTRS)
Kurtz, R. L. (Inventor)
1976-01-01
A holographic motion picture camera is reported for producing three dimensional images by employing an elliptical optical system. There is provided in one of the beam paths (the object or reference beam path) a motion compensator which enables the camera to photograph faster moving objects.
A Comparison of Techniques for Camera Selection and Hand-Off in a Video Network
NASA Astrophysics Data System (ADS)
Li, Yiming; Bhanu, Bir
Video networks are becoming increasingly important for solving many real-world problems. Multiple video sensors require collaboration when performing various tasks. One of the most basic tasks is the tracking of objects, which requires mechanisms to select a camera for a certain object and hand-off this object from one camera to another so as to accomplish seamless tracking. In this chapter, we provide a comprehensive comparison of current and emerging camera selection and hand-off techniques. We consider geometry-, statistics-, and game theory-based approaches and provide both theoretical and experimental comparison using centralized and distributed computational models. We provide simulation and experimental results using real data for various scenarios of a large number of cameras and objects for in-depth understanding of strengths and weaknesses of these techniques.
Systems and methods for maintaining multiple objects within a camera field-of-view
Gans, Nicholas R.; Dixon, Warren
2016-03-15
In one embodiment, a system and method for maintaining objects within a camera field of view include identifying constraints to be enforced, each constraint relating to an attribute of the viewed objects, identifying a priority rank for the constraints such that more important constraints have a higher priority than less important constraints, and determining the set of solutions that satisfy the constraints relative to the order of their priority rank such that solutions that satisfy lower ranking constraints are only considered viable if they also satisfy any higher ranking constraints, each solution providing an indication as to how to control the camera to maintain the objects within the camera field of view.
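One way to read the prioritized-constraint logic in the abstract above is as a lexicographic filter over candidate camera commands. The sketch below assumes constraints are boolean predicates and that a lower-priority constraint is skipped if it would empty the viable set; both assumptions go beyond what the abstract states.

```python
from typing import Callable, Iterable, List, Sequence

# A "solution" is any candidate camera command (e.g., a pan/tilt/zoom tuple);
# a constraint is a predicate over a solution. Names here are illustrative only.
Constraint = Callable[[object], bool]

def filter_by_priority(solutions: Iterable[object],
                       ranked_constraints: Sequence[Constraint]) -> List[object]:
    """Keep only solutions consistent with the ranked constraints.

    Constraints are applied from highest to lowest priority (index 0 = highest).
    A lower-ranked constraint only prunes candidates that already satisfy every
    higher-ranked one, and is ignored if it would eliminate all of them.
    """
    viable = list(solutions)
    for constraint in ranked_constraints:
        narrowed = [s for s in viable if constraint(s)]
        if narrowed:                 # never let a low-priority rule empty the set
            viable = narrowed
    return viable
```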
NASA Astrophysics Data System (ADS)
Kadosh, Itai; Sarusi, Gabby
2017-10-01
The use of dual cameras in parallax in order to detect and create 3-D images in mobile devices has been increasing over the last few years. We propose a concept where the second camera will be operating in the short-wavelength infrared (SWIR-1300 to 1800 nm) and thus have night vision capability while preserving most of the other advantages of dual cameras in terms of depth and 3-D capabilities. In order to maintain commonality of the two cameras, we propose to attach to one of the cameras a SWIR to visible upconversion layer that will convert the SWIR image into a visible image. For this purpose, the fore optics (the objective lenses) should be redesigned for the SWIR spectral range and the additional upconversion layer, whose thickness is <1 μm. Such layer should be attached in close proximity to the mobile device visible range camera sensor (the CMOS sensor). This paper presents such a SWIR objective optical design and optimization that is formed and fit mechanically to the visible objective design but with different lenses in order to maintain the commonality and as a proof-of-concept. Such a SWIR objective design is very challenging since it requires mimicking the original visible mobile camera lenses' sizes and the mechanical housing, so we can adhere to the visible optical and mechanical design. We present in depth a feasibility study and the overall optical system performance of such a SWIR mobile-device camera fore optics design.
Phase-stepped fringe projection by rotation about the camera's perspective center.
Huddart, Y R; Valera, J D; Weston, N J; Featherstone, T C; Moore, A J
2011-09-12
A technique to produce phase steps in a fringe projection system for shape measurement is presented. Phase steps are produced by introducing relative rotation between the object and the fringe projection probe (comprising a projector and camera) about the camera's perspective center. Relative motion of the object in the camera image can be compensated, because it is independent of the distance of the object from the camera, whilst the phase of the projected fringes is stepped due to the motion of the projector with respect to the object. The technique was validated with a static fringe projection system by moving an object on a coordinate measuring machine (CMM). The alternative approach, of rotating a lightweight and robust CMM-mounted fringe projection probe, is discussed. An experimental accuracy of approximately 1.5% of the projected fringe pitch was achieved, limited by the standard phase-stepping algorithms used rather than by the accuracy of the phase steps produced by the new technique.
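The "standard phase-stepping algorithms" the authors cite as the accuracy-limiting factor are typically the four-step variant shown below (phase offsets of 0, π/2, π, 3π/2). This is a generic sketch of that standard algorithm, not the rotation-based stepping technique itself.

```python
import numpy as np

def four_step_phase(i1, i2, i3, i4):
    """Standard four-step phase-shifting algorithm.

    The frames i1..i4 are fringe images with phase offsets 0, pi/2, pi, 3pi/2, i.e.
    I_k = A + B*cos(phi + k*pi/2), so the wrapped phase is
    phi = atan2(I4 - I2, I1 - I3), in [-pi, pi].
    """
    return np.arctan2(i4.astype(float) - i2, i1.astype(float) - i3)

def unwrap_2d(wrapped):
    """Very simple row/column unwrapping; real systems use more robust methods."""
    return np.unwrap(np.unwrap(wrapped, axis=0), axis=1)
```

The paper's contribution is how the four phase offsets are produced physically (by rotation about the camera's perspective center) rather than how the phase is recovered from them.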
Radiation camera motion correction system
Hoffer, P.B.
1973-12-18
The device determines the ratio of the intensity of radiation received by a radiation camera from two separate portions of the object. A correction signal is developed to maintain this ratio at a substantially constant value and this correction signal is combined with the camera signal to correct for object motion. (Official Gazette)
Sorption and Transport of Pharmaceutical chemicals in Organic- and Mineral-rich Soils
NASA Astrophysics Data System (ADS)
Vulava, V. M.; Schwindaman, J.; Murphey, V.; Kuzma, S.; Cory, W.
2011-12-01
Pharmaceuticals, active ingredients in personal care products (PhACs), and their derivative compounds are increasingly ubiquitous in surface waters across the world. Sorption and transport of four relatively common PhACs (naproxen, ibuprofen, cetirizine, and triclosan) in different natural soils was measured. All of these compounds are relatively hydrophobic (log KOW>2) and have acid/base functional groups, including one compound that is zwitterionic (cetirizine). The main goal of this study was to correlate organic matter (OM) and clay content in natural soils and sediment with sorption and degradation of PhACs and ultimately their potential for transport within the subsurface environment. A- and B-horizon soils were collected from four sub-regions within a pristine managed forested watershed near Charleston, SC, with no apparent sources of anthropogenic contamination. These four soil series had varying OM content (fOC) between 0.4-9%, clay mineral content between 6-20%, and soil pH between 4.5-6. The A-horizon soils had higher fOC and lower clay content than the B-horizon soils. Sorption isotherms measured from batch sorption experimental data indicated a non-linear sorption relationship in all A- and B-horizon soils - stronger sorption was observed at lower PhAC concentrations and lower sorption at higher concentrations. Three PhACs (naproxen, ibuprofen, and triclosan) sorbed more strongly to the higher-fOC A-horizon soils than to the B-horizon soils. These results show that soil OM had a significant role in strongly binding these three PhACs, which had the highest KOW values. In contrast, cetirizine, which is predominantly positively charged at pH below 8, strongly sorbed to soils with higher clay mineral content and least strongly to higher fOC soils. All sorption isotherms fitted well to the Freundlich model. For naproxen, ibuprofen, and triclosan, there was a strong and positive linear correlation between the Freundlich adsorption constant, Kf, and fOC, again indicating that these PhACs preferentially partition into the soil OM. Such a correlation was absent for cetirizine. Breakthrough curves of PhACs measured in homogeneous packed soil columns indicated that PhAC transport was affected by chemical nonequilibrium processes depending on the soil and PhAC chemistry. The shape of the breakthrough curves indicated that there were two distinct sorption sites - OM and clay minerals - which influence nonequilibrium transport of these compounds. The retardation factor estimated using the distribution coefficient, Kd, measured from the sorption experiments was very similar to the measured value. While the sorption and transport data do not provide mechanistic information regarding the nature of PhAC interaction with chemical reactive components within geological materials, they do provide important information regarding the potential fate of such compounds in the environment. The results also show the role that soil OM and mineral surfaces play in sequestering or transporting these chemicals. These insights have implications for the quality of the water resources in our communities.
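The Freundlich fits reported above are conventionally obtained by linear regression in log-log space, since S = Kf·C^n becomes log S = log Kf + n·log C. The sketch below shows that fitting step with made-up numbers; the data are illustrative only and not from the study.

```python
import numpy as np

def fit_freundlich(c_eq, s_sorbed):
    """Fit the Freundlich isotherm S = Kf * C**n by linear regression in log space.

    c_eq: equilibrium aqueous concentrations; s_sorbed: sorbed concentrations.
    Returns (Kf, n); the units of Kf depend on the units chosen for C and S.
    """
    logc, logs = np.log10(c_eq), np.log10(s_sorbed)
    n, log_kf = np.polyfit(logc, logs, 1)       # slope = n, intercept = log10(Kf)
    return 10.0 ** log_kf, n

# Illustrative use with invented data (not from the study):
c = np.array([0.1, 0.5, 1.0, 5.0, 10.0])        # mg/L
s = np.array([2.1, 7.9, 13.0, 42.0, 70.0])      # mg/kg
kf, n = fit_freundlich(c, s)
print(f"Kf = {kf:.2f}, n = {n:.2f}")
```

An n below 1 reproduces the reported behaviour of proportionally stronger sorption at lower concentrations.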
Stereo depth distortions in teleoperation
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Vonsydow, Marika
1988-01-01
In teleoperation, a typical application of stereo vision is to view a work space located short distances (1 to 3 m) in front of the cameras. The work presented here treats converged camera placement and studies the effects of intercamera distance, camera-to-object viewing distance, and focal length of the camera lenses on both stereo depth resolution and stereo depth distortion. While viewing the fronto-parallel plane 1.4 m in front of the cameras, depth errors are measured on the order of 2 cm. A geometric analysis was made of the distortion of the fronto-parallel plane of divergence for stereo TV viewing. The results of the analysis were then verified experimentally. The objective was to determine the optimal camera configuration which gave high stereo depth resolution while minimizing stereo depth distortion. It is found that for converged cameras at a fixed camera-to-object viewing distance, larger intercamera distances allow higher depth resolutions, but cause greater depth distortions. Thus with larger intercamera distances, operators will make greater depth errors (because of the greater distortions), but will be more certain that they are not errors (because of the higher resolution).
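The resolution trade-off reported above can be previewed with the parallel-axis stereo approximation Z = fB/d, which gives a depth resolution of roughly δZ ≈ Z²·δd/(fB). Converged-camera geometry differs in detail, so treat this only as a hedged back-of-the-envelope sketch with assumed focal length and disparity error.

```python
def depth_resolution_parallel(z_m, focal_px, baseline_m, disparity_err_px=1.0):
    """Parallel-axis stereo approximation of depth resolution.

    Z = f*B/d  =>  dZ ~= Z**2 / (f*B) * dd.
    A larger baseline B gives finer depth resolution at the same range, consistent
    with the trend reported above; it does not capture the distortion penalty.
    """
    return (z_m ** 2) / (focal_px * baseline_m) * disparity_err_px

# Example: 1.4 m working distance, assumed 1500 px focal length,
# baselines of 10 cm and 30 cm.
for b in (0.10, 0.30):
    print(f"baseline {b:.2f} m -> ~{depth_resolution_parallel(1.4, 1500.0, b)*100:.2f} cm")
```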
Pathways to Firesetting for Mentally Disordered Offenders: A Preliminary Examination.
Tyler, Nichola; Gannon, Theresa A
2017-06-01
The current study aimed to investigate the specific pathways in the offence process for mentally disordered firesetters. In a previous study, an offence chain model was constructed (i.e., the Firesetting Offence Chain for Mentally Disordered Offenders, FOC-MD) using offence descriptions obtained from 23 mentally disordered firesetters, detailing the sequence of contextual, behavioural, affective, and cognitive factors that precipitate an incidence of firesetting for this population. The current study examines the prevalence of the specific pathways to firesetting for the original 23 mentally disordered firesetters and a further sample of 13 mentally disordered firesetters. Three distinct pathways to firesetting are identified within the FOC-MD: fire interest-childhood mental health, no fire interest-adult mental health, fire interest-adult mental health. In this article, we describe these three pathways in detail using illustrative case studies. The practice implications of these identified pathways are also discussed.
LACIE performance predictor FOC users manual
NASA Technical Reports Server (NTRS)
1976-01-01
The LACIE Performance Predictor (LPP) is a computer simulation of the LACIE process for predicting worldwide wheat production. The simulation provides for the introduction of various errors into the system and provides estimates based on these errors, thus allowing the user to determine the impact of selected error sources. The FOC LPP simulates the acquisition of the sample segment data by the LANDSAT Satellite (DAPTS), the classification of the agricultural area within the sample segment (CAMS), the estimation of the wheat yield (YES), and the production estimation and aggregation (CAS). These elements include data acquisition characteristics, environmental conditions, classification algorithms, the LACIE aggregation and data adjustment procedures. The operational structure for simulating these elements consists of the following key programs: (1) LACIE Utility Maintenance Process, (2) System Error Executive, (3) Ephemeris Generator, (4) Access Generator, (5) Acquisition Selector, (6) LACIE Error Model (LEM), and (7) Post Processor.
Performance testing of a high frequency link converter for Space Station power distribution system
NASA Technical Reports Server (NTRS)
Sul, S. K.; Alan, I.; Lipo, T. A.
1989-01-01
The testing of a brassboard version of a 20-kHz high-frequency ac voltage link prototype converter dynamics for Space Station application is presented. The converter is based on a three-phase six-pulse bridge concept. The testing includes details of the operation of the converter when it is driving an induction machine source/load. By adapting a field orientation controller (FOC) to the converter, four-quadrant operation of the induction machine from the converter has been achieved. Circuit modifications carried out to improve the performance of the converter are described. The performance of two 400-Hz induction machines powered by the converter with simple V/f regulation mode is reported. The testing and performance results for the converter utilizing the FOC, which provides the capability for rapid torque changes, speed reversal, and four-quadrant operation, are reported.
Gutierrez-Villalobos, Jose M.; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A.; Martínez-Hernández, Moisés A.
2015-01-01
Three-phase induction motor drives require high accuracy in high-performance industrial processes. Field oriented control, one of the most widely employed control schemes for induction motors, bases its operation on the estimation of the motor's electrical parameters. Errors in these parameters make an electrical machine drive work improperly, since the parameter values change at low speeds, with temperature changes, and especially with load and duty changes. The focus of this paper is the real-time, on-line estimation of electrical parameters, with a CMAC-ADALINE block added to the standard FOC scheme to improve the IM drive performance and extend the lifetime of the drive and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of an induction motor. PMID:26131677
Gutierrez-Villalobos, Jose M; Rodriguez-Resendiz, Juvenal; Rivas-Araiza, Edgar A; Martínez-Hernández, Moisés A
2015-06-29
Three-phase induction motor drives require high accuracy in high-performance industrial processes. Field oriented control, one of the most widely employed control schemes for induction motors, bases its operation on the estimation of the motor's electrical parameters. Errors in these parameters make an electrical machine drive work improperly, since the parameter values change at low speeds, with temperature changes, and especially with load and duty changes. The focus of this paper is the real-time, on-line estimation of electrical parameters, with a CMAC-ADALINE block added to the standard FOC scheme to improve the IM drive performance and extend the lifetime of the drive and the induction motor. Two kinds of neural network structures are used: one to estimate the rotor speed and the other to estimate the rotor resistance of an induction motor.
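Both records above add ADALINE-type estimators to the FOC loop. As background, an ADALINE is a single linear neuron trained on-line with the Widrow-Hoff (LMS) rule, sketched generically below; how the regressor vector and target are built from the induction-motor model is specific to those papers and not reproduced here.

```python
import numpy as np

class Adaline:
    """Single linear neuron trained with the Widrow-Hoff (LMS) rule."""

    def __init__(self, n_inputs, learning_rate=1e-3):
        self.w = np.zeros(n_inputs)     # adaptive weights (the estimated quantities)
        self.eta = learning_rate

    def predict(self, x):
        return float(np.dot(self.w, x))

    def update(self, x, target):
        """One on-line LMS step; returns the current prediction error."""
        error = target - self.predict(x)
        self.w += self.eta * error * np.asarray(x, dtype=float)
        return error
```

In a rotor-resistance estimator, for example, one weight would converge toward the unknown resistance as the drive runs, which is what allows the FOC scheme to track parameter drift with load and temperature.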
Wessman, F G; Yan Yuegen, E; Zheng, Q; He, G; Welander, T; Rusten, B
2004-01-01
The Kaldnes biomedia K1, which is used in the patented Kaldnes Moving Bed biofilm process, has been tested along with other types of biofilm carriers for biological pretreatment of a complex chemical industry wastewater. The main objective of the test was to find a biofilm carrier that could replace the existing suspended carrier media and at the same time increase the capacity of the existing roughing filter-activated sludge plant by 20% or more. At volumetric organic loads of 7.1 kg COD/m3/d the Kaldnes Moving Bed process achieved much higher removal rates and much lower effluent concentrations than roughing filters using other carriers. The Kaldnes roughing stage achieved more than 85% removal of organic carbon and more than 90% removal of BOD5 at the tested organic load, which was equivalent to a specific biofilm surface area load of 24 g COD/m2/d. Even for the combined roughing filter-activated sludge process, the Kaldnes carriers outperformed the other carriers, with 98% removal of organic carbon and 99.6% removal of BOD5. The Kaldnes train final effluent concentrations were only 22 mg FOC/L and 7 mg BOD5/L. Based on the successful pilot testing, the full-scale plant was upgraded with Kaldnes Moving Bed roughing filters. During normal operation the upgraded plant has easily met the discharge limits of 100 mg COD/L and 50 mg SS/L. For the month of September 2002, with organic loads between 100 and 115% of the design load for the second half of the month, average effluent concentrations were as low as 9 mg FOC/L, 51 mg COD/L and 12 mg SS/L.
Leveraging traffic and surveillance video cameras for urban traffic.
DOT National Transportation Integrated Search
2014-12-01
The objective of this project was to investigate the use of existing video resources, such as traffic : cameras, police cameras, red light cameras, and security cameras for the long-term, real-time : collection of traffic statistics. An additional ob...
NASA Astrophysics Data System (ADS)
Zhang, Bing; Li, Kunyang
2018-02-01
The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As is well known, the passive THz camera allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. Obviously, the efficiency of using the passive THz camera depends on its temperature resolution. This characteristic determines the detection possibilities for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort. Therefore, developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that would allow a banknote in a person's pocket to be seen without any physical contact. Modern algorithms for computer processing of THz images also make it possible to see an object inside the human body through its temperature trace on the human skin. This circumstance essentially enhances the opportunities for applying the passive THz camera to counterterrorism problems. We demonstrate the capabilities, achieved at the present time, for detecting both concealed objects and clothing components by means of computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floorplate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of the object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA. The resulting frame rate is 30 frames per second for 250x200-resolution grayscale video.
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Shestakov, Ivan L.; Blednov, Roman G.
2016-09-01
One of the urgent security problems is the detection of objects placed inside the human body. Obviously, for safety reasons X-rays cannot be used widely and often for such object detection. Three years ago, we demonstrated the principal possibility of seeing a temperature trace, induced by eating food or drinking water, on the human body skin by using a passive THz camera. However, this camera is very expensive. Therefore, in practice it would be very convenient if an IR camera could be used for this purpose. In contrast to a passive THz camera, the IR camera does not allow the object under clothing to be seen if the image produced by the camera is used directly. Of course, this is a significant disadvantage for a security solution based on IR imaging. To overcome this disadvantage, we develop a novel approach for computer processing of IR camera images. It allows us to increase the effective temperature resolution of the IR camera as well as the effective sensitivity of the human eye. As a consequence, it becomes possible to see changes in human body temperature through clothing. We analyze IR images of a person who drinks water and eats chocolate, and we follow the temperature trace on the skin caused by temperature changes inside the body. Some experiments were made measuring body temperature through a T-shirt. The results shown are very important for the detection of forbidden objects concealed inside the human body by non-destructive inspection without the use of X-rays.
Heyde, Brecht; Bottenus, Nick; D'hooge, Jan; Trahey, Gregg E
2017-02-01
The transverse oscillation (TO) technique can improve the estimation of tissue motion perpendicular to the ultrasound beam direction. TOs can be introduced using plane wave (PW) insonification and bilobed Gaussian apodization (BA) on receive (abbreviated as PWTO). Furthermore, the TO frequency of PWTO can be doubled after a heterodyning demodulation process is performed (abbreviated as PWTO*). This paper is concerned with identifying the limitations of the PWTO technique in the specific context of myocardial deformation imaging with phased arrays and investigating the conditions in which it remains advantageous over traditional focused (FOC) beamforming. For this purpose, several tissue phantoms were simulated using Field II, undergoing a wide range of displacement magnitudes and modes (lateral, axial, and rotational motions). The Cramer-Rao lower bound was used to optimize TO beamforming parameters and theoretically predict the fundamental tracking performance limits associated with the FOC, PWTO, and PWTO* beamforming scenarios. This framework was extended to also predict the performance for BA functions that are windowed by the physical aperture of the transducer, leading to higher lateral oscillations. It was found that windowed BA functions resulted in lower jitter errors compared with traditional BA functions. PWTO* outperformed FOC at all investigated signal-to-noise ratio (SNR) levels but only up to a certain displacement, with the advantage rapidly decreasing when the SNR increased. These results suggest that PWTO* improves lateral tracking performance, but only when interframe displacements remain relatively low. This paper concludes by translating these findings into a clinical environment by suggesting optimal scanner settings.
Chatterjee, Moniya; Das, Sampa
2013-01-01
Reactive oxygen species are known to play pivotal roles in pathogen perception, recognition and downstream defense signaling. But, how these redox alarms coordinate in planta into a defensive network is still intangible. Present study illustrates the role of Fusarium oxysporum f.sp ciceri Race1 (Foc1) induced redox responsive transcripts in regulating downstream defense signaling in chickpea. Confocal microscopic studies highlighted pathogen invasion and colonization accompanied by tissue damage and deposition of callose degraded products at the xylem vessels of infected roots of chickpea plants. Such depositions led to the clogging of xylem vessels in compatible hosts while the resistant plants were devoid of such obstructions. Lipid peroxidation assays also indicated fungal induced membrane injury. Cell shrinkage and gradual nuclear adpression appeared as interesting features marking fungal ingress. Quantitative real time polymerase chain reaction exhibited differential expression patterns of redox regulators, cellular transporters and transcription factors during Foc1 progression. Network analysis showed redox regulators, cellular transporters and transcription factors to coordinate into a well orchestrated defensive network with sugars acting as internal signal modulators. Respiratory burst oxidase homologue, cationic peroxidase, vacuolar sorting receptor, polyol transporter, sucrose synthase, and zinc finger domain containing transcription factor appeared as key molecular candidates controlling important hubs of the defense network. Functional characterization of these hub controllers may prove to be promising in understanding chickpea–Foc1 interaction and developing the case study as a model for looking into the complexities of wilt diseases of other important crop legumes. PMID:24058463
Far ultraviolet wide field imaging and photometry - Spartan-202 Mark II Far Ultraviolet Camera
NASA Technical Reports Server (NTRS)
Carruthers, George R.; Heckathorn, Harry M.; Opal, Chet B.; Witt, Adolf N.; Henize, Karl G.
1988-01-01
The U.S. Naval Research Laboratory's Mark II Far Ultraviolet Camera, which is expected to be a primary scientific instrument aboard the Spartan-202 Space Shuttle mission, is described. This camera is intended to obtain FUV wide-field imagery of stars and extended celestial objects, including diffuse nebulae and nearby galaxies. The observations will support the HST by providing FUV photometry of calibration objects. The Mark II camera is an electrographic Schmidt camera with an aperture of 15 cm, a focal length of 30.5 cm, and sensitivity in the 1230-1600 A wavelength range.
NASA Astrophysics Data System (ADS)
Dong, Shidu; Yang, Xiaofan; He, Bo; Liu, Guojin
2006-11-01
Radiance coming from the interior of an uncooled infrared camera has a significant effect on the measured value of the temperature of the object. This paper presents a three-phase compensation scheme for coping with this effect. The first phase acquires the calibration data and forms the calibration function by least-squares fitting. Likewise, the second phase obtains the compensation data and builds the compensation function by fitting. With the aid of these functions, the third phase determines the temperature of the object of concern at any given ambient temperature. It is known that acquiring the compensation data of a camera is very time-consuming. For the purpose of obtaining the compensation data at a reasonable time cost, we propose a transplantable scheme. The idea of this scheme is to calculate the ratio between the central pixel's responsivity to the radiance from the interior for the child camera and that for the mother camera, and then to determine the compensation data of the child camera using this ratio and the compensation data of the mother camera. Experimental results show that either the child camera or the mother camera can measure the temperature of the object with an error of no more than 2°C.
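A hedged sketch of how the three phases above could be wired together with simple polynomial least-squares fits; the functional forms, polynomial degree, and variable names are assumptions, since the paper's exact fitting functions are not given in the abstract.

```python
import numpy as np

def fit_calibration(blackbody_temps, sensor_readings, deg=2):
    """Phase 1: map sensor reading -> object temperature at a fixed ambient temperature."""
    return np.polyfit(sensor_readings, blackbody_temps, deg)

def fit_compensation(ambient_temps, reading_offsets, deg=2):
    """Phase 2: model how the reading of a constant-temperature target drifts
    with the ambient (camera-interior) temperature."""
    return np.polyfit(ambient_temps, reading_offsets, deg)

def object_temperature(reading, ambient_temp, calib_coeffs, comp_coeffs):
    """Phase 3: remove the ambient-dependent offset, then apply the calibration map."""
    corrected = reading - np.polyval(comp_coeffs, ambient_temp)
    return np.polyval(calib_coeffs, corrected)
```

The transplantable scheme described in the abstract would then scale the mother camera's compensation data by the ratio of central-pixel responsivities before reusing it for the child camera.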
A Liver-centric Multiscale Modeling Framework for Xenobiotics
We describe a multi-scale framework for modeling acetaminophen-induced liver toxicity. Acetaminophen is a widely used analgesic. Overdose of acetaminophen can result in liver injury via its biotransformation into a toxic product, which further induces massive necrosis. Our study foc...
San Francisco urban partnership agreement, national evaluation : environmental data test plan.
DOT National Transportation Integrated Search
2011-06-01
This report presents the test plan for collecting and analyzing environmental data for the San Francisco Urban Partnership Agreement (UPA) under the United States Department of Transportation (U.S. DOT) UPA Program. The San Francisco UPA projects foc...
NASA Astrophysics Data System (ADS)
Kolkoori, S.; Wrobel, N.; Osterloh, K.; Zscherpel, U.; Ewert, U.
2013-09-01
Radiological inspections, in general, are nondestructive testing (NDT) methods to detect bulk explosives in large objects. In contrast to personal luggage, cargo or building components constitute a complexity that may significantly hinder the detection of a threat by conventional X-ray transmission radiography. In this article, a novel X-ray backscatter technique is presented for detecting suspicious objects in a densely packed large object with only single-sided access. It consists of an X-ray backscatter camera with a special twisted-slit collimator for imaging backscattering objects. The new X-ray backscatter camera images objects not only on the basis of their densities but also by including the influence of surrounding objects. This unique feature of the X-ray backscatter camera provides new insights for identifying the internal features of the inspected object. Experimental mock-ups were designed imitating containers with threats among complex packing, as they may be encountered in reality. We investigated the dependence of the quality of the X-ray backscatter image on (a) the exposure time, (b) multiple exposures, (c) the distance between object and slit camera, and (d) the width of the slit. Finally, the significant advantages of the presented X-ray backscatter camera in the context of aviation and port security are discussed.
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
NASA Astrophysics Data System (ADS)
Stavroulakis, Petros I.; Chen, Shuxiao; Sims-Waterhouse, Danny; Piano, Samanta; Southon, Nicholas; Bointon, Patrick; Leach, Richard
2017-06-01
In non-rigid fringe projection 3D measurement systems, where either the camera or projector setup can change significantly between measurements or the object needs to be tracked, self-calibration has to be carried out frequently to keep the measurements accurate [1]. In fringe projection systems, it is common to use methods developed initially for photogrammetry for the calibration of the camera(s) in the system in terms of extrinsic and intrinsic parameters. To calibrate the projector(s) an extra correspondence between a pre-calibrated camera and an image created by the projector is performed. These recalibration steps are usually time consuming and involve the measurement of calibrated patterns on planes, before the actual object can continue to be measured after a motion of a camera or projector has been introduced in the setup and hence do not facilitate fast 3D measurement of objects when frequent experimental setup changes are necessary. By employing and combining a priori information via inverse rendering, on-board sensors, deep learning and leveraging a graphics processor unit (GPU), we assess a fine camera pose estimation method which is based on optimising the rendering of a model of a scene and the object to match the view from the camera. We find that the success of this calibration pipeline can be greatly improved by using adequate a priori information from the aforementioned sources.
NASA Astrophysics Data System (ADS)
Rieke-Zapp, D.; Tecklenburg, W.; Peipe, J.; Hastedt, H.; Haig, Claudia
Recent tests on the geometric stability of several digital cameras that were not designed for photogrammetric applications have shown that the accomplished accuracies in object space are either limited or that the accuracy potential is not exploited to the fullest extent. A total of 72 calibrations were calculated with four different software products for eleven digital camera models with different hardware setups, some with mechanical fixation of one or more parts. The calibration procedure was chosen in accord to a German guideline for evaluation of optical 3D measuring systems [VDI/VDE, VDI/VDE 2634 Part 1, 2002. Optical 3D Measuring Systems-Imaging Systems with Point-by-point Probing. Beuth Verlag, Berlin]. All images were taken with ringflashes which was considered a standard method for close-range photogrammetry. In cases where the flash was mounted to the lens, the force exerted on the lens tube and the camera mount greatly reduced the accomplished accuracy. Mounting the ringflash to the camera instead resulted in a large improvement of accuracy in object space. For standard calibration best accuracies in object space were accomplished with a Canon EOS 5D and a 35 mm Canon lens where the focusing tube was fixed with epoxy (47 μm maximum absolute length measurement error in object space). The fixation of the Canon lens was fairly easy and inexpensive resulting in a sevenfold increase in accuracy compared with the same lens type without modification. A similar accuracy was accomplished with a Nikon D3 when mounting the ringflash to the camera instead of the lens (52 μm maximum absolute length measurement error in object space). Parameterisation of geometric instabilities by introduction of an image variant interior orientation in the calibration process improved results for most cameras. In this case, a modified Alpa 12 WA yielded the best results (29 μm maximum absolute length measurement error in object space). Extending the parameter model with FiBun software to model not only an image variant interior orientation, but also deformations in the sensor domain of the cameras, showed significant improvements only for a small group of cameras. The Nikon D3 camera yielded the best overall accuracy (25 μm maximum absolute length measurement error in object space) with this calibration procedure indicating at the same time the presence of image invariant error in the sensor domain. Overall, calibration results showed that digital cameras can be applied for an accurate photogrammetric survey and that only a little effort was sufficient to greatly improve the accuracy potential of digital cameras.
D Reconstruction of AN Underwater Archaelogical Site: Comparison Between Low Cost Cameras
NASA Astrophysics Data System (ADS)
Capra, A.; Dubbini, M.; Bertacchini, E.; Castagnetti, C.; Mancini, F.
2015-04-01
The 3D reconstruction with a metric content of a submerged area, where objects and structures of archaeological interest are found, could play an important role in the research and study activities and even in the digitization of the cultural heritage. The reconstruction of 3D object, of interest for archaeologists, constitutes a starting point in the classification and description of object in digital format and for successive fruition by user after delivering through several media. The starting point is a metric evaluation of the site obtained with photogrammetric surveying and appropriate 3D restitution. The authors have been applying the underwater photogrammetric technique since several years using underwater digital cameras and, in this paper, digital low cost cameras (off-the-shelf). Results of tests made on submerged objects with three cameras are presented: Canon Power Shot G12, Intova Sport HD e GoPro HERO 2. The experimentation had the goal to evaluate the precision in self-calibration procedures, essential for multimedia underwater photogrammetry, and to analyze the quality of 3D restitution. Precisions obtained in the calibration and orientation procedures was assessed by using three cameras, and an homogeneous set control points. Data were processed with Agisoft Photoscan. Successively, 3D models were created and the comparison of the models derived from the use of different cameras was performed. Different potentialities of the used cameras are reported in the discussion section. The 3D restitution of objects and structures was integrated with sea bottom floor morphology in order to achieve a comprehensive description of the site. A possible methodology of survey and representation of submerged objects is therefore illustrated, considering an automatic and a semi-automatic approach.
USDA-ARS?s Scientific Manuscript database
This publication is the foreword to a special edition of the journal Yeast that includes selected review manuscripts resulting from presentations at the 13th International Congress on Yeasts that was held in Madison, Wisconsin, USA, Aug. 26-30, 2012. The papers, by various international authors, foc...
DOT National Transportation Integrated Search
2015-09-01
This literature review and reference scanning focuses on the use of driver simulators for semiautonomous (or shared control) vehicle systems (2012present), including related research from other modes of transportation (e.g., rail or aviation). Foc...
Understanding Epileptiform After-Discharges as Rhythmic Oscillatory Transients.
Baier, Gerold; Taylor, Peter N; Wang, Yujiang
2017-01-01
Electro-cortical activity in patients with epilepsy may show abnormal rhythmic transients in response to stimulation. Even when using the same stimulation parameters in the same patient, wide variability in the duration of transient response has been reported. These transients have long been considered important for the mapping of the excitability levels in the epileptic brain but their dynamic mechanism is still not well understood. To investigate the occurrence of abnormal transients dynamically, we use a thalamo-cortical neural population model of epileptic spike-wave activity and study the interaction between slow and fast subsystems. In a reduced version of the thalamo-cortical model, slow wave oscillations arise from a fold of cycles (FoC) bifurcation. This marks the onset of a region of bistability between a high amplitude oscillatory rhythm and the background state. In vicinity of the bistability in parameter space, the model has excitable dynamics, showing prolonged rhythmic transients in response to suprathreshold pulse stimulation. We analyse the state space geometry of the bistable and excitable states, and find that the rhythmic transient arises when the impending FoC bifurcation deforms the state space and creates an area of locally reduced attraction to the fixed point. This area essentially allows trajectories to dwell there before escaping to the stable steady state, thus creating rhythmic transients. In the full thalamo-cortical model, we find a similar FoC bifurcation structure. Based on the analysis, we propose an explanation of why stimulation induced epileptiform activity may vary between trials, and predict how the variability could be related to ongoing oscillatory background activity. We compare our dynamic mechanism with other mechanisms (such as a slow parameter change) to generate excitable transients, and we discuss the proposed excitability mechanism in the context of stimulation responses in the epileptic cortex.
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks.
Su, Po-Chang; Shen, Ju; Xu, Wanxin; Cheung, Sen-Ching S; Luo, Ying
2018-01-15
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-arts systems using both quantitative measurements and visual alignment results of the merged point clouds.
A Fast and Robust Extrinsic Calibration for RGB-D Camera Networks †
Shen, Ju; Xu, Wanxin; Luo, Ying
2018-01-01
From object tracking to 3D reconstruction, RGB-Depth (RGB-D) camera networks play an increasingly important role in many vision and graphics applications. Practical applications often use sparsely-placed cameras to maximize visibility, while using as few cameras as possible to minimize cost. In general, it is challenging to calibrate sparse camera networks due to the lack of shared scene features across different camera views. In this paper, we propose a novel algorithm that can accurately and rapidly calibrate the geometric relationships across an arbitrary number of RGB-D cameras on a network. Our work has a number of novel features. First, to cope with the wide separation between different cameras, we establish view correspondences by using a spherical calibration object. We show that this approach outperforms other techniques based on planar calibration objects. Second, instead of modeling camera extrinsic calibration using rigid transformation, which is optimal only for pinhole cameras, we systematically test different view transformation functions including rigid transformation, polynomial transformation and manifold regression to determine the most robust mapping that generalizes well to unseen data. Third, we reformulate the celebrated bundle adjustment procedure to minimize the global 3D reprojection error so as to fine-tune the initial estimates. Finally, our scalable client-server architecture is computationally efficient: the calibration of a five-camera system, including data capture, can be done in minutes using only commodity PCs. Our proposed framework is compared with other state-of-the-arts systems using both quantitative measurements and visual alignment results of the merged point clouds. PMID:29342968
Correction And Use Of Jitter In Television Images
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.
1989-01-01
Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.
Cooperative multisensor system for real-time face detection and tracking in uncontrolled conditions
NASA Astrophysics Data System (ADS)
Marchesotti, Luca; Piva, Stefano; Turolla, Andrea; Minetti, Deborah; Regazzoni, Carlo S.
2005-03-01
The presented work describes an innovative architecture for multi-sensor distributed video surveillance applications. The aim of the system is to track moving objects in outdoor environments with a cooperative strategy exploiting two video cameras. The system also exhibits the capacity of focusing its attention on the faces of detected pedestrians collecting snapshot frames of face images, by segmenting and tracking them over time at different resolution. The system is designed to employ two video cameras in a cooperative client/server structure: the first camera monitors the entire area of interest and detects the moving objects using change detection techniques. The detected objects are tracked over time and their position is indicated on a map representing the monitored area. The objects" coordinates are sent to the server sensor in order to point its zooming optics towards the moving object. The second camera tracks the objects at high resolution. As well as the client camera, this sensor is calibrated and the position of the object detected on the image plane reference system is translated in its coordinates referred to the same area map. In the map common reference system, data fusion techniques are applied to achieve a more precise and robust estimation of the objects" track and to perform face detection and tracking. The work novelties and strength reside in the cooperative multi-sensor approach, in the high resolution long distance tracking and in the automatic collection of biometric data such as a person face clip for recognition purposes.
Membrane Transport Phenomena (MTP)
NASA Technical Reports Server (NTRS)
Mason, Larry W.
1996-01-01
The development of the seal between the membrane and the Fluid Optical Cells (FOC) has been a high priority activity. This seal occurs at an interface in the instrument where three key functions must be realized: (1) physical membrane support, (2) fluid sealing, and (3) unobscured optical transmission.
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-01-01
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves related to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing to a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of relatively locating themselves against a well-defined object. Extensive experiments were conducted with five types of smartphones on three different indoor settings, including a meeting room, a library, and a reading room. Experimental results shown that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that for the human-observed solution with 300 samples from 10 different people is 73.1 cm. PMID:29144420
Visual Positioning Indoors: Human Eyes vs. Smartphone Cameras.
Wu, Dewen; Chen, Ruizhi; Chen, Liang
2017-11-16
Artificial Intelligence (AI) technologies and their related applications are now developing at a rapid pace. Indoor positioning will be one of the core technologies that enable AI applications because people spend 80% of their time indoors. Humans can locate themselves related to a visually well-defined object, e.g., a door, based on their visual observations. Can a smartphone camera do a similar job when it points to an object? In this paper, a visual positioning solution was developed based on a single image captured from a smartphone camera pointing to a well-defined object. The smartphone camera simulates the process of human eyes for the purpose of relatively locating themselves against a well-defined object. Extensive experiments were conducted with five types of smartphones on three different indoor settings, including a meeting room, a library, and a reading room. Experimental results shown that the average positioning accuracy of the solution based on five smartphone cameras is 30.6 cm, while that for the human-observed solution with 300 samples from 10 different people is 73.1 cm.
Field oriented control of induction motors
NASA Technical Reports Server (NTRS)
Burrows, Linda M.; Zinger, Don S.; Roth, Mary Ellen
1990-01-01
Induction motors have always been known for their simple rugged construction, but until lately were not suitable for variable speed or servo drives due to the inherent complexity of the controls. With the advent of field oriented control (FOC), however, the induction motor has become an attractive option for these types of drive systems. An FOC system which utilizes the pulse population modulation method to synthesize the motor drive frequencies is examined. This system allows for a variable voltage to frequency ratio and enables the user to have independent control of both the speed and torque of an induction motor. A second generation of the control boards were developed and tested with the next point of focus being the minimization of the size and complexity of these controls. Many options were considered with the best approach being the use of a digital signal processor (DSP) due to its inherent ability to quickly evaluate control algorithms. The present test results of the system and the status of the optimization process using a DSP are discussed.
Geometric database maintenance using CCTV cameras and overlay graphics
NASA Astrophysics Data System (ADS)
Oxenberg, Sheldon C.; Landell, B. Patrick; Kan, Edwin
1988-01-01
An interactive graphics system using closed circuit television (CCTV) cameras for remote verification and maintenance of a geometric world model database has been demonstrated in GE's telerobotics testbed. The database provides geometric models and locations of objects viewed by CCTV cameras and manipulated by telerobots. To update the database, an operator uses the interactive graphics system to superimpose a wireframe line drawing of an object with known dimensions on a live video scene containing that object. The methodology used is multipoint positioning to easily superimpose a wireframe graphic on the CCTV image of an object in the work scene. An enhanced version of GE's interactive graphics system will provide the object designation function for the operator control station of the Jet Propulsion Laboratory's telerobot demonstration system.
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
In-flight performance of the Faint Object Camera of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Greenfield, P.; Paresce, F.; Baxter, D.; Hodge, P.; Hook, R.; Jakobsen, P.; Jedrzejewski, R.; Nota, A.; Sparks, W. B.; Towers, N.
1991-01-01
An overview of the Faint Object Camera and its performance to date is presented. In particular, the detector's efficiency, the spatial uniformity of response, distortion characteristics, detector and sky background, detector linearity, spectrography, and operation are discussed. The effect of the severe spherical aberration of the telescope's primary mirror on the camera's point spread function is reviewed, as well as the impact it has on the camera's general performance. The scientific implications of the performance and the spherical aberration are outlined, with emphasis on possible remedies for spherical aberration, hardware remedies, and stellar population studies.
Thermal Imaging with Novel Infrared Focal Plane Arrays and Quantitative Analysis of Thermal Imagery
NASA Technical Reports Server (NTRS)
Gunapala, S. D.; Rafol, S. B.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Soibel, A.; Ting, D. Z.; Tidrow, Meimei
2012-01-01
We have developed a single long-wavelength infrared (LWIR) quantum well infrared photodetector (QWIP) camera for thermography. This camera has been used to measure the temperature profile of patients. A pixel coregistered simultaneously reading mid-wavelength infrared (MWIR)/LWIR dual-band QWIP camera was developed to improve the accuracy of temperature measurements especially with objects with unknown emissivity. Even the dualband measurement can provide inaccurate results due to the fact that emissivity is a function of wavelength. Thus we have been developing a four-band QWIP camera for accurate temperature measurement of remote object.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to lead to a color and exposure level for that pixel. To speed processing three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Importance of Dissolved Organic Nitrogen to Water Quality in Narragansett Bay
This preliminary analysis of the importance of the dissolved organic nitrogen (DON) pool in Narragansett Bay is being conducted as part of a five-year study of Narragansett Bay and its watershed. This larger study includes water quality and ecological modeling components that foc...
MATREX Leads the Way in Implementing New DOD VV&A Documentation Standards
2007-05-24
Acquisition Operations & Support B C Sustainment FRP Decision Review FOC LRIP/IOT& ECritical Design Review Pre-Systems Acquisition Concept...Communications Human Performance Model • C3GRID – Command & Control, Computer GRID • CES – Communications Effects Server • CMS2 – Comprehensive
Report Of The HST Strategy Panel: A Strategy For Recovery
1991-01-01
orbit change out: the Wide Field/Planetary Camera II (WFPC II), the Near-Infrared Camera and Multi- Object Spectrometer (NICMOS) and the Space ...are the Space Telescope Imaging Spectrograph (STB), the Near-Infrared Camera and Multi- Object Spectrom- eter (NICMOS), and the second Wide Field and...expected to fail to lock due to duplicity was 20%; on- orbit data indicates that 10% may be a better estimate, but the guide stars were preselected
System of technical vision for autonomous unmanned aerial vehicles
NASA Astrophysics Data System (ADS)
Bondarchuk, A. S.
2018-05-01
This paper is devoted to the implementation of image recognition algorithm using the LabVIEW software. The created virtual instrument is designed to detect the objects on the frames from the camera mounted on the UAV. The trained classifier is invariant to changes in rotation, as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis, allows you to classify regions of different sizes. This method allows the system of technical vision to more accurately determine the location of the objects of interest and their movement relative to the camera.
Neuromorphic Event-Based 3D Pose Estimation
Reverter Valeiras, David; Orchard, Garrick; Ieng, Sio-Hoi; Benosman, Ryad B.
2016-01-01
Pose estimation is a fundamental step in many artificial vision tasks. It consists of estimating the 3D pose of an object with respect to a camera from the object's 2D projection. Current state of the art implementations operate on images. These implementations are computationally expensive, especially for real-time applications. Scenes with fast dynamics exceeding 30–60 Hz can rarely be processed in real-time using conventional hardware. This paper presents a new method for event-based 3D object pose estimation, making full use of the high temporal resolution (1 μs) of asynchronous visual events output from a single neuromorphic camera. Given an initial estimate of the pose, each incoming event is used to update the pose by combining both 3D and 2D criteria. We show that the asynchronous high temporal resolution of the neuromorphic camera allows us to solve the problem in an incremental manner, achieving real-time performance at an update rate of several hundreds kHz on a conventional laptop. We show that the high temporal resolution of neuromorphic cameras is a key feature for performing accurate pose estimation. Experiments are provided showing the performance of the algorithm on real data, including fast moving objects, occlusions, and cases where the neuromorphic camera and the object are both in motion. PMID:26834547
Grubsky, Victor; Romanoov, Volodymyr; Shoemaker, Keith; Patton, Edward Matthew; Jannson, Tomasz
2016-02-02
A Compton tomography system comprises an x-ray source configured to produce a planar x-ray beam. The beam irradiates a slice of an object to be imaged, producing Compton-scattered x-rays. The Compton-scattered x-rays are imaged by an x-ray camera. Translation of the object with respect to the source and camera or vice versa allows three-dimensional object imaging.
Systems and methods for estimating the structure and motion of an object
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dani, Ashwin P; Dixon, Warren
2015-11-03
In one embodiment, the structure and motion of a stationary object are determined using two images and a linear velocity and linear acceleration of a camera. In another embodiment, the structure and motion of a stationary or moving object are determined using an image and linear and angular velocities of a camera.
Feghali, Rosario; Mitiche, Amar
2004-11-01
The purpose of this study is to investigate a method of tracking moving objects with a moving camera. This method estimates simultaneously the motion induced by camera movement. The problem is formulated as a Bayesian motion-based partitioning problem in the spatiotemporal domain of the image quence. An energy functional is derived from the Bayesian formulation. The Euler-Lagrange descent equations determine imultaneously an estimate of the image motion field induced by camera motion and an estimate of the spatiotemporal motion undary surface. The Euler-Lagrange equation corresponding to the surface is expressed as a level-set partial differential equation for topology independence and numerically stable implementation. The method can be initialized simply and can track multiple objects with nonsimultaneous motions. Velocities on motion boundaries can be estimated from geometrical properties of the motion boundary. Several examples of experimental verification are given using synthetic and real-image sequences.
MATREX: A Unifying Modeling and Simulation Architecture for Live-Virtual-Constructive Applications
2007-05-23
Deployment Systems Acquisition Operations & Support B C Sustainment FRP Decision Review FOC LRIP/IOT& ECritical Design Review Pre-Systems...CMS2 – Comprehensive Munitions & Sensor Server • CSAT – C4ISR Static Analysis Tool • C4ISR – Command & Control, Communications, Computers
FACTORS MODULATING THE EPITHELIAL RESPONSE TO TOXICANTS IN TRACHEOBRONCHIAL AIRWAYS. (R827442)
As one of the principal interfaces between the organism and the environment, the respiratory system is a target for a wide variety of toxicants and carcinogens. The cellular and architectural complexity of the respiratory system appears to play a major role in defining the foc...
Expanded opportunities of THz passive camera for the detection of concealed objects
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.; Kuchik, Igor E.
2013-10-01
Among the security problems, the detection of object implanted into either the human body or animal body is the urgent problem. At the present time the main tool for the detection of such object is X-raying only. However, X-ray is the ionized radiation and therefore can not be used often. Other way for the problem solving is passive THz imaging using. In our opinion, using of the passive THz camera may help to detect the object implanted into the human body under certain conditions. The physical reason of such possibility arises from temperature trace on the human skin as a result of the difference in temperature between object and parts of human body. Modern passive THz cameras have not enough resolution in temperature to see this difference. That is why, we use computer processing to enhance the passive THz camera resolution for this application. After computer processing of images captured by passive THz camera TS4, developed by ThruVision Systems Ltd., we may see the pronounced temperature trace on the human body skin from the water, which is drunk by person, or other food eaten by person. Nevertheless, there are many difficulties on the way of full soution of this problem. We illustrate also an improvement of quality of the image captured by comercially available passive THz cameras using computer processing. In some cases, one can fully supress a noise on the image without loss of its quality. Using computer processing of the THz image of objects concealed on the human body, one may improve it many times. Consequently, the instrumental resolution of such device may be increased without any additional engineering efforts.
Active learning in camera calibration through vision measurement application
NASA Astrophysics Data System (ADS)
Li, Xiaoqin; Guo, Jierong; Wang, Xianchun; Liu, Changqing; Cao, Binfang
2017-08-01
Since cameras are increasingly more used in scientific application as well as in the applications requiring precise visual information, effective calibration of such cameras is getting more important. There are many reasons why the measurements of objects are not accurate. The largest reason is that the lens has a distortion. Another detrimental influence on the evaluation accuracy is caused by the perspective distortions in the image. They happen whenever we cannot mount the camera perpendicularly to the objects we want to measure. In overall, it is very important for students to understand how to correct lens distortions, that is camera calibration. If the camera is calibrated, the images are rectificated, and then it is possible to obtain undistorted measurements in world coordinates. This paper presents how the students should develop a sense of active learning for mathematical camera model besides the theoretical scientific basics. The authors will present the theoretical and practical lectures which have the goal of deepening the students understanding of the mathematical models of area scan cameras and building some practical vision measurement process by themselves.
Three-dimensional cinematography with control object of unknown shape.
Dapena, J; Harman, E A; Miller, J A
1982-01-01
A technique for reconstruction of three-dimensional (3D) motion which involves a simple filming procedure but allows the deduction of coordinates in large object volumes was developed. Internal camera parameters are calculated from measurements of the film images of two calibrated crosses while external camera parameters are calculated from the film images of points in a control object of unknown shape but at least one known length. The control object, which includes the volume in which the activity is to take place, is formed by a series of poles placed at unknown locations, each carrying two targets. From the internal and external camera parameters, and from locations of the images of point in the films of the two cameras, 3D coordinates of the point can be calculated. Root mean square errors of the three coordinates of points in a large object volume (5m x 5m x 1.5m) were 15 mm, 13 mm, 13 mm and 6 mm, and relative errors in lengths averaged 0.5%, 0.7% and 0.5%, respectively.
Detecting method of subjects' 3D positions and experimental advanced camera control system
NASA Astrophysics Data System (ADS)
Kato, Daiichiro; Abe, Kazuo; Ishikawa, Akio; Yamada, Mitsuho; Suzuki, Takahito; Kuwashima, Shigesumi
1997-04-01
Steady progress is being made in the development of an intelligent robot camera capable of automatically shooting pictures with a powerful sense of reality or tracking objects whose shooting requires advanced techniques. Currently, only experienced broadcasting cameramen can provide these pictures.TO develop an intelligent robot camera with these abilities, we need to clearly understand how a broadcasting cameraman assesses his shooting situation and how his camera is moved during shooting. We use a real- time analyzer to study a cameraman's work and his gaze movements at studios and during sports broadcasts. This time, we have developed a detecting method of subjects' 3D positions and an experimental camera control system to help us further understand the movements required for an intelligent robot camera. The features are as follows: (1) Two sensor cameras shoot a moving subject and detect colors, producing its 3D coordinates. (2) Capable of driving a camera based on camera movement data obtained by a real-time analyzer. 'Moving shoot' is the name we have given to the object position detection technology on which this system is based. We used it in a soccer game, producing computer graphics showing how players moved. These results will also be reported.
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images for the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto lens camera. Camera video is digitized, compressed, and stored in solid state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plan normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that his camera can determine three axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using telephoto lens and filter wheel. The camera is a black and white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Multi-camera sensor system for 3D segmentation and localization of multiple mobile robots.
Losada, Cristina; Mazo, Manuel; Palazuelos, Sira; Pizarro, Daniel; Marrón, Marta
2010-01-01
This paper presents a method for obtaining the motion segmentation and 3D localization of multiple mobile robots in an intelligent space using a multi-camera sensor system. The set of calibrated and synchronized cameras are placed in fixed positions within the environment (intelligent space). The proposed algorithm for motion segmentation and 3D localization is based on the minimization of an objective function. This function includes information from all the cameras, and it does not rely on previous knowledge or invasive landmarks on board the robots. The proposed objective function depends on three groups of variables: the segmentation boundaries, the motion parameters and the depth. For the objective function minimization, we use a greedy iterative algorithm with three steps that, after initialization of segmentation boundaries and depth, are repeated until convergence.
Feasibility of Using Video Camera for Automated Enforcement on Red-Light Running and Managed Lanes.
DOT National Transportation Integrated Search
2009-12-25
The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and HOV occupancy requirement using video cameras in Nevada. This objective was a...
NASA Astrophysics Data System (ADS)
Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David
2012-06-01
Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had been previously cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemaping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits of being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by being able to monitor processes. Example applications of thermography[2] with thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc...[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.
NASA Technical Reports Server (NTRS)
Kim, Won S.; Bejczy, Antal K.
1993-01-01
A highly effective predictive/preview display technique for telerobotic servicing in space under several seconds communication time delay has been demonstrated on a large laboratory scale in May 1993, involving the Jet Propulsion Laboratory as the simulated ground control station and, 2500 miles away, the Goddard Space Flight Center as the simulated satellite servicing set-up. The technique is based on a high-fidelity calibration procedure that enables a high-fidelity overlay of 3-D graphics robot arm and object models over given 2-D TV camera images of robot arm and objects. To generate robot arm motions, the operator can confidently interact in real time with the graphics models of the robot arm and objects overlaid on an actual camera view of the remote work site. The technique also enables the operator to generate high-fidelity synthetic TV camera views showing motion events that are hidden in a given TV camera view or for which no TV camera views are available. The positioning accuracy achieved by this technique for a zoomed-in camera setting was about +/-5 mm, well within the allowable +/-12 mm error margin at the insertion of a 45 cm long tool in the servicing task.
Theoretical colours and isochrones for some Hubble Space Telescope colour systems. II
NASA Technical Reports Server (NTRS)
Paltoglou, G.; Bell, R. A.
1991-01-01
A grid of synthetic surface brightness magnitudes for 14 bandpasses of the Hubble Space Telescope Faint Object Camera is presented, as well as a grid of UBV, uvby, and Faint Object Camera surface brightness magnitudes derived from the Gunn-Stryker spectrophotometric atlas. The synthetic colors are used to examine the transformations between the ground-based Johnson UBV and Stromgren uvby systems and the Faint Object Camera UBV and uvby. Two new four-color systems, similar to the Stromgren system, are proposed for the determination of abundance, temperature, and surface gravity. The synthetic colors are also used to calculate color-magnitude isochrones from the list of theoretical tracks provided by VandenBerg and Bell (1990). It is shown that by using the appropriate filters it is possible to minimize the dependence of this color difference on metallicity. The effects of interstellar reddening on various Faint Object Camera colors are analyzed as well as the observational requirements for obtaining data of a given signal-to-noise for each of the 14 bandpasses.
The Last Meter: Blind Visual Guidance to a Target.
Manduchi, Roberto; Coughlan, James M
2014-01-01
Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.
Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System
Lu, Yu; Wang, Keyi; Fan, Gongshu
2016-01-01
A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. Linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved by sensor calibration results, and is used to blend images to make panoramas reflect the objective luminance more objectively. This compensates for the limitation of stitching images that are more realistic only through the smoothing method. The dynamic range limitation of can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded by 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
ERIC Educational Resources Information Center
Reynolds, Ronald F.
1984-01-01
Describes the basic components of a space telescope that will be launched during a 1986 space shuttle mission. These components include a wide field/planetary camera, faint object spectroscope, high-resolution spectrograph, high-speed photometer, faint object camera, and fine guidance sensors. Data to be collected from these instruments are…
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
Measuring Distances Using Digital Cameras
ERIC Educational Resources Information Center
Kendal, Dave
2007-01-01
This paper presents a generic method of calculating accurate horizontal and vertical object distances from digital images taken with any digital camera and lens combination, where the object plane is parallel to the image plane or tilted in the vertical plane. This method was developed for a project investigating the size, density and spatial…
ERIC Educational Resources Information Center
Devine, Kay; Hunter, Karen H.
2017-01-01
This research examines doctoral student perceptions of emotional exhaustion relative to supportive supervision and the use of impression management (IM) and facades of conformity (FOC). Results indicated that supportive supervision significantly reduced emotional exhaustion and the use of self-presentation behaviours, while the use of FOC…
ERIC Educational Resources Information Center
Martinez, Melissa A.; Welton, Anjalé D.
2017-01-01
Drawing on the notions of biculturalism, or double consciousness, and hybridity, this qualitative study explored how 12 pre-tenure faculty of color (FOC) in the field of educational leadership working at universities in the United States negotiated their self-identified cultural identities within their predominantly White departments. Results…
USDA-ARS?s Scientific Manuscript database
Interest in application of phenolic compounds from diet or supplements for prevention of chronic diseases has grown significantly, but efficacy of such approaches in humans is largely dependent on the bioavailability and metabolism of these compounds. While food and dietary factors have been the foc...
Ameid, Tarek; Menacer, Arezki; Talhaoui, Hicham; Azzoug, Youness
2018-05-03
This paper presents a methodology for the broken rotor bars fault detection is considered when the rotor speed varies continuously and the induction machine is controlled by Field-Oriented Control (FOC). The rotor fault detection is obtained by analyzing a several mechanical and electrical quantities (i.e., rotor speed, stator phase current and output signal of the speed regulator) by the Discrete Wavelet Transform (DWT) in variable speed drives. The severity of the fault is obtained by stored energy calculation for active power signal. Hence, it can be a useful solution as fault indicator. The FOC is implemented in order to preserve a good performance speed control; to compensate the broken rotor bars effect in the mechanical speed and to ensure the operation continuity and to investigate the fault effect in the variable speed. The effectiveness of the technique is evaluated in simulation and in a real-time implementation by using Matlab/Simulink with the real-time interface (RTI) based on dSpace 1104 board. Copyright © 2018. Published by Elsevier Ltd.
Genetic algorithm-based improved DOA estimation using fourth-order cumulants
NASA Astrophysics Data System (ADS)
Ahmed, Ammar; Tufail, Muhammad
2017-05-01
Genetic algorithm (GA)-based direction of arrival (DOA) estimation is proposed using fourth-order cumulants (FOC) and ESPRIT principle which results in Multiple Invariance Cumulant ESPRIT algorithm. In the existing FOC ESPRIT formulations, only one invariance is utilised to estimate DOAs. The unused multiple invariances (MIs) must be exploited simultaneously in order to improve the estimation accuracy. In this paper, a fitness function based on a carefully designed cumulant matrix is developed which incorporates MIs present in the sensor array. Better DOA estimation can be achieved by minimising this fitness function. Moreover, the effectiveness of Newton's method as well as GA for this optimisation problem has been illustrated. Simulation results show that the proposed algorithm provides improved estimation accuracy compared to existing algorithms, especially in the case of low SNR, less number of snapshots, closely spaced sources and high signal and noise correlation. Moreover, it is observed that the optimisation using Newton's method is more likely to converge to false local optima resulting in erroneous results. However, GA-based optimisation has been found attractive due to its global optimisation capability.
Calibration of a dual-PTZ camera system for stereo vision
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2010-08-01
In this paper, we propose a calibration process for the intrinsic and extrinsic parameters of dual-PTZ camera systems. The calibration is based on a complete definition of six coordinate systems fixed at the image planes, and the pan and tilt rotation axes of the cameras. Misalignments between estimated and ideal coordinates of image corners are formed into cost values to be solved by the Nelder-Mead simplex optimization method. Experimental results show that the system is able to obtain 3D coordinates of objects with a consistent accuracy of 1 mm when the distance between the dual-PTZ camera set and the objects are from 0.9 to 1.1 meters.
Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs
NASA Astrophysics Data System (ADS)
Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.
2016-06-01
Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.
The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results
NASA Astrophysics Data System (ADS)
Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.
2017-11-01
At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained when using three sources of image: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and middle-frame camera (Hasselblad-Hd4). In order to check how the results obtained from the two sensors differ the following parameters were analysed: the accuracy of the orientation of the ground level photos on the control and check points, the distribution of appointed distortion in the self-calibration process, the flatness of the walls, the discrepancies between point clouds from the low-cost cameras and references data. The results presented below are a result of co-operation of researchers from three institutions: the Systems Research Institute PAS, The Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.
Measuring the circular motion of small objects using laser stroboscopic images.
Wang, Hairong; Fu, Y; Du, R
2008-01-01
Measuring the circular motion of a small object, including its displacement, speed, and acceleration, is a challenging task. This paper presents a new method for measuring repetitive and/or nonrepetitive, constant-speed and/or variable-speed circular motion using laser stroboscopic images. Under stroboscopic illumination, each image taken by an ordinary camera records multiple outlines of an object in motion; hence, processing the stroboscopic image makes it possible to extract the motion information. We built an experimental apparatus consisting of a laser as the light source, a stereomicroscope to magnify the image, and a normal complementary metal-oxide-semiconductor (CMOS) camera to record the image. As the object is in motion, the stroboscopic illumination generates a speckle pattern on the object that can be recorded by the camera and analyzed by a computer. Experimental results indicate that the stroboscopic imaging is stable under various conditions. Moreover, the characteristics of the motion, including the displacement, the velocity, and the acceleration, can be calculated based on the width of the speckle marks, the illumination intensity, the duty cycle, and the sampling frequency. Compared with the popular high-speed camera method, the presented method can achieve the same measuring accuracy, but with much reduced cost and complexity.
Focusing and depth of field in photography: application in dermatology practice.
Taheri, Arash; Yentzer, Brad A; Feldman, Steven R
2013-11-01
Conventional photography obtains a sharp image of objects within a given 'depth of field'; objects not within the depth of field are out of focus. In recent years, digital photography revolutionized the way pictures are taken, edited, and stored. However, digital photography does not result in a deeper depth of field or better focusing. In this article, we briefly review the concept of depth of field and focus in photography as well as new technologies in this area. A deep depth of field is used to have more objects in focus; a shallow depth of field can emphasize a subject by blurring the foreground and background objects. The depth of field can be manipulated by adjusting the aperture size of the camera, with smaller apertures increasing the depth of field at the cost of lower levels of light capture. Light-field cameras are a new generation of digital cameras that offer several new features, including the ability to change the focus on any object in the image after taking the photograph. Understanding depth of field and camera technology helps dermatologists to capture their subjects in focus more efficiently. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
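For readers who want to see the underlying arithmetic, the following sketch evaluates the standard thin-lens depth-of-field limits and shows how stopping down the aperture deepens the zone of acceptable sharpness. It is a generic textbook relation, not taken from the article; the circle-of-confusion value is an assumption.

```python
# Illustrative thin-lens depth-of-field limits (not from the article).
# A larger f-number N (smaller aperture) yields a deeper depth of field.
def depth_of_field(f_mm, N, s_mm, coc_mm=0.03):
    """Return (near, far) limits of acceptable sharpness in mm.
    f_mm: focal length, N: f-number, s_mm: focus distance, coc_mm: circle of confusion."""
    H = f_mm ** 2 / (N * coc_mm) + f_mm                    # hyperfocal distance
    near = s_mm * (H - f_mm) / (H + s_mm - 2 * f_mm)
    far = float("inf") if s_mm >= H else s_mm * (H - f_mm) / (H - s_mm)
    return near, far

for N in (2.8, 8, 22):                                     # stopping down widens the zone
    near, far = depth_of_field(f_mm=50, N=N, s_mm=500)
    print(f"f/{N}: sharp from {near:.0f} mm to {far:.0f} mm")
```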
A fuzzy automated object classification by infrared laser camera
NASA Astrophysics Data System (ADS)
Kanazawa, Seigo; Taniguchi, Kazuhiko; Asari, Kazunari; Kuramoto, Kei; Kobashi, Syoji; Hata, Yutaka
2011-06-01
Home security at night is very important, and a system that watches a person's movements is useful for security. This paper describes a system that classifies adults, children and other objects from the distance distribution measured by an infrared laser camera. This camera radiates near-infrared waves and receives the reflected ones, converting the time of flight into a distance distribution. Our method consists of four steps. First, we perform background subtraction and noise rejection on the distance distribution. Second, we apply fuzzy clustering to the distance distribution and form several clusters. Third, we extract features such as the height, thickness, aspect ratio and area ratio of each cluster. Then, we build fuzzy if-then rules from knowledge of adults, children and other objects so as to classify each cluster as adult, child or other object; a fuzzy membership function is defined for each feature. Finally, we assign each cluster to the class with the highest fuzzy degree among adult, child and other object. In our experiment, we set up the camera in a room and tested three cases. The method successfully classified them in real time.
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacements on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
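The depth capture relation that such a characterization probes can be illustrated with the basic disparity-to-depth formula for a parallel stereo rig, Z = f·B/d. The sketch below uses hypothetical focal length and baseline values and is not the authors' evaluation methodology.

```python
# Illustrative disparity-to-depth relation for a parallel stereo rig (Z = f*B/d).
# Focal length and baseline are hypothetical values.
import numpy as np

f_px = 1400.0                                  # focal length in pixels
baseline_m = 0.065                             # horizontal camera separation in metres

disparity_px = np.arange(1, 101)               # candidate disparities
depth_m = f_px * baseline_m / disparity_px     # depth reproduced for each disparity

# A one-pixel disparity quantization step translates into a depth step that
# grows rapidly with distance, which is what a depth transfer curve exposes.
depth_step_m = np.abs(np.diff(depth_m))
print(depth_m[:3], depth_step_m[:3])
```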
Tang, Yang; Xiong, Jun; Jiang, Han-Peng; Zheng, Shu-Jian; Feng, Yu-Qi; Yuan, Bi-Feng
2014-08-05
Cytosine methylation (5-methylcytosine, 5-mC) in DNA is an important epigenetic mark that has regulatory roles in various biological processes. In plants, active DNA demethylation can be achieved through direct cleavage by DNA glycosylases, followed by replacement of 5-mC with cytosine by the base excision repair (BER) machinery. Recent studies in mammals have demonstrated that 5-mC can be sequentially oxidized to 5-hydroxymethylcytosine (5-hmC), 5-formylcytosine (5-foC), and 5-carboxylcytosine (5-caC) by Ten-eleven translocation (TET) proteins. These consecutive oxidations of 5-mC constitute the active DNA demethylation pathway in mammals, which raises the possibility that the oxidation products of 5-mC (5-hmC, 5-foC, and 5-caC) are also present in plant genomes. However, there is no definitive evidence supporting the presence of these modified bases in plant genomic DNA, especially for 5-foC and 5-caC. Here we developed a chemical derivatization strategy combined with a liquid chromatography-electrospray ionization tandem mass spectrometry (LC/ESI-MS/MS) method to determine 5-formyl-2'-deoxycytidine (5-fodC) and 5-carboxyl-2'-deoxycytidine (5-cadC). Derivatization of 5-fodC and 5-cadC by Girard's reagents (GirD, GirT, and GirP) significantly increased their detection sensitivities by 52-260-fold. Using this method, we demonstrated the widespread existence of 5-fodC and 5-cadC in the genomic DNA of various plant tissues, indicating that active DNA demethylation in plants may proceed through an alternative pathway similar to that in mammals, besides the pathway of direct DNA glycosylase cleavage combined with BER. Moreover, we found that the environmental stresses of drought and salinity can change the contents of 5-fodC and 5-cadC in plant genomes, suggesting functional roles for 5-fodC and 5-cadC in the response to environmental stresses.
Traffic monitoring with distributed smart cameras
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert
2012-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has big potential. Today the automated analysis of traffic situations is still in its infancy--the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, and one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation on a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results which we have achieved so far.
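As a rough illustration of the pedestrian detection module mentioned above, the sketch below runs OpenCV's default HOG people detector on a single frame. The file names and parameters are placeholders; the deployed system's trained models and tracking logic are not reproduced here.

```python
# Rough sketch: pedestrian detection with OpenCV's default HOG people detector.
# "crossing_frame.jpg" is a placeholder input image.
import cv2

hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("crossing_frame.jpg")
rects, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)

for (x, y, w, h) in rects:                      # draw one box per detection
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("crossing_detections.jpg", frame)
```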
High-accuracy 3D measurement system based on multi-view and structured light
NASA Astrophysics Data System (ADS)
Li, Mingyue; Weng, Dongdong; Li, Yufeng; Zhang, Longbin; Zhou, Haiyun
2013-12-01
3D surface reconstruction is one of the most important topics in Spatial Augmented Reality (SAR). Using structured light is a simple and rapid way to reconstruct objects. In order to improve the precision of 3D reconstruction, we present a high-accuracy multi-view 3D measurement system based on Gray-code and phase-shift patterns. We use a camera and a light projector that casts structured-light patterns on the objects. In this system, we use only one camera to take photos on the left and right sides of the object respectively. In addition, we use VisualSFM to recover the relationships between the perspectives, so camera calibration can be omitted and the positions where the camera is placed are no longer limited. We also set an appropriate exposure time to make the scenes covered by Gray-code patterns more recognizable. All of the points above make the reconstruction more precise. We conducted experiments on different kinds of objects, and a large number of experimental results verify the feasibility and high accuracy of the system.
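The Gray-code part of such a projector-camera pipeline can be sketched as follows: column-stripe patterns are generated from the binary-reflected Gray code and later decoded back to projector columns. Pattern resolution and the decoding interface are assumptions, not the authors' implementation.

```python
# Sketch: generating and decoding column Gray-code stripe patterns for a
# structured-light projector (resolution values are assumptions).
import numpy as np

def gray_code_patterns(width=1024, height=768):
    """Return a stack of binary stripe patterns, one per Gray-code bit plane (MSB first)."""
    n_bits = int(np.ceil(np.log2(width)))
    cols = np.arange(width)
    gray = cols ^ (cols >> 1)                    # binary-reflected Gray code of each column
    planes = [np.tile(((gray >> bit) & 1).astype(np.uint8) * 255, (height, 1))
              for bit in range(n_bits - 1, -1, -1)]
    return np.stack(planes)

def decode_column(bits):
    """Recover the projector column from thresholded Gray-code bits (MSB first)."""
    value = 0
    for b in bits:
        value = (value << 1) | b
    mask = value >> 1
    while mask:                                  # Gray-to-binary conversion
        value ^= mask
        mask >>= 1
    return value

patterns = gray_code_patterns()
print(patterns.shape)                            # (10, 768, 1024)
bits = [(700 ^ (700 >> 1)) >> b & 1 for b in range(9, -1, -1)]
print(decode_column(bits))                       # -> 700
```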
Using turbulence scintillation to assist object ranging from a single camera viewpoint.
Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C
2018-03-20
Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded on a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence will cause random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems, which would otherwise be difficult.
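One way to picture the cue exploited above is to measure how strongly edge pixels fluctuate over time in a registered video clip; the sketch below computes such a scintillation index with OpenCV and NumPy. The clip name, thresholds and frame count are placeholders, and the calibration from fluctuation strength to absolute range is assumed to be established separately.

```python
# Sketch of the cue: the temporal fluctuation of edge pixels in a registered
# clip serves as a relative range indicator. File name, thresholds and frame
# count are placeholders; the fluctuation-to-distance calibration is separate.
import numpy as np
import cv2

def scintillation_index(gray_frames):
    """Mean temporal standard deviation over edge pixels of a grayscale frame stack."""
    stack = np.stack(gray_frames).astype(np.float32)       # shape (T, H, W)
    edges = cv2.Canny(gray_frames[0], 100, 200) > 0        # edge mask from the first frame
    return float(stack.std(axis=0)[edges].mean())

cap = cv2.VideoCapture("target_clip.avi")
frames = []
while len(frames) < 120:
    ok, frame = cap.read()
    if not ok:
        break
    frames.append(cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY))
cap.release()
print("scintillation index:", scintillation_index(frames))
```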
SU-E-J-197: Investigation of Microsoft Kinect 2.0 Depth Resolution for Patient Motion Tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Silverstein, E; Snyder, M
2015-06-15
Purpose: Investigate the use of the Kinect 2.0 for patient motion tracking during radiotherapy by studying its spatial and depth resolution capabilities. Methods: Using code written in C#, depth map data was extracted from the Kinect to create an initial depth map template indicative of the initial position of an object, to be compared to the depth map of the object over time. To test this process, a simple setup was created in which two objects were imaged: a 40 cm × 40 cm board covered in non-reflective material and a 15 cm × 26 cm textbook with a slightly reflective, glossy cover. Each object, imaged and measured separately, was placed on a movable platform with the object-to-camera distance measured. The object was then moved a specified amount to ascertain whether the Kinect's depth camera would visualize the difference in position of the object. Results: Initial investigations have shown the Kinect depth resolution is dependent on the object-to-camera distance. Measurements indicate that movements as small as 1 mm can be visualized for objects as close as 50 cm away. This depth resolution decreases linearly with object-to-camera distance; at 4 m, the minimum observable movement had increased to 1 cm. Conclusion: The improved resolution and advanced hardware of the Kinect 2.0 allow an increase in depth resolution over the Kinect 1.0. Although it is obvious that the depth resolution should decrease with increasing distance from an object, given the decrease in the number of pixels representing said object, the depth resolution at large distances indicates its usefulness in a clinical setting.
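A minimal sketch of the template-comparison idea described in the Methods, assuming a stored baseline depth map and a per-distance depth resolution; the array sizes, threshold and pixel count are illustrative, and the study's C# acquisition code is not reproduced.

```python
# Sketch: flag motion when enough pixels deviate from the stored depth template
# by more than the camera's depth resolution at that distance (values assumed).
import numpy as np

def motion_exceeds_resolution(template_mm, current_mm, resolution_mm, min_pixels=50):
    """Return True if enough valid pixels moved by more than the depth resolution."""
    valid = (template_mm > 0) & (current_mm > 0)           # ignore invalid depth readings
    moved = np.abs(current_mm - template_mm) > resolution_mm
    return int(np.count_nonzero(moved & valid)) >= min_pixels

template = np.full((424, 512), 900.0)                      # Kinect 2.0 depth frame, object at 900 mm
current = template.copy()
current[200:230, 250:280] += 2.0                           # simulate a 2 mm displacement
print(motion_exceeds_resolution(template, current, resolution_mm=1.0))   # True
```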
Multi-view video segmentation and tracking for video surveillance
NASA Astrophysics Data System (ADS)
Mohammadi, Gelareh; Dufaux, Frederic; Minh, Thien Ha; Ebrahimi, Touradj
2009-05-01
Tracking moving objects is a critical step for smart video surveillance systems. Despite the increase in complexity, multiple-camera systems exhibit the undoubted advantages of covering wide areas and handling occlusions by exploiting the different viewpoints. The technical problems in multiple-camera systems are several: installation, calibration, object matching, switching, data fusion, and occlusion handling. In this paper, we address the issue of tracking moving objects in an environment covered by multiple un-calibrated cameras with overlapping fields of view, typical of most surveillance setups. Our main objective is to create a framework that can be used to integrate object-tracking information from multiple video sources. Basically, the proposed technique consists of the following steps. We first perform a single-view tracking algorithm on each camera view, and then apply a consistent object labeling algorithm on all views. In the next step, we verify the objects in each view separately for inconsistencies. Corresponding objects are extracted through a homography transform from one view to the other and vice versa. Having found the corresponding objects of different views, we partition each object into homogeneous regions. In the last step, we apply the homography transform to find the region map of the first view in the second view and vice versa. For each region (in the main frame and the mapped frame) a set of descriptors is extracted to find the best match between the two views based on region descriptor similarity. This method is able to deal with multiple objects. Track management issues such as occlusion, appearance and disappearance of objects are resolved using information from all views. This method is capable of tracking rigid and deformable objects, and this versatility makes it suitable for different application scenarios.
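The cross-view correspondence step can be illustrated with OpenCV's homography estimation: matched ground-plane points relate the two overlapping views, and a tracked object's position can then be mapped from one camera into the other. The point coordinates below are placeholders.

```python
# Sketch: estimating a homography from matched ground-plane points and mapping
# a tracked object's position from one view into the other (coordinates assumed).
import numpy as np
import cv2

pts_view1 = np.float32([[100, 400], [520, 410], [310, 250], [80, 260], [600, 300]])
pts_view2 = np.float32([[140, 380], [560, 395], [330, 240], [110, 255], [630, 290]])

H, inliers = cv2.findHomography(pts_view1, pts_view2, cv2.RANSAC, 5.0)

obj_view1 = np.float32([[[300, 350]]])                 # object footprint in view 1
obj_view2 = cv2.perspectiveTransform(obj_view1, H)     # its predicted location in view 2
print(obj_view2.reshape(-1, 2))
```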
Feeling of Certainty: Uncovering a Missing Link between Knowledge and Acceptance of Evolution
ERIC Educational Resources Information Center
Ha, Minsu; Haury, David L.; Nehm, Ross H.
2012-01-01
We propose a new model of the factors influencing acceptance of evolutionary theory that highlights a novel variable unexplored in previous studies: the feeling of certainty (FOC). The model is grounded in an emerging understanding of brain function that acknowledges the contributions of intuitive cognitions in making decisions, such as whether or…
Naval Systems Engineering Guide
2004-10-01
Decision Critical Design Review System Integration Activities IOC FRP Decision Review Production & Deployment Sustainment IOT & FOC Sustainmen...reentered when things change significantly, such as funding, requirements, or schedule. This process must start at the very beginning of a Major...outputs through sub-processes will reveal a number of things : a. Determine the level of process applicability and tailoring required. b. Additional
Acute Alcohol Effects on Repetition Priming and Word Recognition Memory with Equivalent Memory Cues
ERIC Educational Resources Information Center
Ray, Suchismita; Bates, Marsha E.
2006-01-01
Acute alcohol intoxication effects on memory were examined using a recollection-based word recognition memory task and a repetition priming task of memory for the same information without explicit reference to the study context. Memory cues were equivalent across tasks; encoding was manipulated by varying the frequency of occurrence (FOC) of words…
Depth Perception In Remote Stereoscopic Viewing Systems
NASA Technical Reports Server (NTRS)
Diner, Daniel B.; Von Sydow, Marika
1989-01-01
Report describes theoretical and experimental studies of the perception of depth by human operators through stereoscopic video systems. The purpose of such studies is to optimize dual-camera configurations used to view workspaces of remote manipulators at distances of 1 to 3 m from the cameras. According to the analysis, static stereoscopic depth distortion is decreased, without decreasing stereoscopic depth resolution, by increasing the camera-to-object and intercamera distances and the camera focal length. The analysis further predicts that dynamic stereoscopic depth distortion is reduced by rotating the cameras around the center of a circle passing through the point of convergence of the viewing axes and the first nodal points of the two camera lenses.
Real object-based 360-degree integral-floating display using multiple depth camera
NASA Astrophysics Data System (ADS)
Erdenebat, Munkh-Uchral; Dashdavaa, Erkhembaatar; Kwon, Ki-Chul; Wu, Hui-Ying; Yoo, Kwan-Hee; Kim, Young-Seok; Kim, Nam
2015-03-01
A novel 360-degree integral-floating display based on a real object is proposed. The general procedure of the display system is similar to that of conventional 360-degree integral-floating displays. Unlike previously presented 360-degree displays, the proposed system displays a 3D image generated from a real object in the 360-degree viewing zone. In order to do so, multiple depth cameras are utilized to acquire depth information around the object. Then, 3D point cloud representations of the real object are reconstructed according to the acquired depth information. Using a special point cloud registration method, the multiple virtual 3D point cloud representations captured by each depth camera are combined into a single synthetic 3D point cloud model, and elemental image arrays are generated for the newly synthesized 3D point cloud model from the given angular step of the anamorphic optic system. The theory has been verified experimentally, and the results show that the proposed 360-degree integral-floating display is an excellent way to display a real object in the 360-degree viewing zone.
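The paper's own registration method is not detailed in the abstract, so the following is only a generic sketch of rigid point-cloud alignment: corresponding points from two depth cameras are aligned with the SVD-based Kabsch solution before merging into one synthetic cloud. The synthetic data and exact correspondences are assumptions.

```python
# Generic sketch (not the paper's registration method): aligning one depth
# camera's point cloud to another from known correspondences via the SVD-based
# Kabsch solution, then merging them into a single synthetic cloud.
import numpy as np

def rigid_align(source, target):
    """Return rotation R and translation t minimizing ||R @ source_i + t - target_i||."""
    src_c, tgt_c = source.mean(axis=0), target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                   # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, tgt_c - R @ src_c

rng = np.random.default_rng(0)
cloud_a = rng.normal(size=(500, 3))                         # points seen by camera A
true_R = np.array([[0, -1, 0], [1, 0, 0], [0, 0, 1]], float)
cloud_b = cloud_a @ true_R.T + np.array([0.2, 0.0, 0.5])    # same points seen by camera B
R, t = rigid_align(cloud_a, cloud_b)
merged = np.vstack([cloud_a @ R.T + t, cloud_b])            # single synthetic point cloud
print(np.allclose(R, true_R, atol=1e-6), merged.shape)
```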
Temperature resolution enhancing of commercially available IR camera using computer processing
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-09-01
As is well known, the passive THz camera is a very promising tool for security applications. It allows concealed objects to be seen without contact and is not dangerous to the person being screened. Using such a THz camera, one can see a temperature difference on the human skin if this difference is caused by different temperatures inside the body. Because the passive THz camera is very expensive, we try to use an IR camera to observe the same phenomenon. We use computer code that is available for processing the images captured by a commercially available IR camera manufactured by Flir Corp. Using this code, we clearly demonstrate the change in human skin temperature induced by drinking water. Nevertheless, in some cases additional computer processing is necessary to show the change in body temperature clearly; one such approach is developed by us. We believe that it increases the temperature resolution of the camera by a factor of ten or more. The experiments carried out can be used for solving counter-terrorism problems and for medical applications. The demonstrated phenomenon is very important for the detection of forbidden objects and substances concealed inside the human body using non-destructive inspection without X-rays. Earlier, we demonstrated this possibility using THz radiation.
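The authors' processing is not disclosed in the abstract; as a generic illustration, the sketch below shows the standard route to finer apparent temperature resolution by temporal frame averaging, where the noise of an N-frame average drops roughly as the square root of N. The scene and noise figures are assumed.

```python
# Generic illustration: averaging N frames of a thermal sequence reduces
# temporal noise roughly by sqrt(N). Scene temperature and per-frame noise
# (NETD) are assumed values, not measurements from the paper.
import numpy as np

rng = np.random.default_rng(1)
true_scene = np.full((240, 320), 34.0)                 # skin-temperature scene, deg C
netd = 0.05                                            # assumed per-frame noise, deg C

frames = true_scene + rng.normal(scale=netd, size=(100,) + true_scene.shape)
print("single-frame noise:      %.4f C" % frames[0].std())
print("100-frame average noise: %.4f C" % frames.mean(axis=0).std())   # ~10x lower
```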
NASA Astrophysics Data System (ADS)
Zhang, Hua; Zeng, Luan
2017-11-01
Binocular stereoscopic vision can be used for close-range observation of space targets from space-based platforms. In order to solve the problem that a traditional binocular vision system cannot work normally after a disturbance, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object at the edge of the main optical path, so that it is imaged on the same focal plane as the target and acts as a built-in standard reference for the binocular imaging system. When the position of the system or the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the image plane, while the standard reference object itself does not move. The external parameters of the cameras can then be re-calibrated from the observed geometry of the standard reference. The experimental results show that the maximum mean square error for the same object can be reduced from 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.
Fusion of thermal- and visible-band video for abandoned object detection
NASA Astrophysics Data System (ADS)
Beyan, Cigdem; Yigit, Ahmet; Temizel, Alptekin
2011-07-01
Timely detection of packages that are left unattended in public spaces is a security concern, and rapid detection is important for the prevention of potential threats. Because constant surveillance of such places is challenging and labor intensive, automated abandoned-object-detection systems that aid operators have started to be widely used. In many studies, stationary objects, such as people sitting on a bench, are also detected as suspicious objects, because abandoned items are defined as items newly added to the scene that remain stationary for a predefined time. Therefore, any stationary object results in an alarm, causing a high number of false alarms. These false alarms could be prevented by classifying suspicious items as living or nonliving objects. In this study, a system for abandoned object detection that aids operators surveilling indoor environments such as airports and railway or metro stations is proposed. By analyzing information from a thermal- and a visible-band camera, people and the objects left behind can be detected and discriminated as living or nonliving, reducing the false-alarm rate. Experiments demonstrate that using data obtained from a thermal camera in addition to a visible-band camera also increases the true detection rate of abandoned objects.
Combining color and shape information for illumination-viewpoint invariant object recognition.
Diplaros, Aristeidis; Gevers, Theo; Patras, Ioannis
2006-01-01
In this paper, we propose a new scheme that merges color- and shape-invariant information for object recognition. To obtain robustness against photometric changes, color-invariant derivatives are computed first. Color invariance is an important aspect of any object recognition scheme, as color changes considerably with variation in illumination, object pose, and camera viewpoint. These color-invariant derivatives are then used to obtain similarity-invariant shape descriptors. Shape invariance is equally important as, under a change in camera viewpoint and object pose, the shape of a rigid object undergoes a perspective projection on the image plane. Then, the color and shape invariants are combined in a multidimensional color-shape context which is subsequently used as an index. As the indexing scheme makes use of a color-shape invariant context, it provides a highly discriminative information cue that is robust against varying imaging conditions. The matching function of the color-shape context allows for fast recognition, even in the presence of object occlusion and cluttering. The experimental results show that the method recognizes rigid objects with high accuracy in 3-D complex scenes and is robust against changing illumination, camera viewpoint, object pose, and noise.
NASA Astrophysics Data System (ADS)
Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.
2013-09-01
Recent ground testing of a wide-area camera system and automated star-removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small-aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in a LEO orbit, providing automated and continuous monitoring of deep space with high refresh rates, with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher-resolution queries. The data processing approach may also be applied to larger-aperture, higher-resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of observations made at the Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field of view, with a detection sensitivity close to the camera's shot-noise limit of Mv = 13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and space objects (SOs). Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited, and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
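A simplified sketch of the PCA filtering idea, assuming a stack of frames already registered to the celestial frame: the leading principal components capture the shared background and slow systematic variations, and subtracting them leaves objects that move against that frame. The synthetic frame data and component count are assumptions.

```python
# Simplified sketch of the PCA filter: remove the leading principal components
# of a registered frame stack (shared background plus slow systematic drifts),
# leaving objects that move against the celestial frame. Data are synthetic.
import numpy as np

def remove_leading_components(frames, n_components=3):
    """frames: (T, H, W) stack. Return the stack with its leading PCs removed."""
    T, H, W = frames.shape
    X = frames.reshape(T, H * W)
    mean = X.mean(axis=0)
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    background = (U[:, :n_components] * S[:n_components]) @ Vt[:n_components]
    return (X - mean - background).reshape(T, H, W)

rng = np.random.default_rng(2)
stars = rng.random((1, 64, 64)) * 100.0                     # registered celestial background
gain = 1.0 + 0.02 * np.sin(np.linspace(0, np.pi, 50))       # slow gain/atmosphere drift
frames = stars * gain[:, None, None] + rng.normal(scale=1.0, size=(50, 64, 64))
frames[np.arange(50), 32, np.arange(50) % 64] += 25.0       # object drifting one pixel per frame

residual = remove_leading_components(frames)
print(residual.std(), residual.max())                       # the drifting object survives the filter
```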
DOE Office of Scientific and Technical Information (OSTI.GOV)
Santhanam, A; Min, Y; Beron, P
Purpose: Patient safety hazards such as the wrong patient or site being treated can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during radiotherapy setup and alert the therapist before the treatment is initiated. Methods: We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real time. Changes in an object's position were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g. gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. Results: Our systematic analyses showed that the system was able to effectively recognize wrong patient setups and wrong patient accessories. The combined usage of 2D camera information (color + depth) enabled a topology-preserving approach to verify patient safety hazards in an automatic manner, even in scenarios where the depth information is only partially available. Conclusion: By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
Science, conservation, and camera traps
Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.
2011-01-01
Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.
Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.
Donné, Simon; Goossens, Bart; Philips, Wilfried
2017-08-23
Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly linear or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative for sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
The NACA High-Speed Motion-Picture Camera Optical Compensation at 40,000 Photographs Per Second
NASA Technical Reports Server (NTRS)
Miller, Cearcy D
1946-01-01
The principle of operation of the NACA high-speed camera is completely explained. This camera, operating at the rate of 40,000 photographs per second, took the photographs presented in numerous NACA reports concerning combustion, preignition, and knock in the spark-ignition engine. Many design details are presented and discussed; details of an entirely conventional nature are omitted. The inherent aberrations of the camera are discussed and partly evaluated. The focal-plane-shutter effect of the camera is explained. Photographs of the camera are presented. Some high-speed motion pictures of familiar objects -- photoflash bulb, firecrackers, camera shutter -- are reproduced as an illustration of the quality of the photographs taken by the camera.
Adaptive Optics For Imaging Bright Objects Next To Dim Ones
NASA Technical Reports Server (NTRS)
Shao, Michael; Yu, Jeffrey W.; Malbet, Fabien
1996-01-01
Adaptive optics used in imaging optical systems, according to proposal, to enhance high-dynamic-range images (images of bright objects next to dim objects). Designed to alter wavefronts to correct for effects of scattering of light from small bumps on imaging optics. Original intended application of concept in advanced camera installed on Hubble Space Telescope for imaging of such phenomena as large planets near stars other than Sun. Also applicable to other high-quality telescopes and cameras.
USDA-ARS?s Scientific Manuscript database
Consumer-grade cameras are being increasingly used for remote sensing applications in recent years. However, the performance of this type of cameras has not been systematically tested and well documented in the literature. The objective of this research was to evaluate the performance of original an...
Semi-autonomous wheelchair system using stereoscopic cameras.
Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T
2009-01-01
This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic rig captures an image from both the left and right cameras, and the pair is processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map, allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. The purpose of this assistive technology utilising stereoscopic cameras is automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment demonstrated the effectiveness of this assistive technology.
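The SAD correspondence step can be sketched as a brute-force block matcher over rectified grayscale images, as below; block size, disparity range and the synthetic test images are illustrative and do not reproduce the wheelchair system's implementation.

```python
# Brute-force SAD block matching over rectified grayscale images (illustrative
# parameters; not the wheelchair system's implementation).
import numpy as np

def sad_disparity(left, right, block=7, max_disp=32):
    """Return a coarse disparity map for rectified grayscale images of shape (H, W)."""
    H, W = left.shape
    half = block // 2
    disp = np.zeros((H, W), dtype=np.float32)
    for y in range(half, H - half):
        for x in range(half + max_disp, W - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            costs = [np.abs(patch - right[y - half:y + half + 1,
                                          x - d - half:x - d + half + 1].astype(np.int32)).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))               # disparity with the lowest SAD cost
    return disp

rng = np.random.default_rng(3)
right_img = rng.integers(0, 255, size=(60, 80), dtype=np.uint8)
left_img = np.roll(right_img, 5, axis=1)                     # a uniform 5-pixel disparity
print(np.median(sad_disparity(left_img, right_img)[10:-10, 45:-10]))   # ~5
```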
Finite-time tracking control for multiple non-holonomic mobile robots based on visual servoing
NASA Astrophysics Data System (ADS)
Ou, Meiying; Li, Shihua; Wang, Chaoli
2013-12-01
This paper investigates the finite-time tracking control problem of multiple non-holonomic mobile robots via visual servoing. It is assumed that the pinhole camera is fixed to the ceiling and that the camera parameters are unknown. The desired reference trajectory is represented by a virtual leader whose states are available to only a subset of the followers, and the followers have only local interaction. First, the camera-objective visual kinematic model is introduced by utilising the pinhole camera model for each mobile robot. Second, a unified tracking error system between the camera-objective visual servoing model and the desired reference trajectory is introduced. Third, based on the neighbour rule and by using a finite-time control method, continuous distributed cooperative finite-time tracking control laws are designed for each mobile robot with unknown camera parameters, where the communication topology among the multiple mobile robots is assumed to be a directed graph. A rigorous proof shows that the group of mobile robots converges to the desired reference trajectory in finite time. A simulation example illustrates the effectiveness of our method.
Efficient view based 3-D object retrieval using Hidden Markov Model
NASA Astrophysics Data System (ADS)
Jain, Yogendra Kumar; Singh, Roshan Kumar
2013-12-01
Recent research effort has been dedicated to view-based 3-D object retrieval, because 3-D objects are highly discriminative and have multi-view representations. State-of-the-art methods depend heavily on their own camera array settings for capturing views of 3-D objects and use a complex Zernike descriptor and HAC for representative view selection, which limits their practical application and makes retrieval inefficient. Therefore, an efficient and effective algorithm is required for 3-D object retrieval. In order to move toward a general framework for efficient 3-D object retrieval that is independent of the camera array setting and avoids representative view selection, we propose an Efficient View Based 3-D Object Retrieval (EVBOR) method using a Hidden Markov Model (HMM). In this framework, each object is represented by an independent set of views, which means views are captured from any direction without any camera array restriction. Views (including the query views) are clustered to generate view clusters, which are then used to build the query model with an HMM. In our proposed method, the HMM is used in two ways: in training (HMM estimation) and in retrieval (HMM decoding). The query model is trained using these view clusters, and retrieval is performed by combining the query model with HMM decoding. The proposed approach removes the static camera array setting for view capturing and can be applied to any 3-D object database to retrieve 3-D objects efficiently and effectively. Experimental results demonstrate that the proposed scheme shows better performance than existing methods.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location- and context-aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia data about related history, architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
Orientation Modeling for Amateur Cameras by Matching Image Line Features and Building Vector Data
NASA Astrophysics Data System (ADS)
Hung, C. H.; Chang, W. C.; Chen, L. C.
2016-06-01
With the popularity of geospatial applications, database updating is becoming important due to environmental changes over time. Imagery provides a lower-cost and efficient way to update databases. Three-dimensional objects can be measured by space intersection using conjugate image points and the orientation parameters of the cameras. However, precise orientation parameters for light amateur cameras are not always available, because precision GPS and IMU units are costly and heavy. To automate data updating, the correspondence between object vector data and the image may be built to improve the accuracy of direct georeferencing. This study contains four major parts: (1) back-projection of object vector data, (2) extraction of image line features, (3) object-image feature line matching, and (4) line-based orientation modeling. In order to construct the correspondence of features between an image and a building model, the building vector features were back-projected onto the image using the initial camera orientation from GPS and IMU. Image line features were extracted from the imagery. Afterwards, the matching procedure was done by assessing the similarity between the extracted image features and the back-projected ones. The fourth part utilized line features in orientation modeling. The line-based orientation modeling was performed by integrating line parametric equations into the collinearity condition equations. The experimental data included images with 0.06 m resolution acquired by a Canon EOS 5D Mark II camera on a Microdrones MD4-1000 UAV. Experimental results indicate that an accuracy of 2.1 pixels may be reached, which is equivalent to 0.12 m in object space.
Optical analysis of a compound quasi-microscope for planetary landers
NASA Technical Reports Server (NTRS)
Wall, S. D.; Burcher, E. E.; Huck, F. O.
1974-01-01
A quasi-microscope concept, consisting of a facsimile camera augmented with an auxiliary lens as a magnifier, was introduced and analyzed. The performance achievable with this concept was primarily limited by a trade-off between resolution and object field; this approach leads to a limiting resolution of 20 microns when used with the Viking lander camera (which has an angular resolution of 0.04 deg). An optical system is analyzed which includes a field lens between the camera and auxiliary lens to overcome this limitation. It is found that this system, referred to as a compound quasi-microscope, can provide improved resolution (to about 2 microns) and a larger object field. However, this improvement is at the expense of increased complexity, special camera design requirements, and tighter tolerances on the distances between optical components.
Transient full-field vibration measurement using spectroscopical stereo photogrammetry.
Yue, Kaiduan; Li, Zhongke; Zhang, Ming; Chen, Shan
2010-12-20
In contrast with other vibration measurement methods, a novel spectroscopical photogrammetric approach is proposed. Two colored light filters and a color CCD camera are used to achieve the function of two traditional cameras. A new calibration method is then presented; it focuses on the vibrating object rather than the camera and has the advantage of greater accuracy than traditional camera calibration. The test results have shown an accuracy of 0.02 mm.
Exploring the Universe with the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
1990-01-01
A general overview is given of the operations, engineering challenges, and components of the Hubble Space Telescope. Deployment, checkout and servicing in space are discussed. The optical telescope assembly, focal plane scientific instruments, wide field/planetary camera, faint object spectrograph, faint object camera, Goddard high resolution spectrograph, high speed photometer, fine guidance sensors, second generation technology, and support systems and services are reviewed.
Independent motion detection with a rival penalized adaptive particle filter
NASA Astrophysics Data System (ADS)
Becker, Stefan; Hübner, Wolfgang; Arens, Michael
2014-10-01
Aggregation of pixel-based motion detection into regions of interest, which include views of single moving objects in a scene, is an essential pre-processing step in many vision systems. Motion events of this type provide significant information about the object type or build the basis for action recognition. Further, motion is an essential saliency measure, which is able to effectively support high-level image analysis. When applied to static cameras, background subtraction methods achieve good results. On the other hand, motion aggregation on freely moving cameras is still a widely unsolved problem. The image flow measured on a freely moving camera is the result of two major motion types: first the ego-motion of the camera, and second object motion that is independent of the camera motion. When capturing a scene with a camera, these two motion types are inseparably blended together. In this paper, we propose an approach to detect multiple moving objects from a mobile monocular camera system in an outdoor environment. The overall processing pipeline consists of a fast ego-motion compensation algorithm in the preprocessing stage. Real-time performance is achieved by using a sparse optical flow algorithm as an initial processing stage and a densely applied probabilistic filter in the post-processing stage. Thereby, we follow the idea proposed by Jung and Sukhatme. Normalized intensity differences originating from a sequence of ego-motion compensated difference images represent the probability of moving objects. Noise and registration artefacts are filtered out using a Bayesian formulation. The resulting a posteriori distribution is located on image regions showing strong amplitudes in the difference image which are in accordance with the motion prediction. In order to effectively estimate the a posteriori distribution, a particle filter is used. In addition to the fast ego-motion compensation, the main contribution of this paper is the design of the probabilistic filter for real-time detection and tracking of independently moving objects. The proposed approach introduces a competition scheme between particles in order to ensure improved multi-modality. Further, the filter design helps to generate a particle distribution which is homogeneous even in the presence of multiple targets showing non-rigid motion patterns. The effectiveness of the method is shown on exemplary outdoor sequences.
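The authors' rival-penalized filter is not reproduced here; as a point of reference, the sketch below is a generic bootstrap particle filter in which particles carrying image positions are weighted by a (synthetic) ego-motion-compensated difference image and resampled, so that they concentrate on an independently moving region.

```python
# Generic bootstrap particle filter (not the rival-penalized variant): particles
# holding image positions are weighted by a synthetic difference image and
# resampled, concentrating on the independently moving region.
import numpy as np

rng = np.random.default_rng(4)
H, W, N = 120, 160, 500
yy, xx = np.mgrid[0:H, 0:W]

def difference_image(t):
    """Synthetic motion-probability image with a target drifting to the right."""
    cy, cx = 60, 30 + 2 * t
    blob = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * 5.0 ** 2))
    return blob + 0.01 * rng.random((H, W))                  # weak background noise

particles = np.column_stack((rng.uniform(0, H, N), rng.uniform(0, W, N)))
for t in range(30):
    particles += rng.normal(scale=2.0, size=particles.shape)        # prediction / diffusion
    particles[:, 0] = particles[:, 0].clip(0, H - 1)
    particles[:, 1] = particles[:, 1].clip(0, W - 1)
    obs = difference_image(t)
    weights = obs[particles[:, 0].astype(int), particles[:, 1].astype(int)]
    weights /= weights.sum()
    particles = particles[rng.choice(N, size=N, p=weights)]         # importance resampling

print(particles.mean(axis=0))   # settles near the target, around (60, 88)
```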
Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera
Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Tang, Hongru; Xi, Ning
2016-01-01
Controlling robots by natural language (NL) is increasingly attracting attention for its versatility and convenience, and because it requires no extensive training for users. Grounding is a crucial challenge of this problem if robots are to understand NL instructions from humans. This paper mainly explores the object grounding problem and concretely studies how to detect target objects from NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, the object attributes and relations between objects are further extracted. The NL instructions that incorporate multiple cues for object specifications are parsed into domain-specific annotations. The annotations from NL and the extracted information from the RGB-D camera are matched in a computational state estimation framework to search all possible object grounding states. The final grounding is accomplished by selecting the states which have the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. The experiments of NL-controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications. PMID:27983604
Analyses of Alternatives: Toward a More Rigorous Determination of Scope
2014-04-30
justification is required to avoid three months of wasted effort, the change is unlikely to happen. Applying a Systems View Systems Thinking and...Defense Acquisition Guidebook para. 10.5.2 IOC FOC TRL 1‐3 TRL 4 TRL 7 TRL 8 TRL 9 Compon‐ ent and/or Bread ‐ board Validation In a Relevant Environ‐ ment
Ambulatory Care Data Base (ACDB) Data Dictionary Sequential Files of Phase 2.
1992-04-01
CEREBRAL ARTERIES 4340 CEREBRAL THROMBOSIS 43491 STROKE , ISCHEMIC 435 TRANSIENT CEREBRAL ISCHEMIA 4359 TRANSIENT ISCHEMIC ATTACK (TIA) 43591 TRANS...HYPERTENSIVE CRISIS 4373 ANEURYSM, CEREBRAL , NONRUPTURED 4374 ARTERITIS, CEREBRAL 4378 CEREBROVASCULAR DISEASE, OTHER ILL-DEFINED 43781 STROKE , LACUNAR...95950 MONITORING FOR LOCALIZATION OF CEREBRAL SEIZURE FOC 95999 OTHER NEUROLOGICAL DIAGNOSTIC PROCEDURES 96500 CHEMO INJ, SINGLE, PRE-MIX, PUSH 96501
Usein, C R; Damian, M; Tatu-Chitoiu, D; Capusa, C; Fagaras, R; Tudorache, D; Nica, M; Le Bouguénec, C
2001-01-01
A total of 78 E. coli strains isolated from adults with different types of urinary tract infections were screened by polymerase chain reaction for the prevalence of genetic regions coding for virulence factors. The targeted genetic determinants were those coding for type 1 fimbriae (fimH), pili associated with pyelonephritis (pap), S and F1C fimbriae (sfa and foc), afimbrial adhesins (afa), hemolysin (hly), cytotoxic necrotizing factor (cnf), and aerobactin (aer). Among the studied strains, the prevalence of genes coding for fimbrial adhesive systems was 86%, 36%, and 23% for fimH, pap, and sfa/foc, respectively. The operons coding for Afa afimbrial adhesins were identified in 14% of strains. The hly and cnf genes coding for toxins were amplified in 23% and 13% of strains, respectively. A prevalence of 54% was found for the aer gene. The various combinations of detected genes were designated as virulence patterns. The strains isolated from hospitalized patients displayed a greater number of virulence genes and a greater diversity of gene associations compared to the strains isolated from ambulatory subjects. A rapid assessment of bacterial pathogenicity characteristics may contribute to better medical management of patients with urinary tract infections.
System for critical infrastructure security based on multispectral observation-detection module
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Kastek, Mariusz; Życzkowski, Marek; Dulski, Rafał; Szustakowski, Mieczysław; Ciurapiński, Wiesław; Bareła, Jarosław
2013-10-01
Recent terrorist attacks and the possibility of such actions in the future have forced the development of security systems for critical infrastructures that embrace sensor technologies and the technical organization of systems. The perimeter protection of stationary objects used until now, based on the construction of a ring with two-zone fencing and visible-light cameras with illumination, is being efficiently displaced by multisensor systems that consist of: visible technology - day/night cameras registering the optical contrast of a scene; thermal technology - inexpensive bolometric cameras recording the thermal contrast of a scene; and active ground radars - at microwave and millimetre wavelengths - that detect and record reflected radiation. Merging these three different technologies into one system requires a methodology for selecting the installation conditions and the parameters of the sensors. This procedure enables us to construct a system with correlated range, resolution, field of view and object identification. An important technical problem connected with the multispectral system is its software, which couples the radar with the cameras. This software can be used for automatic focusing of the cameras, automatic guiding of the cameras to an object detected by the radar, tracking of the object and localization of the object on a digital map, as well as target identification and alerting. Based on a "plug and play" architecture, the system provides unmatched flexibility and simple integration of sensors and devices in TCP/IP networks. Using a graphical user interface it is possible to control sensors and monitor streaming video and other data over the network, visualize the results of the data fusion process and obtain detailed information about detected intruders on a digital map. The system provides high-level applications and operator workload reduction with features such as sensor-to-sensor cueing from detection devices, automatic e-mail notification and alarm triggering. The paper presents the structure and some elements of a critical infrastructure protection solution which is based on a modular multisensor security system. The system description focuses mainly on the methodology for selecting sensor parameters. The results of tests in real conditions are also presented.
A telephoto camera system with shooting direction control by gaze detection
NASA Astrophysics Data System (ADS)
Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro
2015-05-01
For safe driving, it is important for the driver to check traffic conditions such as traffic lights or traffic signs as early as possible. If an on-vehicle camera captures images of the objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand the traffic conditions earlier. To image distant objects clearly, the focal length of the camera must be long; but when the focal length is long, the on-vehicle camera does not have a sufficient field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicates the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera takes a telescopic image and displays it to the driver. However, in that study the driver uses a touch panel to indicate the shooting direction, which disturbs driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a non-wearable detection method to avoid hindering driving; the gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. Experiments confirmed that the proposed system captures images of what the subject is gazing at straight ahead.
NASA Technical Reports Server (NTRS)
Diner, Daniel B. (Inventor)
1989-01-01
A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a convergence point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the zoom lenses for the cameras are all increased. Doubling the distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.
Investigation of the influence of spatial degrees of freedom on thermal infrared measurement
NASA Astrophysics Data System (ADS)
Fleuret, Julien R.; Yousefi, Bardia; Lei, Lei; Djupkep Dizeu, Frank Billy; Zhang, Hai; Sfarra, Stefano; Ouellet, Denis; Maldague, Xavier P. V.
2017-05-01
Long Wavelength Infrared (LWIR) cameras can provide a representation of the part of the light spectrum that is sensitive to temperature. These cameras, also named Thermal Infrared (TIR) cameras, are powerful tools for detecting features that cannot be seen by other imaging technologies. For instance, they enable the detection of defects in materials, of fever and anxiety in mammals, and of many other features in numerous applications. However, the accuracy of thermal cameras can be affected by many parameters; the most critical is the relative position of the camera with respect to the object of interest. Several models have been proposed in order to minimize the influence of some of these parameters, but they are mostly related to specific applications. Because such models are based on prior information related to the context, their applicability to other contexts cannot easily be assessed. The few remaining models are mostly associated with a specific device. In this paper the authors study the influence of the camera position on the measurement accuracy. Modeling the position of the camera relative to the object of interest depends on many parameters. In order to propose a study that is as accurate as possible, the position of the camera is represented by a five-dimensional model. The aim of this study is to investigate and attempt to introduce a model that is as independent of the device as possible.
Localization and Mapping Using a Non-Central Catadioptric Camera System
NASA Astrophysics Data System (ADS)
Khurana, M.; Armenakis, C.
2018-05-01
This work details the development of an indoor navigation and mapping system using a non-central catadioptric omnidirectional camera and its implementation for mobile applications. Omnidirectional catadioptric cameras find use in the navigation and mapping of robotic platforms owing to their wide field of view. Having a wider field of view, or rather a potential 360° field of view, allows the system to "see and move" more freely in the navigation space. A catadioptric camera system is a low-cost system which consists of a mirror and a camera; any perspective camera can be used. A platform was constructed in order to combine the mirror and a camera into a catadioptric system. A calibration method was developed in order to obtain the relative position and orientation between the two components so that they can be considered one monolithic system. The mathematical model for localizing the system was derived using conditions based on the reflective properties of the mirror. The obtained platform positions were then used to map the environment using epipolar geometry. Experiments were performed to test the mathematical models and the achieved localization and mapping accuracies of the system. An iterative process of positioning and mapping was applied to determine object coordinates of an indoor environment while navigating the mobile platform. Camera localization and the 3D coordinates of object points achieved decimetre-level accuracies.
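The paper itself uses a non-central catadioptric camera model. As a simplified illustration of epipolar-geometry-based localization and mapping for an ordinary perspective camera (not the paper's model), the sketch below estimates the relative pose between two views from matched image points and triangulates object points with OpenCV; K, pts1 and pts2 are assumed inputs.

```python
# Minimal sketch (perspective-camera analogue, not the paper's catadioptric model):
# relative pose from the essential matrix, then sparse mapping by triangulation.
# K: 3x3 intrinsic matrix; pts1, pts2: matched pixel coordinates (Nx2) from two views.
import numpy as np
import cv2

def relative_pose_and_map(K, pts1, pts2):
    """Estimate camera motion between two views and triangulate object points."""
    pts1 = np.asarray(pts1, dtype=np.float64)
    pts2 = np.asarray(pts2, dtype=np.float64)

    # Essential matrix from matched image points (RANSAC rejects outliers).
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC, threshold=1.0)

    # Decompose E into rotation R and unit-norm translation t of view 2 w.r.t. view 1.
    _, R, t, pose_mask = cv2.recoverPose(E, pts1, pts2, K, mask=inliers)

    # Projection matrices: P1 = K[I|0], P2 = K[R|t].
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
    P2 = K @ np.hstack([R, t])

    # Triangulate the surviving correspondences; convert homogeneous 4xN to Nx3.
    good = pose_mask.ravel() > 0
    X_h = cv2.triangulatePoints(P1, P2, pts1[good].T, pts2[good].T)
    X = (X_h[:3] / X_h[3]).T          # object points, up to scale
    return R, t, X
```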
Use of camera drive in stereoscopic display of learning contents of introductory physics
NASA Astrophysics Data System (ADS)
Matsuura, Shu
2011-03-01
Simple 3D physics simulations with stereoscopic display were created for a part of introductory physics e-Learning. First, the cameras viewing the 3D world were made controllable by the user, which enabled observation of the system and of the motions of objects from any position in the 3D world. Second, cameras were made attachable to one of the moving objects in the simulation so as to observe the relative motion of the other objects. With this option, it was found that users perceive velocity and acceleration more sensibly on a stereoscopic display than on a non-stereoscopic 3D display. The simulations were made using Adobe Flash ActionScript, and the Papervision3D library was used to render the 3D models in the Flash web pages. To display the stereogram, two viewports from virtual cameras were displayed in parallel in the same web page. For observation of the stereogram, the images of the two viewports were superimposed using a 3D stereogram projection box (T&TS CO., LTD.) and projected on an 80-inch screen. The virtual cameras were controlled by keyboard and also by Nintendo Wii remote controller buttons. In conclusion, stereoscopic display offers learners more opportunities to play with the simulated models and to better perceive the characteristics of motion.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrate the effectiveness and accuracy of the proposed technique.
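The abstract does not give the details of the colour crosstalk correction. Assuming a simple linear two-channel mixing model, the separation step could look like the sketch below; the mixing coefficients are placeholders that would in practice be measured (for example, by imaging each optical path separately).

```python
# Illustrative sketch: separate the two optical paths recorded in the blue and red
# channels of a single colour frame, assuming a linear 2x2 crosstalk model.
# The default mixing coefficients are placeholders, not measured values.
import numpy as np

def separate_channels(bgr_image, crosstalk=((1.00, 0.08),
                                            (0.05, 1.00))):
    """Return the estimated 'blue-path' and 'red-path' images from one BGR frame."""
    b = bgr_image[:, :, 0].astype(np.float64)   # recorded blue channel
    r = bgr_image[:, :, 2].astype(np.float64)   # recorded red channel

    # Recorded = M @ true, with M the 2x2 crosstalk matrix; invert it per pixel.
    M = np.array(crosstalk, dtype=np.float64)
    M_inv = np.linalg.inv(M)

    true_b = M_inv[0, 0] * b + M_inv[0, 1] * r
    true_r = M_inv[1, 0] * b + M_inv[1, 1] * r
    return np.clip(true_b, 0, 255), np.clip(true_r, 0, 255)
```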
Scalar wave-optical reconstruction of plenoptic camera images.
Junker, André; Stenau, Tim; Brenner, Karl-Heinz
2014-09-01
We investigate the reconstruction of plenoptic camera images in a scalar wave-optical framework. Previous publications relating to this topic numerically simulate light propagation on the basis of ray tracing. However, due to continuing miniaturization of hardware components it can be assumed that in combination with low-aperture optical systems this technique may not be generally valid. Therefore, we study the differences between ray- and wave-optical object reconstructions of true plenoptic camera images. For this purpose we present a wave-optical reconstruction algorithm, which can be run on a regular computer. Our findings show that a wave-optical treatment is capable of increasing the detail resolution of reconstructed objects.
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
Selecting a digital camera for telemedicine.
Patricoski, Chris; Ferguson, A Stewart
2009-06-01
The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.
Coaxial fundus camera for ophthalmology
NASA Astrophysics Data System (ADS)
de Matos, Luciana; Castro, Guilherme; Castro Neto, Jarbas C.
2015-09-01
A fundus camera for ophthalmology is a high-definition device which needs to provide low-light illumination of the human retina, high resolution at the retina and reflection-free imaging. These constraints make its optical design very sophisticated, but the most difficult requirements to comply with are reflection-free illumination and the final alignment, due to the high number of non-coaxial optical components in the system. Reflections of the illumination, both at the objective and at the cornea, mask image quality, and poor alignment renders the sophisticated optical design useless. In this work we developed a fully coaxial optical system for a non-mydriatic fundus camera. The illumination is provided by an LED ring, coaxial with the optical system and composed of IR or visible LEDs. The illumination ring is projected by the objective lens onto the cornea. The objective, LED illuminator and CCD lens are coaxial, making the final alignment easy to perform. The CCD plus capture-lens module is a CCTV camera with built-in autofocus and zoom, added to a 175 mm focal length doublet corrected for infinity, making the system easy to operate and very compact.
Conceptual design for an AIUC multi-purpose spectrograph camera using DMD technology
NASA Astrophysics Data System (ADS)
Rukdee, S.; Bauer, F.; Drass, H.; Vanzi, L.; Jordan, A.; Barrientos, F.
2017-02-01
Current and upcoming massive astronomical surveys are expected to discover a torrent of objects which need ground-based follow-up observations to characterize their nature. For transient objects in particular, rapid and efficient early spectroscopic identification is needed, and a small-field Integral Field Unit (IFU) would mitigate traditional slit losses and acquisition time. To this end, we present the design of a Digital Micromirror Device (DMD) multi-purpose spectrograph camera capable of running in several modes: traditional longslit, small-field patrol IFU, multi-object, and full-field IFU mode via Hadamard spectra reconstruction. The AIUC Optical multi-purpose CAMera (AIUCOCAM) is a low-resolution spectrograph camera of R 1,600 covering the spectral range 0.45-0.85 μm. We employ a VPH grating as the disperser, which is removable to allow an imaging mode. This spectrograph is envisioned for use on a 1-2 m class telescope in Chile to take advantage of good site conditions. We present design decisions and challenges for a cost-effective robotized spectrograph. The resulting instrument is remarkably versatile, capable of addressing a wide range of scientific topics.
Auto-converging stereo cameras for 3D robotic tele-operation
NASA Astrophysics Data System (ADS)
Edmondson, Richard; Aycock, Todd; Chenault, David
2012-06-01
Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle adjustment. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjustment of the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system and to provide state information relative to the Earth, the Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in the solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.
A multi-criteria approach to camera motion design for volume data animation.
Hsu, Wei-Hsien; Zhang, Yubo; Ma, Kwan-Liu
2013-12-01
We present an integrated camera motion design and path generation system for building volume data animations. Creating animations is an essential task in presenting complex scientific visualizations. Existing visualization systems use an established animation function based on keyframes selected by the user. This approach is limited in providing the optimal in-between views of the data. Alternatively, computer graphics and virtual reality camera motion planning is frequently focused on collision free movement in a virtual walkthrough. For semi-transparent, fuzzy, or blobby volume data the collision free objective becomes insufficient. Here, we provide a set of essential criteria focused on computing camera paths to establish effective animations of volume data. Our dynamic multi-criteria solver coupled with a force-directed routing algorithm enables rapid generation of camera paths. Once users review the resulting animation and evaluate the camera motion, they are able to determine how each criterion impacts path generation. In this paper, we demonstrate how incorporating this animation approach with an interactive volume visualization system reduces the effort in creating context-aware and coherent animations. This frees the user to focus on visualization tasks with the objective of gaining additional insight from the volume data.
NASA Astrophysics Data System (ADS)
Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent
2003-10-01
In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate visual system, which allows the animal to perform, in real time, a selection of the few most conspicuous locations in the visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or to create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to the excitation and suppression documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, the allocation of computational time is weighted based upon the history of each camera: a camera agent that has a history of seeing more salient targets is more likely to obtain computational resources. The system demonstrates the viability of biologically inspired systems in real-time tracking. In future work we plan to implement additional biological mechanisms for cooperative management of both the sensor and processing resources in this system, including top-down biasing for target specificity as well as novelty, and the activity of the tracked object in relation to sensitive features of the environment.
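As a rough illustration of the saliency-driven selection each camera agent performs, the sketch below computes an intensity-only centre-surround saliency map and picks its maximum. The full Itti-Koch model adds colour, orientation and motion channels plus a winner-take-all stage, and the authors' distributed Beowulf implementation is not reproduced here.

```python
# Greatly simplified, intensity-only centre-surround saliency sketch, in the spirit
# of the cited Itti-Koch model; not the authors' implementation.
import cv2
import numpy as np

def intensity_saliency(gray, center_sigma=2.0, surround_sigma=16.0):
    """Return a normalized saliency map from centre-surround intensity differences."""
    img = gray.astype(np.float32)
    center = cv2.GaussianBlur(img, (0, 0), center_sigma)      # fine-scale response
    surround = cv2.GaussianBlur(img, (0, 0), surround_sigma)  # coarse-scale response
    sal = np.abs(center - surround)                           # centre-surround contrast
    sal -= sal.min()
    if sal.max() > 0:
        sal /= sal.max()
    return sal

def most_salient_point(gray):
    """Pixel location a camera agent might be steered toward."""
    sal = intensity_saliency(gray)
    y, x = np.unravel_index(np.argmax(sal), sal.shape)
    return x, y
```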
How the Army Runs: A Senior Leader Reference Handbook. 2013-2014
2013-01-01
sector, creates a recruiting multiplier, improves employment prospects for transitioning personnel, reduces unemployment compensation costs to the Army...Development (IR&D) efforts. By providing the private sector an unclassified, descriptive list of desired FOCs, the Army is able to tap into a wealth of...civilian sector resources during mobilization. It coordinates the response of the civil agencies to defense needs, always cognizant that without
Ambulatory Care Data Base (ACDB) Data Dictionary Sequential Files of Phase 2.
1992-04-01
ARTERIES; 4340 CEREBRAL THROMBOSIS; 43491 STROKE, ISCHEMIC; 435 TRANSIENT CEREBRAL ISCHEMIA; 4359 TRANSIENT ISCHEMIC ATTACK (TIA); 43591 TRANS ISCHEMIC ATTACK W...CRISIS; 4373 ANEURYSM, CEREBRAL, NONRUPTURED; 4374 ARTERITIS, CEREBRAL; 4378 CEREBROVASCULAR DISEASE, OTHER ILL-DEFINED; 43781 STROKE, LACUNAR; 438 LATE...FOR LOCALIZATION OF CEREBRAL SEIZURE FOC; 95999 OTHER NEUROLOGICAL DIAGNOSTIC PROCEDURES; 96500 CHEMO INJ, SINGLE, PRE-MIX, PUSH; 96501 CHEMO INJ, SINGLE
Eyeglass Benefits: Consideration of Frame of Choice for Retired Service Members
2009-04-20
ABSTRACT: The Department of Defense (DoD) provides basic eyewear to our nation's military members. Although not specifically entitled under...Title X, military retirees historically also receive standard issue eyewear. The military's Frame of Choice (FOC) program currently benefits the...current fiscal environment. SUBJECT TERMS: Eyeglasses, Frame of Choice, Retiree, Service Members, Entitlements, Eyewear, Benefit, Optometry, Optical
Adib, N; Ghanbarpour, R; Solatzadeh, H; Alizade, H
2014-03-01
Escherichia coli (E. coli) strains are the major cause of urinary tract infections (UTI) and belong to the large group of extra-intestinal pathogenic E. coli. The purposes of this study were to determine the antibiotic resistance profile, virulence genes and phylogenetic background of E. coli isolates from UTI cases. A total of 137 E. coli isolates were obtained from UTI samples. The antimicrobial susceptibility of confirmed isolates was determined by disk diffusion method against eight antibiotics. The isolates were examined to determine the presence and prevalence of selected virulence genes including iucD, sfa/focDE, papEF and hly. ECOR phylo-groups of isolates were determined by detection of yjaA and chuA genes and fragment TspE4.C2. The antibiogram results showed that 71% of the isolates were resistant to cefazolin, 60.42% to co-trimoxazole, 54.16% to nalidixic acid, 36.45% to gentamicin, 29.18% to ciprofloxacin, 14.58% to cefepime, 6.25% to nitrofurantoin and 0.00% to imipenem. Twenty-two antibiotic resistance patterns were observed among the isolates. Virulence genotyping of isolates revealed that 58.39% isolates had at least one of the four virulence genes. The iucD gene was the most prevalent gene (43.06%). The other genes including sfa/focDE, papEF and hly genes were detected in 35.76%, 18.97% and 2.18% isolates, respectively. Nine combination patterns of the virulence genes were detected in isolates. Phylotyping of 137 isolates revealed that the isolates fell into A (45.99%), B1 (13.14%), B2 (19.71%) and D (21.16%) groups. Phylotyping of multidrug resistant isolates indicated that these isolates are mostly in A (60.34%) and D (20.38%) groups. In conclusion, the isolates that possessed the iucD, sfa/focDE, papEF and hly virulence genes mostly belonged to A and B2 groups, whereas antibiotic resistant isolates were in groups A and D. Escherichia coli strains carrying virulence factors and antibiotic resistance are distributed in specific phylogenetic background.
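The phylo-grouping by chuA, yjaA and the TspE4.C2 fragment corresponds to the widely used Clermont triplex-PCR scheme. Assuming that scheme is the one applied here, the dichotomous assignment can be written as the short function below, taking the PCR results as booleans.

```python
# Minimal sketch of the dichotomous ECOR phylo-group assignment from the three
# triplex-PCR markers named in the abstract (chuA, yjaA, TspE4.C2), following the
# commonly used Clermont scheme. Inputs indicate whether each marker amplified.
def clermont_phylogroup(chuA: bool, yjaA: bool, tspE4_C2: bool) -> str:
    if chuA:
        return "B2" if yjaA else "D"
    return "B1" if tspE4_C2 else "A"

# Example: an isolate positive only for TspE4.C2 falls into group B1.
assert clermont_phylogroup(chuA=False, yjaA=False, tspE4_C2=True) == "B1"
```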
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either directly by direct measurement of subjective quality, or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
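The abstract describes ImagePhi as an interactive tool for estimating subject illumination and distance from image content. As a rough, non-interactive illustration of the photospace idea (not the paper's method), the sketch below derives an APEX scene-brightness value from EXIF exposure metadata; the tag handling, the assumption that EXIF is present, and the file path are all illustrative.

```python
# Rough alternative to interactive photospace estimation: read exposure metadata
# from EXIF and convert it to an APEX scene-brightness value (Bv = Av + Tv - Sv).
# Subject distance is often absent from EXIF, so only the brightness axis of the
# photospace is sketched here; this is not a substitute for ImagePhi.
import math
from PIL import Image, ExifTags

TAGS = {v: k for k, v in ExifTags.TAGS.items()}  # EXIF tag name -> tag id

def exif_value(exif, name):
    tag = TAGS.get(name)
    return exif.get(tag) if exif and tag in exif else None

def brightness_value(path):
    """Return the APEX Bv for one image, or None if exposure metadata is missing."""
    exif = Image.open(path)._getexif() or {}
    t = exif_value(exif, "ExposureTime")        # seconds
    N = exif_value(exif, "FNumber")             # aperture number
    iso = exif_value(exif, "ISOSpeedRatings")   # int or tuple, camera dependent
    if not (t and N and iso):
        return None
    if isinstance(iso, (tuple, list)):
        iso = iso[0]
    Av = 2.0 * math.log2(float(N))              # aperture value
    Tv = -math.log2(float(t))                   # time value
    Sv = math.log2(float(iso) / 3.125)          # speed value (Sv = 5 at ISO 100)
    return Av + Tv - Sv                         # APEX scene brightness Bv
```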
Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror
Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji
2017-01-01
This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385
Hurwitz, David S; Pradhan, Anuj; Fisher, Donald L; Knodler, Michael A; Muttart, Jeffrey W; Menon, Rajiv; Meissner, Uwe
2012-01-01
Context: Backing crash injuries can be severe; approximately 200 of the 2,500 reported injuries of this type per year to children under the age of 15 years result in death. Technology for assisting drivers when backing has had limited success in preventing backing crashes. Objectives: Two questions are addressed: Why is the reduction in backing crashes moderate when rear-view cameras are deployed? Could rear-view cameras augment sensor systems? Design: 46 drivers (36 experimental, 10 control) completed 16 parking trials over 2 days (eight trials per day). Experimental participants were provided with a sensor camera system; controls were not. Three crash scenarios were introduced. Setting: Parking facility at UMass Amherst, USA. Subjects: 46 drivers (33 men, 13 women), average age 29 years, who were Massachusetts residents licensed within the USA for an average of 9.3 years. Interventions: Vehicles equipped with a rear-view camera and a sensor-system-based parking aid. Main Outcome Measures: Subjects' eye fixations while driving and researchers' observation of collisions with objects during backing. Results: Only 20% of drivers looked at the rear-view camera before backing, and 88% of those did not crash. Of those who did not look at the rear-view camera before backing, 46% looked after the sensor warned the driver. Conclusions: This study indicates that drivers not only attend to an audible warning, but will look at a rear-view camera if available. Evidence suggests that when used appropriately, rear-view cameras can mitigate the occurrence of backing crashes, particularly when paired with an appropriate sensor system. PMID:20363812
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and is strongly robust against variations in ambient light or the thermal radiation of a hot object, it shows great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
Fixed-focus camera objective for small remote sensing satellites
NASA Astrophysics Data System (ADS)
Topaz, Jeremy M.; Braun, Ofer; Freiman, Dov
1993-09-01
An athermalized objective has been designed for a compact, lightweight push-broom camera which is under development at El-Op Ltd. for use in small remote-sensing satellites. The high-performance objective has a fixed focus setting, but maintains focus passively over the full range of temperatures encountered in small satellites. The lens is an F/5.0, 320 mm focal length Tessar type, operating over the range 0.5-0.9 micrometers. It has a 16 degree field of view and accommodates various state-of-the-art silicon detector arrays. The design and performance of the objective are described in this paper.
3D morphology reconstruction using linear array CCD binocular stereo vision imaging system
NASA Astrophysics Data System (ADS)
Pan, Yu; Wang, Jinjiang
2018-01-01
A conventional binocular vision imaging system, which has a small field of view, cannot reconstruct the 3-D shape of a dynamic object. We developed a linear array CCD binocular vision imaging system, which uses different calibration and reconstruction methods. Compared with the conventional binocular vision imaging system, the linear array CCD binocular vision imaging system has a wider field of view, can reconstruct the 3-D morphology of objects in continuous motion, and gives accurate results. This research mainly introduces the composition and principle of the linear array CCD binocular vision imaging system, including the calibration, capture, matching and reconstruction stages. The system consists of two linear array cameras placed in a special arrangement and a horizontally moving platform that carries the objects. The internal and external parameters of the cameras are obtained by calibration in advance. The cameras then capture images of the moving objects, which are matched and reconstructed in 3-D. The linear array CCD binocular vision imaging system can accurately measure the 3-D appearance of moving objects, and this work is of value for measuring the 3-D morphology of moving objects.
Motion Imagery and Robotics Application (MIRA)
NASA Technical Reports Server (NTRS)
Martinez, Lindolfo; Rich, Thomas
2011-01-01
Objectives include: I. Prototype a camera service leveraging the CCSDS integrated protocol stack (MIRA/SM&C/AMS/DTN): a) CCSDS MIRA Service (new); b) Spacecraft Monitor and Control (SM&C); c) Asynchronous Messaging Service (AMS); d) Delay/Disruption Tolerant Networking (DTN). II. Additional MIRA objectives: a) Demonstrate camera control through ISS using the CCSDS protocol stack (Berlin, May 2011); b) Verify that the CCSDS standards stack can provide end-to-end space camera services across ground and space environments; c) Test interoperability of various CCSDS protocol standards; d) Identify overlaps in the design and implementations of the CCSDS protocol standards; e) Identify software incompatibilities in the CCSDS stack interfaces; f) Provide redlines to the SM&C, AMS, and DTN working groups; g) Enable the CCSDS MIRA service for potential use in ISS Kibo camera commanding; h) Assist in the long-term evolution of this entire group of CCSDS standards to TRL 6 or greater.
2001-11-29
KENNEDY SPACE CENTER, Fla. -- Fully unwrapped, the Advanced Camera for Surveys, which is suspended by an overhead crane, is checked over by workers. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
Camera calibration method of binocular stereo vision based on OpenCV
NASA Astrophysics Data System (ADS)
Zhong, Wanzhen; Dong, Xiaona
2015-10-01
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of a spatial object. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, obtaining higher precision and efficiency. First, the camera model in OpenCV and an algorithm for camera calibration are presented, with particular attention to the influence of radial and decentering lens distortion. Then, a camera calibration procedure is designed to compute the camera parameters and calculate the calibration errors. A high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this part. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
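A minimal OpenCV calibration sketch along the lines the abstract describes is shown below. The 48-corner board is assumed to be an 8 x 6 grid of inner corners, and the image path and square size are placeholders.

```python
# Minimal OpenCV chessboard calibration sketch (assumed 8x6 inner-corner board,
# placeholder image path and square size).
import glob
import numpy as np
import cv2

PATTERN = (8, 6)          # inner corners per row, per column (8 * 6 = 48)
SQUARE_SIZE = 25.0        # mm, assumed

# 3D coordinates of the board corners in the board's own coordinate frame.
objp = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_SIZE

obj_points, img_points = [], []
image_size = None
for path in glob.glob("calib/*.png"):          # placeholder path
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if not found:
        continue
    # Refine corner locations to sub-pixel accuracy.
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Returns RMS reprojection error, intrinsics K, and the radial/tangential
# distortion coefficients (k1, k2, p1, p2, k3) discussed in the abstract.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```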
Vision System Measures Motions of Robot and External Objects
NASA Technical Reports Server (NTRS)
Talukder, Ashit; Matthies, Larry
2008-01-01
A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
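The visual-odometry subsystem is described as combining stereo-derived 3D points with optical-flow correspondences and a least-mean-squares motion estimate. As an illustration of that step (not necessarily the authors' exact formulation), a standard SVD-based least-squares rigid alignment of corresponding 3D points is sketched below.

```python
# Sketch of a standard least-squares rigid-motion estimate (SVD / Kabsch) between
# two sets of corresponding 3D points, e.g. stereo-derived points tracked by
# optical flow across frames. Illustrative; not the authors' exact solver.
import numpy as np

def rigid_motion(P, Q):
    """Return R (3x3) and t (3,) minimizing sum ||R @ P[i] + t - Q[i]||^2."""
    P = np.asarray(P, dtype=np.float64)
    Q = np.asarray(Q, dtype=np.float64)
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)       # centroids
    H = (P - cP).T @ (Q - cQ)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Force a proper rotation (determinant +1) to avoid reflections.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cQ - R @ cP
    return R, t
```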
Performance of Color Camera Machine Vision in Automated Furniture Rough Mill Systems
D. Earl Kline; Agus Widoyoko; Janice K. Wiedenbeck; Philip A. Araman
1998-01-01
The objective of this study was to evaluate the performance of color camera machine vision for lumber processing in a furniture rough mill. The study used 134 red oak boards to compare the performance of automated gang-rip-first rough mill yield based on a prototype color camera lumber inspection system developed at Virginia Tech with both estimated optimum rough mill...
2001-12-01
KENNEDY SPACE CENTER, Fla. - STS-109 Mission Specialist Richard Linnehan (left) and Payload Commander John Grunsfeld get a feel for tools and equipment that will be used on the mission. The crew is at KSC to take part in Crew Equipment Interface Test activities that include familiarization with the orbiter and equipment. The goal of the mission is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
Mapping Land and Water Surface Topography with instantaneous Structure from Motion
NASA Astrophysics Data System (ADS)
Dietrich, J.; Fonstad, M. A.
2012-12-01
Structure from Motion (SfM) has given researchers an invaluable tool for low-cost, high-resolution 3D mapping of the environment. These SfM 3D surface models are commonly constructed from many digital photographs collected with one digital camera (either handheld or attached to an aerial platform). This method works for stationary or very slowly moving objects; objects in motion, however, are impossible to capture with one-camera SfM. With multiple simultaneously triggered cameras, it becomes possible to capture multiple photographs at the same time, which allows the construction of 3D surface models of moving objects and surfaces: an instantaneous SfM (ISfM) surface model. In river science, ISfM provides a low-cost solution for measuring a number of river variables that researchers normally estimate or are unable to collect over large areas. With ISfM, sufficient coverage of the banks and RTK-GPS control, it is possible to create a digital surface model of land and water surface elevations across an entire channel, and to obtain water surface slopes at any point within the surface model. By setting the cameras to collect time-lapse photography of a scene, it is possible to create multiple surfaces that can be compared using traditional digital surface model differencing. These water surface models could be combined with high-resolution bathymetry to create fully 3D cross sections that could be useful in hydrologic modeling. Multiple temporal image sets could also be used in 2D or 3D particle image velocimetry to create 3D surface velocity maps of a channel. Other applications in earth science include any situation where researchers could benefit from temporal surface modeling, such as mass movements, lava flows and dam removal monitoring. The camera system used for this research consisted of ten pocket digital cameras (Canon A3300) equipped with wireless triggers. The triggers were constructed with an Arduino-style microcontroller and off-the-shelf handheld radios with a maximum range of several kilometers. The cameras are controlled from another microcontroller/radio combination that allows manual or automatic triggering of the cameras. The total cost of the camera system was approximately 1500 USD.
ACT-Vision: active collaborative tracking for multiple PTZ cameras
NASA Astrophysics Data System (ADS)
Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet
2009-04-01
We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
Calibration Techniques for Accurate Measurements by Underwater Camera Systems
Shortis, Mark
2015-01-01
Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, the microlens arrays, which are based on an established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and of a flame is simulated using the calibrated parameters of a Raytrix camera (R29). The optimized models improve the image resolution, the utilization of the imaging area, and the depth-of-field shooting range.
Concepts, laboratory, and telescope test results of the plenoptic camera as a wavefront sensor
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, L. F.; Montilla, I.; Fernández-Valdivia, J. J.; Trujillo-Sevilla, J. L.; Rodríguez-Ramos, J. M.
2012-07-01
The plenoptic camera has been proposed as an alternative wavefront sensor suitable for extended objects within the context of the design of the European Solar Telescope (EST), but it can also be used with point sources. Originating in the field of electronic photography, the plenoptic camera directly samples the light field function, which is the four-dimensional representation of all the light entering a camera. Image formation can then be seen as the result of the photography operator applied to this function, and many other features of the light field can be exploited to extract information about the scene, such as depth computation for 3D imaging or, as specifically addressed in this paper, wavefront sensing. The underlying concept of the plenoptic camera can be adapted to a telescope by placing a lenslet array of the same f-number at the focal plane, thus obtaining at the detector a set of pupil images corresponding to every sampled point of view. This approach generalizes the Shack-Hartmann, curvature and pyramid wavefront sensors, in the sense that all of them can be considered particular cases of the plenoptic wavefront sensor, because the information needed as the starting point for those sensors can be derived from the plenoptic image. Laboratory results obtained with extended objects, phase plates and commercial interferometers, and even telescope observations using stars and the Moon as an extended object, are presented in the paper, clearly showing the capability of the plenoptic camera to behave as a wavefront sensor.
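The plenoptic wavefront sensor derives Shack-Hartmann-like information from the set of pupil images formed under each lenslet. The sketch below assumes an idealized, axis-aligned lenslet grid with an integer pixel pitch; real devices need calibrated lenslet centres (rotation, hexagonal packing), so this only illustrates the resampling and slope-estimation idea.

```python
# Idealized sketch: extract sub-aperture (per-pupil-position) views from a raw
# plenoptic image with an axis-aligned lenslet grid of integer pixel pitch 'p',
# and compute Shack-Hartmann-like local tilt estimates from lenslet centroids.
import numpy as np

def subaperture_views(raw, p):
    """Return an array of shape (p, p, H//p, W//p): one low-res view per pupil sample."""
    H, W = raw.shape
    H, W = (H // p) * p, (W // p) * p            # crop to a whole number of lenslets
    raw = raw[:H, :W]
    # Axes become (pixel_v, pixel_u, lenslet_row, lenslet_col).
    return raw.reshape(H // p, p, W // p, p).transpose(1, 3, 0, 2)

def mean_slopes(raw, p):
    """Centroid shift of each lenslet spot, a proxy for the local wavefront tilt."""
    views = subaperture_views(raw, p)            # (v, u, rows, cols)
    idx = np.arange(p) - (p - 1) / 2.0           # pixel offsets from the lenslet centre
    total = views.sum(axis=(0, 1)) + 1e-12
    slope_y = (views * idx[:, None, None, None]).sum(axis=(0, 1)) / total
    slope_x = (views * idx[None, :, None, None]).sum(axis=(0, 1)) / total
    return slope_x, slope_y                      # per-lenslet centroid offsets (pixels)
```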
NASA Technical Reports Server (NTRS)
1997-01-01
Passive millimeter wave (PMMW) sensors have the ability to see through fog, clouds, dust and sandstorms and thus have the potential to support all-weather operations, both military and commercial. Many of the applications, such as military transport or commercial aircraft landing, are technologically stressing in that they require imaging of a scene with a large field of view in real time and with high spatial resolution. The development of a low cost PMMW focal plane array camera is essential to obtain real-time video images to fulfill the above needs. The overall objective of this multi-year project (Phase 1) was to develop and demonstrate the capabilities of a W-band PMMW camera with a microwave/millimeter wave monolithic integrated circuit (MMIC) focal plane array (FPA) that can be manufactured at low cost for both military and commercial applications. This overall objective was met in July 1997 when the first video images from the camera were generated of an outdoor scene. In addition, our consortium partner McDonnell Douglas was to develop a real-time passive millimeter wave flight simulator to permit pilot evaluation of a PMMW-equipped aircraft in a landing scenario. A working version of this simulator was completed. This work was carried out under the DARPA-funded PMMW Camera Technology Reinvestment Project (TRP), also known as the PMMW Camera DARPA Joint Dual-Use Project. In this final report for the Phase 1 activities, a year by year description of what the specific objectives were, the approaches taken, and the progress made is presented, followed by a description of the validation and imaging test results obtained in 1997.
HDR video synthesis for vision systems in dynamic scenes
NASA Astrophysics Data System (ADS)
Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried
2016-09-01
High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to handle camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
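The core of the approach is weighted averaging of aligned radiance maps. The sketch below shows that basic merging step for a set of already aligned, differently exposed frames, using simple hat-shaped weights and assuming a linear sensor response; the paper's projective alignment, motion masking and temporal filtering are not reproduced.

```python
# Sketch of weighted radiance-map averaging for aligned, differently exposed frames.
# Hat-shaped weights favour well-exposed pixels; a linear sensor response is assumed.
import numpy as np

def merge_hdr(frames, exposure_times):
    """frames: list of HxW (or HxWx3) uint8 arrays; exposure_times: list of seconds."""
    eps = 1e-6
    num, den = None, None
    for img, t in zip(frames, exposure_times):
        z = img.astype(np.float64) / 255.0
        w = 1.0 - np.abs(2.0 * z - 1.0)          # 0 at the extremes, 1 at mid-grey
        radiance = z / t                          # relative radiance under linear response
        num = w * radiance if num is None else num + w * radiance
        den = w if den is None else den + w
    return num / (den + eps)                      # HDR radiance map (relative units)
```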
Comparison of experimental three-band IR detection of buried objects and multiphysics simulations
NASA Astrophysics Data System (ADS)
Rabelo, Renato C.; Tilley, Heather P.; Catterlin, Jeffrey K.; Karunasiri, Gamani; Alves, Fabio D. P.
2018-04-01
A buried-object detection system composed of a LWIR, a MWIR and a SWIR camera, along with a set of ground and ambient temperature sensors, was constructed and tested. The objects were buried in a 1.2x1x0.3 m3 sandbox, and surface temperature (using the LWIR and MWIR cameras) and reflection (using the SWIR camera) were recorded throughout the day. Two objects (aluminum and Teflon), each with a volume of about 2.5x10-4 m3, were placed at varying depths during the measurements. Ground temperature sensors buried at three different depths measured the vertical temperature profile within the sandbox, while the weather station recorded the ambient temperature and solar radiation intensity. Images from the three cameras were simultaneously acquired at five-minute intervals over many days. An algorithm to postprocess and combine the images was developed in order to maximize the probability of detection by identifying thermal anomalies (temperature contrast) resulting from the presence of the buried object in an otherwise homogeneous medium. A simplified detection metric based on contrast differences was established to allow evaluation of the image processing method. Finite element simulations were performed, reproducing the experimental conditions and, where possible, incorporating data from actual measurements. Comparisons between experimental and simulation results were performed, and the simulation parameters were adjusted until the images generated by both methods matched, aiming at obtaining insight into the buried material properties. Preliminary results show great potential for the detection of shallow-buried objects such as land mines and IEDs, and possible identification using finite-element-generated maps fitted to measured surface maps.
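The abstract mentions a simplified detection metric based on contrast differences. As a generic stand-in for such a metric (not the authors' algorithm), the sketch below flags pixels whose temperature deviates from a local background estimate by more than a robust threshold; the window size and threshold factor are assumptions.

```python
# Simplified stand-in for a contrast-based detection metric: flag pixels whose value
# deviates from a local background estimate (large median filter) by more than 'k'
# robust standard deviations. Window size and k are illustrative choices.
import numpy as np
from scipy.ndimage import median_filter

def thermal_anomalies(temperature_map, background_size=51, k=3.0):
    t = np.asarray(temperature_map, dtype=np.float64)
    background = median_filter(t, size=background_size)     # smooth background estimate
    contrast = t - background                                # local thermal contrast
    # Robust standard deviation via the median absolute deviation (MAD).
    sigma = 1.4826 * np.median(np.abs(contrast - np.median(contrast)))
    mask = np.abs(contrast) > k * (sigma + 1e-12)
    return mask, contrast
```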
Radar based autonomous sensor module
NASA Astrophysics Data System (ADS)
Styles, Tim
2016-10-01
Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
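The module is described as using range-Doppler processing of an FMCW radar for moving-object detection. A generic two-FFT range-Doppler map is sketched below under assumed inputs (a chirps-by-samples array of beat-signal samples); the windowing and scaling choices are illustrative, not taken from the SAPIENT ASM.

```python
# Standard range-Doppler processing sketch for an FMCW radar: a fast-time FFT per
# chirp gives range bins, a slow-time FFT across chirps gives Doppler bins.
# 'cube' is an assumed (num_chirps x samples_per_chirp) array of beat-signal samples.
import numpy as np

def range_doppler_map(cube):
    n_chirps, n_samples = cube.shape
    # Window both dimensions to control sidelobes.
    win_fast = np.hanning(n_samples)
    win_slow = np.hanning(n_chirps)
    x = cube * win_fast[None, :] * win_slow[:, None]
    # Fast-time FFT (range), then slow-time FFT (Doppler), centred on zero velocity.
    rng = np.fft.fft(x, axis=1)
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)
    return 20.0 * np.log10(np.abs(rd) + 1e-12)   # power in dB per (Doppler, range) cell
```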
Matrix Determination of Reflectance of Hidden Object via Indirect Photography
2012-03-01
the hidden object. This thesis provides an alternative method of processing the camera images by modeling the system as a set of transport and...Distribution Function (BRDF). [Figure 1: Indirect photography with camera field of view dictated by point of illumination.] 1.3 Research Focus: In an...would need to be modeled using radiometric principles. A large amount of the improvement in this process was due to the use of a blind
Time-of-Flight Microwave Camera.
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-05
Microwaves can penetrate many obstructions that are opaque at visible wavelengths; however, microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable "stealth" regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows "camera-like" behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
Study of a quasi-microscope design for planetary landers
NASA Technical Reports Server (NTRS)
Giat, O.; Brown, E. B.
1973-01-01
The Viking Lander facsimile camera, in its present form, provides for a minimum object distance of 1.9 meters, at which distance its resolution of 0.0007 radian provides an object resolution of 1.33 millimeters. It was deemed desirable, especially for follow-on Viking missions, to provide means for examining the Martian terrain at resolutions considerably higher than that now provided. This led to the concept of the quasi-microscope, an attachment to be used in conjunction with the facsimile camera to convert it to a low-power microscope. The results are reported of an investigation to consider alternate optical configurations for the quasi-microscope and to develop optical designs for the selected system or systems. Initial requirements included consideration of object resolutions in the range of 2 to 50 micrometers, an available field of view of the order of 500 pixels, and no significant modifications to the facsimile camera.
Safety evaluation of red-light cameras
DOT National Transportation Integrated Search
2005-04-01
The objective of this final study was to determine the effectiveness of red-light-camera (RLC) systems in reducing crashes. The study used empirical Bayes before-and-after research using data from seven jurisdictions across the United States at 132 t...
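The abstract names only the empirical Bayes (EB) before-and-after method. The sketch below shows one common textbook parameterization of the EB step, blending a safety-performance-function prediction with the observed count using a weight derived from the negative-binomial overdispersion parameter; parameterizations vary between studies, so this is illustrative rather than the study's exact model.

```python
# Sketch of one common empirical Bayes (EB) before-after step: blend the crash count
# predicted by a safety performance function (SPF) with the observed count, weighted
# by the SPF's negative-binomial overdispersion parameter k. Illustrative only.
def eb_expected_crashes(observed, predicted, overdispersion_k):
    """EB estimate of expected crashes in the before period at one site."""
    w = 1.0 / (1.0 + overdispersion_k * predicted)   # weight on the SPF prediction
    return w * predicted + (1.0 - w) * observed

# Example: 9 observed, 5.0 predicted over the before period, k = 0.3  ->  7.4
print(round(eb_expected_crashes(9, 5.0, 0.3), 2))
```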
NASA Astrophysics Data System (ADS)
Yabuta, Kenichi; Kitazawa, Hitoshi; Tanaka, Toshihisa
2006-02-01
Recently, the number of security monitoring cameras has been increasing rapidly. However, it is normally difficult to know when and where we are being monitored by these cameras, and how the recorded images are stored and/or used. Therefore, how to protect privacy in the recorded images is a crucial issue. In this paper, we address this problem and introduce a framework for security monitoring systems that takes privacy protection into account. We state requirements for monitoring systems in this framework and propose a possible implementation that satisfies them. To protect the privacy of recorded objects, they are made invisible by appropriate image processing techniques. Moreover, the original objects are encrypted and watermarked into the image containing the "invisible" objects, which is coded by the JPEG standard. Therefore, the image decoded by a normal JPEG viewer includes objects that are unrecognizable or invisible. We also introduce a so-called "special viewer" to decrypt and display the original objects. This special viewer can be used by a limited set of users when necessary, for example for crime investigation. The special viewer allows the objects to be decoded and displayed to be chosen. Moreover, the proposed system supports real-time processing, since no future frame is needed to generate a bitstream.
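As a simplified illustration of the idea (not the paper's JPEG watermarking scheme), the sketch below blurs a detected region in the published frame and keeps an encrypted copy of the original pixels that a "special viewer" could restore. It uses the third-party cryptography package with a caller-supplied Fernet key, and the bounding box and 3-channel frame are assumptions.

```python
# Simplified privacy sketch: make a detected object region unrecognizable in the
# published frame and keep an encrypted copy of the original pixels alongside.
# The paper embeds the encrypted data as a watermark in the JPEG bitstream; here it
# is simply returned with the frame. Requires the third-party 'cryptography' package.
import cv2
import numpy as np
from cryptography.fernet import Fernet

def protect_region(frame, box, key):
    """box = (x, y, w, h); returns (masked_frame, ciphertext_of_original_region)."""
    x, y, w, h = box
    f = Fernet(key)
    region = frame[y:y + h, x:x + w].copy()
    ciphertext = f.encrypt(region.tobytes())                  # encrypted original pixels
    masked = frame.copy()
    masked[y:y + h, x:x + w] = cv2.GaussianBlur(region, (51, 51), 0)  # make "invisible"
    return masked, ciphertext

def restore_region(masked_frame, box, ciphertext, key):
    """'Special viewer' side: decrypt and paste the original region back (3-channel frame assumed)."""
    x, y, w, h = box
    f = Fernet(key)
    data = np.frombuffer(f.decrypt(ciphertext), dtype=masked_frame.dtype)
    restored = masked_frame.copy()
    restored[y:y + h, x:x + w] = data.reshape(h, w, masked_frame.shape[2])
    return restored
```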
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
FieldSAFE: Dataset for Obstacle Detection in Agriculture.
Kragh, Mikkel Fly; Christiansen, Peter; Laursen, Morten Stigaard; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik; Jørgensen, Rasmus Nyholm
2017-11-09
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates.
FieldSAFE: Dataset for Obstacle Detection in Agriculture
Christiansen, Peter; Larsen, Morten; Steen, Kim Arild; Green, Ole; Karstoft, Henrik
2017-01-01
In this paper, we present a multi-modal dataset for obstacle detection in agriculture. The dataset comprises approximately 2 h of raw sensor data from a tractor-mounted sensor system in a grass mowing scenario in Denmark, October 2016. Sensing modalities include stereo camera, thermal camera, web camera, 360° camera, LiDAR and radar, while precise localization is available from fused IMU and GNSS. Both static and moving obstacles are present, including humans, mannequin dolls, rocks, barrels, buildings, vehicles and vegetation. All obstacles have ground truth object labels and geographic coordinates. PMID:29120383
Constrained space camera assembly
Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.
1999-05-11
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.
NMCI to NGEN: Managing the Transition of Navy Information Technology Infrastructure
2013-03-01
Decision Review ... [acronym-list fragment: FOC, Full Operational Capability; GAO, Government Accountability Office; GFE, Government Furnished Equipment; GIG, Global ...] ... global information grid (GIG) in accordance with overarching DoD directives. The requirement for an adequate workforce began as a phased approach in ... certification of personnel conducting IA functions within the DoD workforce supporting the DoD GIG in accordance with overarching DoD directives.
1987-04-15
Ministry of Foreign Trade, is to be effective during 1987, is renewable yearly and focusses on the exchange of goods and products ... reach accords providing for the exchange of goods and products, trade delegations shall visit each other's country and trade fairs shall be ... producer's costs, rises and drops in the price of such products have a major effect on productivity. Foreign currency allocations
2007-08-01
Community Acquired Pneumonia. The principal investigator is William Weisburg, Ph.D., of Nanogen, and the Medical College of Wisconsin (MCW) is a ... captures the most important viruses involved in community-acquired respiratory infections: 1. Influenza A (fluA) 2. Influenza B (fluB) 3. Respiratory ... Adenovirus subtype 4, Human Parainfluenza Virus 1, Human Parainfluenza Virus 2, Human Parainfluenza Virus 3, Human Metapneumovirus, Streptococcus pneumonia
New Models for Protocol Security
2015-06-18
Rafael Pass, Alon Rosen, Eylon Yogev: One-Way Functions and (Im)Perfect Obfuscation. FOCS 2014: 374-383. 14. Joseph Y. Halpern, Rafael Pass, Lior Seeman ... Conservative belief and rationality. Games and Economic Behavior 80: 186-192 (2013). 21. Rafael Pass, Alon Rosen, Wei-Lung Dustin Tseng: Public-Coin ... Pass, Wei-Lung Dustin Tseng: The Knowledge Tightness of Parallel Zero-Knowledge. TCC 2012: 512-529. 44. Boaz Barak, Ran Canetti, Yehuda Lindell, Rafael
Köberl, Martina; Dita, Miguel; Martinuz, Alfonso; Staver, Charles; Berg, Gabriele
2017-01-01
Culminating in the 1950’s, bananas, the world’s most extensive perennial monoculture, suffered one of the most devastating disease epidemics in history. In Latin America and the Caribbean, Fusarium wilt (FW) caused by the soil-borne fungus Fusarium oxysporum f. sp. cubense (FOC), forced the abandonment of the Gros Michel-based export banana industry. Comparative microbiome analyses performed between healthy and diseased Gros Michel plants on FW-infested farms in Nicaragua and Costa Rica revealed significant shifts in the gammaproteobacterial microbiome. Although we found substantial differences in the banana microbiome between both countries and a higher impact of FOC on farms in Costa Rica than in Nicaragua, the composition especially in the endophytic microhabitats was similar and the general microbiome response to FW followed similar rules. Gammaproteobacterial diversity and community members were identified as potential health indicators. Healthy plants revealed an increase in potentially plant-beneficial Pseudomonas and Stenotrophomonas, while diseased plants showed a preferential occurrence of Enterobacteriaceae known for their plant-degrading capacity. Significantly higher microbial rhizosphere diversity found in healthy plants could be indicative of pathogen suppression events preventing or minimizing disease expression. This first study examining banana microbiome shifts caused by FW under natural field conditions opens new perspectives for its biological control. PMID:28345666
Onset of thermomagnetic convection around a vertically oriented hot-wire in ferrofluid
NASA Astrophysics Data System (ADS)
Vatani, Ashkan; Woodfield, Peter Lloyd; Nguyen, Nam-Trung; Dao, Dzung Viet
2018-06-01
The onset of thermomagnetic convection in ferrofluid in a vertical transient hot-wire cell is analytically and experimentally investigated by studying the temperature rise of an electrically-heated wire. During the initial stage of heating, the temperature rise is found to correspond well to that predicted by conduction only. For high electrical current densities, the initial heating stage is followed by a sudden change in the slope of the temperature rise with respect to time as a result of the onset of thermomagnetic convection cooling. The observed onset of thermomagnetic convection was then compared to that of natural convection of deionized water. For the first time, the critical time corresponding to the onset of thermomagnetic convection around an electrically-heated wire is characterized and non-dimensionalized as a critical Fourier number (Foc). We propose an equation for Foc as a function of a magnetic Rayleigh number to predict the time for the onset of thermomagnetic convection. We observed that thermomagnetic convection in ferrofluid occurs earlier than natural convection in non-magnetic fluids for similar experimental conditions. The onset of thermomagnetic convection is dependent on the current supplied to the wire. The findings have important implications for cooling of high-power electronics using ferrofluids and for measuring thermal properties of ferrofluids.
Image quality assessment for selfies with and without super resolution
NASA Astrophysics Data System (ADS)
Kubota, Aya; Gohshi, Seiichi
2018-04-01
With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the front-facing and rear cameras. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that a typical objective assessment value, such as Peak Signal to Noise Ratio (PSNR), does not always agree with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at high cost because of personnel expenses for observers, the results are highly reproducible when the tests are conducted under the right conditions and analyzed statistically. In this study, the subjective assessment results for selfie images are reported.
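For context, the objective measure mentioned above, PSNR, is computed from the mean squared error between a reference image and a processed image. A minimal sketch (not the authors' code; 8-bit images and the array contents are assumed for illustration):

    import numpy as np

    def psnr(reference: np.ndarray, processed: np.ndarray, peak: float = 255.0) -> float:
        """Peak Signal-to-Noise Ratio in dB for two same-sized images."""
        mse = np.mean((reference.astype(np.float64) - processed.astype(np.float64)) ** 2)
        if mse == 0:
            return float("inf")  # identical images
        return 10.0 * np.log10(peak ** 2 / mse)

    # Hypothetical usage: compare an in-camera selfie with its super-resolved version
    # (in practice both arrays would come from actual image files).
    a = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
    b = np.clip(a + np.random.normal(0, 5, a.shape), 0, 255).astype(np.uint8)
    print(f"PSNR = {psnr(a, b):.1f} dB")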
Electronic camera-management system for 35-mm and 70-mm film cameras
NASA Astrophysics Data System (ADS)
Nielsen, Allan
1993-01-01
Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
Occultation and Triangulation Camera (OcTriCam) Cubesat
NASA Astrophysics Data System (ADS)
Batchelor, D. A.
2018-02-01
A camera at Earth-Moon L2 would provide a 240,000 km triangulation baseline to augment near-Earth object observations with Earth-based telescopes such as Pan-STARRS, and planetary occultation research to refine ephemerides and probe ring systems.
Camera pose estimation for augmented reality in a small indoor dynamic scene
NASA Astrophysics Data System (ADS)
Frikha, Rawia; Ejbali, Ridha; Zaied, Mourad
2017-09-01
Camera pose estimation remains a challenging task for augmented reality (AR) applications. Simultaneous localization and mapping (SLAM)-based methods are able to estimate the six-degrees-of-freedom camera motion while constructing a map of an unknown environment. However, these methods do not provide any reference for where to insert virtual objects, since they have no information about the scene structure, and they may fail in cases of occlusion of three-dimensional (3-D) map points or of dynamic objects. This paper presents a real-time monocular piecewise planar SLAM method using the planar scene assumption. Using planar structures in the mapping process allows rendering virtual objects in a meaningful way on the one hand, and improving the precision of the camera pose and the quality of the 3-D reconstruction of the environment by adding constraints on 3-D points and poses in the optimization process on the other hand. We also propose to exploit the rigid motion of the 3-D planes in the tracking process to enhance the system's robustness in the case of dynamic scenes. Experimental results show that using a constrained planar scene improves the accuracy and robustness of our system compared with classical SLAM systems.
High-speed line-scan camera with digital time delay integration
NASA Astrophysics Data System (ADS)
Bodenstorfer, Ernst; Fürtler, Johannes; Brodersen, Jörg; Mayer, Konrad J.; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert
2007-02-01
In high-speed image acquisition and processing systems, the speed of operation is often limited by the amount of available light, due to short exposure times. Therefore, high-speed applications often use line-scan cameras based on charge-coupled device (CCD) sensors with time delay integration (TDI). Synchronous shift and accumulation of photoelectric charges on the CCD chip - according to the objects' movement - results in a longer effective exposure time without introducing additional motion blur. This paper presents a high-speed color line-scan camera based on a commercial complementary metal oxide semiconductor (CMOS) area image sensor with a Bayer filter matrix and a field programmable gate array (FPGA). The camera implements a digital equivalent of the TDI effect exploited with CCD cameras. The proposed design benefits from the high frame rates of CMOS sensors and from the possibility of arbitrarily addressing the rows of the sensor's pixel array. For the digital TDI, only a small number of rows are read out from the area sensor; these are then shifted and accumulated according to the movement of the inspected objects. This paper gives a detailed description of the digital TDI algorithm implemented on the FPGA. Relevant aspects for practical application are discussed and key features of the camera are listed.
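The digital TDI idea can be illustrated with a toy sketch (an assumption-laden illustration, not the FPGA implementation): a few sensor rows, spaced to match the per-frame object displacement, are read out each frame and then shifted and accumulated so that each output line sums the same object line over several frames.

    import numpy as np

    def digital_tdi_line(frames, rows):
        """Toy digital TDI producing one line-scan output line per frame step.

        frames -- array of shape (n_frames, height, width), raw area-sensor readouts
        rows   -- row indices read out each frame, spaced by the object's per-frame
                  displacement, so a given object line appears in rows[0] at frame t,
                  rows[1] at frame t+1, and so on.
        Returns an (n_lines, width) line-scan image with an effective exposure of
        len(rows) frames per line.
        """
        n_frames, _, width = frames.shape
        n_stages = len(rows)
        n_lines = n_frames - n_stages + 1
        out = np.zeros((n_lines, width), dtype=np.float64)
        for t in range(n_lines):
            for j, r in enumerate(rows):
                out[t] += frames[t + j, r, :]   # shift-and-accumulate along the motion
        return out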
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and geo-objects using UAV-attached digital small-frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - they also pose the problem that two single aerial images do not always have the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from slightly different angles at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.
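The dependence on flight altitude noted above follows from the textbook stereo error relation dZ ≈ Z^2 / (B·f) · dp, with Z the object distance, B the stereo base, f the focal length in pixels and dp the parallax measurement error. A small sketch with assumed, purely illustrative numbers:

    # Rule-of-thumb stereo depth error (textbook relation, not taken from the paper).
    def depth_error(Z_m, base_m, focal_px, dp_px=0.5):
        return Z_m ** 2 / (base_m * focal_px) * dp_px

    # Hypothetical values for a small UAV stereo rig (all assumed):
    for altitude_m in (10.0, 20.0, 40.0):
        print(altitude_m, "m ->", round(depth_error(altitude_m, base_m=0.2, focal_px=1500), 2), "m")
    # The error grows with the square of the altitude, which is why DTM accuracy
    # depends directly on the UAV flight altitude for a fixed, small photobase.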
UKIRT's Wide Field Camera and the Detection of 10 MJupiter Objects
NASA Astrophysics Data System (ADS)
WFCAM Team; UKIDSS Team
2004-06-01
In mid-2004 a near-infrared wide field camera will be commissioned on UKIRT. About 40% of all UKIRT time will go into sky surveys and one of these, the Large Area Survey using YJHK filters, will extend the field brown dwarf population to temperatures and masses significantly lower than those of the T dwarf population discovered by the Sloan and 2MASS surveys. The LAS should find objects as cool as 450 K and as low mass as 10 MJupiter at 10 pc. These planetary-mass objects will possibly require a new spectral type designation.
Sensory Interactive Teleoperator Robotic Grasping
NASA Technical Reports Server (NTRS)
Alark, Keli; Lumia, Ron
1997-01-01
As the technological world strives for efficiency, the need for economical equipment that increases operator proficiency in minimal time is fundamental. This system links a CCD camera, a controller and a robotic arm to a computer vision system to provide an alternative method of image analysis. The machine vision system employed possesses software tools for acquiring and analyzing images received through a CCD camera. After feature extraction is performed on the object in the image, information about the object's location, orientation and distance from the robotic gripper is sent to the robot controller so that the robot can manipulate the object.
2017-11-01
ARL-TR-8205, NOV 2017, US Army Research Laboratory. Strategies for Characterizing the Sensory Environment: Objective and Subjective Evaluation Methods using the VisiSonic Real Space 64/5 Audio-Visual Panoramic Camera. By Joseph McArdle, Ashley Foots, Chris Stachowiak, and ...
Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan
NASA Astrophysics Data System (ADS)
Pichette, Julien; Charle, Wouter; Lambrechts, Andy
2017-02-01
Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048 x 3652 x 150 in the spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.
Video model deformation system for the National Transonic Facility
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1983-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. A rudimentary theory section is followed by a description of the video-based system and control measures required to protect cameras from the hostile environment. Preliminary results obtained with the same camera placement as planned for NTF are presented and plans for facility testing with a specially designed test wing are discussed.
Augmented reality glass-free three-dimensional display with the stereo camera
NASA Astrophysics Data System (ADS)
Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu
2017-10-01
An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array for presenting parallax content from different angles, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can obtain rich 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on a stereo camera can realize AR glass-free 3D display, and that both the virtual objects and the real scene have realistic and obvious stereo performance.
Feasibility of Using Video Cameras for Automated Enforcement on Red-Light Running and Managed Lanes.
DOT National Transportation Integrated Search
2009-12-01
The overall objective of this study is to evaluate the feasibility, effectiveness, legality, and public acceptance aspects of automated enforcement on red light running and high occupancy vehicle (HOV) occupancy requirement using video cameras in Nev...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sels, Seppe, E-mail: Seppe.Sels@uantwerpen.be; Ribbens, Bart; Mertens, Luc
Scanning laser Doppler vibrometers (LDV) are used to measure full-field vibration shapes of products and structures. In most commercially available scanning laser Doppler vibrometer systems the user manually draws a grid of measurement locations on a 2D camera image of the product. The determination of the correct physical measurement locations can be a time-consuming and difficult task. In this paper we present a new methodology for product testing and quality control that integrates 3D imaging techniques with vibration measurements. This procedure allows prototypes to be tested in a shorter period because the physical measurement locations are located automatically. The proposed methodology uses a 3D time-of-flight camera to measure the location and orientation of the test object. The 3D image of the time-of-flight camera is then matched with the 3D CAD model of the object, in which the measurement locations are pre-defined. A time-of-flight camera operates strictly in the near-infrared spectrum. To improve the signal-to-noise ratio in the time-of-flight measurement, a time-of-flight camera uses a band filter. As a result of this filter, the laser spot of most laser vibrometers is invisible in the time-of-flight image. Therefore a 2D RGB camera is used to find the laser spot of the vibrometer. The laser spot is matched to the 3D image obtained by the time-of-flight camera. Next, an automatic calibration procedure is used to aim the laser at the (pre)defined locations. Another benefit of this methodology is that it incorporates automatic mapping between a CAD model and the vibration measurements. This mapping can be used to visualize measurements directly on a 3D CAD model. Secondly, the orientation of the CAD model is known with respect to the laser beam. This information can be used to find the direction of the measured vibration relative to the surface of the object. With this direction, the vibration measurements can be compared more precisely with numerical experiments.
Salau, J; Haas, J H; Thaller, G; Leisen, M; Junge, W
2016-09-01
Camera-based systems in dairy cattle have been intensively studied over the last years. In contrast to this study, single-camera systems with a limited range of applications have been presented, mostly using 2D cameras. This study presents current steps in the development of a camera system comprising multiple 3D cameras (six Microsoft Kinect cameras) for monitoring purposes in dairy cows. An early prototype was constructed, and alpha versions of software for recording, synchronizing, sorting and segmenting images and for transforming the 3D data into a joint coordinate system have already been implemented. This study introduced the application of two-dimensional wavelet transforms as a method for object recognition and surface analysis. The method is explained in detail, and four differently shaped wavelets were tested with respect to their reconstruction error on Kinect-recorded depth maps from different camera positions. The images' high-frequency parts, reconstructed from wavelet decompositions using the haar and the biorthogonal 1.5 wavelets, were statistically analyzed with regard to the effects of image foreground or background and of cows' or persons' surfaces. Furthermore, binary classifiers based on the local high frequencies were implemented to decide whether a pixel belongs to the image foreground and whether it is located on a cow or a person. Classifiers distinguishing between image regions showed high (⩾0.8) values of Area Under the receiver operating characteristic Curve (AUC). The classification by species showed maximal AUC values of 0.69.
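A minimal sketch of the kind of high-frequency wavelet feature described above (assuming the PyWavelets package; the feature definition, threshold and data here are placeholders, not the authors' values):

    import numpy as np
    import pywt  # PyWavelets

    def local_highfreq_energy(depth_map, wavelet="haar"):
        """Energy of the first-level high-frequency wavelet subbands, upsampled back
        to the image size, as a crude per-pixel surface-detail feature."""
        _, (cH, cV, cD) = pywt.dwt2(depth_map.astype(np.float64), wavelet)
        energy = cH ** 2 + cV ** 2 + cD ** 2
        # nearest-neighbour upsampling back to roughly the original resolution
        return np.kron(energy, np.ones((2, 2)))[: depth_map.shape[0], : depth_map.shape[1]]

    # Toy per-pixel foreground decision by thresholding the feature
    # (random depth map and arbitrary threshold, for illustration only):
    depth = np.random.rand(64, 64)
    feature = local_highfreq_energy(depth, "bior1.5")
    foreground_mask = feature > np.percentile(feature, 80)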
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.
Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio
2009-01-01
3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated: the first is the calibration of the distance measurements of the SR-4000 camera, which deals with the evaluation of the camera warm-up time period, the evaluation of the distance measurement error, and a study of the influence of the camera orientation with respect to the observed object on the distance measurements; the second aspect concerns the photogrammetric calibration of the amplitude images delivered by the camera using a purpose-built multi-resolution field made of high-contrast targets.
Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter
NASA Astrophysics Data System (ADS)
Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun
2018-04-01
Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with a camera is the need to process the massive data the camera generates. In order to reduce the data collected from the camera, a camera with an electronic rolling shutter (ERS) is applied to measure the frequency of a one-dimensional vibration whose frequency is much higher than the frame rate of the camera. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam, and an air compressor to verify the validity of the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
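The row-wise timing idea can be sketched as follows (assumed parameters and a synthetic displacement signal; the sliding-window extraction in the paper is more involved): with a rolling shutter, each row is exposed at a slightly different time, so one frame already samples the vibration at the line-readout rate.

    import numpy as np

    line_time_s = 15e-6                    # assumed row-to-row readout delay
    n_rows = 1080
    t = np.arange(n_rows) * line_time_s    # time stamp of each image row

    true_freq_hz = 3200.0                  # "unknown" vibration to recover (synthetic)
    displacement_px = 2.0 * np.sin(2 * np.pi * true_freq_hz * t)  # per-row edge position

    # Frequency estimate from the row-indexed signal of a single frame:
    spectrum = np.abs(np.fft.rfft(displacement_px - displacement_px.mean()))
    freqs = np.fft.rfftfreq(n_rows, d=line_time_s)
    print("estimated frequency:", round(freqs[np.argmax(spectrum)], 1), "Hz")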
Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)
NASA Astrophysics Data System (ADS)
MoIIberg, Bernard H.
1981-11-01
The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.
A CCD search for geosynchronous debris
NASA Technical Reports Server (NTRS)
Gehrels, Tom; Vilas, Faith
1986-01-01
Using the Spacewatch Camera, a search was conducted for objects in geosynchronous earth orbit. The system is equipped with a CCD camera cooled with dry ice; the image scale is 1.344 arcsec/pixel. The telescope drive was off so that during integrations the stars were trailed while geostationary objects appeared as round images. The technique should detect geostationary objects to a limiting apparent visual magnitude of 19. A sky area of 8.8 square degrees was searched for geostationary objects while geosynchronous debris passing through was 16.4 square degrees. Ten objects were found of which seven are probably geostationary satellites having apparent visual magnitudes brighter than 13.1. Three objects having magnitudes equal to or fainter than 13.7 showed motion in the north-south direction. The absence of fainter stationary objects suggests that a gap in debris size exists between satellites and particles having diameters in the millimeter range.
Passive Infrared Thermographic Imaging for Mobile Robot Object Identification
NASA Astrophysics Data System (ADS)
Hinders, M. K.; Fehlman, W. L.
2010-02-01
The usefulness of thermal infrared imaging as a mobile robot sensing modality is explored, and a set of thermal-physical features used to characterize passive thermal objects in outdoor environments is described. Objects that extend laterally beyond the thermal camera's field of view, such as brick walls, hedges, picket fences, and wood walls as well as compact objects that are laterally within the thermal camera's field of view, such as metal poles and tree trunks, are considered. Classification of passive thermal objects is a subtle process since they are not a source for their own emission of thermal energy. A detailed analysis is included of the acquisition and preprocessing of thermal images, as well as the generation and selection of thermal-physical features from these objects within thermal images. Classification performance using these features is discussed, as a precursor to the design of a physics-based model to automatically classify these objects.
2D virtual texture on 3D real object with coded structured light
NASA Astrophysics Data System (ADS)
Molinier, Thierry; Fofi, David; Salvi, Joaquim; Gorria, Patrick
2008-02-01
Augmented reality is used to improve color segmentation on the human body or on precious artifacts that must not be touched. We propose a technique to project a synthesized texture onto a real object without contact. Our technique can be used in medical or archaeological applications. By projecting a suitable set of light patterns onto the surface of a 3D real object and by capturing images with a camera, a large number of correspondences can be found and the 3D points can be reconstructed. We aim to determine these points of correspondence between the cameras and the projector from a scene without explicit points and normals. We then project an adjusted texture onto the real object's surface. We propose a global and automatic method to virtually texture a 3D real object.
2001-11-26
KENNEDY SPACE CENTER, Fla. -- A piece of equipment for Hubble Space Telescope Servicing mission is moved inside Hangar AE, Cape Canaveral. In the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
2001-11-29
KENNEDY SPACE CENTER, Fla. -- In Hangar A&E, workers watch as an overhead crane lifts the Advanced Camera for Surveys out of its transportation container. Part of the payload on the Hubble Space Telescope Servicing Mission, STS-109, the ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. Tasks for the mission include replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
2001-11-26
KENNEDY SPACE CENTER, Fla. - A piece of equipment for Hubble Space Telescope Servicing mission arrives at Hangar AE, Cape Canaveral. Inside the canister is the Advanced Camera for Surveys (ACS). The ACS will increase the discovery efficiency of the HST by a factor of ten. It consists of three electronic cameras and a complement of filters and dispersers that detect light from the ultraviolet to the near infrared (1200 - 10,000 angstroms). The ACS was built through a collaborative effort between Johns Hopkins University, Goddard Space Flight Center, Ball Aerospace Corporation and Space Telescope Science Institute. The goal of the mission, STS-109, is to service the HST, replacing Solar Array 2 with Solar Array 3, replacing the Power Control Unit, removing the Faint Object Camera and installing the ACS, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
Machine vision based teleoperation aid
NASA Technical Reports Server (NTRS)
Hoff, William A.; Gatrell, Lance B.; Spofford, John R.
1991-01-01
When teleoperating a robot using video from a remote camera, it is difficult for the operator to gauge depth and orientation from a single view. In addition, there are situations where a camera mounted for viewing by the teleoperator during a teleoperation task may not be able to see the tool tip, or the viewing angle may not be intuitive (requiring extensive training to reduce the risk of incorrect or dangerous moves by the teleoperator). A machine vision based teleoperator aid is presented which uses the operator's camera view to compute an object's pose (position and orientation), and then overlays onto the operator's screen information on the object's current and desired positions. The operator can choose to display orientation and translation information as graphics and/or text. This aid provides easily assimilated depth and relative orientation information to the teleoperator. The camera may be mounted at any known orientation relative to the tool tip. A preliminary experiment with human operators was conducted and showed that task accuracies were significantly greater with than without this aid.
Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.
Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue
2015-01-01
A high-NA imaging system with high dynamic range is presented, based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and field experiments proved the validity and reliability of our work.
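For orientation, the paraxial link between the quoted working F-numbers and numerical aperture is NA ≈ 1/(2·F/#); this is a standard approximation, not a statement from the paper:

    def na_from_fnumber(working_f_number):
        # paraxial image-space approximation: NA ~ 1 / (2 N)
        return 1.0 / (2.0 * working_f_number)

    for N in (2.45, 1.97, 1.23):
        print(f"F/{N}: NA ~ {na_from_fnumber(N):.2f}")
    # F/2.45 -> ~0.20, F/1.97 -> ~0.25, F/1.23 -> ~0.41, illustrating the gain in
    # light collection of the prototype over conventional DMD projection optics.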
Robot Evolutionary Localization Based on Attentive Visual Short-Term Memory
Vega, Julio; Perdices, Eduardo; Cañas, José M.
2013-01-01
Cameras are one of the most relevant sensors in autonomous robots. However, two of their challenges are to extract useful information from captured images, and to manage the small field of view of regular cameras. This paper proposes implementing a dynamic visual memory to store the information gathered from a moving camera on board a robot, followed by an attention system to choose where to look with this mobile camera, and a visual localization algorithm that incorporates this visual memory. The visual memory is a collection of relevant task-oriented objects and 3D segments, and its scope is wider than the current camera field of view. The attention module takes into account the need to reobserve objects in the visual memory and the need to explore new areas. The visual memory is useful also in localization tasks, as it provides more information about robot surroundings than the current instantaneous image. This visual system is intended as underlying technology for service robot applications in real people's homes. Several experiments have been carried out, both with simulated and real Pioneer and Nao robots, to validate the system and each of its components in office scenarios. PMID:23337333
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
Aiding the pilot to improve safety and reduce pilot workload by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight is desirable. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
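A heavily simplified sketch of the recursive idea (a scalar Kalman update on inverse range from known lateral camera translation and tracked feature disparity; the paper's extended Kalman filter and measurement model are richer, and all numbers below are assumed):

    import numpy as np

    def recursive_inverse_range(disparities, baselines, focal_px,
                                meas_sigma_px=0.5, rho0=0.01, var0=1.0):
        """Fuse motion-stereo measurements of inverse range (rho = d / (f * b)) with a
        scalar Kalman filter; the state is inverse range, assumed constant here."""
        rho, var = rho0, var0
        for d, b in zip(disparities, baselines):
            z = d / (focal_px * b)                        # measured inverse range
            r = (meas_sigma_px / (focal_px * b)) ** 2     # measurement noise on rho
            k = var / (var + r)                           # Kalman gain
            rho = rho + k * (z - rho)
            var = (1.0 - k) * var
        return 1.0 / rho, np.sqrt(var) / rho ** 2         # range and rough 1-sigma error

    # Hypothetical case: feature at 40 m, 0.2 m of sideways motion per frame, f = 800 px.
    true_Z, f, b = 40.0, 800.0, 0.2
    disp = f * b / true_Z + np.random.normal(0, 0.5, 30)  # noisy tracked disparities
    Z_hat, sigma_Z = recursive_inverse_range(disp, [b] * 30, f)
    print(round(Z_hat, 2), "m +/-", round(sigma_Z, 2), "m")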
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
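For context, frequency-domain FLIM recovers the lifetime from the phase shift and modulation depth of the emission relative to the excitation; the standard single-exponential relations (not specific to this paper) are tau_phase = tan(phi)/omega and tau_mod = sqrt(1/m^2 - 1)/omega, with omega = 2*pi*f:

    import math

    def fd_flim_lifetimes(phase_rad, modulation, f_mod_hz):
        omega = 2.0 * math.pi * f_mod_hz
        tau_phase = math.tan(phase_rad) / omega
        tau_mod = math.sqrt(1.0 / modulation ** 2 - 1.0) / omega
        return tau_phase, tau_mod

    # Hypothetical example at 40 MHz modulation (values chosen only for illustration):
    tau_phi, tau_m = fd_flim_lifetimes(math.radians(32.0), 0.80, 40e6)
    print(f"tau_phase = {tau_phi * 1e9:.2f} ns, tau_mod = {tau_m * 1e9:.2f} ns")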
Joint Analysis: QDR 2001 and Beyond Mini-Symposium Held in Fairfax, Virginia on 1-3 February 2000
2001-04-11
have done better in: articulating a high-level, understandable story that was credible to Congress; documenting and archiving the assessments performed ... to (1) examine DoD assessment capabilities for performing QDR 2001, (2) provide a non-confrontational environment in which OSD, the Joint Staff ... [briefing-slide fragment: key issues and tools/databases defined for three levels (low, medium, high); scenarios: A, emphasis on modernization; B, emphasis ...]
2010-11-01
carbon; f_lipid, fraction lipid; f_oc, fraction organic carbon; f_protein, fraction protein; GCMS, Gas Chromatography-Mass Spectrometry; HP, Hunter's ... Internal standards were added to the extracts before gas chromatography-mass spectrometry (GCMS) analysis. GCMS was done using a JEOL GCmate ... min. The MS was operated in selected ion monitoring (SIM) and EI+ modes. Calibration standards containing at least 25 aromatic compounds
Head-coupled remote stereoscopic camera system for telepresence applications
NASA Astrophysics Data System (ADS)
Bolas, Mark T.; Fisher, Scott S.
1990-09-01
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
Jarc, Anthony M; Curet, Myriam J
2017-03-01
Effective visualization of the operative field is vital to surgical safety and education. However, additional metrics for visualization are needed to complement other common measures of surgeon proficiency, such as time or errors. Unlike other surgical modalities, robot-assisted minimally invasive surgery (RAMIS) enables data-driven feedback to trainees through measurement of camera adjustments. The purpose of this study was to validate and quantify the importance of novel camera metrics during RAMIS. New (n = 18), intermediate (n = 8), and experienced (n = 13) surgeons completed 25 virtual reality simulation exercises on the da Vinci Surgical System. Three camera metrics were computed for all exercises and compared to conventional efficiency measures. Both camera metrics and efficiency metrics showed construct validity (p < 0.05) across most exercises (camera movement frequency 23/25, camera movement duration 22/25, camera movement interval 19/25, overall score 24/25, completion time 25/25). Camera metrics differentiated new and experienced surgeons across all tasks as well as efficiency metrics. Finally, camera metrics significantly (p < 0.05) correlated with completion time (camera movement frequency 21/25, camera movement duration 21/25, camera movement interval 20/25) and overall score (camera movement frequency 20/25, camera movement duration 19/25, camera movement interval 20/25) for most exercises. We demonstrate construct validity of novel camera metrics and correlation between camera metrics and efficiency metrics across many simulation exercises. We believe camera metrics could be used to improve RAMIS proficiency-based curricula.
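A minimal sketch of how such camera metrics can be computed from a log of camera-movement events (the definitions below are illustrative assumptions, not necessarily those used in the study): frequency is movements per unit task time, duration is the mean length of a movement, and interval is the mean gap between consecutive movements.

    def camera_metrics(camera_events, task_duration_s):
        """camera_events -- list of (start_s, end_s) camera-movement intervals,
        sorted by start time; all definitions here are illustrative."""
        n = len(camera_events)
        freq = n / task_duration_s if task_duration_s > 0 else 0.0
        mean_duration = sum(e - s for s, e in camera_events) / n if n else 0.0
        gaps = [s2 - e1 for (_, e1), (s2, _) in zip(camera_events, camera_events[1:])]
        mean_interval = sum(gaps) / len(gaps) if gaps else 0.0
        return freq, mean_duration, mean_interval

    # Hypothetical log from one simulation exercise:
    events = [(5.0, 6.2), (20.5, 21.0), (44.0, 45.5)]
    print(camera_metrics(events, task_duration_s=120.0))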
Evaluation of Moving Object Detection Based on Various Input Noise Using Fixed Camera
NASA Astrophysics Data System (ADS)
Kiaee, N.; Hashemizadeh, E.; Zarrinpanjeh, N.
2017-09-01
Detecting and tracking objects in video has been a research area of interest in the fields of image processing and computer vision. This paper evaluates the performance of a novel object detection algorithm in video sequences. This evaluation helps establish the advantages of the method in use. The proposed framework compares the percentages of correct and wrong detections of the algorithm. The method was evaluated with data collected in the field of urban transport, including cars and pedestrians, in a fixed-camera situation. The results show that the accuracy of the algorithm decreases as image resolution is reduced.
NASA Technical Reports Server (NTRS)
Weigelt, G.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Boksenberg, A.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.; Kamperman, T. M.
1991-01-01
R136 is the luminous central object of the giant H II region 30 Doradus in the LMC. The first high-resolution observations of R136 with the Faint Object Camera on board the Hubble Space Telescope are reported. The physical nature of the brightest component R136a has been a matter of some controversy over the last few years. The UV images obtained show that R136a is a very compact star cluster consisting of more than eight stars within 0.7 arcsec diameter. From these high-resolution images a mass upper limit can be derived for the most luminous stars observed in R136.
Vector-Based Ground Surface and Object Representation Using Cameras
2009-12-01
representations, and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure ... Vision API library, and the OpenCV library. Also, the POSIX thread library was utilized to quickly capture the source images from the cameras. Both
Photogrammetry System and Method for Determining Relative Motion Between Two Bodies
NASA Technical Reports Server (NTRS)
Miller, Samuel A. (Inventor); Severance, Kurt (Inventor)
2014-01-01
A photogrammetry system and method provide for determining the relative position between two objects. The system utilizes one or more imaging devices, such as high speed cameras, that are mounted on a first body, and three or more photogrammetry targets of a known location on a second body. The system and method can be utilized with cameras having fish-eye, hyperbolic, omnidirectional, or other lenses. The system and method do not require overlapping fields-of-view if two or more cameras are utilized. The system and method derive relative orientation by equally weighting information from an arbitrary number of heterogeneous cameras, all with non-overlapping fields-of-view. Furthermore, the system can make the measurements with arbitrary wide-angle lenses on the cameras.
GALEX 1st Light Near and Far Ultraviolet -100
2003-05-28
NASA's Galaxy Evolution Explorer took this image on May 21 and 22, 2003. The image was made from data gathered by the two channels of the spacecraft camera during the mission's "first light" milestone. It shows about 100 celestial objects in the constellation Hercules. The reddish objects represent those detected by the camera's near ultraviolet channel over a 5-minute period, while bluish objects were detected over a 3-minute period by the camera's far ultraviolet channel. The Galaxy Evolution Explorer's first light images are dedicated to the crew of the Space Shuttle Columbia. The Hercules region was directly above Columbia when it made its last contact with NASA Mission Control on February 1, over the skies of Texas. The Galaxy Evolution Explorer launched on April 28 on a mission to map the celestial sky in the ultraviolet and determine the history of star formation in the universe over the last 10 billion years. http://photojournal.jpl.nasa.gov/catalog/PIA04281
Time-of-Flight Microwave Camera
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-01-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz–12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum. PMID:26434598
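The quoted depth figures can be sanity-checked with the usual time-of-flight and FMCW relations (a back-of-the-envelope sketch, not taken from the paper):

    c = 3.0e8  # speed of light, m/s (approximate)

    # 200 ps of time resolution corresponds to an optical path of c * dt:
    dt = 200e-12
    print("path per 200 ps:", c * dt * 100, "cm")       # ~6 cm, matching the abstract

    # For an FMCW sweep across the X band (8-12 GHz, i.e. 4 GHz of bandwidth),
    # the classical range resolution is c / (2 * bandwidth):
    bandwidth_hz = 4.0e9
    print("FMCW range resolution:", c / (2 * bandwidth_hz) * 100, "cm")  # ~3.75 cm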
Range camera on conveyor belts: estimating size distribution and systematic errors due to occlusion
NASA Astrophysics Data System (ADS)
Blomquist, Mats; Wernersson, Ake V.
1999-11-01
When range cameras are used for analyzing irregular material on a conveyor belt there will be complications like missing segments caused by occlusion. Also, a number of range discontinuities will be present. In a framework based on stochastic geometry, conditions are found for the cases in which range discontinuities take place. The test objects in this paper are pellets for the steel industry. An illuminating laser plane gives range discontinuities at the edges of each individual object. These discontinuities are used to detect and measure the chord created by the intersection of the laser plane and the object. From the measured chords we derive the average diameter and its variance. An improved method is to use a pair of parallel illuminating light planes to extract two chords. The estimation error for this method is not larger than the natural shape fluctuations (the difference in diameter) of the pellets. The laser-camera optronics is sensitive enough both for material on a conveyor belt and for free-falling material leaving the conveyor.
Time-of-Flight Microwave Camera
NASA Astrophysics Data System (ADS)
Charvat, Gregory; Temme, Andrew; Feigin, Micha; Raskar, Ramesh
2015-10-01
Microwaves can penetrate many obstructions that are opaque at visible wavelengths, however microwave imaging is challenging due to resolution limits associated with relatively small apertures and unrecoverable “stealth” regions due to the specularity of most objects at microwave frequencies. We demonstrate a multispectral time-of-flight microwave imaging system which overcomes these challenges with a large passive aperture to improve lateral resolution, multiple illumination points with a data fusion method to reduce stealth regions, and a frequency modulated continuous wave (FMCW) receiver to achieve depth resolution. The camera captures images with a resolution of 1.5 degrees, multispectral images across the X frequency band (8 GHz-12 GHz), and a time resolution of 200 ps (6 cm optical path in free space). Images are taken of objects in free space as well as behind drywall and plywood. This architecture allows “camera-like” behavior from a microwave imaging system and is practical for imaging everyday objects in the microwave spectrum.
Calibration of EFOSC2 Broadband Linear Imaging Polarimetry
NASA Astrophysics Data System (ADS)
Wiersema, K.; Higgins, A. B.; Covino, S.; Starling, R. L. C.
2018-03-01
The European Southern Observatory Faint Object Spectrograph and Camera v2 is one of the workhorse instruments on ESO's New Technology Telescope, and is one of the most popular instruments at La Silla observatory. It is mounted at a Nasmyth focus, and therefore exhibits strong, wavelength- and pointing-direction-dependent instrumental polarisation. In this document, we describe our efforts to calibrate the broadband imaging polarimetry mode, and provide a calibration for the broadband B, V, and R filters to a level that satisfies most use cases (i.e., a polarimetric calibration uncertainty of 0.1%). We make our calibration codes public. This calibration effort can be used to enhance the yield of future polarimetric programmes with the European Southern Observatory Faint Object Spectrograph and Camera v2, by allowing good calibration with a greatly reduced number of standard star observations. Similarly, our calibration model can be combined with archival calibration observations to post-process data taken in past years, to form the European Southern Observatory Faint Object Spectrograph and Camera v2 legacy archive with substantial scientific potential.
Low Noise Camera for Suborbital Science Applications
NASA Technical Reports Server (NTRS)
Hyde, David; Robertson, Bryan; Holloway, Todd
2015-01-01
Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment as they are difficult to ruggedize and repackage into instruments. Also, COTS implementation may not be suitable since mission science objectives are tied to specific measurement requirements, and often require performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for the International Space Station (ISS) or midrange science payloads due to nonrecurring expenses ($2,000 K) for ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operation temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to mid range payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance found in custom flight cameras at a price per camera more than an order of magnitude lower.
2001-11-27
KENNEDY SPACE CENTER, Fla. -- In the Vertical Processing Facility, members of the STS-109 crew look over the Solar Array 3 panels that will be replacing Solar Array 2 panels on the Hubble Space Telescope (HST). Trainers, at left, point to the panels while Mission Specialist Nancy Currie (second from right) and Commander Scott Altman (far right) look on. Other crew members are Pilot Duane Carey, Payload Commander John Grunsfeld and Mission Specialists James Newman, Richard Linnehan and Michael Massimino. The other goals of the mission are replacing the Power Control Unit, removing the Faint Object Camera and installing the Advanced Camera for Surveys, installing the Near Infrared Camera and Multi-Object Spectrometer (NICMOS) Cooling System, and installing New Outer Blanket Layer insulation on bays 5 through 8. Mission STS-109 is scheduled for launch Feb. 14, 2002
Instantaneous phase-shifting Fizeau interferometry with high-speed pixelated phase-mask camera
NASA Astrophysics Data System (ADS)
Yatagai, Toyohiko; Jackin, Boaz Jessie; Ono, Akira; Kiyohara, Kosuke; Noguchi, Masato; Yoshii, Minoru; Kiyohara, Motosuke; Niwa, Hayato; Ikuo, Kazuyuki; Onuma, Takashi
2015-08-01
A Fizeau interferometer with instantaneous phase-shifting ability using a Wollaston prism is designed. To measure the dynamic phase change of objects, a high-speed video camera with a shutter speed of 10^-5 s is used together with a pixelated phase mask of 1024 × 1024 elements. The light source is a laser of wavelength 532 nm, which is split into orthogonal polarization states by passing through a Wollaston prism. By adjusting the tilt of the reference surface it is possible to make the reference and object beams, which have orthogonal polarization states, coincide and interfere. The pixelated phase-mask camera then calculates the phase changes and hence the optical path length difference. Vibration of speakers and turbulence of air flow were successfully measured at 7,000 frames/s.
GEMINI-TITAN (GT)-11 - MISC. EXPERIMENTS - MSC
1966-03-22
S66-02611 (22 March 1966) --- Gemini-11 Experiment S-13 Ultraviolet Astronomical Camera. It will be used to test the techniques of ultraviolet photography under vacuum conditions and obtain ultraviolet radiation observations of stars in the wavelength region of 2,000 to 4,000 Angstroms by spectral means. Equipment is the Maurer 70mm camera with UV lens (f3.3) and magazine, objective grating and objective prism, extended shuttle actuator, and mounting bracket. For the experiment, the camera is mounted on the centerline torque box to point through the opened right-hand hatch. Propellant expenditure is estimated at 4.5 pounds per night pass. Two night passes will be used to photograph probably six star fields. Sponsors are NASA's Office of Space Science and Applications and Northwestern University. Photo credit: NASA
Guaranteed time observations support for Faint Object Spectrograph (FOS) on HST
NASA Technical Reports Server (NTRS)
Harms, Richard
1994-01-01
The goals of the GTO effort are for investigations defined in previous years by the IDT to be carried out as HST observations and for the results to be communicated to the scientific community and to the public. The search for possible black holes in the nuclei of both normal and active nucleus galaxies has had to be delayed to the post-servicing era. FOS spectropolarimetric observations of the nuclear region of the peculiar Seyfert galaxy Mrk 231 reveal that the continuum polarization peaks at 18% in the near UV and then declines rapidly toward shorter wavelengths. The papers on the absorption line analysis for our galactic halo address the spatial distribution of high and intermediate level ions in the halo and illustrate the patchy and heterogeneous nature of the halo. The papers on the scattering characteristics of the HST/FOS have provided us with data that shows that the HST mirror surfaces are quite smooth, even at the UV wavelengths. WF-PC and FOC images of the halo PN K648 have been fully analyzed.
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
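The following minimal Python sketch illustrates the general idea of recovering yaw from projected facial features under a spherical-head assumption; it is not the paper's derived equations, and the point coordinates and head radius are hypothetical.

import numpy as np

def estimate_yaw(eye_l, eye_r, mouth, head_center, head_radius_px):
    """Rough yaw estimate from the eyes-mouth triangle, assuming a spherical
    head rotating about the vertical axis (illustrative only; the paper
    derives its own projection equations).

    All points are (x, y) pixel coordinates; head_radius_px is the apparent
    head radius in pixels.
    """
    centroid_x = (eye_l[0] + eye_r[0] + mouth[0]) / 3.0
    # Horizontal offset of the facial-feature centroid from the head centre
    offset = centroid_x - head_center[0]
    # Under orthographic projection a point on the sphere at yaw angle psi
    # projects to x = R * sin(psi), so psi ~ arcsin(offset / R).
    ratio = np.clip(offset / head_radius_px, -1.0, 1.0)
    return np.degrees(np.arcsin(ratio))

# Frontal pose example: feature centroid coincides with the head centre -> 0 deg
print(estimate_yaw((110, 90), (150, 90), (130, 140), (130, 115), 60))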
NASA Astrophysics Data System (ADS)
Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri
2012-01-01
An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
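A minimal sketch of the single-Gaussian fingerprint idea, assuming RGB pixel samples from a segmented target: fit a mean and covariance per target and compare fingerprints with the Bhattacharyya distance (the paper's exact feature set and matching rule may differ).

import numpy as np

def gaussian_fingerprint(pixels):
    """Compact target fingerprint: mean and covariance of the segment's
    colour (and/or entropy) features. `pixels` is an (N, d) array."""
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    return mu, cov

def bhattacharyya(fp_a, fp_b):
    """Distance between two single-Gaussian fingerprints; small values
    indicate a likely reacquisition match."""
    mu_a, cov_a = fp_a
    mu_b, cov_b = fp_b
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

# Synthetic RGB samples from two sightings of the same vehicle
rng = np.random.default_rng(0)
car_a = rng.normal([120, 30, 40], 5, size=(500, 3))
car_b = rng.normal([118, 32, 41], 5, size=(500, 3))
print(bhattacharyya(gaussian_fingerprint(car_a), gaussian_fingerprint(car_b)))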
Observations of interplanetary dust by the Juno magnetometer investigation
NASA Astrophysics Data System (ADS)
Benn, M.; Jorgensen, J. L.; Denver, T.; Brauer, P.; Jorgensen, P. S.; Andersen, A. C.; Connerney, J. E. P.; Oliversen, R.; Bolton, S. J.; Levin, S.
2017-05-01
One of the Juno magnetometer investigation's star cameras was configured to search for unidentified objects during Juno's transit en route to Jupiter. This camera detects and registers luminous objects to magnitude 8. Objects persisting in more than five consecutive images and moving with an apparent angular rate of between 2 and 18,000 arcsec/s were recorded. Among the objects detected were a small group of objects tracked briefly in close proximity to the spacecraft. The trajectory of these objects demonstrates that they originated on the Juno spacecraft, evidently excavated by micrometeoroid impacts on the solar arrays. The majority of detections occurred just prior to and shortly after Juno's transit of the asteroid belt. This rather novel detection technique utilizes the Juno spacecraft's prodigious 60 m2 of solar array as a dust detector and provides valuable information on the distribution and motion of interplanetary (>μm sized) dust.
Low-cost camera modifications and methodologies for very-high-resolution digital images
USDA-ARS?s Scientific Manuscript database
Aerial color and color-infrared photography are usually acquired at high altitude so the ground resolution of the photographs is < 1 m. Moreover, current color-infrared cameras and manned aircraft flight time are expensive, so the objective is the development of alternative methods for obtaining ve...
Fast and robust curve skeletonization for real-world elongated objects
USDA-ARS?s Scientific Manuscript database
These datasets were generated for calibrating robot-camera systems. In an extension, we also considered the problem of calibrating robots with more than one camera. These datasets are provided as a companion to the paper, "Solving the Robot-World Hand-Eye(s) Calibration Problem with Iterative Meth...
Using DSLR cameras in digital holography
NASA Astrophysics Data System (ADS)
Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge
2017-08-01
In Digital Holography (DH), the size of the bidimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented and a theoretical deduction of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera show the replication problem.
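The replication can be illustrated with a simple one-dimensional aliasing experiment: keeping only every second sample, as a single Bayer colour channel effectively does, halves the sampling rate and produces a spectral replica of the fringe frequency. This numpy sketch is only an analogue of the effect, not the paper's Fourier derivation.

import numpy as np

n = 1024
x = np.arange(n)
signal = np.cos(2 * np.pi * 0.3125 * x)       # fringe at 0.3125 cycles/pixel

full_spec = np.abs(np.fft.rfft(signal)) / n   # full-resolution spectrum
sub = np.zeros(n)
sub[::2] = signal[::2]                        # zero-filled Bayer-like sampling
sub_spec = np.abs(np.fft.rfft(sub)) / n

freqs = np.fft.rfftfreq(n)
print("peaks (full):", freqs[full_spec > 0.1])   # only 0.3125 cycles/pixel
print("peaks (sub): ", freqs[sub_spec > 0.05])   # 0.3125 plus a replica at 0.1875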
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly adopting cameras in a wide variety of locations and applications: traffic monitoring, parking lot surveillance, cars and smart spaces. These cameras provide data every day that must be analysed in an effective way. Recent advances in sensor manufacturing, communications and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic surveillance. Dense camera networks, in which most cameras have large overlapping fields of view, have been studied extensively; here we focus on sparse camera networks. A sparse camera network uses as few cameras as possible for large-area surveillance, so most cameras do not overlap each other's field of view. This task is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review paper, we present a comprehensive survey of recent results that address topology learning, object appearance modeling and global activity understanding in sparse camera networks. In addition, some current open research issues are discussed.
Friend or foe: exploiting sensor failures for transparent object localization and classification
NASA Astrophysics Data System (ADS)
Seib, Viktor; Barthen, Andreas; Marohn, Philipp; Paulus, Dietrich
2017-02-01
In this work we address the problem of detecting and recognizing transparent objects using depth images from an RGB-D camera. Using this type of sensor usually prohibits the localization of transparent objects since the structured light pattern of these cameras is not reflected by transparent surfaces. Instead, transparent surfaces often appear as undefined values in the resulting images. However, these erroneous sensor readings form characteristic patterns that we exploit in the presented approach. The sensor data is fed into a deep convolutional neural network that is trained to classify and localize drinking glasses. We evaluate our approach with four different types of transparent objects. To the best of our knowledge, no datasets offering depth images of transparent objects exist so far. With this work we aim at closing this gap by providing our data to the public.
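As a small illustration of the cue being exploited (an assumed preprocessing step, not the authors' code), the sketch below converts the sensor's undefined/zero depth readings into a binary "failure pattern" channel that a CNN could take as input.

import numpy as np

def failure_pattern_channel(depth):
    """Turn the sensor's failure readings into an input channel.

    Structured-light RGB-D cameras often return 0 (or NaN) where transparent
    surfaces scatter the projected pattern; the resulting binary mask is the
    characteristic cue exploited for glass detection. `depth` is a float32
    depth image in metres.
    """
    invalid = ~np.isfinite(depth) | (depth <= 0)
    return invalid.astype(np.float32)   # 1 where the sensor failed

# Hypothetical example: a 4x4 depth patch with an undefined region
depth = np.array([[1.2, 1.2, 0.0, 0.0],
                  [1.2, 0.0, 0.0, 0.0],
                  [1.2, 1.2, 0.0, 1.3],
                  [1.2, 1.2, 1.3, 1.3]], dtype=np.float32)
print(failure_pattern_channel(depth))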
Chao, C Y; Yeh, S L; Lin, M T; Chen, W J
2000-04-01
This study was designed to investigate the effects of preinfusion with total parenteral nutrition (TPN) using fish-oil (FO) versus safflower-oil (SO) emulsion as fat sources on hepatic lipids, plasma amino-acid profiles, and inflammatory-related mediators in septic rats. Normal rats, with internal jugular catheters, were assigned to two different groups and received TPN. TPN provided 300 kcal. kg(-1). d(-1), with 40% of the non-protein energy as fat. All TPN solutions were isonitrogenous and identical in nutrient composition except for the fat emulsion, which was made of SO or FO. After receiving TPN for 6 d, each group of rats was further divided into control and sepsis subgroups. Sepsis was induced by cecal ligation and puncture; control rats received sham operation. All rats were classified into four groups as follows: FO control group (FOC; n = 7), FO sepsis group (FOS; n = 8), SO control group (SOC; n = 8), and SO sepsis group (SOS; n = 9). The results of the study demonstrated that plasma concentrations of triacylglycerol and non-esterified fatty acids did not differ between the FO and SO groups, regardless of whether the animals were septic. SOS had significantly higher total lipids and cholesterol content in the liver than did the SOC group. The FOS group, however, showed no difference from the FOC group. Plasma leucine and isoleucine levels were significantly lower in the SOS group than in the SOC group, whereas no difference in these two amino acids was observed between the FOC and FOS groups. Plasma arginine levels were significantly lower in both septic groups than in the groups without sepsis when either FO or SO was infused. Plasma glutamine levels, however, did not differ across groups. No differences in interleukin-1beta, interleukin-6, tumor necrosis factor-alpha, or leukotriene B(4) concentrations in peritoneal lavage fluid were observed between the two septic groups. These results suggest that catabolic reaction in septic rats preinfused with FO is not as obvious as those preinfused with SO. Compared with SO emulsion, TPN with FO emulsion prevents liver fat accumulation associated with sepsis. However, parenterally administered FO had no beneficial effect in lowering cytokines and LTB(4) levels in peritoneal lavage fluid in septic rats induced by cecal ligation and puncture.
Zhu, Ying; Price, Oliver R; Tao, Shu; Jones, Kevin C; Sweetman, Andy J
2014-08-01
We present a new multimedia chemical fate model (SESAMe) which was developed to assess chemical fate and behaviour across China. We apply the model to quantify the influence of environmental parameters on chemical overall persistence (POV) and long-range transport potential (LRTP) in China, which has extreme diversity in environmental conditions. Sobol sensitivity analysis was used to identify the relative importance of input parameters. Physicochemical properties were identified as more influential than environmental parameters on model output. Interactive effects of environmental parameters on POV and LRTP occur mainly in combination with chemical properties. Hypothetical chemicals and emission data were used to model POV and LRTP for neutral and acidic chemicals with different KOW/DOW, vapour pressure and pKa under different precipitation, wind speed, temperature and soil organic carbon contents (fOC). Generally for POV, precipitation was more influential than the other environmental parameters, whilst temperature and wind speed did not contribute significantly to POV variation; for LRTP, wind speed was more influential than the other environmental parameters, whilst the effects of other environmental parameters relied on specific chemical properties. fOC had a slight effect on POV and LRTP, and higher fOC always increased POV and decreased LRTP. Example case studies were performed on real test chemicals using SESAMe to explore the spatial variability of model output and how environmental properties affect POV and LRTP. Dibenzofuran released to multiple media had higher POV in northwest of Xinjiang, part of Gansu, northeast of Inner Mongolia, Heilongjiang and Jilin. Benzo[a]pyrene released to the air had higher LRTP in south Xinjiang and west Inner Mongolia, whilst acenaphthene had higher LRTP in Tibet and west Inner Mongolia. TCS released into water had higher LRTP in Yellow River and Yangtze River catchments. The initial case studies demonstrated that SESAMe performed well on comparing POV and LRTP of chemicals in different regions across China in order to potentially identify the most sensitive regions. This model should not only be used to estimate POV and LRTP for screening and risk assessments of chemicals, but could potentially be used to help design chemical monitoring programmes across China in the future. Copyright © 2014 Elsevier Ltd. All rights reserved.
Control system for several rotating mirror camera synchronization operation
NASA Astrophysics Data System (ADS)
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
1997-05-01
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization part, the precise measurement part and the time delay part), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different framing speeds.
NASA Technical Reports Server (NTRS)
Chen, Fang-Jenq
1997-01-01
Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of 1 part in 40,000 is achievable without tedious laboratory calibrations of the camera.
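A minimal sketch of an iterative least-squares adjustment with a distortion correction, assuming a simplified model (affine image-to-object mapping plus a single radial term) and synthetic data; the paper's transformation equations are more general.

import numpy as np
from scipy.optimize import least_squares

def project(params, xy_obj):
    """Map 2-D object coordinates to image coordinates with an affine
    transform followed by one radial-distortion term k1 (a simplified
    stand-in for the nonlinear model in the paper)."""
    a, b, c, d, tx, ty, k1 = params
    u = a * xy_obj[:, 0] + b * xy_obj[:, 1] + tx
    v = c * xy_obj[:, 0] + d * xy_obj[:, 1] + ty
    r2 = u**2 + v**2
    return np.column_stack([u * (1 + k1 * r2), v * (1 + k1 * r2)])

def residuals(params, xy_obj, uv_img):
    return (project(params, xy_obj) - uv_img).ravel()

# Synthetic calibration grid and slightly noisy, distorted observations
rng = np.random.default_rng(1)
xy = np.column_stack([g.ravel() for g in np.meshgrid(np.linspace(-1, 1, 7),
                                                     np.linspace(-1, 1, 7))])
true = np.array([1.01, 0.02, -0.015, 0.99, 0.05, -0.03, -0.08])
uv = project(true, xy) + rng.normal(0, 1e-4, (xy.shape[0], 2))

fit = least_squares(residuals, x0=np.array([1, 0, 0, 1, 0, 0, 0.0]),
                    args=(xy, uv))
print("recovered k1 =", fit.x[-1])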
Non-destructive 3D shape measurement of transparent and black objects with thermal fringes
NASA Astrophysics Data System (ADS)
Brahm, Anika; Rößler, Conrad; Dietrich, Patrick; Heist, Stefan; Kühmstedt, Peter; Notni, Gunther
2016-05-01
Fringe projection is a well-established optical method for the non-destructive, contactless three-dimensional (3D) measurement of object surfaces. Typically, fringe sequences in the visible wavelength range (VIS) are projected onto the surfaces of the objects to be measured and are observed by two cameras in a stereo vision setup. The reconstruction is done by finding corresponding pixels in both cameras followed by triangulation. Problems can occur if the properties of some materials disturb the measurements. If the objects are transparent, translucent, reflective, or strongly absorbing in the VIS range, the projected patterns cannot be recorded properly. To overcome these challenges, we present a new alternative approach in the infrared (IR) region of the electromagnetic spectrum. For this purpose, two long-wavelength infrared (LWIR) cameras (7.5 - 13 μm) are used to detect the heat radiation emitted from surfaces, which is induced by a pattern projection unit driven by a CO2 laser (10.6 μm). Thus, materials like glass or black objects, e.g. carbon fiber materials, can be measured non-destructively without the need for any additional coatings. We demonstrate the basic principles of this heat pattern approach and show two types of 3D systems based on a freeform mirror and a GOBO wheel (GOes Before Optics) projector unit.
Speech versus manual control of camera functions during a telerobotic task
NASA Technical Reports Server (NTRS)
Bierschwale, John M.; Sampaio, Carlos E.; Stuart, Mark A.; Smith, Randy L.
1989-01-01
Voice input for the control of camera functions was investigated in this study. The objectives were to (1) assess the feasibility of a voice-commanded camera control system, and (2) identify factors that differ between voice and manual control of camera functions. Subjects participated in a remote manipulation task that required extensive camera-aided viewing. Each subject was exposed to two conditions, voice and manual input, with a counterbalanced administration order. Voice input was found to be significantly slower than manual input for this task. However, in terms of remote manipulator performance errors and subject preference, there was no difference between modalities. Voice control of continuous camera functions is not recommended. It is believed that the use of voice input for discrete functions, such as multiplexing or camera switching, could aid performance. Hybrid mixes of voice and manual input may provide the best use of both modalities. This report contributes to a better understanding of the issues that affect the design of an efficient human/telerobot interface.
Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.
Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung
2018-03-23
Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras have utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.
High Scalability Video ISR Exploitation
2012-10-01
Only overlapping fragments of this record are recoverable. They refer to a wide-area surveillance sensor (ARGUS) rated at level 6 on the National Image Interpretability Rating Scale (NIIRS), and to ultra-high-quality cameras such as the Digital Cinema 4K (DC-4K), which can recognize objects smaller than people, becoming available for purchase and use in the field, e.g. on a UAV sensor.
1990-07-01
Only fragments of this record are recoverable. They mention electrolytic dissociation of the electrode material, gas evolution at a rod cathode, and a preliminary experiment whose unique feature was a prototype gated, intensified video camera based on a microprocessor-controlled microchannel plate intensifier tube; the intensifier tube image is focused on a standard CCD video camera so that the object ...
Model deformation measurements at a cryogenic wind tunnel using photogrammetry
NASA Technical Reports Server (NTRS)
Burner, A. W.; Snow, W. L.; Goad, W. K.
1985-01-01
A photogrammetric closed circuit television system to measure model deformation at the National Transonic Facility (NTF) is described. The photogrammetric approach was chosen because of its inherent rapid data recording of the entire object field. Video cameras are used to acquire data instead of film cameras due to the inaccessibility of cameras which must be housed within the cryogenic, high pressure plenum of this facility. Data reduction procedures and the results of tunnel tests at the NTF are presented.
Lessons Learned from Crime Caught on Camera
Bernasco, Wim
2018-01-01
Objectives: The widespread use of camera surveillance in public places offers criminologists the opportunity to systematically and unobtrusively observe crime, their main subject matter. The purpose of this essay is to inform the reader of current developments in research on crimes caught on camera. Methods: We address the importance of direct observation of behavior and review criminological studies that used observational methods, with and without cameras, including the ones published in this issue. We also discuss the uses of camera recordings in other social sciences and in biology. Results: We formulate six key insights that emerge from the literature and make recommendations for future research. Conclusions: Camera recordings of real-life crime are likely to become part of the criminological tool kit that will help us better understand the situational and interactional elements of crime. Like any source, it has limitations that are best addressed by triangulation with other sources. PMID:29472728
The suitability of lightfield camera depth maps for coordinate measurement applications
NASA Astrophysics Data System (ADS)
Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael
2015-12-01
Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey-scale depth maps of the captured image to be created. The Lytro, a consumer-grade plenoptic camera, provides a cost-effective method of measuring the depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two first-generation Lytro cameras were evaluated. In addition, a calibration process has been created for the Lytro cameras to deliver three-dimensional output depth maps represented in SI units (metre). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm, respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
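A toy sketch of the calibration idea: relate the camera's unit-less grey-scale depth values to target distances measured in metres and apply the fitted mapping to new depth values. The numbers and the linear form of the mapping are assumptions for illustration; the actual Lytro response is unlikely to be exactly linear.

import numpy as np

grey_values = np.array([0.12, 0.25, 0.41, 0.58, 0.74, 0.90])   # camera output (hypothetical)
distances_m = np.array([0.30, 0.35, 0.40, 0.45, 0.50, 0.55])   # measured target distances [m]

coeffs = np.polyfit(grey_values, distances_m, deg=1)   # assumed linear mapping
depth_m = np.polyval(coeffs, 0.50)                     # convert a new grey value to metres
print(f"grey 0.50 -> {depth_m:.3f} m")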
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem in a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
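A generic Gauss-Newton loop, shown here on a toy curve-fitting problem, illustrates the linearise-and-solve strategy referred to above; the actual FlyCap energy terms (depth fusion, visual odometry) are not reproduced.

import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Generic Gauss-Newton loop: linearise the residuals, solve the normal
    equations, update. Illustrates the strategy only, not the paper's energy."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        dx = np.linalg.solve(J.T @ J, -J.T @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

# Toy problem: fit (a, b) in y = a * exp(b * t) to noiseless samples
t = np.linspace(0, 1, 50)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                      p[0] * t * np.exp(p[1] * t)])
print(gauss_newton(residual, jacobian, [1.0, 0.0]))   # converges to [2.0, -1.5]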
1995-12-20
STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own "space suit" or protective covering to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.
Enhancing swimming pool safety by the use of range-imaging cameras
NASA Astrophysics Data System (ADS)
Geerardyn, D.; Boulanger, S.; Kuijk, M.
2015-05-01
Drowning is the cause of death of 372,000 people each year worldwide, according to the November 2014 report of the World Health Organization.1 Currently, most swimming pools only use lifeguards to detect drowning people. In some modern swimming pools, camera-based detection systems are nowadays being integrated. However, these systems have to be mounted underwater, mostly as a replacement of the underwater lighting. In contrast, we are interested in range imaging cameras mounted on the ceiling of the swimming pool, which allow swimmers at the surface to be distinguished from drowning people underwater, while keeping a large field of view and minimizing occlusions. However, we have to take into account that the water surface of a swimming pool is not flat but mostly rippled, and that the water is transparent for visible light but less transparent for infrared or ultraviolet light. We investigated the use of different types of 3D cameras to detect objects underwater at different depths and with different amplitudes of surface perturbations. Specifically, we performed measurements with a commercial Time-of-Flight camera, a commercial structured-light depth camera and our own Time-of-Flight system. Our own system uses pulsed Time-of-Flight and emits light at 785 nm. The measured distances between the camera and the object are influenced by the perturbations on the water surface. Due to the timing of our Time-of-Flight camera, our system is theoretically able to minimize the influence of the reflections from a partially reflecting surface. The combination of a post-acquisition image filter compensating for the perturbations and the use of a light source with shorter wavelengths to enlarge the depth range can improve on the current commercial cameras. As a result, we conclude that low-cost range imagers can increase swimming pool safety by inserting a post-processing filter and using another light source.
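The depth overestimation for submerged targets can be seen in a simplified pulsed time-of-flight calculation (near-vertical viewing, flat surface, no ripple compensation; constants approximate):

# Simplified sketch: a pulsed time-of-flight camera measures the round-trip
# time of a light pulse; for a target below a (locally flat) water surface the
# pulse travels more slowly in water (n ~ 1.33), so the raw distance
# overestimates the geometric depth. Surface ripples and refraction angles,
# which the paper compensates in post-processing, are ignored here.
C_AIR = 2.998e8       # speed of light in air [m/s], approximate
N_WATER = 1.33        # refractive index of water, approximate

def tof_distance(round_trip_s, height_above_water_m):
    """Geometric camera-to-target distance for a near-vertical view."""
    raw = 0.5 * round_trip_s * C_AIR              # distance if all in air
    underwater_optical = raw - height_above_water_m
    underwater_geometric = underwater_optical / N_WATER
    return height_above_water_m + underwater_geometric

# Target 1.0 m under water, camera 3.0 m above the surface:
t = 2 * (3.0 + 1.0 * N_WATER) / C_AIR
print(f"{tof_distance(t, 3.0):.3f} m")   # ~4.0 m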
Automatic Camera Calibration for Cultural Heritage Applications Using Unstructured Planar Objects
NASA Astrophysics Data System (ADS)
Adam, K.; Kalisperakis, I.; Grammatikopoulos, L.; Karras, G.; Petsa, E.
2013-07-01
As a rule, image-based documentation of cultural heritage today relies on ordinary digital cameras and commercial software. As such projects often involve researchers not familiar with photogrammetry, the question of camera calibration is important. Freely available, open-source, user-friendly software for automatic camera calibration, often based on simple 2D chess-board patterns, is an answer to the demand for simplicity and automation. However, such tools cannot respond to all requirements met in cultural heritage conservation regarding possible imaging distances and focal lengths. Here we investigate the practical possibility of camera calibration from unknown planar objects, i.e. any planar surface with adequate texture; we have focused on the example of urban walls covered with graffiti. Images are connected pair-wise with inter-image homographies, which are estimated automatically through a RANSAC-based approach after extracting and matching interest points with the SIFT operator. All valid points are identified on all images on which they appear. Provided that the image set includes a "fronto-parallel" view, inter-image homographies with this image are regarded as emulations of image-to-world homographies and allow computing initial estimates for the interior and exterior orientation elements. Following this initialization step, the estimates are introduced into a final self-calibrating bundle adjustment. Measures are taken to discard unsuitable images and verify object planarity. Results from practical experimentation indicate that this method may produce satisfactory results. The authors intend to incorporate the described approach into their freely available user-friendly software tool, which relies on chess-boards, to assist non-experts in their projects with image-based approaches.
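A sketch of the pair-wise step using OpenCV (SIFT matching with Lowe's ratio test followed by RANSAC homography estimation); the file names and thresholds are placeholders, and the subsequent initialization and self-calibrating bundle adjustment are not shown.

import cv2
import numpy as np

def pairwise_homography(img_a, img_b, ratio=0.75):
    """Estimate the inter-image homography between two views of a planar
    surface via SIFT matching and RANSAC (sketch; thresholds illustrative)."""
    sift = cv2.SIFT_create()
    kp_a, des_a = sift.detectAndCompute(img_a, None)
    kp_b, des_b = sift.detectAndCompute(img_b, None)

    matcher = cv2.BFMatcher(cv2.NORM_L2)
    good = [m for m, n in matcher.knnMatch(des_a, des_b, k=2)
            if m.distance < ratio * n.distance]          # Lowe ratio test

    src = np.float32([kp_a[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp_b[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H, int(inliers.sum())

# Placeholder file names for two views of a graffiti-covered wall
img1 = cv2.imread("wall_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("wall_view2.jpg", cv2.IMREAD_GRAYSCALE)
H, n_inliers = pairwise_homography(img1, img2)
print(H, n_inliers)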
NASA Technical Reports Server (NTRS)
2008-01-01
We can determine distances between objects and points of interest in 3-D space to a useful degree of accuracy from a set of camera images by using multiple camera views and reference targets in the camera's field of view (FOV). The core of the software processing is based on the previously developed foreign-object debris vision trajectory software (see KSC Research and Technology 2004 Annual Report, pp. 2-5). The current version of this photogrammetry software includes the ability to calculate distances between any specified point pairs, the ability to process any number of reference targets and any number of camera images, user-friendly editing features, including zoom in/out, translate, and load/unload, routines to help mark reference points with a Find function, while comparing them with the reference point database file, and a comprehensive output report in HTML format. In this system, scene reference targets are replaced by a photogrammetry cube whose exterior surface contains multiple predetermined precision 2-D targets. Precise measurement of the cube's 2-D targets during the fabrication phase eliminates the need for measuring 3-D coordinates of reference target positions in the camera's FOV, using for example a survey theodolite or a Faroarm. Placing the 2-D targets on the cube's surface required the development of precise machining methods. In response, 2-D targets were embedded into the surface of the cube and then painted black for high contrast. A 12-inch collapsible cube was developed for room-size scenes. A 3-inch, solid, stainless-steel photogrammetry cube was also fabricated for photogrammetry analysis of small objects.
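A minimal sketch of the underlying photogrammetric computation: linear (DLT) triangulation of marked points from several calibrated views, followed by the distance between a specified point pair. The camera matrices and pixel coordinates below are synthetic.

import numpy as np

def triangulate(proj_mats, pixels):
    """Linear (DLT) triangulation of one 3-D point from its pixel coordinates
    in any number of calibrated camera views.

    proj_mats : list of 3x4 projection matrices
    pixels    : list of (u, v) observations, one per camera
    """
    rows = []
    for P, (u, v) in zip(proj_mats, pixels):
        rows.append(u * P[2] - P[0])
        rows.append(v * P[2] - P[1])
    _, _, vt = np.linalg.svd(np.asarray(rows))
    X = vt[-1]
    return X[:3] / X[3]

def point_pair_distance(proj_mats, pix_a, pix_b):
    """Distance between two specified points, each seen in every view."""
    return np.linalg.norm(triangulate(proj_mats, pix_a) -
                          triangulate(proj_mats, pix_b))

# Two hypothetical cameras (focal length 1000 px, 1 m baseline along x)
P1 = np.array([[1000, 0, 0, 0], [0, 1000, 0, 0], [0, 0, 1, 0]], float)
P2 = np.array([[1000, 0, 0, -1000], [0, 1000, 0, 0], [0, 0, 1, 0]], float)
pix_a = [(40.0, 20.0), (-160.0, 20.0)]            # point at (0.2, 0.1, 5) m
pix_b = [(100.0 / 3, 50.0 / 3), (-400.0 / 3, 50.0 / 3)]  # point at (0.2, 0.1, 6) m
print(point_pair_distance([P1, P2], pix_a, pix_b))  # ~1.0 m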
Towards next generation 3D cameras
NASA Astrophysics Data System (ADS)
Gupta, Mohit
2017-03-01
We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
Detection and tracking of drones using advanced acoustic cameras
NASA Astrophysics Data System (ADS)
Busset, Joël.; Perrodin, Florian; Wellig, Peter; Ott, Beat; Heutschi, Kurt; Rühl, Torben; Nussbaumer, Thomas
2015-10-01
Recent events of drones flying over city centers, official buildings and nuclear installations have stressed the growing threat of uncontrolled drone proliferation and the lack of real countermeasures. Indeed, detecting and tracking them can be difficult with traditional techniques. A system to acoustically detect and track small moving objects, such as drones or ground robots, using acoustic cameras is presented. The described sensor is completely passive and is composed of a 120-element microphone array and a video camera. The acoustic imaging algorithm determines in real time the sound power level coming from all directions, using the phase of the sound signals. A tracking algorithm is then able to follow the sound sources. Additionally, a beamforming algorithm selectively extracts the sound coming from each tracked sound source. This extracted sound signal can be used to identify sound signatures and determine the type of object. The described techniques can detect and track any object that produces noise (engines, propellers, tires, etc.). They are a good complementary approach to more traditional techniques such as (i) optical and infrared cameras, for which the object may only represent a few pixels and may be hidden by the blooming of a bright background, and (ii) radar or other echo-localization techniques, which suffer from the weakness of the echo signal coming back to the sensor. The distance of detection depends on the type (frequency range) and volume of the noise emitted by the object, and on the background noise of the environment. Detection range and resilience to background noise were tested in both laboratory environments and outdoor conditions. It was determined that drones can be tracked up to 160 to 250 meters, depending on their type. Speech extraction was also experimentally investigated: the speech signal of a person 80 to 100 meters away can be captured with acceptable speech intelligibility.
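A far-field delay-and-sum sketch of how a sound-power map over candidate directions can be computed from the microphone phases; it is illustrative only and not the product's algorithm.

import numpy as np

def acoustic_power_map(signals, mic_xyz, directions, fs, c=343.0):
    """Delay-and-sum beamforming: sound power received from each candidate
    direction, computed from the relative delays/phases across the array
    (far-field sketch).

    signals    : (n_mics, n_samples) microphone recordings
    mic_xyz    : (n_mics, 3) microphone positions [m]
    directions : (n_dirs, 3) unit vectors pointing toward candidate sources
    """
    n_mics, n_samples = signals.shape
    spectra = np.fft.rfft(signals, axis=1)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    power = np.zeros(len(directions))
    for i, d in enumerate(directions):
        tau = mic_xyz @ d / c                         # per-mic arrival advance [s]
        steer = np.exp(-2j * np.pi * freqs[None, :] * tau[:, None])
        beam = (spectra * steer).sum(axis=0) / n_mics  # align and sum
        power[i] = np.sum(np.abs(beam) ** 2)
    return power

# Minimal synthetic check: a 1 kHz tone arriving broadside on a 4-mic line array
fs, t = 48000, np.arange(4800) / 48000
mics = np.array([[x, 0, 0] for x in (-0.15, -0.05, 0.05, 0.15)])
sig = np.tile(np.sin(2 * np.pi * 1000 * t), (4, 1))   # zero delay = broadside
dirs = np.array([[0, 1, 0], [np.sin(0.5), np.cos(0.5), 0]])
print(acoustic_power_map(sig, mics, dirs, fs))        # broadside power is largest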
Fine tuning GPS clock estimation in the MCS
NASA Technical Reports Server (NTRS)
Hutsell, Steven T.
1995-01-01
With the completion of a 24-satellite operational constellation, GPS is fast approaching a critical milestone, Full Operational Capability (FOC). Although GPS is well capable of providing the timing accuracy and stability figures required by system specifications, the GPS community will continue to strive for further improvements in performance. The GPS Master Control Station (MCS) recently demonstrated that timing improvements are always possible through refinements to the composite clock and, hence, to Kalman filter state estimation, providing a small improvement to user accuracy.
2011-01-01
Louis Chow, David Woodburn, Lei Zhou, Jared Bindl, Yang Hu, and Wendell Brokaw, University of Central Florida, January 2011, Interim Report. Only fragments of the report body are recoverable; they are excerpts of a MATLAB model of a Permanent Magnet (PM) motor written in the SI (MKS) unit system, with variables for the initial phase winding resistance [ohm] and the phase d and q currents [A].
Challenges to Public Order and the Seas
2014-03-01
... these excessive claims will ever be rolled back. Worse, they could be strengthened in a game of one-upmanship. A laissez-faire approach to flag ... to the rule of law and a basis for the conduct of affairs among nations. What is necessary for an effective system of ocean governance? This ... gain an increased market share as reputable national flags decline. Depending on which FOC is involved, there is a fair probability that the flag state ...
Mission Integration Study for Solid Teflon Pulsed Plasma Millipound Propulsion System.
1980-09-01
Only OCR fragments of this record are recoverable. The legible portions are table-of-contents entries, including 4.3.3 Subsystem Impact Assessment, 4.4 Interactive Effects, 4.4.1 Material Deposition on Spacecraft ... Assessment, 5.4 Interactive Effects, and 5.4.1 Material Deposition Requirements for Pulsed Plasma Thruster on DSP.
Measuring the Operational Readiness of an Air Force Network Warfare Squadron
2008-06-01
Abstract As part of its unit activation, the 315th Network Warfare Squadron (NWS) needed to measure and report its progression of unit readiness...NWS unit readiness should be measured and reported by SORTS Category Levels (C-Level) to support wartime missions, not by IOC and FOC milestones...This paper reviews SORTS computations and provides a case study of a notional Air Force NWS to propose that any new cyber squadron should report
2011-10-01
... specific modules as needed. The term "startup" is inclusive of any point in a DoD acquisition program. As noted above, methodology for conducting ... [figure residue: an acquisition timeline showing decision points, milestone reviews, program initiation, Milestones A/B/C, IOC and FOC, with a note that a decision point replaces the milestone review if PDR is not conducted before Milestone B] ... start a new program. 2.2 Background. Conclusions flowing from these observations led the Office of the Secretary of Defense, the Defense Acquisition ...
NOAA's Satellite Climate Data Records: The Research to Operations Process and Current State
NASA Astrophysics Data System (ADS)
Privette, J. L.; Bates, J. J.; Kearns, E. J.; NOAA's Climate Data Record Program
2011-12-01
In support of NOAA's mandate to provide climate products and services to the Nation, the National Climatic Data Center initiated the satellite Climate Data Record (CDR) Program. The Program develops and sustains climate information products derived from satellite data that NOAA has collected over the past 30+ years. These are the longest sets of continuous global measurements in existence. Data from other satellite programs, including those of NASA, the Department of Defense, and foreign space agencies, are also used. NOAA is now applying advanced analysis techniques to these historic data. This process is unraveling underlying climate trend and variability information and returning new value from the data. However, the transition of complex data processing chains, voluminous data products and documentation into a systematic, configuration-controlled context involves many challenges. In this presentation, we focus on the Program's process for research-to-operations transition and the evolving systems designed to ensure transparency, security, economy and authoritative value. The Program has adopted a two-phase process defined by an Initial Operational Capability (IOC) and a Full Operational Capability (FOC). The principles and procedures for IOC are described, as well as the process for moving CDRs from IOC to FOC. Finally, we describe the state of the CDRs in all phases of the Program, with an emphasis on the seven community-developed CDRs transitioned to NOAA in 2011. Details on CDR access and distribution will be provided.
A mathematical model of embodied consciousness.
Rudrauf, David; Bennequin, Daniel; Granic, Isabela; Landini, Gregory; Friston, Karl; Williford, Kenneth
2017-09-07
We introduce a mathematical model of embodied consciousness, the Projective Consciousness Model (PCM), which is based on the hypothesis that the spatial field of consciousness (FoC) is structured by a projective geometry and under the control of a process of active inference. The FoC in the PCM combines multisensory evidence with prior beliefs in memory and frames them by selecting points of view and perspectives according to preferences. The choice of projective frames governs how expectations are transformed by consciousness. Violations of expectation are encoded as free energy. Free energy minimization drives perspective taking, and controls the switch between perception, imagination and action. In the PCM, consciousness functions as an algorithm for the maximization of resilience, using projective perspective taking and imagination in order to escape local minima of free energy. The PCM can account for a variety of psychological phenomena: the characteristic spatial phenomenology of subjective experience, the distinctions and integral relationships between perception, imagination and action, the role of affective processes in intentionality, but also perceptual phenomena such as the dynamics of bistable figures and body swap illusions in virtual reality. It relates phenomenology to function, showing the computational advantages of consciousness. It suggests that changes of brain states from unconscious to conscious reflect the action of projective transformations and suggests specific neurophenomenological hypotheses about the brain, guidelines for designing artificial systems, and formal principles for psychology. Copyright © 2017 Elsevier Ltd. All rights reserved.
Production and Performance of the InFOCµS 20-40 keV Graded Multilayer Mirror
NASA Technical Reports Server (NTRS)
Berendse, F.; Owens, S. M.; Serlemitsos, P. J.; Tueller, J.; Chan, K.-W.; Soong, Y.; Krimm, H.; Baumgartner, W. H.; Tamura, K.; Okajima, T.;
2002-01-01
The International Focusing Optics Collaboration for µCrab Sensitivity (InFOCµS) balloon-borne hard x-ray telescope incorporates graded multilayer technology to obtain significant effective area at energies previously inaccessible to x-ray optics. The telescope mirror consists of 2040 segmented thin aluminum foils coated with replicated Pt/C multilayers. A sample of these foils was scanned using a pencil-beam reflectometer to determine multilayer quality. The results of the reflectometer measurements demonstrate our capability to produce large quantities of foils while maintaining high-quality multilayers with a mean Nevot-Croce interface roughness of 0.5 nm. We characterize the performance of the complete InFOCµS telescope with a pencil-beam raster scan to determine the effective area and encircled energy function of the telescope. The effective area of the complete telescope is 78, 42 and 22 square centimeters at 20, 30 and 40 keV, respectively. The measured encircled energy fraction of the mirror has a half-power diameter of 2.0 plus or minus 0.5 arcmin (90% confidence). The mirror successfully obtained an image of the accreting black hole Cygnus X-1 during a balloon flight in July 2001. The successful completion and flight test of this telescope demonstrates that graded-multilayer telescopes can be manufactured with high reliability for future x-ray telescope missions such as Constellation-X.
Ding, Zhaojian; Li, Minhui; Sun, Fei; Xi, Pinggen; Sun, Longhua; Zhang, Lianhui; Jiang, Zide
2015-01-01
Fusarium oxysporum f. sp. cubense (FOC) is an important soil-borne fungal pathogen causing devastating vascular wilt disease of banana plants and has become a great concern threatening banana production worldwide. However, little information is known about the molecular mechanisms that govern the expression of virulence determinants of this important fungal pathogen. In this study, we showed that null mutation of three mitogen-activated protein (MAP) kinase genes, designated as FoSlt2, FoMkk2 and FoBck1, respectively, led to substantial attenuation in fungal virulence on banana plants. Transcriptional analysis revealed that the MAP kinase signaling pathway plays a key role in regulation of the genes encoding production of chitin, peroxidase, beauvericin and fusaric acid. Biochemical analysis further confirmed the essential role of MAP kinases in modulating the production of fusaric acid, which was a crucial phytotoxin in accelerating development of Fusarium wilt symptoms in banana plants. Additionally, we found that the MAP kinase FoSlt2 was required for siderophore biosynthesis under iron-depletion conditions. Moreover, disruption of the MAP kinase genes resulted in abnormal hypha and increased sensitivity to Congo Red, Calcofluor White and H2O2. Taken together, these results depict the critical roles of MAP kinases in regulation of FOC physiology and virulence. PMID:25849862
Kaczmarek, Agnieszka; Budzynska, Anna; Gospodarek, Eugenia
2012-10-01
Multiplex PCR was used to detect genes encoding selected virulence determinants associated with strains of Escherichia coli with K1 antigen (K1(+)) and non-K1 E. coli (K1(-)). The prevalence of the fimA, fimH, sfa/foc, ibeA, iutA and hlyF genes was studied for 134 (67 K1(+) and 67 K1(-)) E. coli strains isolated from pregnant women and neonates. The fimA gene was present in 83.6 % of E. coli K1(+) and in 86.6 % of E. coli K1(-) strains. The fimH gene was present in all tested E. coli K1(+) strains and in 97.0 % of non-K1 strains. E. coli K1(+) strains were significantly more likely to possess the following genes than E. coli K1(-) strains: sfa/foc (37.3 vs 16.4 %, P = 0.006), ibeA (35.8 vs 4.5 %, P<0.001), iutA (82.1 vs 35.8 %, P<0.001) and hlyF (28.4 vs 6.0 %, P<0.001). In conclusion, E. coli K1(+) seems to be more virulent than E. coli K1(-) strains in developing severe infections, thereby increasing possible sepsis or neonatal bacterial meningitis.
Measurement of vibration using phase only correlation technique
NASA Astrophysics Data System (ADS)
Balachandar, S.; Vipin, K.
2017-08-01
A novel method for the measurement of vibration is proposed and demonstrated. The proposed experiment is based on laser triangulation and consists of a line laser, the object under test and a high-speed camera remotely controlled by software. The experiment involves launching a line-laser probe beam perpendicular to the axis of the vibrating object. The reflected probe beam is recorded by the high-speed camera. The dynamic position of the laser line in the camera plane is governed by the magnitude and frequency of the vibrating test object. Using the phase-correlation technique, the maximum displacement of the probe beam in the CCD plane is measured in pixels using MATLAB. The actual displacement of the object in mm is obtained by calibration. Using the displacement data over time, other vibration-related quantities such as acceleration, velocity and frequency are evaluated. Preliminary results of the proposed method are reported for accelerations from 1 g to 3 g and frequencies from 6 Hz to 26 Hz. The results closely match the theoretical values. The advantage of the proposed method is that it is non-destructive and, using the phase-correlation algorithm, subpixel displacements in the CCD plane can be measured with high accuracy.
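A compact sketch of the phase-only correlation step used to measure the frame-to-frame shift of the laser line; this version returns integer-pixel shifts, and a peak-fitting step would be added for the subpixel accuracy mentioned above.

import numpy as np

def phase_only_correlation_shift(frame_a, frame_b):
    """Estimate the (row, col) shift between two frames from the peak of the
    phase-only correlation surface (integer-pixel sketch)."""
    Fa = np.fft.fft2(frame_a)
    Fb = np.fft.fft2(frame_b)
    cross = Fa * np.conj(Fb)
    r = np.fft.ifft2(cross / (np.abs(cross) + 1e-12))   # phase-only spectrum
    peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
    shape = np.array(r.shape, dtype=float)
    shift = np.array(peak, dtype=float)
    # Map peaks in the upper half of the array to negative shifts
    shift[shift > shape / 2] -= shape[shift > shape / 2]
    return shift

rng = np.random.default_rng(2)
img = rng.random((128, 128))
shifted = np.roll(img, (3, -5), axis=(0, 1))
print(phase_only_correlation_shift(shifted, img))   # ~[ 3. -5.]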
Pulsed spatial phase-shifting digital shearography based on a micropolarizer camera
NASA Astrophysics Data System (ADS)
Aranchuk, Vyacheslav; Lal, Amit K.; Hess, Cecil F.; Trolinger, James Davis; Scott, Eddie
2018-02-01
We developed a pulsed digital shearography system that utilizes the spatial phase-shifting technique. The system employs a commercial micropolarizer camera and a double pulse laser, which allows for instantaneous phase measurements. The system can measure dynamic deformation of objects as large as 1 m at a 2-m distance during the time between two laser pulses that range from 30 μs to 30 ms. The ability of the system to measure dynamic deformation was demonstrated by obtaining phase wrapped and unwrapped shearograms of a vibrating object.
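A sketch of the spatial phase-shifting step, assuming a 0/45/90/135-degree micropolarizer superpixel layout (the layout of a specific camera may differ): the four sub-images act as 90-degree phase-shifted interferograms and yield the wrapped phase via the standard four-bucket formula.

import numpy as np

def micropolarizer_phase(raw):
    """Recover the wrapped phase from a single micropolarizer-camera frame,
    assuming the 2x2 superpixel pattern [[0, 45], [90, 135]] degrees."""
    i0   = raw[0::2, 0::2].astype(float)
    i45  = raw[0::2, 1::2].astype(float)
    i90  = raw[1::2, 0::2].astype(float)
    i135 = raw[1::2, 1::2].astype(float)
    # Standard four-bucket formula for the pixelated phase mask
    return np.arctan2(i135 - i45, i0 - i90)

# Synthetic check: build a frame whose superpixels encode a known phase ramp
h, w = 64, 64
phi = np.linspace(0, 4 * np.pi, h * w).reshape(h, w)
raw = np.zeros((2 * h, 2 * w))
for (dy, dx), step in {(0, 0): 0, (0, 1): np.pi / 2,
                       (1, 0): np.pi, (1, 1): 3 * np.pi / 2}.items():
    raw[dy::2, dx::2] = 1 + np.cos(phi + step)
err = np.angle(np.exp(1j * (micropolarizer_phase(raw) - phi)))
print(np.max(np.abs(err)))   # ~0 up to numerical precision

# In shearography, the deformation phase is the difference between the
# wrapped phase maps computed from the two laser pulses:
# delta_phase = np.angle(np.exp(1j * (phase_pulse2 - phase_pulse1)))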
NASA Technical Reports Server (NTRS)
Sutro, L. L.; Lerman, J. B.
1973-01-01
The operation of a system is described that was built both to model the vision of primate animals, including man, and to serve as a pre-prototype of a possible object recognition system. It was employed in a series of experiments to determine the practicability of matching left and right images of a scene to determine the range and form of objects. The experiments started with computer-generated random-dot stereograms as inputs and progressed through random square stereograms to a real scene. The major problems were the elimination of spurious matches between the left and right views, and the interpretation of ambiguous regions: on the left side of an object that can be viewed only by the left camera, and on the right side of an object that can be viewed only by the right camera.
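A brute-force block-matching sketch (sum of squared differences) applied to a synthetic random-dot stereogram shows the kind of left/right matching involved; the spurious-match elimination and ambiguous-region handling discussed above are not included.

import numpy as np

def ssd_disparity(left, right, block=5, max_disp=16):
    """For each pixel in the left image, find the horizontal shift of the
    best-matching block in the right image (sum of squared differences)."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.sum((patch - right[y - half:y + half + 1,
                                           x - d - half:x - d + half + 1])**2)
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Random-dot stereogram: a central square region shifted by 4 pixels
rng = np.random.default_rng(3)
right = rng.integers(0, 2, (64, 64)).astype(float)
left = right.copy()
left[20:44, 24:48] = right[20:44, 20:44]        # shift the square region
print(np.bincount(ssd_disparity(left, right)[22:42, 30:44].ravel()))  # disparity 4 dominates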
NASA Astrophysics Data System (ADS)
Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried
2017-09-01
Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target by taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations over all targets being tracked. When it cannot adequately observe a target, a smart camera estimates the quality of observation of the target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, in which our method outperforms them.
Aslam, Tariq Mehmood; Shakir, Savana; Wong, James; Au, Leon; Ashworth, Jane
2012-12-01
Mucopolysaccharidoses (MPS) can cause corneal opacification that is currently difficult to objectively quantify. With newer treatments for MPS comes an increased need for a more objective, valid and reliable index of disease severity for clinical and research use. Clinical evaluation by slit lamp is very subjective and techniques based on colour photography are difficult to standardise. In this article the authors present evidence for the utility of dedicated image analysis algorithms applied to images obtained by a highly sophisticated iris recognition camera that is small, manoeuvrable and adapted to achieve rapid, reliable and standardised objective imaging in a wide variety of patients while minimising artefactual interference in image quality.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhu Feipeng; Shi Hongjian; Bai Pengxiang
In fringe projection, the CCD camera and the projector are often placed at equal height. In this paper, we study the calibration of an unequal arrangement of the CCD camera and the projector. The principle of fringe projection with two-dimensional digital image correlation to acquire the profile of an object surface is described in detail. By formula derivation and experiment, a linear relationship between the out-of-plane calibration coefficient and the y coordinate is clearly found. To acquire the three-dimensional (3D) information of an object correctly, this paper presents an effective calibration method based on linear least-squares fitting, which is very simple in principle and in calibration. Experiments are implemented to validate the availability and reliability of the calibration method.
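The calibration step reported here reduces to fitting a straight line to the out-of-plane coefficient as a function of the image y coordinate. A minimal sketch of such a least-squares fit is shown below; the variable names and numerical values are purely illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical out-of-plane calibration coefficients measured at several y rows
y = np.array([100.0, 300.0, 500.0, 700.0, 900.0])          # image row (pixels)
k = np.array([0.0151, 0.0149, 0.0147, 0.0145, 0.0143])     # coefficient (e.g. mm per phase unit)

# Linear least-squares fit k(y) = a*y + b
a, b = np.polyfit(y, k, deg=1)
print(f"k(y) ≈ {a:.3e} * y + {b:.4f}")
```

Once a and b are known, the out-of-plane coordinate at any pixel row can be recovered from the measured phase using the row-dependent coefficient.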
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes and continuing to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual-beam interferometer device is disclosed in which an optics module is moved in a direction that changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns that are detected by an element, such as a camera. The camera detects a characteristic of the surface.
Software Graphical User Interface For Analysis Of Images
NASA Technical Reports Server (NTRS)
Leonard, Desiree M.; Nolf, Scott R.; Avis, Elizabeth L.; Stacy, Kathryn
1992-01-01
CAMTOOL software provides graphical interface between Sun Microsystems workstation and Eikonix Model 1412 digitizing camera system. Camera scans and digitizes images, halftones, reflectives, transmissives, rigid or flexible flat material, or three-dimensional objects. Users digitize images and select from three destinations: work-station display screen, magnetic-tape drive, or hard disk. Written in C.
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
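The master-slave virtualization described above can be pictured as one virtual camera device fanning a single exposure command out to many real camera controllers. The sketch below illustrates that fan-out pattern in Python with a thread pool; it is not the RTS2 C++ API, and the class and method names are invented for illustration.

```python
import concurrent.futures

class CameraWorker:
    """Stand-in for one of the 32 real CCD camera controllers (illustrative only)."""
    def __init__(self, cam_id):
        self.cam_id = cam_id

    def expose(self, seconds):
        # Real code would drive the camera hardware through its RTS2 device component here.
        return f"camera {self.cam_id}: {seconds} s exposure done"

class VirtualCamera:
    """Master side of the master-slave layout: one virtual camera module
    dispatching the same command to all slave cameras in parallel."""
    def __init__(self, n_cameras=32):
        self.workers = [CameraWorker(i) for i in range(n_cameras)]

    def expose_all(self, seconds):
        with concurrent.futures.ThreadPoolExecutor(max_workers=len(self.workers)) as pool:
            return list(pool.map(lambda w: w.expose(seconds), self.workers))

print(VirtualCamera().expose_all(900)[:2])
```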
Plate refractive camera model and its applications
NASA Astrophysics Data System (ADS)
Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai
2017-03-01
In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper aims at presenting a simple virtual camera model, called the plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refraction through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has several advantages in applications. First, the model can help to compute the caustic surface that represents the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Last but not least, the model contributes to putting forward plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
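The physical effect the model has to capture is the lateral displacement a ray suffers when it crosses a parallel plate. The classical plate-offset formula, derived directly from Snell's law, is sketched below; it only illustrates the underlying refraction geometry and is not the paper's pixel-wise viewpoint derivation.

```python
import numpy as np

def plate_lateral_shift(theta_i_deg, thickness, n_plate, n_outside=1.0):
    """Lateral displacement of a ray crossing a parallel transparent plate.

    theta_i_deg : incidence angle in degrees
    thickness   : plate thickness (shift is returned in the same unit)
    n_plate     : refractive index of the plate
    """
    ti = np.radians(theta_i_deg)
    tr = np.arcsin(n_outside * np.sin(ti) / n_plate)     # Snell's law inside the plate
    return thickness * np.sin(ti - tr) / np.cos(tr)      # classical parallel-plate offset

# Example: a 5 mm glass plate (n = 1.5) and a ray incident at 30 degrees
print(plate_lateral_shift(30.0, 5.0, 1.5))   # ≈ 0.97 mm
```

Because the shift grows with the incidence angle, each pixel effectively sees the scene from a slightly different viewpoint, which is what the pixel-wise viewpoint concept encodes.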
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as well as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and the view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the images. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
Wide Field Camera 3 Accommodations for HST Robotics Servicing Mission
NASA Technical Reports Server (NTRS)
Ginyard, Amani
2005-01-01
This slide presentation discusses the objectives of the Hubble Space Telescope (HST) Robotics Servicing and Deorbit Mission (HRSDM), reviews the Wide Field Camera 3 (WFC3), and also reviews the contamination accommodations for the WFC3. The objectives of the HRSDM are (1) to provide a disposal capability at the end of HST's useful life, (2) to upgrade the hardware by installing two new scientific instruments, replacing the Corrective Optics Space Telescope Axial Replacement (COSTAR) with the Cosmic Origins Spectrograph (COS) and the Wide Field/Planetary Camera-2 (WFPC2) with Wide Field Camera-3, and (3) to extend the scientific life of HST for a minimum of 5 years after servicing. Included are slides showing the Hubble Robotic Vehicle (HRV) and slides describing what the HRV contains. There are also slides describing the WFC3; one function of the WFC3 installation is to serve in part as a carrier for replacement gyroscopes for HST. Additional slides discuss the contamination requirements for the Rate Sensor Units (RSUs), which are part of the Rate Gyroscope Assembly on the WFC3.
Refocusing distance of a standard plenoptic camera.
Hahne, Christopher; Aggoun, Amar; Velisavljevic, Vladan; Fiebig, Susanne; Pesch, Matthias
2016-09-19
Recent developments in computational photography enabled variation of the optical focus of a plenoptic camera after image exposure, also known as refocusing. Existing ray models in the field simplify the camera's complexity for the purpose of image and depth map enhancement, but fail to satisfyingly predict the distance to which a photograph is refocused. By treating a pair of light rays as a system of linear functions, it is shown in this paper that its solution yields an intersection indicating the distance to a refocused object plane. Experimental work is conducted with different lenses and focus settings while comparing distance estimates with a stack of refocused photographs for which a blur metric has been devised. Quantitative assessments over a 24 m distance range suggest that predictions deviate by less than 0.35% in comparison to an optical design software package. The proposed refocusing estimator assists in predicting object distances, for example in the prototyping stage of plenoptic cameras, and will be an essential feature in applications demanding high precision in synthetic focus or where depth map recovery is done by analyzing a stack of refocused photographs.
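The central idea, intersecting a pair of rays written as linear functions, reduces to solving one linear equation. The sketch below shows this under an assumed parameterisation in which each ray is given by its lateral position at a reference plane and its slope; the notation is illustrative and does not follow the paper's symbols.

```python
import numpy as np

def refocus_distance(z0, x1, s1, x2, s2):
    """Distance at which two rays x_i(z) = x_i + s_i*(z - z0) intersect.

    z0      : reference plane (e.g. the main lens plane)
    x1, x2  : lateral positions of the two rays at z0
    s1, s2  : slopes of the two rays
    """
    if np.isclose(s1, s2):
        raise ValueError("parallel rays do not intersect")
    return z0 + (x2 - x1) / (s1 - s2)

# Example: two rays crossing 2 length units in front of the reference plane
print(refocus_distance(0.0, x1=0.001, s1=-0.0005, x2=-0.001, s2=0.0005))   # 2.0
```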
Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System
NASA Astrophysics Data System (ADS)
Bethmann, F.; Luhmann, T.
2012-07-01
The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
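The Monte-Carlo part of such a study follows a simple pattern: perturb synthetic image observations with measurement noise many times, run the estimation, and report the spread of the results. The generic sketch below illustrates that pattern only; the `estimate` function is a placeholder for the single-camera 6DOF solver, which is not reproduced here.

```python
import numpy as np

def monte_carlo_spread(estimate, true_observations, noise_sigma, n_runs=10000, seed=0):
    """Run `estimate` on noisy copies of the observations and return the mean and
    standard deviation of the results (generic Monte-Carlo deviation estimate)."""
    rng = np.random.default_rng(seed)
    results = []
    for _ in range(n_runs):
        noisy = true_observations + rng.normal(0.0, noise_sigma, size=np.shape(true_observations))
        results.append(estimate(noisy))
    results = np.asarray(results)
    return results.mean(axis=0), results.std(axis=0)

# Toy example: the "estimator" is an identity map, so the output spread mirrors the input noise
mean, std = monte_carlo_spread(lambda obs: obs, np.zeros(3), noise_sigma=0.05)
print(std)   # ≈ 0.05 in each component
```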
Visual fatigue modeling for stereoscopic video shot based on camera motion
NASA Astrophysics Data System (ADS)
Shi, Guozhong; Sang, Xinzhu; Yu, Xunbo; Liu, Yangdong; Liu, Jing
2014-11-01
As three-dimensional television (3-DTV) and 3-D movies become popular, visual discomfort limits further applications of 3D display technology. The causes of visual discomfort from stereoscopic video include conflicts between accommodation and convergence, excessive binocular parallax, fast motion of objects, and so on. Here, a novel method for evaluating visual fatigue is demonstrated. Influence factors including spatial structure, motion scale, and the comfortable zone are analyzed. According to the human visual system (HVS), people only need to converge their eyes on specific objects when the cameras and background are static; relative motion should be considered for different camera conditions, which determine different factor coefficients and weights. Compared with the traditional visual fatigue prediction model, a novel visual fatigue prediction model is presented. The visual fatigue degree is predicted using a multiple linear regression method combined with subjective evaluation. Consequently, each factor can reflect the characteristics of the scene, and a total visual fatigue score can be computed with the proposed algorithm. Compared with conventional algorithms that ignore the status of the camera, our approach exhibits reliable performance in terms of correlation with subjective test results.
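Fitting such a model amounts to a multiple linear regression of subjective fatigue scores on the per-shot factor values. The minimal sketch below uses made-up numbers purely to show the fitting step; the factor columns and ratings are illustrative placeholders, not data from the paper.

```python
import numpy as np

# Hypothetical per-shot factors (spatial structure, motion scale, comfort-zone score)
X = np.array([[0.2, 0.1, 0.8],
              [0.5, 0.6, 0.4],
              [0.9, 0.8, 0.2],
              [0.3, 0.4, 0.6],
              [0.7, 0.9, 0.1]])
y = np.array([1.2, 2.5, 4.1, 2.0, 4.4])        # subjective visual-fatigue ratings

# Multiple linear regression with an intercept, solved by least squares
A = np.hstack([X, np.ones((X.shape[0], 1))])
coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
predicted_fatigue = A @ coeffs
```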
Real-time object detection, tracking and occlusion reasoning
DOE Office of Scientific and Technical Information (OSTI.GOV)
Divakaran, Ajay; Yu, Qian; Tamrakar, Amir
A system for object detection and tracking includes technologies to, among other things, detect and track moving objects, such as pedestrians and/or vehicles, in a real-world environment, handle static and dynamic occlusions, and continue tracking moving objects across the fields of view of multiple different cameras.
Nie, Min; Ren, Jie; Li, Zhengjun; Niu, Jinhai; Qiu, Yihong; Zhu, Yisheng; Tong, Shanbao
2009-01-01
Without visual information, blind people face many hardships in shopping, reading, finding objects, and so on. We therefore developed a portable auditory guide system, called SoundView, for visually impaired people. This prototype system consists of a mini-CCD camera, a digital signal processing unit, and an earphone, working with built-in customizable auditory coding algorithms. Employing environment understanding techniques, SoundView processes the images from the camera and detects objects tagged with barcodes. The recognized objects in the environment are then encoded into stereo speech signals delivered to the blind user through the earphone. The user is able to recognize the type, motion state, and location of objects of interest with the help of SoundView. Compared with other visual assistance techniques, SoundView is object-oriented and has the advantages of low cost, small size, light weight, low power consumption, and easy customization.
NASA Astrophysics Data System (ADS)
Mundermann, Lars; Mundermann, Annegret; Chaudhari, Ajit M.; Andriacchi, Thomas P.
2005-01-01
Anthropometric parameters are fundamental for a wide variety of applications in biomechanics, anthropology, medicine, and sports. Recent technological advancements provide methods for constructing 3D surfaces directly. Of these new technologies, visual hull construction may be the most cost-effective yet sufficiently accurate method. However, the conditions influencing the accuracy of anthropometric measurements based on visual hull reconstruction are unknown. The purpose of this study was to evaluate the conditions that influence the accuracy of 3D shape-from-silhouette reconstruction of body segments as a function of the number of cameras, camera resolution, and object contours. The results demonstrate that the visual hulls lacked accuracy in concave regions and narrow spaces, but setups with a high number of cameras reconstructed a human form with an average accuracy of 1.0 mm. In general, setups with fewer than 8 cameras yielded largely inaccurate visual hull constructions, while setups with 16 or more cameras provided good volume estimations. Body segment volumes were obtained with an average error of 10% at a 640x480 resolution using 8 cameras. Changes in resolution did not significantly affect the average error. However, substantial decreases in error were observed with increasing number of cameras (33.3% using 4 cameras; 10.5% using 8 cameras; 4.1% using 16 cameras; 1.2% using 64 cameras).
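Shape-from-silhouette keeps only the 3D points that project inside every camera's silhouette, which is also why concave regions cannot be recovered. The naive voxel-carving sketch below illustrates that principle; it assumes each camera is described by a binary silhouette mask and a 3x4 projection matrix and is not the authors' reconstruction pipeline.

```python
import numpy as np

def visual_hull_occupancy(grid_points, silhouettes, projections):
    """Keep a 3D point only if it projects inside the silhouette of every camera.

    grid_points : (N, 3) candidate voxel centres
    silhouettes : list of binary masks (H, W), one per camera
    projections : list of 3x4 projection matrices, one per camera
    """
    homog = np.hstack([grid_points, np.ones((len(grid_points), 1))])
    keep = np.ones(len(grid_points), dtype=bool)
    for mask, P in zip(silhouettes, projections):
        uvw = homog @ P.T
        u = np.round(uvw[:, 0] / uvw[:, 2]).astype(int)
        v = np.round(uvw[:, 1] / uvw[:, 2]).astype(int)
        inside = (u >= 0) & (u < mask.shape[1]) & (v >= 0) & (v < mask.shape[0])
        keep &= inside
        keep[inside] &= mask[v[inside], u[inside]] > 0   # must fall on this camera's silhouette
    return keep
```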
NASA Astrophysics Data System (ADS)
Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind
2016-02-01
Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~ 90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.
Determination of feature generation methods for PTZ camera object tracking
NASA Astrophysics Data System (ADS)
Doyle, Daniel D.; Black, Jonathan T.
2012-06-01
Object detection and tracking using computer vision (CV) techniques have been widely applied to sensor fusion applications. Many papers continue to be written that speed up performance and increase the learning of artificially intelligent systems through improved algorithms, workload distribution, and information fusion. Military applications of real-time tracking systems are becoming more and more complex, with an ever-increasing need for fusion and CV techniques to actively track and control dynamic systems. Examples include the use of metrology systems for tracking and measuring micro air vehicles (MAVs) and autonomous navigation systems for controlling MAVs. This paper seeks to contribute to the determination of the tracking algorithms that best track a moving object using a pan/tilt/zoom (PTZ) camera, applicable to both of the examples presented. The feature generation algorithms compared in this paper are the trained Scale-Invariant Feature Transform (SIFT) and Speeded Up Robust Features (SURF), the Mixture of Gaussians (MoG) background subtraction method, the Lucas-Kanade optical flow method (2000), and the Farneback optical flow method (2003). The matching algorithm used in this paper for the trained feature generation algorithms is the Fast Library for Approximate Nearest Neighbors (FLANN). The BSD-licensed OpenCV library is used extensively to demonstrate the viability of each algorithm and its performance. Initial testing is performed on a sequence of images using a stationary camera. Further testing is performed on a sequence of images in which the PTZ camera is moving in order to capture the moving object. Comparisons are made based upon accuracy, speed, and memory.
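Two of the compared approaches are easy to reproduce with OpenCV's Python bindings: sparse SIFT features matched with FLANN, and dense Farneback optical flow. The sketch below shows only those two calls on a pair of frames; the file names are placeholders, and the parameter values are common defaults rather than the settings used in the paper.

```python
import cv2

prev = cv2.imread("frame_000.png", cv2.IMREAD_GRAYSCALE)   # placeholder file names
curr = cv2.imread("frame_001.png", cv2.IMREAD_GRAYSCALE)

# 1) SIFT keypoints matched with FLANN and Lowe's ratio test
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(prev, None)
kp2, des2 = sift.detectAndCompute(curr, None)
flann = cv2.FlannBasedMatcher({"algorithm": 1, "trees": 5}, {"checks": 50})
good = [m for m, n in flann.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

# 2) Dense Farneback optical flow between the two frames
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)
```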
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
Space telescope low scattered light camera - A model
NASA Technical Reports Server (NTRS)
Breckinridge, J. B.; Kuper, T. G.; Shack, R. V.
1982-01-01
A design approach for a camera to be used with the space telescope is given. Camera optics relay the system pupil onto an annular Gaussian ring apodizing mask to control scattered light. One- and two-dimensional models of ripple on the primary mirror were calculated. Scattered light calculations using ripple amplitudes between wavelength/20 and wavelength/200, with spatial correlations of the ripple across the primary mirror between 0.2 and 2.0 centimeters, indicate that the detection of an object a billion times fainter than a bright source in the field is possible. Detection of a Jovian-type planet in orbit about Alpha Centauri with a camera on the space telescope may be possible.
Structural analysis of color video camera installation on tank 241AW101 (2 Volumes)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Strehlow, J.P.
1994-08-24
A video camera is planned to be installed on the radioactive storage tank 241AW101 at the DOE's Hanford Site in Richland, Washington. The camera will occupy the 20-inch port of the Multiport Flange riser, which is to be installed on riser 5B of tank 241AW101 (3,5,10). The objective of the project reported herein was to perform a seismic analysis and evaluation of the structural components of the camera for a postulated Design Basis Earthquake (DBE) per the reference Structural Design Specification (SDS) document (6). The details of the supporting engineering calculations are documented in URS/Blume Calculation No. 66481-01-CA-03 (1).
Geometrical calibration television measuring systems with solid state photodetectors
NASA Astrophysics Data System (ADS)
Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.
2000-11-01
Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, and criminalistics. Measurement by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. There are many television measuring systems that use expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason the inexpensive mountingless camera SK1004CP (format 1/3', cost up to 40$) and the Aver2000 frame grabber were used in the experiments.
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside the HMD, which displays an image of a regular grid that is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated an HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements.
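Once the marker centroids are expressed against the HMD grid, the problem becomes a standard camera calibration. The sketch below shows that final step with OpenCV's calibrateCamera; the point correspondences are synthesised from a known camera purely so the example runs, whereas in the method above they would come from the tracked calibration object.

```python
import numpy as np
import cv2

# Synthesise correspondences from a known camera so the call is runnable (illustrative only)
K_true = np.array([[800.0, 0.0, 640.0], [0.0, 800.0, 512.0], [0.0, 0.0, 1.0]])
board = np.zeros((9 * 6, 3), np.float32)
board[:, :2] = np.mgrid[0:9, 0:6].T.reshape(-1, 2) * 0.05          # planar 9x6 grid, 5 cm spacing

obj_pts, img_pts = [], []
for rvec, tz in [((0.2, 0.1, 0.0), 0.6), ((-0.1, 0.3, 0.1), 0.8), ((0.3, -0.2, -0.1), 1.0)]:
    proj, _ = cv2.projectPoints(board, np.array(rvec), np.array([0.0, 0.0, tz]),
                                K_true, np.zeros(5))
    obj_pts.append(board)
    img_pts.append(proj.reshape(-1, 2).astype(np.float32))

# Recover intrinsic (K) and extrinsic (rvecs, tvecs) parameters from the correspondences
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, (1280, 1024), None, None)
print("reprojection RMS:", rms)     # near zero for this noise-free synthetic data
```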
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
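Range from spot parallax follows the classical triangulation relation Z = f·B/d. The tiny sketch below illustrates that relation and the resulting depth sensitivity; the baseline, focal length and parallax values are invented for illustration and are not the camera's actual design parameters.

```python
def range_from_parallax(baseline_m, focal_px, disparity_px):
    """Classical triangulation: object distance from baseline, focal length and parallax."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers: 0.1 m baseline, 2000 px focal length, 200 px parallax -> 1 m range
z = range_from_parallax(0.10, 2000.0, 200.0)
# Depth sensitivity to a 0.1 px parallax error at that range: dZ = Z^2 / (f * B) * dd
dz = z ** 2 / (2000.0 * 0.10) * 0.1
print(z, dz)   # 1.0 m range, about 0.5 mm depth change per 0.1 px
```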
A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration
Rau, Jiann-Yeou; Yeh, Po-Chia
2012-01-01
The generation of photo-realistic 3D models is an important task for the digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters remain the same as the calibrated results, even when the target has changed. Based on this invariant character, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The images observed include those of a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. This demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to the large quantity of antiques stored in museums. PMID:23112656
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; Macleod, Todd; Gagliano, Larry
2015-01-01
On-orbit tracking and characterization of small debris is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and proper mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of flying as a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various-sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. Existing research using visible light cameras has mainly focused on human detection during daytime hours when there is outside light, but human detection during nighttime hours, when there is no outside light, is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras, or thermal cameras, have been used. However, NIR illuminators have limitations in terms of illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. Thermal cameras, in turn, are still costly, which makes them difficult to install and use in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused either on objects at a short distance in indoor environments or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
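The detection backbone in such an approach is a convolutional neural network that scores candidate regions of the night-time image as human or background. The PyTorch sketch below shows a deliberately small classifier of that kind; the architecture, layer sizes and input resolution are illustrative assumptions and do not reproduce the network used in the paper.

```python
import torch
import torch.nn as nn

class NightHumanClassifier(nn.Module):
    """Tiny CNN that scores an image patch as human vs. background (illustrative only)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 2)      # two classes: human / non-human

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

# Score a batch of four 64x64 candidate patches
logits = NightHumanClassifier()(torch.randn(4, 3, 64, 64))
print(logits.shape)   # torch.Size([4, 2])
```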
Small Orbital Stereo Tracking Camera Technology Development
NASA Technical Reports Server (NTRS)
Bryan, Tom; MacLeod, Todd; Gagliano, Larry
2016-01-01
On-orbit tracking and characterization of small debris is a technical gap in current National Space Situational Awareness that must be closed to safeguard orbital assets and crew, since small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the proper level of MOD impact shielding and proper mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of flying as a secondary payload on a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various-sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and on military targeting cameras, and using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.
Prototypic Development and Evaluation of a Medium Format Metric Camera
NASA Astrophysics Data System (ADS)
Hastedt, H.; Rofallski, R.; Luhmann, T.; Rosenbauer, R.; Ochsner, D.; Rieke-Zapp, D.
2018-05-01
Engineering applications require high-precision 3D measurement techniques for object sizes that vary between small volumes (2-3 m in each direction) and large volumes (around 20 x 20 x 1-10 m). The requested precision in object space (1σ RMS) is defined to be within 0.1-0.2 mm for large volumes and less than 0.01 mm for small volumes. In particular for large-volume applications, the availability of a metric camera would offer several advantages: 1) high-quality optical components and stabilisation allow for a stable interior geometry of the camera itself, 2) a stable geometry leads to a stable interior orientation that enables a priori camera calibration, and 3) a higher resulting precision can be expected. This article presents the development and accuracy evaluation of a new metric camera, the ALPA 12 FPS add|metric. Its general accuracy potential is tested against calibrated lengths in a small-volume test environment based on the German guideline VDI/VDE 2634.1 (2002). Maximum length measurement errors of less than 0.025 mm are achieved across the different scenarios tested. The accuracy potential for large volumes is estimated within a feasibility study on the application of photogrammetric measurements for deformation estimation on a large wooden shipwreck in the German Maritime Museum. An accuracy of 0.2-0.4 mm is reached for a length of 28 m (given by a distance from a laser tracker network measurement). All analyses have shown high stability of the interior orientation of the camera and indicate the applicability of a priori camera calibration for subsequent 3D measurements.
Sub-Camera Calibration of a Penta-Camera
NASA Astrophysics Data System (ADS)
Jacobsen, K.; Gerke, M.
2016-03-01
Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated at the block centres, while the inclined images outside the block centres are satisfactorily but not very strongly connected. This leads to very high values for the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras with a size exceeding 5 μm, even though they are described as negligible in the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between interior and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, a bundle block adjustment with a satisfactory set of additional parameters, checked against the remaining systematic errors, is required to exploit the whole geometric potential of the penta camera. Especially for object points on facades, often seen in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of the sub-cameras by bundle block adjustment suffers from the correlation of the interior to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation to the exterior orientation.
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high-accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full-field-of-view information about the objects of interest. The information obtained relates to changes in the shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence the selection and applicability of an optical technique include the sensitivity, accuracy, and precision required for a particular application. In this paper, sensitivity, accuracy, and precision characteristics of quantitative optical metrology techniques, and specifically of optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
LivePhantom: Retrieving Virtual World Light Data to Real Environments.
Kolivand, Hoshang; Billinghurst, Mark; Sunar, Mohd Shahrizal
2016-01-01
To achieve realistic Augmented Reality (AR), shadows play an important role in creating a 3D impression of a scene. Casting virtual shadows on real and virtual objects is one of the topics of research being conducted in this area. In this paper, we propose a new method for creating complex AR indoor scenes using real-time depth detection to cast virtual shadows on virtual and real environments. A Kinect camera is used to produce a depth map of the physical scene, which is merged into a single real-time transparent tacit surface. Once this is created, the camera's position can be tracked from the reconstructed 3D scene. Real objects are represented by virtual object phantoms in the AR scene, enabling users holding a webcam and a standard Kinect camera to capture and reconstruct environments simultaneously. The tracking capability of the algorithm is shown, and the findings are assessed using qualitative and quantitative methods, making comparisons with previous AR phantom generation applications. The results demonstrate the robustness of the technique for realistic indoor rendering in AR systems.
Europe's space camera unmasks a cosmic gamma-ray machine
NASA Astrophysics Data System (ADS)
1996-11-01
The new-found neutron star is the visible counterpart of a pulsating radio source, Pulsar 1055-52. It is a mere 20 kilometres wide. Although the neutron star is very hot, at about a million degrees C, very little of its radiant energy takes the form of visible light. It emits mainly gamma-rays, an extremely energetic form of radiation. By examining it at visible wavelengths, astronomers hope to figure out why Pulsar 1055-52 is the most efficient generator of gamma-rays known so far, anywhere in the Universe. The Faint Object Camera found Pulsar 1055-52 in near ultraviolet light at 3400 angstroms, a little shorter in wavelength than the violet light at the extremity of the human visual range. Roberto Mignani, Patrizia Caraveo and Giovanni Bignami of the Istituto di Fisica Cosmica in Milan, Italy, report its optical identification in a forthcoming issue of Astrophysical Journal Letters (1 January 1997). The formal name of the object is PSR 1055-52. Evading the glare of an adjacent star The Italian team had tried since 1988 to spot Pulsar 1055-52 with two of the most powerful ground-based optical telescopes in the Southern Hemisphere. These were the 3.6-metre Telescope and the 3.5-metre New Technology Telescope of the European Southern Observatory at La Silla, Chile. Unfortunately an ordinary star 100,000 times brighter lay in almost the same direction in the sky, separated from the neutron star by only a thousandth of a degree. The Earth's atmosphere defocused the star's light sufficiently to mask the glimmer from Pulsar 1055-52. The astronomers therefore needed an instrument in space. The Faint Object Camera offered the best precision and sensitivity to continue the hunt. Devised by European astronomers to complement the American wide field camera in the Hubble Space Telescope, the Faint Object Camera has a relatively narrow field of view. It intensifies the image of a faint object by repeatedly accelerating electrons from photo-electric films, so as to produce brighter flashes when the electrons hit a phosphor screen. Since Hubble's launch in 1990, the Faint Object Camera has examined many different kinds of cosmic objects, from the moons of Jupiter to remote galaxies and quasars. When the space telescope's optics were corrected at the end of 1993 the Faint Object Camera immediately celebrated the event with the discovery of primeval helium in intergalactic gas. In their search for Pulsar 1055-52, the astronomers chose a near-ultraviolet filter to sharpen the Faint Object Camera's vision and reduce the adjacent star's huge advantage in intensity. In May 1996, the Hubble Space Telescope operators aimed at the spot which radio astronomers had indicated as the source of the radio pulsations of Pulsar 1055-52. The neutron star appeared precisely in the centre of the field of view, and it was clearly separated from the glare of the adjacent star. At magnitude 24.9, Pulsar 1055-52 was comfortably within the power of the Faint Object Camera, which can see stars 20 times fainter still. "The Faint Object Camera is the instrument of choice for looking for neutron stars," says Giovanni Bignami, speaking on behalf of the Italian team. "Whenever it points to a judiciously selected neutron star it detects the corresponding visible or ultraviolet light. The Faint Object Camera has now identified three neutron stars in that way, including Pulsar 1055-52, and it has examined a few that were first detected by other instruments."
Mysteries of the neutron stars The importance of the new result can be gauged by the tally of only eight neutron stars seen so far at optical wavelengths, compared with about 760 known from their radio pulsations, and about 21 seen emitting X-rays. Since the first pulsar was detected by radio astronomers in Cambridge, England, nearly 30 years ago, theorists have come to recognize neutron stars as fantastic objects. They are veritable cosmic laboratories in which Nature reveals the behaviour of matter under extreme stress, just one step short of a black hole. A neutron star is created by the force of a supernova explosion in a large star, which crushes the star's core to an unimaginable density. A mass greater than the Sun's is squeezed into a ball no wider than a city. The gravity and magnetic fields are billions of times stronger than the Earth's. The neutron star revolves rapidly, which causes it to wink like a cosmic lighthouse as it swivels its magnetic poles towards and away from the Earth. Pulsar 1055-52 spins at five revolutions per second. At its formation in a supernova explosion, a neutron star is endowed with two main forms of energy. One is heat, at temperatures of millions of degrees, which the neutron star radiates mainly as X-rays, with only a small proportion emerging as visible light. The other power supply for the neutron star comes from its high rate of spin and a gradual slowing of the rotation. By a variety of processes involving the magnetic field and accelerated particles in the neutron star's vicinity, the spin energy of the neutron star is converted into radiation at many different wavelengths, from radio waves to gamma-rays. The exceptional gamma-ray intensity of Pulsar 1055-52 was first appreciated in observations by NASA's Compton Gamma Ray Observatory. The team in Milan recently used the Hubble Space Telescope to find the distance of the peculiar neutron star Geminga, which is not detectable by radio pulses but is a strong source of gamma-rays (see ESA Information Note 04-96, 28 March 1996). Pulsar 1055-52 is even more powerful in that respect. About 50 per cent of its radiant energy is gamma-rays, compared with 15 per cent from Geminga and 0.1 per cent from the famous Crab Pulsar, the first neutron star seen by visible light. Making the gamma-rays requires the acceleration of electrons through billions of volts. The magnetic environment of Pulsar 1055-52 fashions a natural gamma-ray machine of amazing power. The orientation of the neutron star's magnetic field with respect to the Earth may contribute to its brightness in gamma-rays. Geminga, Pulsar 1055-52 and another object, Pulsar 0656+14, make a trio that the Milanese astronomers call the Three Musketeers. All have been observed with the Faint Object Camera. They are isolated, elderly neutron stars, some hundreds of thousands of years old, contrasting with the 942 year-old Crab Pulsar which is still surrounded by dispersing debris of a supernova seen by Chinese astronomers in the 11th Century. The mysteries of the neutron stars will keep astronomers busy for years to come, and the Faint Object Camera in the Hubble Space Telescope will remain the best instrument for spotting their faint visible light. The Hubble Space Telescope is a project of international cooperation between NASA and the European Space Agency (ESA). The Space Telescope Science Institute is operated by the Association of Universities for Research in Astronomy, Inc. 
(AURA) for NASA, under contract with the Goddard Space Flight Center, Greenbelt, Maryland. Note to editors: An image is available of (i) PSR 1055-52 seen by ESA's Faint Object Camera in the Hubble Space Telescope, and (ii) the same region of the sky seen by the European Southern Observatory's New Technology Telescope, with the position of PSR 1055-52 indicated. The image is available on the World Wide Web at http://ecf.hq.eso.org/stecf-pubrel.html http://www.estec.esa.nl/spdwww/h2000/html/snlmain.htm
Lattice Dynamics of Rare Gas Multilayers on the Ag(111) Surface. Theory and Experiment.
1985-08-01
phonon spectra generated from some simpler models, such as a nearest-neighbor central-force model, and also use of the Lennard-Jones potential ... potentials and one from the Lennard-Jones 6-12 potential, for the ... rare gases. The value for k0 was defined from the experimentally ... derivative divided by the adsorbate mass. It is immediately obvious that the Barker pair potential value for k0 is about 50% larger than the Lennard-Jones
Proposed Navy Software Acquisition Improvement Strategy
2009-03-16
[Briefing excerpt: acquisition lifecycle phases (Production and Deployment; Operations and Support) and milestones (PRR, IOC, FOC, OTRR). DoD/ASN(RDA) policies call for Government SMEs to define system requirements and support milestone reviews ... of the SW, but with Government software SME oversight and insight ... at the component and segment levels (CSCIs, CSCs) is not sufficient to ensure and meet OA goals. Statement A: Approved for Public Release; Distribution is Unlimited.]
Astronaut Ronald Evans photographed during transearth coast EVA
NASA Technical Reports Server (NTRS)
1972-01-01
Astronaut Ronald E. Evans is photographed performing extravehicular activity (EVA) during the Apollo 17 spacecraft's transearth coast. During his EVA, Command Module pilot Evans retrieved film cassettes from the Lunar Sounder, Mapping Camera, and Panoramic Camera. The cylindrical object at Evans' left side is the mapping camera cassette. The total time for the transearth EVA was one hour, seven minutes, 19 seconds, starting at a ground elapsed time of 257:25 (2:28 p.m.) and ending at a ground elapsed time of 258:42 (3:35 p.m.) on Sunday, December 17, 1972.
Optical synthesizer for a large quadrant-array CCD camera: Center director's discretionary fund
NASA Technical Reports Server (NTRS)
Hagyard, Mona J.
1992-01-01
The objective of this program was to design and develop an optical device, an optical synthesizer, that focuses four contiguous quadrants of a solar image onto four spatially separated CCD arrays that are part of a unique CCD camera system. This camera and the optical synthesizer will be part of the new NASA-Marshall Experimental Vector Magnetograph, an instrument developed to measure the Sun's magnetic field as accurately as present technology allows. The tasks undertaken in the program are outlined and the final detailed optical design is presented.
Hubble Space Telescope, Faint Object Spectrograph
NASA Technical Reports Server (NTRS)
1981-01-01
This drawing illustrates the Hubble Space Telescope's (HST's) Faint Object Spectrograph (FOS). The HST's two spectrographs, the Goddard High-Resolution Spectrograph and the FOS, can detect a broader range of wavelengths than is possible from the Earth because there is no atmosphere to absorb certain wavelengths. Scientists can determine the chemical composition, temperature, pressure, and turbulence of the stellar atmosphere producing the light, all from spectral data. The FOS can detect detail in very faint objects, such as those at great distances, and light ranging from ultraviolet to red spectral bands. Both spectrographs operate in essentially the same way. The incoming light passes through a small entrance aperture, then passes through filters and diffraction gratings that work like prisms. The filter or grating used determines what range of wavelengths will be examined and in what detail. The spectrograph detectors then record the strength of each wavelength band and send it back to Earth. The purpose of the HST, the most complex and sensitive optical telescope ever made, is to study the cosmos from a low-Earth orbit. By placing the telescope in space, astronomers are able to collect data that is free of the Earth's atmosphere. The HST views galaxies, stars, planets, comets, possibly other solar systems, and even unusual phenomena such as quasars, with 10 times the clarity of ground-based telescopes. The HST was deployed from the Space Shuttle Discovery (STS-31 mission) into Earth orbit in April 1990. The Marshall Space Flight Center had responsibility for design, development, and construction of the HST. The Perkin-Elmer Corporation, in Danbury, Connecticut, developed the optical system and guidance sensors.
Aerial surveillance based on hierarchical object classification for ground target detection
NASA Astrophysics Data System (ADS)
Vázquez-Cervantes, Alberto; García-Huerta, Juan-Manuel; Hernández-Díaz, Teresa; Soto-Cajiga, J. A.; Jiménez-Hernández, Hugo
2015-03-01
Unmanned aerial vehicles have become important in surveillance applications due to their flexibility and ability to inspect and move between different regions of interest. The instrumentation and autonomy of these vehicles have increased; i.e., a camera sensor is now integrated. Mounted cameras allow the flexibility to monitor several regions of interest, displacing and changing the camera view. A common task performed by this kind of vehicle is object localization and tracking. This work presents a novel hierarchical algorithm to detect and locate objects. The algorithm is based on a detection-by-example approach; that is, the target evidence is provided at the beginning of the vehicle's route. Afterwards, the vehicle inspects the scenario, detecting all similar objects through UTM-GPS coordinate references. The detection process consists of sampling information from the target object. The samples are encoded in a hierarchical tree with different sampling densities. The coding space corresponds to a very high-dimensional binary space. Properties such as independence and associative operators are defined in this space to construct a relation between the target object and a set of selected features. Different sampling densities are used to discriminate from general to particular features that correspond to the target. The hierarchy is used as a way to adapt the complexity of the algorithm to the optimized battery duty cycle of the aerial device. Finally, this approach is tested in several outdoor scenarios, proving that the hierarchical algorithm works efficiently under several conditions.
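As a rough illustration of the coarse-to-fine idea described above (not the authors' actual encoder, whose details the abstract does not give), the following Python sketch matches a candidate image patch against the target example using binary codes computed at several sampling densities; all function names, sampling steps and thresholds are hypothetical.

    import numpy as np

    def hamming_fraction(a, b):
        # Fraction of differing bits between two equal-length binary codes.
        return np.count_nonzero(a != b) / a.size

    def binary_code(patch, step):
        # Hypothetical encoder: subsample the patch at the given step and
        # threshold against its mean intensity to obtain a binary code.
        sub = patch[::step, ::step]
        return (sub > sub.mean()).astype(np.uint8).ravel()

    def hierarchical_match(target, candidate,
                           steps=(8, 4, 2), thresholds=(0.30, 0.25, 0.20)):
        # Coarse-to-fine comparison: cheap, low-density codes reject most
        # candidates early; only promising ones are compared at finer densities.
        # target and candidate are grey-level patches of identical shape.
        for step, threshold in zip(steps, thresholds):
            if hamming_fraction(binary_code(target, step),
                                binary_code(candidate, step)) > threshold:
                return False
        return True

The early-exit structure is what lets the workload shrink when battery budget is tight: lowering the finest density trades accuracy for fewer comparisons.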
NASA Technical Reports Server (NTRS)
Papanyan, Valeri; Oshle, Edward; Adamo, Daniel
2008-01-01
Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As examples of post-EVA analyses, we present the Floating Potential Probe (FPP) and the Russian "Orlan" space suit jettisons, as well as the near-real-time (provided within several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video clips. In another, "quick-look" approach used in near-real time, the orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); the location of the jettisoned object was then calculated for only a few frames of the two synchronized movies. Keywords: photogrammetry, International Space Station, jettisons, image analysis.
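Once the two camera orientations are known, the per-frame position computation described above amounts to standard two-view triangulation. The sketch below shows the textbook linear (DLT) triangulation step in Python, not the team's own software; the 3x4 projection matrices P1 and P2 are assumed to have been assembled from the known ISS camera poses and intrinsics.

    import numpy as np

    def triangulate(P1, P2, x1, x2):
        # Linear (DLT) triangulation of one 3-D point.
        # P1, P2 : 3x4 camera projection matrices (from the known camera poses).
        # x1, x2 : (u, v) image coordinates of the object in each camera.
        # Returns the 3-D point in the common (e.g. ISS) reference frame.
        A = np.array([
            x1[0] * P1[2] - P1[0],
            x1[1] * P1[2] - P1[1],
            x2[0] * P2[2] - P2[0],
            x2[1] * P2[2] - P2[1],
        ])
        _, _, Vt = np.linalg.svd(A)
        X = Vt[-1]
        return X[:3] / X[3]

    # Differencing positions triangulated at two frame times gives the velocity:
    # v = (X_t2 - X_t1) / (t2 - t1)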
Smart-Phone Based Magnetic Levitation for Measuring Densities
Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas
2015-01-01
Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615
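Since the abstract states that levitation heights were calibrated against micro-objects of known density, a minimal calibration-and-estimation sketch in Python is shown below. The linear height-to-density model and all numeric values are placeholders assumed for illustration, not data or the exact model from the paper.

    import numpy as np

    # Calibration beads: known densities (g/mL) and measured levitation heights (mm).
    # Placeholder values only.
    known_density = np.array([1.02, 1.05, 1.09, 1.13])
    known_height  = np.array([0.95, 0.72, 0.44, 0.18])

    # Fit a linear calibration height -> density, a reasonable first-order model
    # when the magnetic field gradient is roughly uniform across the capillary.
    slope, intercept = np.polyfit(known_height, known_density, 1)

    def estimate_density(levitation_height_mm):
        # Density estimate for an unknown micro-object from its levitation height.
        return slope * levitation_height_mm + intercept

    print(estimate_density(0.60))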
The LST scientific instruments
NASA Technical Reports Server (NTRS)
Levin, G. M.
1975-01-01
Seven scientific instruments are presently being studied for use with the Large Space Telescope (LST). These instruments are the F/24 Field Camera, the F/48-F/96 Planetary Camera, the High Resolution Spectrograph, the Faint Object Spectrograph, the Infrared Photometer, and the Astrometer. These instruments are being designed as facility instruments to be replaceable during the life of the Observatory.
Integrating motion-detection cameras and hair snags for wolverine identification
Audrey J. Magoun; Clinton D. Long; Michael K. Schwartz; Kristine L. Pilgrim; Richard E. Lowell; Patrick Valkenburg
2011-01-01
We developed an integrated system for photographing a wolverine's (Gulo gulo) ventral pattern while concurrently collecting hair for microsatellite DNA genotyping. Our objectives were to 1) test the system on a wild population of wolverines using an array of camera and hair-snag (C&H) stations in forested habitat where wolverines were known to occur, 2)...
The Antarctic Ice Borehole Probe
NASA Technical Reports Server (NTRS)
Behar, A.; Carsey, F.; Lane, A.; Engelhardt, H.
2000-01-01
The Antarctic Ice Borehole Probe mission is a glaciological investigation, scheduled for November 2000-2001, that will place a probe in a hot-water drilled hole in the West Antarctic ice sheet. The objectives of the probe are to observe ice-bed interactions with a downward looking camera, and ice inclusions and structure, including hypothesized ice accretion, with a side-looking camera.
Why Do Photo Finish Images Look Weird?
ERIC Educational Resources Information Center
Gregorcic, Bor; Planinsic, Gorazd
2012-01-01
This paper deals with effects that appear on photographs of rotating objects when taken by a photo finish camera, a rolling shutter camera or a computer scanner. These effects are very similar to Roget's palisade illusion. A simple quantitative analysis of the images is also provided. The effects are explored using a computer scanner in a way that…
The Topological Panorama Camera: A New Tool for Teaching Concepts Related to Space and Time.
ERIC Educational Resources Information Center
Gelphman, Janet L.; And Others
1992-01-01
Included are the description, operating characteristics, uses, and future plans for the Topological Panorama Camera, which is an experimental, robotic photographic device capable of producing visual renderings of the mathematical characteristics of an equation in terms of position changes of an object or in terms of the shape of the space…
NASA Technical Reports Server (NTRS)
1982-01-01
Model II Multispectral Camera is an advanced aerial camera that provides optimum enhancement of a scene by recording spectral signatures of ground objects only in narrow, preselected bands of the electromagnetic spectrum. Its photos have applications in such areas as agriculture, forestry, water pollution investigations, soil analysis, geologic exploration, water depth studies and camouflage detection. The target scene is simultaneously photographed in four separate spectral bands. Using a multispectral viewer, such as their Model 75, Spectral Data creates a color image from the black-and-white positives taken by the camera. With this optical image analysis unit, all four bands are superimposed in accurate registration and illuminated with combinations of blue, green, red, and white light. The best color combination for displaying the target object is selected and printed. Spectral Data Corporation produces several types of remote sensing equipment and also provides aerial survey, image processing and analysis, and a number of other remote sensing services.
Traffic intensity monitoring using multiple object detection with traffic surveillance cameras
NASA Astrophysics Data System (ADS)
Hamdan, H. G. Muhammad; Khalifah, O. O.
2017-11-01
Object detection and tracking is a field of research with many applications in the current generation, given the increasing number of cameras on the streets and lower costs for the Internet of Things (IoT). In this paper, a traffic intensity monitoring system based on the macroscopic urban traffic model is proposed, using computer vision as its source. The input of the program is extracted from a traffic surveillance camera, on which another program running a neural network classifier that can identify and differentiate vehicle types is implemented. The neural network toolbox is trained with positive and negative inputs to increase accuracy. The accuracy of the program is compared with other related work, and the trend of traffic intensity on a road is also calculated. Lastly, the limitations and future work are discussed.
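The abstract does not spell out how traffic intensity is derived from the per-frame detections. The sketch below shows one plausible reduction in Python: count unique tracked vehicles per class and normalize by observation time to get vehicles per minute. The data layout and the use of track IDs are assumptions for illustration.

    def traffic_intensity(detections_per_frame, fps):
        # detections_per_frame: list (one entry per video frame) of lists of
        # (vehicle_type, track_id) tuples produced by the detector/tracker.
        # Returns vehicles per minute, broken down by vehicle class.
        counts = {}
        seen_ids = set()
        for frame in detections_per_frame:
            for vehicle_type, track_id in frame:
                if track_id not in seen_ids:        # count each vehicle once
                    seen_ids.add(track_id)
                    counts[vehicle_type] = counts.get(vehicle_type, 0) + 1
        minutes = len(detections_per_frame) / fps / 60.0
        return {vt: n / minutes for vt, n in counts.items()}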
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor, high-color-fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of a display and show that it achieves a color accuracy of ΔE < 0.01.
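The quoted ΔE is presumably a CIELAB colour difference computed from the XYZ measurements. For reference, a standard XYZ-to-CIELAB conversion and CIE76 ΔE are sketched below in Python; the D65 white point is an assumption, since the abstract does not state which illuminant was used.

    import numpy as np

    def xyz_to_lab(xyz, white=(95.047, 100.0, 108.883)):
        # Convert CIE XYZ to CIELAB; D65 white point assumed.
        def f(t):
            delta = 6.0 / 29.0
            return np.cbrt(t) if t > delta ** 3 else t / (3 * delta ** 2) + 4.0 / 29.0
        fx, fy, fz = (f(v / w) for v, w in zip(xyz, white))
        return np.array([116.0 * fy - 16.0, 500.0 * (fx - fy), 200.0 * (fy - fz)])

    def delta_e(xyz_measured, xyz_reference):
        # CIE76 colour difference between two XYZ measurements.
        return np.linalg.norm(xyz_to_lab(xyz_measured) - xyz_to_lab(xyz_reference))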
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe
2012-01-01
In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way of measuring depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, the IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining depth information. PMID:22778608
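The abstract names the ingredients of the model (exposure time, radiant intensity, distance, grey-level standard deviation) but not its exact form. The Python sketch below fits a generic power-law surrogate with SciPy and inverts it for distance; the model form and the calibration numbers are assumptions for illustration only, not the authors' expression.

    import numpy as np
    from scipy.optimize import curve_fit

    def sigma_model(X, k, a, b):
        # Hypothetical model: grey-level std dev as a function of exposure time t,
        # IRED radiant intensity I and camera-emitter distance d.
        t, I, d = X
        return k * (I * t) ** a / d ** b

    # Calibration measurements (placeholders): exposure times (s), radiant
    # intensities, known distances (m), measured std dev in the IRED region.
    t = np.array([1e-3, 1e-3, 2e-3, 2e-3]); I = np.array([40.0, 40.0, 40.0, 40.0])
    d = np.array([1.0, 2.0, 1.0, 2.0]);     sigma = np.array([30.0, 14.0, 55.0, 26.0])

    (k, a, b), _ = curve_fit(sigma_model, (t, I, d), sigma, p0=(1.0, 1.0, 1.0))

    def estimate_distance(sigma_meas, t_meas, I_meas):
        # Invert the fitted model to recover distance from a new measurement.
        return (k * (I_meas * t_meas) ** a / sigma_meas) ** (1.0 / b)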
Sky camera geometric calibration using solar observations
Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan
2016-09-05
A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production, where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
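A stripped-down version of this calibration idea can be written as a least-squares fit of the focal length and principal point to the detected sun positions, assuming the standard equisolid-angle projection r = 2 f sin(theta/2), a zenith-pointing camera with north up, and no tilt or distortion terms (the published camera model is more general). The sun's zenith and azimuth angles are assumed to come from a solar position algorithm.

    import numpy as np
    from scipy.optimize import least_squares

    def project_sun(params, zenith, azimuth):
        # Equisolid-angle fisheye projection: r = 2 f sin(theta / 2).
        f, cx, cy = params
        r = 2.0 * f * np.sin(zenith / 2.0)
        return np.stack([cx + r * np.sin(azimuth), cy + r * np.cos(azimuth)], axis=1)

    def residuals(params, zenith, azimuth, sun_pixels):
        # Pixel-space mismatch between predicted and detected sun positions.
        return (project_sun(params, zenith, azimuth) - sun_pixels).ravel()

    def calibrate(zenith, azimuth, sun_pixels, image_size=(1536, 1536)):
        # zenith/azimuth: sun angles (radians) over a clear day; sun_pixels: Nx2
        # detected sun centroids. image_size is only used for the initial guess.
        x0 = np.array([image_size[0] / 4.0, image_size[0] / 2.0, image_size[1] / 2.0])
        fit = least_squares(residuals, x0, args=(zenith, azimuth, sun_pixels))
        return fit.x  # focal length (pixels) and principal point (cx, cy)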
Automatic Recognition Of Moving Objects And Its Application To A Robot For Picking Asparagus
NASA Astrophysics Data System (ADS)
Baylou, P.; Amor, B. El Hadj; Bousseau, G.
1983-10-01
After a brief description of the robot for picking white asparagus, a statistical study of the different shapes of asparagus tips allowed us to determine certain discriminating parameters to detect the tips as they appear on the silhouette of the mound of earth. The localisation was done stereometrically with the help of two cameras. As the robot carrying the vision-localisation system moves, the images are altered and the decision criteria modified. A study of the image from mobile objects produced by both tube and CCD cameras was carried out. A simulation of this phenomenon has been achieved in order to determine the modifications concerning object shapes, thresholding levels and decision parameters as a function of the robot speed.
QuadCam - A Quadruple Polarimetric Camera for Space Situational Awareness
NASA Astrophysics Data System (ADS)
Skuljan, J.
A specialised quadruple polarimetric camera for space situational awareness, QuadCam, has been built at the Defence Technology Agency (DTA), New Zealand, as part of a collaboration with the Defence Science and Technology Laboratory (Dstl), United Kingdom. The design was based on a similar system originally developed at Dstl, with some significant modifications for improved performance. The system is made up of four identical CCD cameras looking in the same direction, but each in a different plane of polarisation, at 0, 45, 90 and 135 degrees with respect to the reference plane. A standard set of Stokes parameters can be derived from the four images in order to describe the state of polarisation of an object captured in the field of view. The modified design of the DTA QuadCam makes use of four small Raspberry Pi computers, so that each camera is controlled by its own computer in order to speed up the readout process and ensure that the four individual frames are taken simultaneously (to within 100-200 microseconds). In addition, new firmware was requested from the camera manufacturer so that an output signal is generated to indicate the state of the camera shutter. A specialised GPS unit (also developed at DTA) is then used to monitor the shutter signals from the four cameras and record the actual time of exposure to an accuracy of about 100 microseconds. This makes the system well suited to the observation of fast-moving objects in low Earth orbit (LEO). The QuadCam is currently mounted on a Paramount MEII robotic telescope mount at the newly built DTA space situational awareness observatory located on the Whangaparaoa Peninsula near Auckland, New Zealand. The system will be used for tracking satellites in low Earth orbit as well as in the geostationary belt. The performance of the camera has been evaluated and a series of test images has been collected in order to derive polarimetric signatures for selected satellites.
Speckle imaging for planetary research
NASA Technical Reports Server (NTRS)
Nisenson, P.; Goody, R.; Apt, J.; Papaliolios, C.
1983-01-01
The present study of speckle imaging technique effectiveness encompasses image reconstruction by means of a division algorithm for Fourier amplitudes, and the Knox-Thompson (1974) algorithm for Fourier phases. Results which have been obtained for Io, Titan, Pallas, Jupiter and Uranus indicate that spatial resolutions lower than the seeing limit by a factor of four are obtainable for objects brighter than Uranus. The resolutions obtained are well above the diffraction limit, due to inadequacies of the video camera employed. A photon-counting camera has been developed to overcome these difficulties, making possible the diffraction-limited resolution of objects as faint as Charon.
The application of holography as a real-time three-dimensional motion picture camera
NASA Technical Reports Server (NTRS)
Kurtz, R. L.
1973-01-01
A historical introduction to holography is presented, as well as a basic description of sideband holography for stationary objects. A brief theoretical development of both time-dependent and time-independent holography is also provided, along with an analytical and intuitive discussion of a unique holographic arrangement which allows the resolution of front surface detail from an object moving at high speeds. As an application of such a system, a real-time three-dimensional motion picture camera system is discussed and the results of a recent demonstration of the world's first true three-dimensional motion picture are given.
Full color natural light holographic camera.
Kim, Myung K
2013-04-22
Full-color, three-dimensional images of objects under incoherent illumination are obtained by a digital holography technique. Based on self-interference of two beam-split copies of the object's optical field with differential curvatures, the apparatus consists of a beam-splitter, a few mirrors and lenses, a piezo-actuator, and a color camera. No lasers or other special illuminations are used for recording or reconstruction. Color holographic images of daylight-illuminated outdoor scenes and a halogen lamp-illuminated toy figure are obtained. From a recorded hologram, images can be calculated, or numerically focused, at any distances for viewing.
Variation in detection among passive infrared triggered-cameras used in wildlife research
Damm, Philip E.; Grand, James B.; Barnett, Steven W.
2010-01-01
Precise and accurate estimates of demographics such as age structure, productivity, and density are necessary in determining habitat and harvest management strategies for wildlife populations. Surveys using automated cameras are becoming an increasingly popular tool for estimating these parameters. However, most camera studies fail to incorporate detection probabilities, leading to parameter underestimation. The objective of this study was to determine the sources of heterogeneity in detection for trail cameras that incorporate a passive infrared (PIR) triggering system sensitive to heat and motion. Images were collected at four baited sites within the Conecuh National Forest, Alabama, using three cameras at each site operating continuously over the same seven-day period. Detection was estimated for four groups of animals based on taxonomic group and body size. Our hypotheses of detection considered variation among bait sites and cameras. The best model (w=0.99) estimated different rates of detection for each camera in addition to different detection rates for four animal groupings. Factors that explain this variability might include poor manufacturing tolerances, variation in PIR sensitivity, animal behavior, and species-specific infrared radiation. Population surveys using trail cameras with PIR systems must incorporate detection rates for individual cameras. Incorporating time-lapse triggering systems into survey designs should eliminate issues associated with PIR systems.
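The model weight w = 0.99 quoted above is presumably an Akaike weight from the candidate-model comparison. For reference, the standard computation of Akaike weights from a set of AIC values is sketched below in Python; the AIC values shown are placeholders, not the study's results.

    import numpy as np

    def akaike_weights(aic_values):
        # Akaike weights: relative support for each candidate detection model.
        aic = np.asarray(aic_values, dtype=float)
        delta = aic - aic.min()               # AIC differences from the best model
        likelihoods = np.exp(-0.5 * delta)    # relative likelihoods
        return likelihoods / likelihoods.sum()

    # Example: four hypothetical candidate models of detection heterogeneity.
    print(akaike_weights([812.4, 823.9, 830.1, 835.6]))  # best model gets ~0.99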
NASA Astrophysics Data System (ADS)
Hanel, A.; Stilla, U.
2017-05-01
Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras in a calibration drive. Poses of vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud is created from images of an underground car park, and a point cloud of the interior of a Volkswagen test car is created as well. Videos of two environment cameras and one interior camera are recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate by between one and ten centimeters from tachymeter reference measurements.
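The per-camera pose estimation from ground control points corresponds to a standard perspective-n-point solution. A minimal sketch using OpenCV's solvePnP is given below, with camera intrinsics assumed known from a prior calibration; the paper's subsequent bundle-adjustment refinement over all frames is not reproduced here.

    import numpy as np
    import cv2

    def camera_pose_from_gcps(object_points, image_points, camera_matrix, dist_coeffs):
        # object_points: Nx3 ground control points from the environment/interior
        # point cloud, expressed in the vehicle coordinate system.
        # image_points:  Nx2 corresponding detections in one video frame.
        ok, rvec, tvec = cv2.solvePnP(
            np.asarray(object_points, dtype=np.float64),
            np.asarray(image_points, dtype=np.float64),
            camera_matrix, dist_coeffs)
        if not ok:
            raise RuntimeError("pose estimation failed")
        R, _ = cv2.Rodrigues(rvec)
        camera_position = -R.T @ tvec   # camera centre in the vehicle frame
        return R, tvec, camera_position

    # Usage sketch: camera_matrix is the 3x3 intrinsic matrix from lab calibration
    # and dist_coeffs e.g. np.zeros(5) if lens distortion is neglected.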
Thiessen, Amber; Beukelman, David; Hux, Karen; Longenecker, Maria
2016-04-01
The purpose of the study was to compare the visual attention patterns of adults with aphasia and adults without neurological conditions when viewing visual scenes with 2 types of engagement. Eye-tracking technology was used to measure the visual attention patterns of 10 adults with aphasia and 10 adults without neurological conditions. Participants viewed camera-engaged (i.e., human figure facing the camera) and task-engaged (i.e., human figure looking at and touching an object) visual scenes. Participants with aphasia responded to engagement cues by focusing on objects of interest more for task-engaged scenes than camera-engaged scenes; however, the difference in their responses to these scenes was not as pronounced as that observed in adults without neurological conditions. In addition, people with aphasia spent more time looking at background areas of interest and less time looking at person areas of interest for camera-engaged scenes than did control participants. Results indicate people with aphasia visually attend to scenes differently than adults without neurological conditions. As a consequence, augmentative and alternative communication (AAC) facilitators may have different visual attention behaviors than the people with aphasia for whom they are constructing or selecting visual scenes. Further examination of the visual attention of people with aphasia may help optimize visual scene selection.
Optimization of digitization procedures in cultural heritage preservation
NASA Astrophysics Data System (ADS)
Martínez, Bea; Mitjà, Carles; Escofet, Jaume
2013-11-01
The digitization of both volumetric and flat objects is nowadays the preferred method for preserving cultural heritage items. High-quality digital files obtained from photographic plates, films and prints, paintings, drawings, gravures, fabrics and sculptures allow not only a wider diffusion and online transmission, but also the preservation of the original items from future handling. Early digitization procedures used scanners for flat opaque or translucent objects and cameras only for volumetric or flat, highly texturized materials. The technical obsolescence of high-end scanners and the improvements achieved by professional cameras have resulted in the wide use of cameras with digital backs to digitize any kind of cultural heritage item. Since the lens, the digital back, the software controlling the camera and the digital image processing provide a wide range of possibilities, it is necessary to standardize the methods used in the reproduction work so that the original item properties are preserved as faithfully as possible. This work presents an overview of methods used for camera system characterization, as well as the best procedures for identifying and counteracting the effects of residual lens aberrations, sensor aliasing, image illumination, color management and image optimization by means of parametric image processing. As a corollary, the work shows some examples of reproduction workflows applied to the digitization of valuable art pieces and glass-plate black-and-white photographic negatives.
Blood pulsation measurement using cameras operating in visible light: limitations.
Koprowski, Robert
2016-10-03
The paper presents an automatic method for the analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing consists of three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (Fast Fourier Transform) analysis, pulse calculation. The presented algorithm and method for measuring the pulse rate have the following advantages: (1) they allow for non-contact and non-invasive measurement; (2) the measurement can be carried out using almost any camera, including webcams; (3) the method can track the object in the scene, which allows for measurement of the heart rate when the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
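Stage (3) essentially reduces to locating the dominant spectral peak of the skin-region brightness signal within the physiological band. A minimal Python sketch is shown below; the band limits and the simple DC removal are assumptions standing in for the paper's filtering details.

    import numpy as np

    def pulse_rate_bpm(brightness, fps, fmin=0.7, fmax=4.0):
        # brightness: 1-D array of mean grey level of the segmented skin region,
        # one value per video frame. Returns the dominant frequency within the
        # physiological band (roughly 42-240 bpm), expressed in beats per minute.
        signal = brightness - np.mean(brightness)      # remove the DC component
        spectrum = np.abs(np.fft.rfft(signal))
        freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
        band = (freqs >= fmin) & (freqs <= fmax)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]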
3D imaging and wavefront sensing with a plenoptic objective
NASA Astrophysics Data System (ADS)
Rodríguez-Ramos, J. M.; Lüke, J. P.; López, R.; Marichal-Hernández, J. G.; Montilla, I.; Trujillo-Sevilla, J.; Femenía, B.; Puga, M.; López, M.; Fernández-Valdivia, J. J.; Rosa, F.; Dominguez-Conde, C.; Sanluis, J. C.; Rodríguez-Ramos, L. F.
2011-06-01
Plenoptic cameras have been developed over the last years as a passive method for 3D scanning. Several superresolution algorithms have been proposed in order to compensate for the resolution decrease associated with lightfield acquisition through a microlens array. A number of multiview stereo algorithms have also been applied in order to extract depth information from plenoptic frames. Real-time systems have been implemented using specialized hardware such as Graphics Processing Units (GPUs) and Field Programmable Gate Arrays (FPGAs). In this paper, we will present our own implementations related to the aforementioned aspects, but also two new developments: a portable plenoptic objective that transforms every conventional 2D camera into a 3D CAFADIS plenoptic camera, and the novel use of a plenoptic camera as a wavefront phase sensor for adaptive optics (AO). The terrestrial atmosphere degrades telescope images due to the refractive index changes associated with turbulence. These changes require high-speed processing that justifies the use of GPUs and FPGAs. Artificial sodium laser guide stars (Na-LGS, 90 km high) must be used to obtain the reference wavefront phase and the optical transfer function of the system, but they are affected by defocus because of their finite distance from the telescope. Using the telescope as a plenoptic camera allows us to correct the defocus and to recover the wavefront phase tomographically. These advances significantly increase the versatility of the plenoptic camera and provide a new contribution relating the wave optics and computer vision fields, as many authors claim.
Kidd, David G; McCartt, Anne T
2016-02-01
This study characterized the use of various fields of view during low-speed parking maneuvers by drivers with a rearview camera, a sensor system, a camera and sensor system combined, or neither technology. Participants performed four different low-speed parking maneuvers five times. Glances to different fields of view the second time through the four maneuvers were coded, along with the glance locations at the onset of the audible warning from the sensor system and immediately after the warning for participants in the sensor and camera-plus-sensor conditions. Overall, the results suggest that information from cameras and/or sensor systems is used in place of mirrors and shoulder glances. Participants with a camera, sensor system, or both technologies looked over their shoulders significantly less than participants without technology. Participants with cameras (camera and camera-plus-sensor conditions) used their mirrors significantly less compared with participants without cameras (no-technology and sensor conditions). Participants in the camera-plus-sensor condition looked at the center console/camera display for a smaller percentage of the time during the low-speed maneuvers than participants in the camera condition and glanced more frequently to the center console/camera display immediately after the warning from the sensor system compared with the frequency of glances to this location at warning onset. Although this increase was not statistically significant, the pattern suggests that participants in the camera-plus-sensor condition may have used the warning as a cue to look at the camera display. The observed differences in glance behavior between study groups were illustrated by relating them to the visibility of a 12-15-month-old child-size object. These findings provide evidence that drivers adapt their glance behavior during low-speed parking maneuvers following extended use of rearview cameras and parking sensors, and suggest that other technologies which augment the driving task may do the same. Copyright © 2015 Elsevier Ltd. All rights reserved.
Distributed Sensing and Processing for Multi-Camera Networks
NASA Astrophysics Data System (ADS)
Sankaranarayanan, Aswin C.; Chellappa, Rama; Baraniuk, Richard G.
Sensor networks with large numbers of cameras are becoming increasingly prevalent in a wide range of applications, including video conferencing, motion capture, surveillance, and clinical diagnostics. In this chapter, we identify some of the fundamental challenges in designing such systems: robust statistical inference, computational efficiency, and opportunistic and parsimonious sensing. We show that the geometric constraints induced by the imaging process are extremely useful for identifying and designing optimal estimators for object detection and tracking tasks. We also derive pipelined and parallelized implementations of popular tools used for statistical inference in non-linear systems, of which multi-camera systems are examples. Finally, we highlight the use of the emerging theory of compressive sensing in reducing the amount of data sensed and communicated by a camera network.
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
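A minimal sketch of the adaptive per-pixel idea, not the authors' actual DMD control algorithm, is given below in Python: coded exposures are halved where pixels saturate and doubled where they are underexposed, and the radiance map is recovered by normalizing each pixel by its effective exposure. The thresholds and exposure bounds are assumptions for illustration.

    import numpy as np

    def update_exposure_code(image, exposure, low=20, high=235,
                             e_min=1.0 / 256, e_max=1.0):
        # Per-pixel exposure update for the next frame: halve the coded exposure
        # where the 8-bit pixel value saturates, double it where the pixel is dark.
        exposure = np.where(image >= high, exposure * 0.5, exposure)
        exposure = np.where(image <= low, exposure * 2.0, exposure)
        return np.clip(exposure, e_min, e_max)

    def radiance_estimate(image, exposure):
        # HDR radiance map: measured value normalized by each pixel's effective
        # exposure (relative units; no absolute photometric calibration implied).
        return image.astype(np.float64) / exposure

Iterating update_exposure_code over a few frames drives every pixel toward a well-exposed reading, which is the essence of extending the dynamic range with pixel-level modulation.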
Adaptive DOF for plenoptic cameras
NASA Astrophysics Data System (ADS)
Oberdörster, Alexander; Lensch, Hendrik P. A.
2013-03-01
Plenoptic cameras promise to provide arbitrary re-focusing through a scene after the capture. In practice, however, the refocusing range is limited by the depth of field (DOF) of the plenoptic camera. For the focused plenoptic camera, this range is given by the range of object distances for which the microimages are in focus. We propose a technique of recording light fields with an adaptive depth of focus. Between multiple exposures (or multiple recordings of the light field) the distance between the microlens array (MLA) and the image sensor is adjusted. The depth and quality of focus is chosen by changing the number of exposures and the spacing of the MLA movements. In contrast to traditional cameras, extending the DOF does not necessarily lead to an all-in-focus image. Instead, the refocus range is extended. There is full creative control over the focus depth; images with shallow or selective focus can be generated.
NASA Astrophysics Data System (ADS)
Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo
2008-11-01
Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases. We must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environment, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It could be confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.
Photogrammetry of a 5m Inflatable Space Antenna With Consumer Digital Cameras
NASA Technical Reports Server (NTRS)
Pappa, Richard S.; Giersch, Louis R.; Quagliaroli, Jessica M.
2000-01-01
This paper discusses photogrammetric measurements of a 5m-diameter inflatable space antenna using four Kodak DC290 (2.1 megapixel) digital cameras. The study had two objectives: 1) Determine the photogrammetric measurement precision obtained using multiple consumer-grade digital cameras and 2) Gain experience with new commercial photogrammetry software packages, specifically PhotoModeler Pro from Eos Systems, Inc. The paper covers the eight steps required using this hardware/software combination. The baseline data set contained four images of the structure taken from various viewing directions. Each image came from a separate camera. This approach simulated the situation of using multiple time-synchronized cameras, which will be required in future tests of vibrating or deploying ultra-lightweight space structures. With four images, the average measurement precision for more than 500 points on the antenna surface was less than 0.020 inches in-plane and approximately 0.050 inches out-of-plane.