Sample records for narrow angle cameras

  1. Reconditioning of Cassini Narrow-Angle Camera

    NASA Image and Video Library

    2002-07-23

    These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera's optics, then its successful removal by warming treatments.

  2. Flight Calibration of the LROC Narrow Angle Camera

    NASA Astrophysics Data System (ADS)

    Humm, D. C.; Tschimmel, M.; Brylow, S. M.; Mahanti, P.; Tran, T. N.; Braden, S. E.; Wiseman, S.; Danton, J.; Eliason, E. M.; Robinson, M. S.

    2016-04-01

    Characterization and calibration are vital for instrument commanding and image interpretation in remote sensing. The Lunar Reconnaissance Orbiter Camera Narrow Angle Camera (LROC NAC) takes 500 Mpixel greyscale images of lunar scenes at 0.5 meters/pixel. It uses two nominally identical line scan cameras for a larger crosstrack field of view. Stray light, spatial crosstalk, and nonlinearity were characterized using flight images of the Earth and the lunar limb. These are important for imaging shadowed craters, studying ~1 meter size objects, and photometry respectively. Background, nonlinearity, and flatfield corrections have been implemented in the calibration pipeline. An eight-column pattern in the background is corrected. The detector is linear for DN = 600-2000 but a signal-dependent additive correction is required and applied for DN<600. A predictive model of detector temperature and dark level was developed to command dark level offset. This avoids images with a cutoff at DN=0 and minimizes quantization error in companding. Absolute radiometric calibration is derived from comparison of NAC images with ground-based images taken with the Robotic Lunar Observatory (ROLO) at much lower spatial resolution but with the same photometric angles.
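
    The correction chain summarized above (dark/background subtraction, a signal-dependent additive correction below DN = 600, then flatfield division) can be sketched as follows. This is a minimal illustration, not the actual LROC calibration pipeline: the function name and the low-DN coefficient are invented for the example.

```python
import numpy as np

def calibrate_nac_sketch(raw, dark, flat, dn_threshold=600.0):
    """Toy NAC-style radiometric correction: dark/background removal,
    an additive nonlinearity correction at low DN, flatfield division.
    The 0.02 low-DN coefficient is a made-up placeholder."""
    signal = raw.astype(float) - dark
    # Signal-dependent additive correction, applied only below the
    # threshold where the detector departs from linearity.
    low = signal < dn_threshold
    signal = np.where(low, signal + 0.02 * (dn_threshold - signal), signal)
    return signal / flat

raw = np.array([[700.0, 100.0]])
out = calibrate_nac_sketch(raw, dark=50.0, flat=np.array([[1.0, 1.0]]))
# out[0, 0] stays in the linear regime; out[0, 1] receives the
# additive low-DN correction (50 + 0.02 * 550 = 61.0).
```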

  3. Reconditioning of Cassini Narrow-Angle Camera

    NASA Technical Reports Server (NTRS)

    2002-01-01

    These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera's optics, then successful removal of the haze by warming treatments.

    The image on the left was taken on May 25, 2001, before the haze problem occurred. It shows a star named HD339457.

    The second image from left, taken May 30, 2001, shows the effect of haze that collected on the optics when the camera cooled back down after a routine-maintenance heating to 30 degrees Celsius (86 degrees Fahrenheit). The star is Maia, one of the Pleiades.

    The third image was taken on October 26, 2001, after a weeklong decontamination treatment at minus 7 C (19 F). The star is Spica.

    The fourth image was taken of Spica January 30, 2002, after a weeklong decontamination treatment at 4 C (39 F).

    The final image, also of Spica, was taken July 9, 2002, following three additional decontamination treatments at 4 C (39 F) for two months, one month, then another month.

    Cassini, on its way toward arrival at Saturn in 2004, is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the Cassini mission for NASA's Office of Space Science, Washington, D.C.

  4. Narrow Angle movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief three-frame movie of the Moon was made from three Cassini narrow-angle images as the spacecraft passed by the Moon on the way to its closest approach to Earth on August 17, 1999. The purpose of this particular set of images was to calibrate the spectral response of the narrow-angle camera and to test its 'on-chip summing mode' data compression technique in flight. From left to right, they show the Moon in the green, blue and ultraviolet regions of the spectrum in 40, 60 and 80 millisecond exposures, respectively. All three images have been scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is the same in each image. The spatial scale in the blue and ultraviolet images is 1.4 miles per pixel (2.3 kilometers). The original scale in the green image (which was captured in the usual manner and then reduced in size by 2x2 pixel summing within the camera system) was 2.8 miles per pixel (4.6 kilometers). It has been enlarged for display to the same scale as the other two. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  5. The Wide Angle Camera of the ROSETTA Mission

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Verani, S.; Bertini, I.; Lazzarin, M.; Rampazzi, F.; Cremonese, G.; Ragazzoni, R.; Marzari, F.; Angrilli, F.; Bianchini, G. A.; Debei, S.; Dececco, M.; Guizzo, G.; Parzianello, G.; Ramous, P.; Saggin, B.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.; Nicolosi, G.; Pelizzo, M. G.; Tondello, G.; Brunello, P.; Peron, F.

    This paper aims to give a brief description of the Wide Angle Camera (WAC), built by the Centro Servizi e Attività Spaziali (CISAS) of the University of Padova for the ESA ROSETTA Mission to comet 46P/Wirtanen and asteroids 4979 Otawara and 140 Siwa. The WAC is part of the OSIRIS imaging system, which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS was also responsible for building the shutter and the front cover mechanism for the NAC. The flight model of the WAC was delivered in December 2001 and has already been integrated on ROSETTA.

  6. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  7. Extracting accurate and precise topography from LROC narrow angle camera stereo observations

    NASA Astrophysics Data System (ADS)

    Henriksen, M. R.; Manheim, M. R.; Burns, K. N.; Seymour, P.; Speyerer, E. J.; Deran, A.; Boyd, A. K.; Howington-Kraus, E.; Rosiek, M. R.; Archinal, B. A.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) includes two identical Narrow Angle Cameras (NAC) that each provide 0.5 to 2.0 m scale images of the lunar surface. Although not designed as a stereo system, LROC can acquire NAC stereo observations over two or more orbits using at least one off-nadir slew. Digital terrain models (DTMs) are generated from sets of stereo images and registered to profiles from the Lunar Orbiter Laser Altimeter (LOLA) to improve absolute accuracy. With current processing methods, DTMs have absolute accuracies better than the uncertainties of the LOLA profiles and relative vertical and horizontal precisions less than the pixel scale of the DTMs (2-5 m). We computed slope statistics from 81 highland and 31 mare DTMs across a range of baselines. For a baseline of 15 m the highland mean slope parameters are: median = 9.1°, mean = 11.0°, standard deviation = 7.0°. For the mare the mean slope parameters are: median = 3.5°, mean = 4.9°, standard deviation = 4.5°. The slope values for the highland terrain are steeper than previously reported, likely due to a bias in targeting of the NAC DTMs toward higher relief features in the highland terrain. Overlapping DTMs of single stereo sets were also combined to form larger area DTM mosaics that enable detailed characterization of large geomorphic features. From one DTM mosaic we mapped a large viscous flow related to the Orientale basin ejecta and estimated its thickness and volume to exceed 300 m and 500 km3, respectively. Despite its ∼3.8 billion year age the flow still exhibits unconfined margin slopes above 30°, in some cases exceeding the angle of repose, consistent with deposition of material rich in impact melt. We show that the NAC stereo pairs and derived DTMs represent an invaluable tool for science and exploration purposes. At this date about 2% of the lunar surface is imaged in high-resolution stereo, and continued acquisition of stereo observations will serve to strengthen our
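
    The baseline-dependent slope statistics reported above can be illustrated with a toy one-dimensional elevation profile; the profile values and helper below are hypothetical, showing only how slopes over a 15 m baseline follow from elevation differences:

```python
import numpy as np

def slope_stats(profile, pixel_scale, baseline):
    """Slope statistics (degrees) from a 1-D elevation profile, a toy
    stand-in for a DTM, computed over a given horizontal baseline."""
    step = max(1, int(round(baseline / pixel_scale)))  # samples per baseline
    dz = profile[step:] - profile[:-step]              # elevation change
    slopes = np.degrees(np.arctan(np.abs(dz) / (step * pixel_scale)))
    return {"median": float(np.median(slopes)),
            "mean": float(np.mean(slopes)),
            "std": float(np.std(slopes))}

# Hypothetical elevations (m) at 5 m post spacing, 15 m baseline.
profile = np.array([0.0, 1.0, 3.0, 2.0, 5.0])
stats = slope_stats(profile, pixel_scale=5.0, baseline=15.0)
```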

  8. Telescope and mirrors development for the monolithic silicon carbide instrument of the osiris narrow angle camera

    NASA Astrophysics Data System (ADS)

    Calvel, Bertrand; Castel, Didier; Standarovski, Eric; Rousset, Gérard; Bougoin, Michel

    2017-11-01

    The international Rosetta mission, now planned by ESA to be launched in January 2003, will provide a unique opportunity to directly study the nucleus of comet 46P/Wirtanen and its activity in 2013. We describe here the design, development, and performance of the telescope of the Narrow Angle Camera of the OSIRIS experiment and of its silicon carbide structure, which will provide high-resolution images of the cometary nucleus in the visible spectrum. The development of the mirrors is detailed in particular. The SiC parts were manufactured by BOOSTEC, polished by STIGMA OPTIQUE, and ion-figured by IOM under the prime contractorship of ASTRIUM. ASTRIUM was also in charge of the alignment. The final optical quality of the aligned telescope is 30 nm rms wavefront error.

  9. High-resolution topomapping of candidate MER landing sites with Mars Orbiter Camera narrow-angle images

    USGS Publications Warehouse

    Kirk, R.L.; Howington-Kraus, E.; Redding, B.; Galuszka, D.; Hare, T.M.; Archinal, B.A.; Soderblom, L.A.; Barrett, J.M.

    2003-01-01

    We analyzed narrow-angle Mars Orbiter Camera (MOC-NA) images to produce high-resolution digital elevation models (DEMs) in order to provide topographic and slope information needed to assess the safety of candidate landing sites for the Mars Exploration Rovers (MER) and to assess the accuracy of our results by a variety of tests. The mapping techniques developed also support geoscientific studies and can be used with all present and planned Mars-orbiting scanner cameras. Photogrammetric analysis of MOC stereopairs yields DEMs with 3-pixel (typically 10 m) horizontal resolution, vertical precision consistent with ±0.22 pixel matching errors (typically a few meters), and slope errors of 1-3°. These DEMs are controlled to the Mars Orbiter Laser Altimeter (MOLA) global data set and consistent with it at the limits of resolution. Photoclinometry yields DEMs with single-pixel (typically ~3 m) horizontal resolution and submeter vertical precision. Where the surface albedo is uniform, the dominant error is 10-20% relative uncertainty in the amplitude of topography and slopes after "calibrating" photoclinometry against a stereo DEM to account for the influence of atmospheric haze. We mapped portions of seven candidate MER sites and the Mars Pathfinder site. Safety of the final four sites (Elysium, Gusev, Isidis, and Meridiani) was assessed by mission engineers by simulating landings on our DEMs of "hazard units" mapped in the sites, with results weighted by the probability of landing on those units; summary slope statistics show that most hazard units are smooth, with only small areas of etched terrain in Gusev crater posing a slope hazard.

  10. Ocular Biometrics of Myopic Eyes With Narrow Angles.

    PubMed

    Chong, Gabriel T; Wen, Joanne C; Su, Daniel Hsien-Wen; Stinnett, Sandra; Asrani, Sanjay

    2016-02-01

    The purpose of this study was to compare the ocular biometrics between myopic patients with and without narrow angles. Patients with a stable myopic refraction (myopia worse than -1.00 D spherical equivalent) were prospectively recruited. Angle status was assessed using gonioscopy and biometric measurements were performed using anterior segment optical coherence tomography and an IOLMaster. A total of 29 patients (58 eyes) were enrolled with 13 patients (26 eyes) classified as having narrow angles and 16 patients (32 eyes) classified as having open angles. Baseline demographics of age, sex, and ethnicity did not differ significantly between the 2 groups. The patients with narrow angles were on average older than those with open angles but the difference did not reach statistical significance (P=0.12). The central anterior chamber depth was significantly less in the eyes with narrow angles (P=0.05). However, the average lens thickness, although greater in the eyes with narrow angles, did not reach statistical significance (P=0.10). Refractive error, axial lengths, and iris thicknesses did not differ significantly between the 2 groups (P=0.32, 0.47, and 0.15, respectively). Narrow angles can occur in myopic eyes. Routine gonioscopy is therefore recommended for all patients regardless of refractive error.

  11. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36°. Each camera's field of view is aligned to a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within the ring through which hydrometeors fall and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.

  12. Associations between narrow angle and adult anthropometry: the Liwan Eye Study.

    PubMed

    Jiang, Yuzhen; He, Mingguang; Friedman, David S; Khawaja, Anthony P; Lee, Pak Sang; Nolan, Winifred P; Yin, Qiuxia; Foster, Paul J

    2014-06-01

    To assess the associations between narrow angle and adult anthropometry. Chinese adults aged 50 years and older were recruited from a population-based survey in the Liwan District of Guangzhou, China. Narrow angle was defined as the posterior trabecular meshwork not visible under static gonioscopy in at least three quadrants (i.e. a circumference of at least 270°). Logistic regression models were used to examine the associations between narrow angle and anthropomorphic measures (height, weight and body mass index, BMI). Among the 912 participants, lower weight, shorter height, and lower BMI were significantly associated with narrower angle width (tests for trend: mean angle width in degrees vs weight p < 0.001; vs height p < 0.001; vs BMI p = 0.012). In univariate analyses, shorter height, lower weight and lower BMI were all significantly associated with greater odds of narrow angle. The crude association between height and narrow angle was largely attributable to a stronger association with age and sex. Lower BMI and weight remained significantly associated with narrow angle after adjustment for height, age, sex, axial ocular biometric measures and education. In analyses stratified by sex, the association between BMI and narrow angle was only observed in women. Lower BMI and weight were associated with significantly greater odds of narrow angle after adjusting for age, education, axial ocular biometric measures and height. The odds of narrow angle increased 7% per 1 unit decrease in BMI. This association was most evident in women.

  13. Associations between Narrow Angle and Adult Anthropometry: The Liwan Eye Study

    PubMed Central

    Jiang, Yuzhen; He, Mingguang; Friedman, David S.; Khawaja, Anthony P.; Lee, Pak Sang; Nolan, Winifred P.; Yin, Qiuxia; Foster, Paul J.

    2015-01-01

    Purpose To assess the associations between narrow angle and adult anthropometry. Methods Chinese adults aged 50 years and older were recruited from a population-based survey in the Liwan District of Guangzhou, China. Narrow angle was defined as the posterior trabecular meshwork not visible under static gonioscopy in at least three quadrants (i.e. a circumference of at least 270°). Logistic regression models were used to examine the associations between narrow angle and anthropomorphic measures (height, weight and body mass index, BMI). Results Among the 912 participants, lower weight, shorter height, and lower BMI were significantly associated with narrower angle width (tests for trend: mean angle width in degrees vs weight p<0.001; vs height p<0.001; vs BMI p = 0.012). In univariate analyses, shorter height, lower weight and lower BMI were all significantly associated with greater odds of narrow angle. The crude association between height and narrow angle was largely attributable to a stronger association with age and sex. Lower BMI and weight remained significantly associated with narrow angle after adjustment for height, age, sex, axial ocular biometric measures and education. In analyses stratified by sex, the association between BMI and narrow angle was only observed in women. Conclusion Lower BMI and weight were associated with significantly greater odds of narrow angle after adjusting for age, education, axial ocular biometric measures and height. The odds of narrow angle increased 7% per 1 unit decrease in BMI. This association was most evident in women. PMID:24707840

  14. Non-contact measurement of rotation angle with solo camera

    NASA Astrophysics Data System (ADS)

    Gan, Xiaochuan; Sun, Anbin; Ye, Xin; Ma, Liqun

    2015-02-01

    To measure the rotation angle of an object about its axis, a non-contact measurement method based on a single camera is proposed. The camera's intrinsic parameters were calibrated with a chessboard target on the principle of planar calibration theory. The translation matrix and rotation matrix between the object coordinate frame and the camera coordinate frame were calculated from the correspondence between the corners' positions on the object and their coordinates in the image. The rotation angle between the measured object and the camera was then resolved from the rotation matrix. A precise angle dividing table (PADT) was chosen as the reference to verify the angle measurement error of this method. Test results indicated that the rotation angle measurement error of this method did not exceed ±0.01 degree.
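
    The final step of such a method, recovering an angle from rotation matrices, can be sketched with synthetic poses. In the actual method the per-pose rotation matrices come from chessboard-based calibration; here they are generated directly, so only the angle-extraction step is shown:

```python
import numpy as np

def rotation_about_z(theta_deg):
    """Rotation matrix for a rotation of theta_deg about the z axis."""
    t = np.radians(theta_deg)
    c, s = np.cos(t), np.sin(t)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def angle_between(R1, R2):
    """Relative rotation angle (degrees) between two poses, from the
    trace of the relative rotation: cos(theta) = (tr(R) - 1) / 2."""
    R = R2 @ R1.T
    cos_t = np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0)
    return np.degrees(np.arccos(cos_t))

# Two synthetic poses standing in for calibrated camera-to-object rotations.
pose_a = rotation_about_z(10.0)
pose_b = rotation_about_z(33.5)
delta = angle_between(pose_a, pose_b)  # relative rotation of 23.5 degrees
```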

  15. First Results from the Wide Angle Camera of the ROSETTA Mission .

    NASA Astrophysics Data System (ADS)

    Barbieri, C.; Fornasier, S.; Bertini, I.; Angrilli, F.; Bianchini, G. A.; Debei, S.; De Cecco, M.; Parzianello, G.; Zaccariotto, M.; Da Deppo, V.; Naletto, G.

    This paper gives a brief description of the Wide Angle Camera (WAC), built by the Center of Studies and Activities for Space (CISAS) of the University of Padova for the ESA ROSETTA Mission, of the data we have obtained on the new mission targets, and of the first results achieved after the launch in March 2004. The WAC is part of the OSIRIS imaging system, built under the PI-ship of Dr. U. Keller (Max-Planck-Institute for Solar System Studies), which also comprises a Narrow Angle Camera (NAC) built by the Laboratoire d'Astrophysique Spatiale (LAS) of Marseille. CISAS was also responsible for building the shutter and the front door mechanism for the NAC. The images show the excellent optical quality of the WAC, which exceeds the specifications in terms of encircled energy (80% in one pixel over a 12 × 12 degree field of view), limiting magnitude (fainter than 13th magnitude in a 30 s exposure through a wideband red filter), and distortion.

  16. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently shadowed and sunlit polar regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  17. High prevalence of narrow angles among Filipino-American patients.

    PubMed

    Seider, Michael I; Sáles, Christopher S; Lee, Roland Y; Agadzi, Anthony K; Porco, Travis C; Weinreb, Robert N; Lin, Shan C

    2011-03-01

    To determine the prevalence of gonioscopically narrow anterior chamber angles in a Filipino-American clinic population. The records of 122 consecutive, new, self-declared Filipino-American patients examined in a comprehensive ophthalmology clinic in Vallejo, California were reviewed retrospectively. After exclusion, 222 eyes from 112 patients remained for analysis. Data were collected for anterior chamber angle grade as determined by gonioscopy (Shaffer system), age, sex, manifest refraction (spherical equivalent), intraocular pressure, and cup-to-disk ratio. Data from both eyes of patients were included and modeled using standard linear mixed-effects regression. As a comparison, data were also collected from a group of 30 consecutive White patients from the same clinic. After exclusion, 50 eyes from 25 White patients remained for comparison. At least 1 eye of 24% of Filipino-American patients had a narrow anterior chamber angle (Shaffer grade ≤ 2). Filipino-American angle grade significantly decreased with increasingly hyperopic refraction (P=0.007) and larger cup-to-disk ratio (P=0.038). Filipino-American women had significantly decreased angle grades compared with men (P=0.028), but angle grade did not vary by intraocular pressure or age (all, P≥ 0.059). Narrow anterior chamber angles are highly prevalent in Filipino-American patients in our clinic population.

  18. A wide-angle camera module for disposable endoscopy

    NASA Astrophysics Data System (ADS)

    Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee

    2016-08-01

    A wide-angle miniaturized camera module for disposable endoscope is demonstrated in this paper. A lens module with 150° angle of view (AOV) is designed and manufactured. All plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and LED illumination unit are assembled with a lens module. The camera module does not include a camera processor to further reduce its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm3. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype of a disposable endoscope is implemented to perform a pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.

  19. High prevalence of narrow angles among Chinese-American glaucoma and glaucoma suspect patients.

    PubMed

    Seider, Michael I; Pekmezci, Melike; Han, Ying; Sandhu, Simi; Kwok, Shiu Y; Lee, Roland Y; Lin, Shan C

    2009-01-01

    To evaluate the prevalence of gonioscopically narrow angles in a Chinese-American population with glaucoma or glaucoma suspicion. Charts from all Chinese-American patients seen in a comprehensive ophthalmology clinic in the Chinatown district of San Francisco in 2002 were reviewed. One eye from each patient with glaucoma or glaucoma suspicion that met inclusion criteria was included (n=108). Data were collected for sex, age, race (self-declared), refraction (spherical equivalent), intraocular pressure, gonioscopy, and vertical cup-to-disk ratio. Sixty percent (n=65) of Chinese-American eyes with glaucoma or glaucoma suspicion had gonioscopically narrow angles (Shaffer grade < or = 2 in 3 or more quadrants). Those with narrow angles were significantly older (P=0.004) than their open angle counterparts, but the 2 groups did not differ in terms of sex, refraction, intraocular pressure, or cup-to-disk ratio (all, P > or = 0.071). In a multivariate model including age, sex, and refraction as predictors of angle grade (open or narrow), only age was a significant predictor of angle grade (P=0.004). A large proportion of Chinese-Americans in our study population with glaucoma or glaucoma suspicion had gonioscopically narrow angles. In multivariate analysis, patients with narrow angles were older than those with open angles but did not differ from them in terms of sex or refraction. Continued evaluation of angle closure glaucoma risk among Chinese-Americans is needed.

  20. High Prevalence of Narrow Angles among Chinese-American Glaucoma and Glaucoma Suspect Patients

    PubMed Central

    Seider, Michael I; Pekmezci, Melike; Han, Ying; Sandhu, Simi; Kwok, Shiu Y; Lee, Roland Y; Lin, Shan C

    2009-01-01

    Purpose To evaluate the prevalence of gonioscopically narrow angles in a Chinese-American population with glaucoma or glaucoma suspicion. Patients and Methods Charts from all Chinese-American patients seen in a comprehensive ophthalmology clinic in the Chinatown district of San Francisco in 2002 were reviewed. One eye from each patient with glaucoma or glaucoma suspicion that met inclusion criteria was included (n=108). Data was collected for gender, age, race (self-declared), refraction (spherical equivalent), intraocular pressure (IOP), gonioscopy and vertical cup-to-disk ratio (CDR). Results Sixty percent (n=65) of Chinese-American eyes with glaucoma or glaucoma suspicion had gonioscopically narrow angles (Shaffer grade ≤2 in three or more quadrants). Those with narrow angles were significantly older (P=0.004) than their open angle counterparts, but the two groups did not differ in terms of gender, refraction, IOP or CDR (all, P≥0.071). In a multivariate model including age, gender and refraction as predictors of angle grade (open or narrow), only age was a significant predictor of angle grade (P=0.004). Conclusions A large proportion of Chinese-Americans in our study population with glaucoma or glaucoma suspicion had gonioscopically narrow angles. In multivariate analysis, patients with narrow angles were older than those with open angles but did not differ from them in terms of gender or refraction. Continued evaluation of angle closure glaucoma risk among Chinese-Americans is needed. PMID:19826385

  1. Determinants of lens vault and association with narrow angles in patients from Singapore.

    PubMed

    Tan, Gavin S; He, Mingguang; Zhao, Wanting; Sakata, Lisandro M; Li, Jialiang; Nongpiur, Monisha E; Lavanya, Raghavan; Friedman, David S; Aung, Tin

    2012-07-01

    To describe the distribution and determinants of lens vault and to investigate the association of lens vault with narrow angles. Prospective cross-sectional study. Phakic subjects 50 years and older were evaluated at a primary healthcare clinic with gonioscopy, partial laser interferometry, and anterior segment optical coherence tomography (AS-OCT). Narrow angles were defined as posterior trabecular meshwork not visible for ≥2 quadrants on non-indentation gonioscopy. Lens vault was defined as the perpendicular distance between the anterior pole of the crystalline lens and the horizontal line joining the 2 scleral spurs on horizontal AS-OCT scans. Analysis of covariance, multivariate logistic regression, and area under the receiver operating characteristic curves (AUC) were performed. Of the 2047 subjects recruited, 582 were excluded because of poor image quality or inability to locate scleral spurs, leaving 1465 subjects for analysis. Eyes with narrow angles had greater lens vault compared to eyes with open angles (775.6 µm vs 386.5 µm, P < .0001). Women had significantly greater lens vault than men (497.28 µm vs 438.56 µm, P < .001), and lens vault increased significantly with age (P for trend <.001). Adjusted for age and sex, significant associations with greater lens vault were shorter axial length, shallower anterior chamber depth (ACD), higher intraocular pressure, and more hyperopic spherical equivalent (all P < .001). On multivariate analysis, subjects with lens vault >667.6 µm were more likely to have narrow angles (OR 2.201, 95% CI: 1.070-4.526) compared to those with lens vault ≤462.7 µm. The AUC for lens vault (0.816) and ACD (0.822) for detecting narrow angles were similar (P = .582). Lens vault was independently associated with narrow angles and may be useful in screening to detect eyes with narrow angles. Copyright © 2012 Elsevier Inc. All rights reserved.

  2. Multi-Angle Snowflake Camera Value-Added Product

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shkurko, Konstantin; Garrett, T.; Gaustad, K.

    The Multi-Angle Snowflake Camera (MASC) addresses a need for high-resolution multi-angle imaging of hydrometeors in freefall with simultaneous measurement of fallspeed. As illustrated in Figure 1, the MASC consists of three cameras, separated by 36°, each pointing at an identical focal point approximately 10 cm away. Located immediately above each camera, a light aims directly at the center of depth of field for its corresponding camera. The focal point at which the cameras are aimed lies within a ring through which hydrometeors fall. The ring houses a system of near-infrared emitter-detector pairs, arranged in two arrays separated vertically by 32 mm. When hydrometeors pass through the lower array, they simultaneously trigger all cameras and lights. Fallspeed is calculated from the time it takes to traverse the distance between the upper and lower triggering arrays. The trigger electronics filter out ambient light fluctuations associated with varying sunlight and shadows. The microprocessor onboard the MASC controls the camera system and communicates with the personal computer (PC). The image data are sent via a FireWire 800 line, and fallspeed (and camera control) is sent via a Universal Serial Bus (USB) line that relies on RS232-over-USB serial conversion. See Table 1 for specific details on the MASC located at the Oliktok Point Mobile Facility on the North Slope of Alaska. The value-added product (VAP) detailed in this documentation analyzes the raw data (Section 2.0) using Python: image processing relies on the OpenCV library, and derived aggregated statistics rely on averaging. See Sections 4.1 and 4.2 for more details on what variables are computed.
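
    The fallspeed derivation described above reduces to distance over trigger-interval time. A minimal sketch, assuming a clean pair of triggers across the 32 mm array separation given in the text (the function and example timing are illustrative):

```python
def fall_speed(separation_mm, dt_s):
    """Fall speed (m/s) from the elapsed time between the upper- and
    lower-array triggers, assuming vertical fall across a known
    array separation."""
    return (separation_mm / 1000.0) / dt_s

# A hydrometeor crossing the 32 mm gap in 16 ms falls at 2.0 m/s.
v = fall_speed(32.0, 0.016)
```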

  3. Assessment of narrow angles by gonioscopy, Van Herick method and anterior segment optical coherence tomography.

    PubMed

    Park, Seong Bae; Sung, Kyung Rim; Kang, Sung Yung; Jo, Jung Woo; Lee, Kyoung Sub; Kook, Michael S

    2011-07-01

    To evaluate anterior chamber (AC) angles using gonioscopy, the Van Herick technique and anterior segment optical coherence tomography (AS-OCT). One hundred forty-eight consecutive subjects were enrolled. The agreement between any two of the three diagnostic methods (gonioscopy, AS-OCT and Van Herick) was calculated in narrow-angle patients. The area under the receiver-operating characteristic curve (AUC) for discriminating between narrow and open angles determined by gonioscopy was calculated in all participants for the AS-OCT parameters angle opening distance (AOD), angle recess area, trabecular iris surface area and anterior chamber depth (ACD). As a subgroup analysis, the capability of AS-OCT parameters for detecting angle closure defined by AS-OCT was assessed in narrow-angle patients. The agreement between the Van Herick method and gonioscopy in detecting angle closure was excellent in narrow angles (κ = 0.80, temporal; κ = 0.82, nasal). However, agreement between gonioscopy and AS-OCT and between the Van Herick method and AS-OCT was poor (κ = 0.11-0.16). The discrimination capability of AS-OCT parameters between open and narrow angles determined by gonioscopy was excellent for all AS-OCT parameters (AUC, temporal: AOD500 = 0.96; nasal: AOD500 = 0.99). The AUCs for detecting angle closure defined by AS-OCT images in narrow-angle subjects were good for all AS-OCT parameters (AUC, 0.80-0.94) except for ACD (temporal: ACD = 0.70; nasal: ACD = 0.63). Assessment of narrow angles by gonioscopy and the Van Herick technique showed good agreement, but both measurements revealed poor agreement with AS-OCT. The angle closure detection capability of AS-OCT parameters was excellent; however, it was slightly lower for ACD.
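    The agreement figures quoted above are Cohen's κ, which corrects observed agreement for the agreement expected by chance. A minimal sketch of its computation from a 2×2 table; the counts are hypothetical, not the study's data:

```python
# Cohen's kappa for agreement between two binary graders
# (e.g. closed/open angle by gonioscopy vs the Van Herick method).
# The 2x2 counts in the example are made up for illustration.

def cohens_kappa(a, b, c, d):
    """a = both positive, b = only grader 1, c = only grader 2, d = both negative."""
    n = a + b + c + d
    p_observed = (a + d) / n
    # Chance agreement under independence of the two graders.
    p_expected = ((a + b) * (a + c) + (c + d) * (b + d)) / n**2
    return (p_observed - p_expected) / (1 - p_expected)

print(round(cohens_kappa(40, 5, 5, 50), 3))  # 0.798, "excellent" agreement
```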

  4. Photogrammetric measurement of 3D freeform millimetre-sized objects with micro features: an experimental validation of the close-range camera calibration model for narrow angles of view

    NASA Astrophysics Data System (ADS)

    Percoco, Gianluca; Sánchez Salmerón, Antonio J.

    2015-09-01

    The measurement of millimetre- and micro-scale features is performed by high-cost systems based on technologies with narrow working ranges to accurately control the position of the sensors. Photogrammetry would lower the costs of 3D inspection of micro-features and would also be applicable to the inspection of non-removable micro parts of large objects. Unfortunately, the behaviour of photogrammetry when applied to micro-features is not well characterized. In this paper, the authors address these issues towards the application of digital close-range photogrammetry (DCRP) to the micro-scale, taking into account that research papers in the literature state that an angle of view (AOV) around 10° is the lower limit for the application of the traditional pinhole close-range calibration model (CRCM), which is the basis of DCRP. First, a general calibration procedure is introduced, with the aid of an open-source software library, to calibrate narrow-AOV cameras with the CRCM. Subsequently the procedure is validated using a reflex camera with a 60 mm macro lens, equipped with extension tubes (20 and 32 mm) achieving magnification of up to approximately 2×, to verify the literature findings with experimental photogrammetric 3D measurements of millimetre-sized objects with micro-features. The limitation experienced with laser printing technology, used to produce the two-dimensional pattern on common paper, has been overcome using an accurate pattern manufactured with a photolithographic process. The results of the experimental activity prove that the CRCM is valid for AOVs down to 3.4° and that DCRP results are comparable with those of existing, more expensive commercial techniques.

  5. Narrow-angle Astrometry with SUSI

    NASA Astrophysics Data System (ADS)

    Kok, Y.; Ireland, M. J.; Robertson, J. G.; Tuthill, P. G.; Warrington, B. A.; Tango, W. J.

    2014-09-01

    SUSI (Sydney University Stellar Interferometer) is currently being fitted with a 2nd beam combiner, MUSCA (Micro-arcsecond University of Sydney Companion Astrometry), for the purpose of narrow-angle astrometry. With an aim to achieve ~10 micro-arcseconds of angular resolution at its best, MUSCA allows SUSI to search for planets around bright binary stars, which are its primary targets. While the first beam combiner, PAVO (Precision Astronomical Visible Observations), is used to track stellar fringes during an observation, MUSCA will be used to measure separations of binary stars. MUSCA is a Michelson interferometer and its setup at SUSI will be described in this poster.

  6. Relationship between relative lens position and appositional closure in eyes with narrow angles.

    PubMed

    Otori, Yasumasa; Tomita, Yuki; Hamamoto, Ayumi; Fukui, Kanae; Usui, Shinichi; Tatebayashi, Misako

    2011-03-01

    To investigate the relationship between relative lens position (RLP) and appositional closure in eyes with narrow angles. Ultrasound biomicroscopy (UBM) was used to measure anterior chamber depth (ACD) and lens thickness (LT), and the IOLMaster to measure axial length (AL). The number of quadrants with appositional closure was assessed by UBM under dark conditions. The RLP was calculated as RLP = 10 × (ACD + 0.5 × LT)/AL. This study comprised 30 consecutive patients (30 eyes) with narrow-angle eyes, defined as Shaffer grade 2 or lower and without peripheral anterior synechiae (24 women, 6 men; mean age ± SD, 67.3 ± 10.4 years; range, 42-87 years). Under dark conditions, 66.7% of the eyes with narrow angles showed appositional closure in at least one quadrant. Of the various ocular biometric parameters, only the RLP significantly decreased with appositional closure in at least one quadrant (P = 0.005). A decrease in the RLP can be predictive of appositional closure in narrow-angle eyes under dark conditions.
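    The RLP formula above, RLP = 10 × (ACD + 0.5 LT)/AL, expresses the depth of the lens centre as a fraction of axial length, scaled by 10. A minimal sketch with illustrative biometry values (not patient data):

```python
# Relative lens position: RLP = 10 * (ACD + 0.5 * LT) / AL, where ACD is
# anterior chamber depth, LT is lens thickness, and AL is axial length,
# all in mm. The example values are illustrative, not from the study.

def relative_lens_position(acd_mm, lt_mm, al_mm):
    return 10 * (acd_mm + 0.5 * lt_mm) / al_mm

# E.g. ACD 2.5 mm, lens thickness 5.0 mm, axial length 23.0 mm:
print(round(relative_lens_position(2.5, 5.0, 23.0), 3))  # 2.174
```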

  7. Memoris, A Wide Angle Camera For Bepicolombo

    NASA Astrophysics Data System (ADS)

    Cremonese, G.; Memoris Team

    In order to respond to the Announcement of Opportunity of ESA for the BepiColombo payload, we are working on a wide angle camera concept named MEMORIS (MErcury MOderate Resolution Imaging System). MEMORIS will perform stereoscopic imaging of the whole Mercury surface using two different channels at +/- 20 degrees from the nadir point. It will achieve a spatial resolution of 50 m per pixel at 400 km from the surface (peri-Herm), corresponding to a vertical resolution of about 75 m with the stereo performance. The scientific objectives to be addressed by MEMORIS may be identified as follows: estimation of surface age based on crater counting; crater morphology and degradation; stratigraphic sequence of geological units; identification of volcanic features and related deposits; origin of plain units from morphological observations; distribution and type of tectonic structures; determination of relative ages among the structures based on cross-cutting relationships; 3D tectonics; global mineralogical mapping of the main geological units; and identification of weathering products. The last two items will come from the multispectral capabilities of the camera, utilizing 8 to 12 (TBD) broad-band filters. MEMORIS will be equipped with a further channel devoted to observations of the tenuous exosphere. It will look at the limb on a given arc of the BepiColombo orbit; in so doing it will observe the exosphere above a surface latitude range of 25-75 degrees in the northern hemisphere. The exosphere images will be obtained above the surface just observed by the other two channels, in an attempt to find possible relationships, as ground-based observations suggest. The exospheric channel will have four narrow-band filters centered on the sodium and potassium emissions and the adjacent continua.

  8. Predictors of Intraocular Pressure After Phacoemulsification in Primary Open-Angle Glaucoma Eyes with Wide Versus Narrower Angles (An American Ophthalmological Society Thesis)

    PubMed Central

    Lin, Shan C.; Masis, Marisse; Porco, Travis C.; Pasquale, Louis R.

    2017-01-01

    Purpose To assess if narrower-angle status and anterior segment optical coherence tomography (AS-OCT) parameters can predict intraocular pressure (IOP) drop in primary open-angle glaucoma (POAG) patients after cataract surgery. Methods This was a prospective case series of consecutive cataract surgery patients with POAG and no peripheral anterior synechiae (PAS) using a standardized postoperative management protocol. Preoperatively, patients underwent gonioscopy and AS-OCT. The same glaucoma medication regimen was resumed by 1 month. Potential predictors of IOP reduction included narrower-angle status by gonioscopy and angle-opening distance (AOD500) as well as other AS-OCT parameters. Mixed-effects regression adjusted for use of both eyes and other potential confounders. Results We enrolled 66 eyes of 40 glaucoma patients. The IOP reduction at 1 year was 4.2±3 mm Hg (26%, P<.001) in the narrower-angle group vs 2.2±3 mm Hg (14%, P<.001) in the wide-angle group (P=.027 for difference), as classified by gonioscopy. By AOD500 classification, the narrower-angle group had 3.4±3 mm Hg (21%, P<.001) reduction vs 2.5±3 mm Hg (16%, P<.001) in the wide-angle group (P=.031 for difference). When the entire cohort was assessed, iris thickness, iris area, and lens vault were correlated with increasing IOP reduction at 1 year (P<.05 for all). Conclusions In POAG eyes, cataract surgery lowered IOP to a greater degree in the narrower-angle group than in the wide-angle group, and parameters relating to iris thickness and area, as well as lens vault, were correlated with IOP reduction. These findings can guide ophthalmologists in their selection of cataract surgery as a potential management option. PMID:29147104

  9. Predictors of Intraocular Pressure After Phacoemulsification in Primary Open-Angle Glaucoma Eyes with Wide Versus Narrower Angles (An American Ophthalmological Society Thesis).

    PubMed

    Lin, Shan C; Masis, Marisse; Porco, Travis C; Pasquale, Louis R

    2017-08-01

    To assess if narrower-angle status and anterior segment optical coherence tomography (AS-OCT) parameters can predict intraocular pressure (IOP) drop in primary open-angle glaucoma (POAG) patients after cataract surgery. This was a prospective case series of consecutive cataract surgery patients with POAG and no peripheral anterior synechiae (PAS) using a standardized postoperative management protocol. Preoperatively, patients underwent gonioscopy and AS-OCT. The same glaucoma medication regimen was resumed by 1 month. Potential predictors of IOP reduction included narrower-angle status by gonioscopy and angle-opening distance (AOD500) as well as other AS-OCT parameters. Mixed-effects regression adjusted for use of both eyes and other potential confounders. We enrolled 66 eyes of 40 glaucoma patients. The IOP reduction at 1 year was 4.2±3 mm Hg (26%, P<.001) in the narrower-angle group vs 2.2±3 mm Hg (14%, P<.001) in the wide-angle group (P=.027 for difference), as classified by gonioscopy. By AOD500 classification, the narrower-angle group had 3.4±3 mm Hg (21%, P<.001) reduction vs 2.5±3 mm Hg (16%, P<.001) in the wide-angle group (P=.031 for difference). When the entire cohort was assessed, iris thickness, iris area, and lens vault were correlated with increasing IOP reduction at 1 year (P<.05 for all). In POAG eyes, cataract surgery lowered IOP to a greater degree in the narrower-angle group than in the wide-angle group, and parameters relating to iris thickness and area, as well as lens vault, were correlated with IOP reduction. These findings can guide ophthalmologists in their selection of cataract surgery as a potential management option.

  10. 13. 22'X34' original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    13. 22'X34' original vellum, Variable-Angle Launcher, 'SIDEVIEW CAMERA CAR TRACK DETAILS' drawn at 1/4'=1'-0' (BUORD Sketch # 208078, PAPW 908). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  11. 10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BUORD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  12. Preliminary calibration results of the wide angle camera of the imaging instrument OSIRIS for the Rosetta mission

    NASA Astrophysics Data System (ADS)

    Da Deppo, V.; Naletto, G.; Nicolosi, P.; Zambolin, P.; De Cecco, M.; Debei, S.; Parzianello, G.; Ramous, P.; Zaccariotto, M.; Fornasier, S.; Verani, S.; Thomas, N.; Barthol, P.; Hviid, S. F.; Sebastian, I.; Meller, R.; Sierks, H.; Keller, H. U.; Barbieri, C.; Angrilli, F.; Lamy, P.; Rodrigo, R.; Rickman, H.; Wenzel, K. P.

    2017-11-01

    Rosetta is one of the cornerstone missions of the European Space Agency, due to rendezvous with comet 67P/Churyumov-Gerasimenko in 2014. The imaging instrument on board the satellite is OSIRIS (Optical, Spectroscopic and Infrared Remote Imaging System), a cooperation among several European institutes, which consists of two cameras: a Narrow Angle Camera (NAC) and a Wide Angle Camera (WAC). The WAC optical design is an innovative one: it adopts an all-reflecting, unvignetted and unobstructed two-mirror configuration which covers a 12° × 12° field of view with an F/5.6 aperture and gives a nominal contrast ratio of about 10^-4. The flight model of this camera has been successfully integrated and tested in our laboratories, and has finally been integrated on the satellite, which is now waiting to be launched in February 2004. In this paper we describe the optical characteristics of the camera and summarize the results so far obtained with the preliminary calibration data. The analysis of the optical performance of this model shows a good agreement between theoretical performance and experimental results.

  13. Reduced intraocular pressure after cataract surgery in patients with narrow angles and chronic angle-closure glaucoma.

    PubMed

    Brown, Reay H; Zhong, Le; Whitman, Allison L; Lynch, Mary G; Kilgo, Patrick D; Hovis, Kristen L

    2014-10-01

    To evaluate the effect of cataract surgery on intraocular pressure (IOP) in patients with narrow angles and chronic angle-closure glaucoma (ACG) and to determine whether the change in IOP was correlated with the preoperative pressure, axial length (AL), and anterior chamber depth (ACD). Private practice, Atlanta, Georgia, USA. Retrospective case series. Charts of patients with narrow angles or chronic ACG who had cataract surgery were reviewed. All eyes had previous laser iridotomies. Data recorded included preoperative and postoperative IOP, AL, and ACD. The preoperative IOP was used to stratify eyes into 4 groups. The charts of 56 patients (83 eyes) were reviewed. The mean IOP reduction in all eyes was 3.28 mm Hg (18%), with 88% having a decrease in IOP. There was a significant correlation between preoperative IOP and the magnitude of IOP reduction (r = 0.68, P < .001). The mean decrease in IOP was 5.3 mm Hg in eyes with a preoperative IOP above 20 mm Hg, 4.6 mm Hg in the over 18 to 20 mm Hg group, 2.5 mm Hg in the over 15 to 18 mm Hg group, and 1.4 mm Hg in the 15 mm Hg or less group. The mean follow-up was 3.0 ± 2.3 (SD) years. Cataract surgery reduced IOP in patients with narrow angles and chronic ACG. The magnitude of reduction was highly correlated with preoperative IOP and weakly correlated with ACD. No author has a financial or proprietary interest in any material or method mentioned. Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  14. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

    The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…

  15. A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.

    PubMed

    Qian, Shuo; Sheng, Yang

    2011-11-01

    Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
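    The 51.4° mirror angle above is no accident: two planar mirrors at angle θ yield 360/θ distinct views of an object between them (the direct view plus 360/θ − 1 reflections), and 360/7 ≈ 51.4°. A quick, purely illustrative check:

```python
# Views produced by two planar mirrors at angle theta (degrees): the
# direct view plus (360/theta - 1) mirror images, i.e. 360/theta total
# (exact when 360/theta is an integer, as in the 51.4° setup).

def total_views(theta_deg):
    return round(360 / theta_deg)

print(total_views(51.4))  # 7 simultaneous views, matching the paper
```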

  16. Fisheye Multi-Camera System Calibration for Surveying Narrow and Complex Architectures

    NASA Astrophysics Data System (ADS)

    Perfetti, L.; Polari, C.; Fassi, F.

    2018-05-01

    Narrow spaces and passages are not a rare encounter in cultural heritage, and the shape and extension of those areas pose a serious challenge to any technique one may choose to survey their 3D geometry, especially techniques that make use of stationary instrumentation like terrestrial laser scanning. The ratio between space extension and cross-section width of many corridors and staircases can easily lead to distortion/drift of the 3D reconstruction because of the propagation of uncertainty. This paper investigates the use of fisheye photogrammetry to produce the 3D reconstruction of such spaces and presents some tests to constrain the degrees of freedom of the photogrammetric network, thereby containing the drift of long data sets as well. The idea is to employ a multi-camera system composed of several fisheye cameras and to implement distance and relative orientation constraints, as well as pre-calibration of the internal parameters for each camera, within the bundle adjustment. For the beginning of this investigation, we used the NCTech iSTAR panoramic camera as a rigid multi-camera system. The case study of the Amedeo Spire of the Milan Cathedral, which encloses a spiral staircase, is the stage for all the tests. Comparisons have been made between the results obtained with the multi-camera configuration, the auto-stitched equirectangular images, and a data set obtained with a monocular fisheye configuration using a full-frame DSLR. Results show improved accuracy, down to millimetres, using a rigidly constrained multi-camera system.

  17. Association of narrow angles with anterior chamber area and volume measured with anterior-segment optical coherence tomography.

    PubMed

    Wu, Ren-Yi; Nongpiur, Monisha E; He, Ming-Guang; Sakata, Lisandro M; Friedman, David S; Chan, Yiong-Huak; Lavanya, Raghavan; Wong, Tien-Yin; Aung, Tin

    2011-05-01

    To describe the measurement of anterior chamber area and anterior chamber volume by anterior-segment optical coherence tomography and to investigate the association of these parameters with the presence of narrow angles. This was a cross-sectional study of subjects aged at least 50 years without ophthalmic symptoms recruited from a community clinic. All participants underwent standardized ocular examination and anterior-segment optical coherence tomography. Customized software was used to measure anterior chamber area (cross-sectional area bounded by the corneal endothelium, anterior surface of iris, and lens within the pupil) and anterior chamber volume (calculated by rotating the anterior chamber area 360° around a vertical axis through the midpoint of the anterior chamber area). An eye was considered to have narrow angles if the posterior pigmented trabecular meshwork was not visible for at least 180° on gonioscopy with the eye in the primary position. A total of 1922 subjects were included in the final analyses, 317 (16.5%) of whom had narrow angles. Mean anterior chamber area (15.6 vs 21.1 mm(2); P < .001) and anterior chamber volume (97.6 vs 142.1 mm(3); P < .001) were smaller in eyes with narrow angles compared with those in eyes without narrow angles. After adjusting for age, sex, anterior chamber depth, axial length, and pupil size, smaller anterior chamber area (odds ratio, 53.2; 95% confidence interval, 27.1-104.5) and anterior chamber volume (odds ratio, 40.2; 95% confidence interval, 21.5-75.2) were significantly associated with the presence of narrow angles. Smaller anterior chamber area and anterior chamber volume were independently associated with narrow angles in Singaporeans, even after controlling for other known ocular risk factors.

  18. Spectral data of specular reflectance, narrow-angle transmittance and angle-resolved surface scattering of materials for solar concentrators.

    PubMed

    Good, Philipp; Cooper, Thomas; Querci, Marco; Wiik, Nicolay; Ambrosetti, Gianluca; Steinfeld, Aldo

    2016-03-01

    The spectral specular reflectance of conventional and novel reflective materials for solar concentrators is measured with an acceptance angle of 17.5 mrad over the wavelength range 300-2500 nm at incidence angles 15-60° using a spectroscopic goniometry system. The same experimental setup is used to determine the spectral narrow-angle transmittance of semi-transparent materials for solar collector covers at incidence angles 0-60°. In addition, the angle-resolved surface scattering of reflective materials is recorded by an area-scan CCD detector over the spectral range 350-1050 nm. A comprehensive summary, discussion, and interpretation of the results are included in the associated research article "Spectral reflectance, transmittance, and angular scattering of materials for solar concentrators" in Solar Energy Materials and Solar Cells.

  19. Peripapillary Schisis in Glaucoma Patients With Narrow Angles and Increased Intraocular Pressure

    PubMed Central

    Kahook, Malik Y.; Noecker, Robert J.; Ishikawa, Hiroshi; Wollstein, Gadi; Kagemann, Larry; Wojtkowski, Maciej; Duker, Jay S.; Srinivasan, Vivek J.; Fujimoto, James G.; Schuman, Joel S.

    2007-01-01

    PURPOSE To describe two cases of peripapillary retinal schisis in patients with glaucoma without evidence of optic nerve pits, pseudopits, or X-linked retinoschisis. DESIGN Two observational case reports and literature review. METHODS Imaging of the peripapillary nerve fiber layer and schisis cavities was completed in two patients, and one patient was followed over time. RESULTS The first patient, diagnosed with narrow angle glaucoma, was noted to have peripapillary schisis in the right eye with matching changes on visual field and optical coherence tomographic (OCT) results. Follow-up examination revealed that the schisis disappeared in the right eye while appearing in the left. The findings were verified with high-speed ultra-high-resolution OCT performed in both eyes. The second case involved a patient with anatomically narrow angles, high intraocular pressure (IOP), and peripapillary schisis extending into the macula. CONCLUSIONS Peripapillary retinoschisis may represent a unique sequela of IOP fluctuations in patients with uncontrolled glaucoma. Further studies are needed to better understand this disease process. PMID:17386284

  20. The first demonstration of the concept of "narrow-FOV Si/CdTe semiconductor Compton camera"

    NASA Astrophysics Data System (ADS)

    Ichinohe, Yuto; Uchida, Yuusuke; Watanabe, Shin; Edahiro, Ikumi; Hayashi, Katsuhiro; Kawano, Takafumi; Ohno, Masanori; Ohta, Masayuki; Takeda, Shin'ichiro; Fukazawa, Yasushi; Katsuragawa, Miho; Nakazawa, Kazuhiro; Odaka, Hirokazu; Tajima, Hiroyasu; Takahashi, Hiromitsu; Takahashi, Tadayuki; Yuasa, Takayuki

    2016-01-01

    The Soft Gamma-ray Detector (SGD), to be deployed on board the ASTRO-H satellite, has been developed to provide the highest-sensitivity observations of celestial sources in the energy band of 60-600 keV by employing a detector concept which uses a Compton camera whose field-of-view is restricted by a BGO shield to a few degrees (narrow-FOV Compton camera). In this concept, the background from outside the FOV can be heavily suppressed by constraining the incident direction of the gamma ray reconstructed by the Compton camera to be consistent with the narrow FOV. We, for the first time, demonstrate the validity of the concept using background data taken during the thermal vacuum test and the low-temperature environment test of the flight model of SGD on the ground. We show that the measured background level is suppressed to less than 10% by combining the event rejection using the anti-coincidence trigger of the active BGO shield and Compton event reconstruction techniques. More than 75% of the signals from the field-of-view are retained against the background rejection, which clearly demonstrates the improvement of the signal-to-noise ratio. The estimated effective area of 22.8 cm2 meets the mission requirement even though not all of the operational parameters of the instrument have been fully optimized yet.

  1. Automatic helmet-wearing detection for law enforcement using CCTV cameras

    NASA Astrophysics Data System (ADS)

    Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.

    2018-04-01

    The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to help law enforcement by police, eventually resulting in changed risk behaviours and consequently a reduced number and severity of accidents. Conceptually, the application software, implemented using the C++ language and the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If any motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture an image of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed by a database implemented using MySQL for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists on various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.
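    The two-camera control flow described above (wide-angle detection triggering narrow-angle capture) can be sketched independently of any vision library. The `detect_violation` and `zoom_camera` callables below are stand-in stubs, not the authors' implementation or any real API:

```python
# Control-flow sketch of the two-camera enforcement pipeline: each frame
# from the wide-angle feed is checked for a helmet violation, and every
# violation activates the zoomed camera to produce a plate-capture record
# (which the real system would then store in a database for ticketing).
# detect_violation and zoom_camera are hypothetical placeholders.

def process_feed(frames, detect_violation, zoom_camera):
    """Return one capture record per violating frame."""
    records = []
    for i, frame in enumerate(frames):
        if detect_violation(frame):      # wide-angle CCTV analysis
            plate_img = zoom_camera(i)   # activate narrow-angle CCTV
            records.append({"frame": i, "plate_image": plate_img})
    return records

# Tiny stub demo: frames are labels; violations are "no_helmet" frames.
frames = ["helmet", "no_helmet", "empty", "no_helmet"]
records = process_feed(frames, lambda f: f == "no_helmet",
                       lambda i: f"zoom_{i}.jpg")
print([r["frame"] for r in records])  # [1, 3]
```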

  2. Winter precipitation particle size distribution measurement by Multi-Angle Snowflake Camera

    NASA Astrophysics Data System (ADS)

    Huang, Gwo-Jong; Kleinkort, Cameron; Bringi, V. N.; Notaroš, Branislav M.

    2017-12-01

    From the radar meteorology viewpoint, the most important properties for quantitative precipitation estimation of winter events are the 3D shape, size, and mass of precipitation particles, as well as the particle size distribution (PSD). In order to measure these properties precisely, optical instruments may be the best choice. The Multi-Angle Snowflake Camera (MASC) is a relatively new instrument equipped with three high-resolution cameras that capture winter precipitation particle images from three non-parallel angles, in addition to measuring the particle fall speed using two pairs of infrared motion sensors. However, the results from the MASC so far are usually presented monthly or seasonally, with particle sizes given as histograms; no previous studies have used the MASC for a single-storm study, and none have used it to measure the PSD. We propose a methodology for obtaining the winter precipitation PSD measured by the MASC, and present and discuss the development, implementation, and application of the new technique for PSD computation based on MASC images. Overall, this is the first study of a MASC-based PSD. We present PSD MASC experiments and results for segments of two snow events to demonstrate the performance of our PSD algorithm. The results show that the self-consistency of the MASC-measured single-camera PSDs is good. To cross-validate the PSD measurements, we compare the MASC mean PSD (averaged over three cameras) with a collocated 2D Video Disdrometer, and observe good agreement between the two sets of results.
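    A particle size distribution of the kind discussed above is conventionally a histogram of particle sizes normalized by sample volume and bin width, N(D_i) = count_i / (V · ΔD_i). A minimal sketch under that convention (not the authors' MASC algorithm; sizes and sample volume are made up):

```python
# N(D_i) = count_i / (V * dD_i): number concentration per unit volume per
# unit size for each diameter bin (m^-3 mm^-1). The particle diameters
# and sample volume below are illustrative, not MASC data.

def size_distribution(diameters_mm, bin_edges_mm, sample_volume_m3):
    psd = []
    for lo, hi in zip(bin_edges_mm, bin_edges_mm[1:]):
        count = sum(lo <= d < hi for d in diameters_mm)
        psd.append(count / (sample_volume_m3 * (hi - lo)))
    return psd

diameters = [0.4, 0.9, 1.1, 1.6, 1.7, 2.3]
print(size_distribution(diameters, [0, 1, 2, 3], 2.0))  # [1.0, 1.5, 0.5]
```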

  3. A New Approach to Micro-arcsecond Astrometry with SIM Allowing Early Mission Narrow Angle Measurements of Compelling Astronomical Targets

    NASA Technical Reports Server (NTRS)

    Shaklan, Stuart; Pan, Xiaopei

    2004-01-01

    The Space Interferometry Mission (SIM) is capable of detecting and measuring the mass of terrestrial planets around stars other than our own. It can measure the mass of black holes and the visual orbits of radio and x-ray binary sources. SIM makes possible a new level of understanding of complex astrophysical processes. SIM achieves its high precision in the so-called narrow-angle regime. This is defined by a 1 degree diameter field in which the position of a target star is measured with respect to a set of reference stars. The observation is performed in two parts: first, SIM observes a grid of stars that spans the full sky. After a few years, repeated observations of the grid allow one to determine the orientation of the interferometer baseline. Second, throughout the mission, SIM periodically observes in the narrow-angle mode. Every narrow-angle observation is linked to the grid to determine the precise attitude and length of the baseline. The narrow angle process demands patience. It is not until five years after launch that SIM achieves its ultimate accuracy of 1 microarcsecond. The accuracy is degraded by a factor of approx. 2 at mid-mission. Our work proposes a technique for narrow angle astrometry that does not rely on the measurement of grid stars. This technique, called Gridless Narrow Angle Astrometry (GNAA) can obtain microarcsecond accuracy and can detect extra-solar planets and other exciting objects with a few days of observation. It can be applied as early as during the first six months of in-orbit calibration (IOC). The motivations for doing this are strong. First, and obviously, it is an insurance policy against a catastrophic mid-mission failure. Second, at the start of the mission, with several space-based interferometers in the planning or implementation phase, NASA will be eager to capture the public's imagination with interferometric science. Third, early results and a technique that can duplicate those results throughout the mission will

  4. Comparison of Scheimpflug imaging and spectral domain anterior segment optical coherence tomography for detection of narrow anterior chamber angles.

    PubMed

    Grewal, D S; Brar, G S; Jain, R; Grewal, S P S

    2011-05-01

    To compare the performance of anterior chamber volume (ACV) and anterior chamber depth (ACD) obtained using Scheimpflug imaging with angle opening distance (AOD500) and trabecular-iris space area (TISA500) obtained using spectral domain anterior segment optical coherence tomography (SD-ASOCT) in detecting narrow angles classified using gonioscopy. In this prospective, cross-sectional observational study, 265 eyes of 265 consecutive patients underwent sequential Scheimpflug imaging, SD-ASOCT imaging, and gonioscopy. Correlations between gonioscopy grading, ACV, ACD, AOD500, and TISA500 were evaluated. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and likelihood ratios (LRs) were calculated to assess the performance of ACV, ACD, AOD500, and TISA500 in detecting narrow angles (defined as Shaffer grade ≤1 in all quadrants). SD-ASOCT images were obtained at the nasal and temporal quadrants only. Twenty-eight eyes (10.6%) were classified as narrow angles on gonioscopy. ACV correlated with gonioscopy grading (P<0.001) for the temporal (r=0.204), superior (r=0.251), nasal (r=0.213), and inferior (r=0.236) quadrants. ACV correlated with TISA500 for the nasal (r=0.135, P=0.029) and temporal (r=0.160, P=0.009) quadrants and also with AOD500 for the nasal (r=0.498, P<0.001) and temporal (r=0.517, P<0.001) quadrants. For detection of narrow angles, ACV (AUC=0.935; 95% confidence interval (CI)=0.898-0.961) performed similarly to ACD (AUC=0.88, P=0.06) and significantly better than AOD500 nasal (AUC=0.761, P=0.001), AOD500 temporal (AUC=0.808, P<0.001), TISA500 nasal (AUC=0.756, P<0.001), and TISA500 temporal (AUC=0.738, P<0.001). Using a cutoff of 113 mm³, ACV had 90% sensitivity and 88% specificity for detecting narrow angles. Positive and negative LRs for ACV were 8.63 (95% CI=7.4-10.0) and 0.11 (95% CI=0.03-0.4), respectively. ACV measurements using Scheimpflug imaging outperformed AOD500 and TISA500 using SD-ASOCT for detecting narrow angles.
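    The reported likelihood ratios follow directly from sensitivity and specificity. A minimal sketch (note that the abstract's 8.63/0.11 figures come from the raw 2×2 counts, so the rounded 90%/88% values reproduce them only approximately):

```python
def likelihood_ratios(sensitivity, specificity):
    """Positive and negative likelihood ratios for a binary diagnostic test.

    LR+ = sensitivity / (1 - specificity)
    LR- = (1 - sensitivity) / specificity
    """
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Rounded values from the abstract: 90% sensitivity, 88% specificity
# at the ACV cutoff of 113 mm^3.
lr_pos, lr_neg = likelihood_ratios(0.90, 0.88)
print(round(lr_pos, 2))  # close to, but not exactly, the reported 8.63
print(round(lr_neg, 2))  # matches the reported 0.11
```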

  5. Comparison of Scheimpflug imaging and spectral domain anterior segment optical coherence tomography for detection of narrow anterior chamber angles

    PubMed Central

    Grewal, D S; Brar, G S; Jain, R; Grewal, S P S

    2011-01-01

    Purpose: To compare the performance of anterior chamber volume (ACV) and anterior chamber depth (ACD) obtained using Scheimpflug imaging with angle opening distance (AOD500) and trabecular-iris space area (TISA500) obtained using spectral domain anterior segment optical coherence tomography (SD-ASOCT) in detecting narrow angles classified using gonioscopy. Methods: In this prospective, cross-sectional observational study, 265 eyes of 265 consecutive patients underwent sequential Scheimpflug imaging, SD-ASOCT imaging, and gonioscopy. Correlations between gonioscopy grading, ACV, ACD, AOD500, and TISA500 were evaluated. Area under the receiver operating characteristic curve (AUC), sensitivity, specificity, and likelihood ratios (LRs) were calculated to assess the performance of ACV, ACD, AOD500, and TISA500 in detecting narrow angles (defined as Shaffer grade ≤1 in all quadrants). SD-ASOCT images were obtained at the nasal and temporal quadrants only. Results: Twenty-eight eyes (10.6%) were classified as narrow angles on gonioscopy. ACV correlated with gonioscopy grading (P<0.001) for the temporal (r=0.204), superior (r=0.251), nasal (r=0.213), and inferior (r=0.236) quadrants. ACV correlated with TISA500 for the nasal (r=0.135, P=0.029) and temporal (r=0.160, P=0.009) quadrants and also with AOD500 for the nasal (r=0.498, P<0.001) and temporal (r=0.517, P<0.001) quadrants. For detection of narrow angles, ACV (AUC=0.935; 95% confidence interval (CI)=0.898–0.961) performed similarly to ACD (AUC=0.88, P=0.06) and significantly better than AOD500 nasal (AUC=0.761, P=0.001), AOD500 temporal (AUC=0.808, P<0.001), TISA500 nasal (AUC=0.756, P<0.001), and TISA500 temporal (AUC=0.738, P<0.001). Using a cutoff of 113 mm³, ACV had 90% sensitivity and 88% specificity for detecting narrow angles. Positive and negative LRs for ACV were 8.63 (95% CI=7.4–10.0) and 0.11 (95% CI=0.03–0.4), respectively. Conclusions: ACV measurements using Scheimpflug imaging outperformed AOD500 and TISA500 using SD-ASOCT for detecting narrow angles.

  6. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  7. SCDU (Spectral Calibration Development Unit) Testbed Narrow Angle Astrometric Performance

    NASA Technical Reports Server (NTRS)

    Wang, Xu; Goullioud, Renaud; Nemati, Bijan; Shao, Michael; Wehmeier, Udo J.; Weilert, Mark A.; Werne, Thomas A.; Wu, Janet P.; Zhai, Chengxing

    2010-01-01

    The most stringent astrometric performance requirements on NASA's SIM-Lite (Space Interferometry Mission Lite) will come from the so-called Narrow-Angle (NA) observing scenario, aimed at finding Earth-like exoplanets, where the interferometer chops between the target star and several nearby reference stars multiple times over the course of a single visit. Previously, about 20 pm of NA error with various shifts was reported. Since then, investigation has been under way to understand the mechanisms that give rise to these shifts. In this paper we report our findings, the adopted mitigation strategies, and the resulting testbed performance.

  8. Wide-Angle Polarimetric Camera for Korea Pathfinder Lunar Orbiter

    NASA Astrophysics Data System (ADS)

    Choi, Y. J.; Kim, S.; Kang, K. I.

    2016-12-01

    Polarimetry data contain valuable information about the lunar surface, such as the grain size and porosity of the regolith. However, polarimetry of the Moon has not yet been performed from lunar orbit. We plan to perform such polarimetry through the Korea Pathfinder Lunar Orbiter (KPLO), which will be launched around 2018/2019 as the first Korean lunar mission. The Wide-Angle Polarimetric Camera (PolCam) has been selected as one of the onboard instruments for KPLO. Its science objectives are: (1) to obtain polarization data of the whole lunar surface at wavelengths of 430 nm and 650 nm over the phase angle range from 0° to 120°, with a spatial resolution of 80 m; and (2) to obtain the reflectance ratios at 320 nm and 430 nm for the whole lunar surface with a spatial resolution of 80 m. We will summarize recent results on the lunar surface from ground-based polarimetric observations and briefly introduce the science rationale and operation concept of PolCam.

  9. Improved iris localization by using wide and narrow field of view cameras for iris recognition

    NASA Astrophysics Data System (ADS)

    Kim, Yeong Gon; Shin, Kwang Yong; Park, Kang Ryoung

    2013-10-01

    Biometrics is a method of identifying individuals by their physiological or behavioral characteristics. Among other biometric identifiers, iris recognition has been widely used for applications that require a high level of security. When a conventional iris recognition camera is used, the size and position of the iris region in a captured image vary according to the X, Y positions of a user's eye and the Z distance between the user and the camera. The search area of the iris detection algorithm is therefore enlarged, which inevitably decreases both detection speed and accuracy. To solve these problems, we propose a new method of iris localization that uses wide field of view (WFOV) and narrow field of view (NFOV) cameras. Our study is new compared to previous studies in the following four ways. First, the device used in our research acquires three images, one each of the face and both irises, using one WFOV and two NFOV cameras simultaneously. The relation between the WFOV and NFOV cameras is determined by simple geometric transformation without complex calibration. Second, the Z distance (between a user's eye and the iris camera) is estimated based on the iris size in the WFOV image and anthropometric data on the size of the human iris. Third, the accuracy of the geometric transformation between the WFOV and NFOV cameras is enhanced by using multiple transformation matrices according to the Z distance. Fourth, the search region for iris localization in the NFOV image is significantly reduced based on the detected iris region in the WFOV image and the geometric transformation matrix corresponding to the estimated Z distance. Experimental results showed that the performance of the proposed iris localization method is better than that of conventional methods in terms of accuracy and processing time.
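    The Z-distance step described above can be sketched with a simple pinhole model: an object of physical width W at distance Z spans w = f·W/Z pixels, so Z = f·W/w. The focal length and iris pixel width below are hypothetical, and the anthropometric constant is a typical adult value rather than the paper's exact figure:

```python
# Typical adult iris diameter; the paper's anthropometric constant may differ.
ANATOMICAL_IRIS_DIAMETER_MM = 11.7

def estimate_z_distance_mm(focal_length_px, iris_width_px,
                           iris_diameter_mm=ANATOMICAL_IRIS_DIAMETER_MM):
    """Estimate eye-to-camera distance from the iris size in the WFOV image
    using the pinhole relation Z = f_px * W / w."""
    return focal_length_px * iris_diameter_mm / iris_width_px

# Hypothetical WFOV camera: focal length of 1000 px, iris spanning 39 px
z_mm = estimate_z_distance_mm(1000.0, 39.0)   # 300 mm from camera to eye
```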

  10. Visible-infrared achromatic imaging by wavefront coding with wide-angle automobile camera

    NASA Astrophysics Data System (ADS)

    Ohta, Mitsuhiko; Sakita, Koichi; Shimano, Takeshi; Sugiyama, Takashi; Shibasaki, Susumu

    2016-09-01

    We perform an experiment on achromatic imaging with wavefront coding (WFC) using a wide-angle automobile lens. Our original annular phase mask for WFC was inserted into the lens, for which the difference between the focal positions at 400 nm and at 950 nm is 0.10 mm. We acquired images of objects using a WFC camera with this lens under visible and infrared light. As a result, the removal of chromatic aberration by the WFC system was successfully confirmed. Moreover, we fabricated a demonstration setup simulating the use of a night vision camera in an automobile and showed the effect of the WFC system.

  11. Sheath effects on current collection by particle detectors with narrow acceptance angles

    NASA Technical Reports Server (NTRS)

    Singh, N.; Baugher, C. R.

    1981-01-01

    Restriction of the aperture acceptance angle of an ion or electron trap on an attracting spacecraft significantly alters the volt-ampere characteristics of the instrument in a low Mach number plasma. It is shown that when the angular acceptance of the aperture is restricted, the current to the collector tends to be independent of the Debye length. Expressions for the RPA characteristics for both a thin sheath and a thick sheath are derived, and it is shown that as the aperture is narrowed the curves tend toward equivalence.

  12. Topview stereo: combining vehicle-mounted wide-angle cameras to a distance sensor array

    NASA Astrophysics Data System (ADS)

    Houben, Sebastian

    2015-03-01

    The variety of vehicle-mounted sensors needed to fulfill a growing number of driver assistance tasks has become a substantial factor in automobile manufacturing cost. We present a stereo distance method exploiting the overlapping field of view of a multi-camera fisheye surround view system, as used for near-range vehicle surveillance tasks, e.g. in parking maneuvers. Hence, we aim at creating a new input signal from sensors that are already installed. Particular properties of wide-angle cameras (e.g. strongly varying resolution across the image) demand an adaptation of the image processing pipeline to several problems that do not arise in classical stereo vision performed with cameras carefully designed for this purpose. We introduce the algorithms for rectification, correspondence analysis, and regularization of the disparity image, discuss reasons for and avoidance of the caveats shown, and present first results on a prototype topview setup.
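    Once the fisheye overlap region has been rectified, distance recovery reduces to the standard stereo relation Z = f·B/d. A minimal sketch with hypothetical camera parameters (the paper's actual rectified geometry is more involved):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Rectified-stereo depth: Z = f * B / d, with f in pixels, B in meters,
    and d the disparity in pixels between the two rectified views."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# Hypothetical surround-view pair: 400 px focal length after rectification,
# 1.5 m baseline between the two fisheye cameras, 30 px measured disparity.
z_m = depth_from_disparity(400.0, 1.5, 30.0)   # 20.0 m to the obstacle
```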

  13. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  15. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.
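    The position fit described above can be illustrated, in a deliberately simplified 2D form, as intersecting bearing rays from known landmarks (the paper actually fits many measured directions by least squares; this hypothetical two-landmark resection only shows the geometric idea):

```python
import math

def resect_2d(p1, theta1, p2, theta2):
    """Locate a camera from the directions (radians, measured from +x) toward
    two known landmarks p1, p2. The camera lies on the ray p_i - t_i * u_i,
    where u_i is the unit vector from camera to landmark i."""
    u1 = (math.cos(theta1), math.sin(theta1))
    u2 = (math.cos(theta2), math.sin(theta2))
    # Solve p1 - t1*u1 = p2 - t2*u2, i.e. -t1*u1 + t2*u2 = p2 - p1 (2x2 system).
    a, b = -u1[0], u2[0]
    c, d = -u1[1], u2[1]
    rx, ry = p2[0] - p1[0], p2[1] - p1[1]
    det = a * d - b * c
    t1 = (rx * d - b * ry) / det
    return (p1[0] - t1 * u1[0], p1[1] - t1 * u1[1])

# Synthetic check: a camera at (30, 40) observing landmarks at (0,0) and (100,0).
cam = (30.0, 40.0)
th1 = math.atan2(0.0 - cam[1], 0.0 - cam[0])
th2 = math.atan2(0.0 - cam[1], 100.0 - cam[0])
est = resect_2d((0.0, 0.0), th1, (100.0, 0.0), th2)   # recovers (30, 40)
```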

  16. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  17. Fabrication of Ag nanostructures with remarkable narrow plasmonic resonances by glancing angle deposition

    NASA Astrophysics Data System (ADS)

    Abbasian, Sara; Moshaii, Ahmad; Vayghan, Nader Sobhkhiz; Nikkhah, Maryam

    2018-05-01

    Glancing angle deposition (GLAD) is an efficient and inexpensive method for fabricating nanostructures of diverse complexity. However, the method has difficulty producing plasmonic nanostructures with narrow resonance peaks, so GLAD nanostructures have rarely been used for refractive-index sensing. In this work, we propose two approaches to overcome this limitation and fabricate Ag nanostructures with narrow plasmonic peaks. In the first approach, we introduce an effective method for seeding modification of the substrate and then grow the Ag nanocolumns on the seeded layer. Optical characterization shows that such pre-seeding of the substrate narrows the plasmonic peak by nearly 40%. In the second approach, the nanostructures are grown by GLAD on a bare substrate and then annealed at 200-400 °C. Annealing converts the nanostructures to nanodomes with large inter-particle distances and reduces their plasmonic width by about 60%. Annealing at 400 °C also provides a twofold improvement in the sensing figure of merit of the nanostructures. This improvement makes GLAD competitive with more expensive alternative methods for fabricating plasmonic sensors. In addition, the experimental plasmonic peaks are reproduced by numerical simulation of similar nanostructures.

  18. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. Wide Angle Movie

    NASA Technical Reports Server (NTRS)

    1999-01-01

    This brief movie illustrates the passage of the Moon through the Saturn-bound Cassini spacecraft's wide-angle camera field of view as the spacecraft passed by the Moon on the way to its closest approach with Earth on August 17, 1999. From beginning to end of the sequence, 25 wide-angle images (with a spatial image scale of about 14 miles, or about 23 kilometers, per pixel) were taken over the course of 7 and 1/2 minutes through a series of narrow and broadband spectral filters and polarizers, ranging from the violet to the near-infrared regions of the spectrum, to calibrate the spectral response of the wide-angle camera. The exposure times range from 5 milliseconds to 1.5 seconds. Two of the exposures were smeared and have been discarded and replaced with nearby images to make a smooth movie sequence. All images were scaled so that the brightness of Crisium basin, the dark circular region in the upper right, is approximately the same in every image. The imaging data were processed and released by the Cassini Imaging Central Laboratory for Operations (CICLOPS) at the University of Arizona's Lunar and Planetary Laboratory, Tucson, AZ.
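    The per-frame brightness scaling described above can be sketched as normalizing each frame so a fixed reference region (here, standing in for Crisium basin) has the same mean brightness. Frames are represented as plain 2-D lists for illustration; the actual CICLOPS pipeline is not public in this record:

```python
def normalize_to_region(frames, region, target):
    """Scale each frame so the mean brightness over `region` (a list of
    (row, col) pixel indices) equals `target`."""
    out = []
    for frame in frames:
        vals = [frame[y][x] for (y, x) in region]
        scale = target / (sum(vals) / len(vals))
        out.append([[v * scale for v in row] for row in frame])
    return out

frames = [[[10.0, 20.0]], [[30.0, 60.0]]]   # two tiny 1x2 "frames"
region = [(0, 0)]                           # reference pixel(s)
normed = normalize_to_region(frames, region, 10.0)
# Both frames now have brightness 10.0 at the reference pixel.
```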

    Photo Credit: NASA/JPL/Cassini Imaging Team/University of Arizona

    Cassini, launched in 1997, is a joint mission of NASA, the European Space Agency and Italian Space Agency. The mission is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Space Science, Washington DC. JPL is a division of the California Institute of Technology, Pasadena, CA.

  20. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  1. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California, and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength-dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
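    A wavelength-dependent radial distortion model of the kind mentioned can be sketched as a standard polynomial model whose coefficients vary per spectral band. The band names below are real WAC visible bands, but the coefficient values are purely illustrative, not LROC's calibrated values:

```python
def apply_radial_distortion(x, y, k1, k2=0.0):
    """Map undistorted normalized coordinates (x, y) through the standard
    polynomial radial model r' = r * (1 + k1*r^2 + k2*r^4)."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor

# Hypothetical per-band coefficients standing in for a wavelength-dependent model.
K1_BY_BAND = {"415nm": 0.020, "566nm": 0.018, "604nm": 0.017}

xd, yd = apply_radial_distortion(0.6, 0.8, K1_BY_BAND["566nm"])
```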

  2. Use of a microscope-mounted wide-angle point of view camera to record optimal hand position in ocular surgery.

    PubMed

    Gooi, Patrick; Ahmed, Yusuf; Ahmed, Iqbal Ike K

    2014-07-01

    We describe the use of a microscope-mounted wide-angle point-of-view camera to record optimal hand positions in ocular surgery. The camera is mounted close to the objective lens beneath the surgeon's oculars and faces the same direction as the surgeon, providing a surgeon's view. A wide-angle lens enables viewing of both hands simultaneously and does not require repositioning the camera during the case. Proper hand positioning and instrument placement through microincisions are critical for effective and atraumatic handling of tissue within the eye. Our technique has potential in the assessment and training of optimal hand position for surgeons performing intraocular surgery. It is an innovative way to routinely record instrument and operating hand positions in ophthalmic surgery and has minimal requirements in terms of cost, personnel, and operating-room space.

  3. Narrow Angle Wide Spectral Range Radiometer Design FEANICS/REEFS Radiometer Design Report

    NASA Technical Reports Server (NTRS)

    Camperchioli, William

    2005-01-01

    A critical measurement for the Radiative Enhancement Effects on Flame Spread (REEFS) microgravity combustion experiment is the net radiative flux emitted from the gases and from the solid fuel bed. These quantities are measured using a set of narrow angle, wide spectral range radiometers. The radiometers are required to have an angular field of view of 1.2 degrees and measure over the spectral range of 0.6 to 30 microns, which presents a challenging design effort. This report details the design of this radiometer system including field of view, radiometer response, radiometric calculations, temperature effects, error sources, baffling and amplifiers. This report presents some radiometer specific data but does not present any REEFS experiment data.
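    The 1.2 degree field of view mentioned above corresponds to a very small solid angle, the quantity that enters the radiometric calculations. A quick sketch using the standard cone formula Ω = 2π(1 − cos(θ/2)), where θ is the full apex angle:

```python
import math

def cone_solid_angle_sr(full_angle_deg):
    """Solid angle (steradians) of a cone with the given full apex angle,
    from the standard relation Omega = 2*pi*(1 - cos(theta/2))."""
    half = math.radians(full_angle_deg) / 2.0
    return 2.0 * math.pi * (1.0 - math.cos(half))

omega = cone_solid_angle_sr(1.2)   # roughly 3.4e-4 sr for the 1.2 deg field
```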

  4. Characterization of previously unidentified lunar pyroclastic deposits using Lunar Reconnaissance Orbiter Camera (LROC) data

    USGS Publications Warehouse

    Gustafson, J. Olaf; Bell, James F.; Gaddis, Lisa R.; Hawke, B. Ray; Giguere, Thomas A.

    2012-01-01

    We used a Lunar Reconnaissance Orbiter Camera (LROC) global monochrome Wide Angle Camera (WAC) mosaic to conduct a survey of the Moon to search for previously unidentified pyroclastic deposits. Promising locations were examined in detail using LROC multispectral WAC mosaics, high-resolution LROC Narrow Angle Camera (NAC) images, and Clementine multispectral (ultraviolet-visible or UVVIS) data. Out of 47 potential deposits chosen for closer examination, 12 were selected as probable newly identified pyroclastic deposits. Potential pyroclastic deposits were generally found in settings similar to previously identified deposits, including areas within or near mare deposits adjacent to highlands, within floor-fractured craters, and along fissures in mare deposits. However, a significant new finding is the discovery of localized pyroclastic deposits within floor-fractured craters Anderson E and F on the lunar farside, isolated from other known similar deposits. Our search confirms that most major regional and localized low-albedo pyroclastic deposits have been identified on the Moon down to ~100 m/pix resolution, and that additional newly identified deposits are likely to be either isolated small deposits or additional portions of discontinuous, patchy deposits.

  5. The Effect of Mediated Camera Angle on Receiver Evaluations of Source Credibility, Dominance, Attraction and Homophily.

    ERIC Educational Resources Information Center

    Beverly, Robert E.; Young, Thomas J.

    Two hundred forty college undergraduates participated in a study of the effect of camera angle on an audience's perceptual judgments of source credibility, dominance, attraction, and homophily. The subjects were divided into four groups and each group was shown a videotape presentation in which sources had been videotaped according to one of four…

  6. Fabrication of multi-focal microlens array on curved surface for wide-angle camera module

    NASA Astrophysics Data System (ADS)

    Pan, Jun-Gu; Su, Guo-Dung J.

    2017-08-01

    In this paper, we present a wide-angle, compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm, and the diagonal full field of view is about 100 degrees. To make the critical microlens array, we used inkjet printing to control the surface shape of each microlens, achieving different focal lengths, and used a replication method to form the curved hexagonal microlens array.

  7. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. The camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirror and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image record of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that a single reflecting side would impart, making a camera with this short a resolution time possible.

  8. Evidence for Itinerant Carriers in an Anisotropic Narrow-Gap Semiconductor by Angle-Resolved Photoemission Spectroscopy.

    PubMed

    Ju, Sailong; Bai, Wei; Wu, Liming; Lin, Hua; Xiao, Chong; Cui, Shengtao; Li, Zhou; Kong, Shuai; Liu, Yi; Liu, Dayong; Zhang, Guobin; Sun, Zhe; Xie, Yi

    2018-01-01

    The ability to accurately determine the electronic structure of solids has become a key prerequisite for modern functional materials. For example, the precise determination of the electronic structure helps to balance the three thermoelectric parameters, which is the biggest challenge in designing high-performance thermoelectric materials. Herein, by high-resolution, angle-resolved photoemission spectroscopy (ARPES), the itinerant carriers in CsBi4Te6 (CBT) are revealed for the first time. CBT is a typical anisotropic, narrow-gap semiconductor used as a practical candidate for low-temperature thermoelectric applications, and p-doped CBT series show superconductivity at relatively low carrier concentrations. The ARPES results show a significantly larger bandwidth near the Fermi surface than calculations predict, which means the carriers in CBT are itinerant and transport anisotropically. It is reasonable to believe that these newly discovered features of carriers in narrow-gap semiconductors are promising for designing optimal thermoelectric materials and superconductors.

  9. Two-Camera Acquisition and Tracking of a Flying Target

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter

    2008-01-01

    A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
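    The handoff step above can be sketched as a calibrated pixel-to-sky map from the wide camera seeding the initial gimbal command. An affine pixel-to-(azimuth, elevation) map is a simplification (the real all-sky calibration would be nonlinear), and the 0.1 deg/pixel scale is hypothetical:

```python
# Simplified sketch of the two-camera handoff: the stationary camera's
# pixel coordinates are mapped to azimuth/elevation via a calibration,
# here approximated by an affine map.
def make_pixel_to_azel(az0, az_per_px, el0, el_per_px):
    def pixel_to_azel(px, py):
        return az0 + az_per_px * px, el0 + el_per_px * py
    return pixel_to_azel

def initial_gimbal_command(pixel_to_azel, target_px):
    """Coarse az/el used to aim the narrow-field camera before it locks on."""
    return pixel_to_azel(*target_px)

# Hypothetical calibration: 0.1 deg/pixel on both axes, zero offsets.
to_azel = make_pixel_to_azel(0.0, 0.1, 0.0, 0.1)
az, el = initial_gimbal_command(to_azel, (900, 450))
```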

  10. Evaluation of the Quality of Action Cameras with Wide-Angle Lenses in Uav Photogrammetry

    NASA Astrophysics Data System (ADS)

    Hastedt, H.; Ekkel, T.; Luhmann, T.

    2016-06-01

    The application of light-weight cameras in UAV photogrammetry is required due to restrictions in payload. In general, consumer cameras with normal lens type are applied to a UAV system. The availability of action cameras, like the GoPro Hero4 Black, including a wide-angle lens (fish-eye lens) offers new perspectives in UAV projects. With these investigations, different calibration procedures for fish-eye lenses are evaluated in order to quantify their accuracy potential in UAV photogrammetry. Herewith the GoPro Hero4 is evaluated using different acquisition modes. It is investigated to which extent the standard calibration approaches in OpenCV or Agisoft PhotoScan/Lens can be applied to the evaluation processes in UAV photogrammetry. Therefore different calibration setups and processing procedures are assessed and discussed. Additionally a pre-correction of the initial distortion by GoPro Studio and its application to the photogrammetric purposes will be evaluated. An experimental setup with a set of control points and a prospective flight scenario is chosen to evaluate the processing results using Agisoft PhotoScan. Herewith it is analysed to which extent a pre-calibration and pre-correction of a GoPro Hero4 will reinforce the reliability and accuracy of a flight scenario.
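    Why fish-eye lenses need their own calibration model can be seen by comparing projection functions: a rectilinear lens maps an off-axis angle θ to image height r = f·tan(θ), while an equidistant fish-eye (one common fish-eye model; the GoPro's actual projection may differ) maps it to r = f·θ. A quick sketch of how far apart the two models are at wide angles:

```python
import math

def pinhole_radius(f, theta):
    """Image height for a rectilinear (pinhole) lens: r = f * tan(theta)."""
    return f * math.tan(theta)

def equidistant_radius(f, theta):
    """Image height for an equidistant fish-eye lens: r = f * theta."""
    return f * theta

# At 60 degrees off-axis the rectilinear model predicts an image height
# about 65% larger than the equidistant model, so a standard (non-fish-eye)
# calibration cannot fit a wide-angle action camera.
theta = math.radians(60.0)
ratio = pinhole_radius(1.0, theta) / equidistant_radius(1.0, theta)
```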

  11. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    NASA Astrophysics Data System (ADS)

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-11-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  12. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    PubMed Central

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; Horie, Yu; Han, Seunghoon; Faraon, Andrei

    2016-01-01

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision. PMID:27892454

  13. Faint Object Camera imaging and spectroscopy of NGC 4151

    NASA Technical Reports Server (NTRS)

    Boksenberg, A.; Catchpole, R. M.; Macchetto, F.; Albrecht, R.; Barbieri, C.; Blades, J. C.; Crane, P.; Deharveng, J. M.; Disney, M. J.; Jakobsen, P.

    1995-01-01

    We describe ultraviolet and optical imaging and spectroscopy within the central few arcseconds of the Seyfert galaxy NGC 4151, obtained with the Faint Object Camera on the Hubble Space Telescope. A narrowband image including [O III] lambda 5007 shows a bright nucleus centered on a complex biconical structure with an apparent opening angle of approximately 65 deg and axis along position angle 65 deg-245 deg; images in bands including Lyman-alpha and C IV lambda 1550, and in the optical continuum near 5500 A, show only the bright nucleus. In an off-nuclear optical long-slit spectrum we find a high and a low radial velocity component within the narrow emission lines. We identify the low-velocity component with the bright, extended, knotty structure within the cones, and the high-velocity component with more confined diffuse emission. Also present are strong continuum emission and broad Balmer emission line components, which we attribute to the extended point spread function arising from the intense nuclear emission. Adopting the geometry pointed out by Pedlar et al. (1993) to explain the observed misalignment of the radio jets and the main optical structure, we model an ionizing radiation bicone, originating within a galactic disk, with apex at the active nucleus and axis centered on the extended radio jets. We confirm that through density bounding the gross spatial structure of the emission line region can be reproduced with a wide opening angle that includes the line of sight, consistent with the presence of a simple opaque torus allowing a direct view of the nucleus. In particular, our modelling reproduces the observed decrease in position angle with distance from the nucleus, progressing initially from the direction of the extended radio jet, through our optical structure, and on to the extended narrow-line region. We explore the kinematics of the narrow-line low- and high-velocity components on the basis of our spectroscopy and adopted model structure.

  14. Miniature optical planar camera based on a wide-angle metasurface doublet corrected for monochromatic aberrations

    DOE PAGES

    Arbabi, Amir; Arbabi, Ehsan; Kamali, Seyedeh Mahsa; ...

    2016-11-28

    Optical metasurfaces are two-dimensional arrays of nano-scatterers that modify optical wavefronts at subwavelength spatial resolution. They are poised to revolutionize optics by enabling complex low-cost systems where multiple metasurfaces are lithographically stacked and integrated with electronics. For imaging applications, metasurface stacks can perform sophisticated image corrections and can be directly integrated with image sensors. Here we demonstrate this concept with a miniature flat camera integrating a monolithic metasurface lens doublet corrected for monochromatic aberrations, and an image sensor. The doublet lens, which acts as a fisheye photographic objective, has a small f-number of 0.9, an angle-of-view larger than 60° × 60°, and operates at 850 nm wavelength with 70% focusing efficiency. The camera exhibits nearly diffraction-limited image quality, which indicates the potential of this technology in the development of optical systems for microscopy, photography, and computer vision.

  15. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, but also in other applications, a small, lightweight, digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability and digital imaging.

  16. Pre-hibernation performances of the OSIRIS cameras onboard the Rosetta spacecraft

    NASA Astrophysics Data System (ADS)

    Magrin, S.; La Forgia, F.; Da Deppo, V.; Lazzarin, M.; Bertini, I.; Ferri, F.; Pajola, M.; Barbieri, M.; Naletto, G.; Barbieri, C.; Tubiana, C.; Küppers, M.; Fornasier, S.; Jorda, L.; Sierks, H.

    2015-02-01

    Context. The ESA cometary mission Rosetta was launched in 2004. In the following years, and until the spacecraft hibernation in June 2011, the two cameras of the OSIRIS imaging system (Narrow Angle Camera and Wide Angle Camera, NAC and WAC) observed many different sources. On 20 January 2014 the spacecraft successfully exited hibernation to start observing the primary scientific target of the mission, comet 67P/Churyumov-Gerasimenko. Aims: A study of the past performance of the cameras is now mandatory to determine whether the system has been stable over time and to derive, if necessary, additional analysis methods for the future precise calibration of the cometary data. Methods: The instrumental responses and filter passbands were used to estimate the efficiency of the system. A comparison with acquired images of specific calibration stars was made, and a refined photometric calibration was computed, both for the absolute flux and for the reflectivity of small bodies of the solar system. Results: We found the instrumental performance to be stable within ±1.5% from 2007 to 2010, with no evidence of an aging effect on the optics or detectors. The efficiency of the instrumentation is as expected in the visible range, but lower than expected in the UV and IR ranges. A photometric calibration implementation was discussed for the two cameras. Conclusions: The calibration derived from the pre-hibernation phases of the mission will be checked as soon as possible after the awakening of OSIRIS and will be continuously monitored until the end of the mission in December 2015. A list of additional calibration sources has been determined that are to be observed during the forthcoming phases of the mission to ensure a better coverage across the wavelength range of the cameras and to study the possible dust contamination of the optics.

  17. Easily Accessible Camera Mount

    NASA Technical Reports Server (NTRS)

    Chalson, H. E.

    1986-01-01

    Modified mount enables fast alignment of movie cameras in explosion-proof housings. Screws on side and is readily reached through side door of housing. Mount includes right-angle drive mechanism containing two miter gears that turn threaded shaft. Shaft drives movable dovetail clamping jaw that engages fixed dovetail plate on camera. Mechanism aligns camera in housing and secures it. Reduces installation time by 80 percent.

  18. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even where a dataset includes off-angle iris images, the frontal and off-angle images were not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of gaze angle from other challenging factors, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris from an off-angle view. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup by 0.001 to 0.05. These results show that, in order to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging factors from each other.
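
    For reference, iris codes are conventionally compared by fractional Hamming distance. This is a generic sketch of that measure; the matcher details used in the paper are not specified in the abstract.

    ```python
    def fractional_hamming(code_a, code_b):
        """Fraction of disagreeing bits between two equal-length iris codes.
        0.0 means identical codes; ~0.5 is expected for unrelated irises."""
        assert len(code_a) == len(code_b)
        diff = sum(a != b for a, b in zip(code_a, code_b))
        return diff / len(code_a)

    # Two toy codes differing in 2 of 8 bits:
    hd = fractional_hamming([0, 1, 1, 0, 1, 0, 0, 1],
                            [0, 1, 0, 0, 1, 0, 1, 1])  # 0.25
    ```

    A lower distance between the frontal and off-angle codes, as reported for the two-camera setup, indicates that less of the mismatch is attributable to factors other than gaze angle.
    
    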

  19. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundred images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane, we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analysed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in the photogrammetric analysis
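
    The linear model f(T) = A0 + A1 T can be fitted by ordinary least squares. A self-contained sketch on synthetic data follows; only the ~0.0107 mm/°C slope is taken from the abstract, and the 550 mm nominal focal length is an assumed placeholder.

    ```python
    def fit_linear(temps, values):
        """Closed-form ordinary least squares for f(T) = a0 + a1*T."""
        n = len(temps)
        mt = sum(temps) / n
        mv = sum(values) / n
        sxx = sum((t - mt) ** 2 for t in temps)
        sxy = sum((t - mt) * (v - mv) for t, v in zip(temps, values))
        a1 = sxy / sxx          # slope (mm per degree)
        a0 = mv - a1 * mt       # intercept (focal length at T = 0)
        return a0, a1

    # Noiseless synthetic focal lengths around an assumed 550 mm nominal:
    temps = [0.0, 10.0, 20.0, 30.0]
    focals = [550.0 + 0.0107 * t for t in temps]
    a0, a1 = fit_linear(temps, focals)  # recovers a0 = 550.0, a1 = 0.0107
    ```

    In practice the star-calibration and bundle-adjustment estimates each yield (T, f) samples to which this fit is applied.
    
    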

  20. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
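
    The pixel-to-meter conversion enabled by the intra-camera step (tilt angle, focal length, camera height) can be sketched with a pitched pinhole camera. Symbols and values here are generic illustrations, not the paper's notation.

    ```python
    import math

    def ground_distance(v_px, focal_px, tilt_rad, cam_height_m):
        """Distance along the ground to a point imaged v_px below the
        principal point, for a camera at cam_height_m pitched down by
        tilt_rad. The viewing ray leaves the camera at tilt + atan(v/f)
        below horizontal and meets the ground plane at h / tan(angle)."""
        ray_angle = tilt_rad + math.atan2(v_px, focal_px)
        return cam_height_m / math.tan(ray_angle)

    # Camera 4 m high, tilted so the optical axis hits the ground 10 m away:
    d = ground_distance(0.0, 1000.0, math.atan2(4.0, 10.0), 4.0)  # 10.0 m
    ```

    Points imaged lower in the frame (larger v_px) map to shorter ground distances, which is how detected pedestrian foot positions constrain the three intra-camera parameters.
    
    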

  1. Study on airflow characteristics in the semi-closed irregular narrow flow channel

    NASA Astrophysics Data System (ADS)

    Jin, Yuzhen; Hu, Xiaodong; Zhu, Linhang; Hu, Xudong; Jin, Yingzi

    2016-04-01

    The air-jet loom is widely used in the textile industry. The interaction mechanism of airflow and yarn in such a narrow flow channel is not clear: the gas consumption is relatively large, the yarn motion is unstable, and the weft insertion is often interrupted during operation. In order to study the characteristics of the semi-closed flow field in profiled dents, the momentum conservation equation is modified and the model parameters and boundary conditions are set. Comparing different values of r, the ratio of the profiled dent's thickness to its gap, the results show that the smaller r is, the smaller the velocity fluctuations of the airflow are. When the angle of the profiled dents α is close to zero, the diffusion of the airflow is reduced. An experiment with a high-speed camera and a pressure sensor in the profiled dents is also conducted to verify the simulation results. The airflow characteristics in the semi-closed irregular narrow flow channel presented in this paper provide a theoretical basis for optimizing the weft insertion process of the air-jet loom.

  2. Ten-Meter Scale Topography and Roughness of Mars Exploration Rovers Landing Sites and Martian Polar Regions

    NASA Technical Reports Server (NTRS)

    Ivanov, Anton B.

    2003-01-01

    The Mars Orbiter Camera (MOC) has been operating on board the Mars Global Surveyor (MGS) spacecraft since 1998. It consists of three cameras: the Red and Blue Wide Angle cameras (FOV = 140 deg) and the Narrow Angle camera (FOV = 0.44 deg). The Wide Angle cameras allow surface resolution down to 230 m/pixel, and the Narrow Angle camera down to 1.5 m/pixel. This work is a continuation of a project we have reported on previously. Since then we have refined and improved our stereo correlation algorithm and have processed many more stereo pairs. We will discuss results of our analysis of stereo pairs located in the Mars Exploration Rover (MER) landing sites and address the feasibility of recovering topography from stereo pairs (especially in the polar regions) taken during the MGS 'Relay-16' mode.

  3. Hong's grading for evaluating anterior chamber angle width.

    PubMed

    Kim, Seok Hwan; Kang, Ja Heon; Park, Ki Ho; Hong, Chul

    2012-11-01

    To compare Hong's grading method with anterior segment optical coherence tomography (AS-OCT), gonioscopy, and the dark-room prone-position test (DRPT) for evaluating anterior chamber angle width. The anterior chamber angle was graded using Hong's grading method, and Hong's angle width was calculated from the arctangent of Hong's grades. The correlation between Hong's angle width and AS-OCT parameters was analyzed. The area under the receiver operating characteristic curve (AUC) for Hong's grading method when discriminating between narrow and open angles as determined by gonioscopy was calculated. Correlation analysis was performed between Hong's angle width and intraocular pressure (IOP) changes determined by DRPT. A total of 60 subjects were enrolled. Of these subjects, 53.5% had a narrow angle. Hong's angle width correlated significantly with the AS-OCT parameters (r = 0.562-0.719, P < 0.01). A Bland-Altman plot showed relatively good agreement between Hong's angle width and the angle width obtained by AS-OCT. The ability of Hong's grading method to discriminate between open and narrow angles was good (AUC = 0.868, 95% CI 0.756-0.942). A significant linear correlation was found between Hong's angle width and IOP change determined by DRPT (r = -0.761, P < 0.01). Hong's grading method is useful for detecting narrow angles. Hong's grading correlated well with AS-OCT parameters and DRPT.

  4. Cartography of the Luna-21 landing site and Lunokhod-2 traverse area based on Lunar Reconnaissance Orbiter Camera images and surface archive TV-panoramas

    NASA Astrophysics Data System (ADS)

    Karachevtseva, I. P.; Kozlova, N. A.; Kokhanov, A. A.; Zubarev, A. E.; Nadezhdina, I. E.; Patratiy, V. D.; Konopikhin, A. A.; Basilevsky, A. T.; Abdrakhimov, A. M.; Oberst, J.; Haase, I.; Jolliff, B. L.; Plescia, J. B.; Robinson, M. S.

    2017-02-01

    The Lunar Reconnaissance Orbiter Camera (LROC) system consists of a Wide Angle Camera (WAC) and Narrow Angle Camera (NAC). NAC images (∼0.5 to 1.7 m/pixel) reveal details of the Luna-21 landing site and Lunokhod-2 traverse area. We derived a Digital Elevation Model (DEM) and an orthomosaic for the study region using photogrammetric stereo processing techniques with NAC images. The DEM and mosaic allowed us to analyze the topography and morphology of the landing site area and to map the Lunokhod-2 rover route. The total range of topographic elevation along the traverse was found to be less than 144 m; and the rover encountered slopes of up to 20°. With the orthomosaic tied to the lunar reference frame, we derived coordinates of the Lunokhod-2 landing module and overnight stop points. We identified the exact rover route by following its tracks and determined its total length as 39.16 km, more than was estimated during the mission (37 km), which until recently was a distance record for planetary robotic rovers held for more than 40 years.

  5. Dust mass distribution around comet 67P/Churyumov-Gerasimenko determined via parallax measurements using Rosetta's OSIRIS cameras

    NASA Astrophysics Data System (ADS)

    Ott, T.; Drolshagen, E.; Koschny, D.; Güttler, C.; Tubiana, C.; Frattin, E.; Agarwal, J.; Sierks, H.; Bertini, I.; Barbieri, C.; Lamy, P. I.; Rodrigo, R.; Rickman, H.; A'Hearn, M. F.; Barucci, M. A.; Bertaux, J.-L.; Boudreault, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; De Cecco, M.; Deller, J.; Feller, C.; Fornasier, S.; Fulle, M.; Geiger, B.; Gicquel, A.; Groussin, O.; Gutiérrez, P. J.; Hofmann, M.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Keller, H. U.; Knollenberg, J.; Kovacs, G.; Kramm, J. R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Lin, Z.-Y.; López-Moreno, J. J.; Marzari, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Shi, X.; Thomas, N.; Vincent, J.-B.; Poppe, B.

    2017-07-01

    The OSIRIS (optical, spectroscopic and infrared remote imaging system) instrument on board the ESA Rosetta spacecraft collected data on 67P/Churyumov-Gerasimenko for over 2 yr. OSIRIS consists of two cameras, a Narrow Angle Camera and a Wide Angle Camera. For specific imaging sequences related to the observation of dust aggregates in 67P's coma, the two cameras were operated simultaneously. The two cameras are mounted 0.7 m apart; this baseline yields a parallax shift of the apparent particle trails in the analysed images that is inversely proportional to their distance. From these shifts, the distance between observed dust aggregates and the spacecraft was determined. This method works for particles closer than 6000 m to the spacecraft and requires very few assumptions. We found over 250 particles in a suitable distance range, with sizes of some centimetres, masses in the range of 10^-6-10^2 kg, and a mean velocity of about 2.4 m s^-1 relative to the nucleus. Furthermore, the spectral slope was analysed, showing a decrease in the median spectral slope of the particles with time. The further a particle is from the spacecraft, the fainter its signal; this bias was counterbalanced by a debiasing step. Moreover, the dust mass-loss rate of the nucleus was computed, as well as the Afρ of the comet around perihelion. The summed-up dust mass-loss rate for the mass bins 10^-4-10^2 kg is almost 8300 kg s^-1.
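
    The ranging principle is standard stereo triangulation over the 0.7 m NAC-WAC baseline. The focal length and disparity values below are illustrative placeholders, not OSIRIS numbers.

    ```python
    def distance_from_parallax(baseline_m, focal_px, disparity_px):
        """Small-angle triangulation: range = B * f / d, where d is the
        parallax shift of a particle trail between the two camera images."""
        return baseline_m * focal_px / disparity_px

    # 0.7 m baseline, assumed 100000 px effective focal length, 20 px shift:
    r = distance_from_parallax(0.7, 100000.0, 20.0)  # 3500.0 m
    ```

    Because the shift shrinks as range grows, the measurable disparity eventually drops below a pixel, which is consistent with the ~6000 m working limit quoted in the abstract.
    
    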

  6. Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Tschimmel, M.; Robinson, M. S.; Humm, D. C.; Denevi, B. W.; Lawrence, S. J.; Brylow, S.; Ravine, M.; Ghaemi, T.

    2008-12-01

    The Lunar Reconnaissance Orbiter Camera (LROC) onboard the NASA Lunar Reconnaissance Orbiter (LRO) spacecraft consists of three cameras: the Wide-Angle Camera (WAC) and two identical Narrow Angle Cameras (NAC-L, NAC-R). The WAC is a push-frame imager with 5 visible-wavelength filters (415 to 680 nm) at a spatial resolution of 100 m/pixel and 2 UV filters (315 and 360 nm) with a resolution of 400 m/pixel. In addition to multicolor imaging, the WAC can operate in monochrome mode to provide a global large-incidence-angle basemap and a time-lapse movie of the illumination conditions at both poles. The WAC has a highly linear response, a read noise of 72 e- and a full well capacity of 47,200 e-. The signal-to-noise ratio in each band is 140 in the worst case. There are no out-of-band leaks and the spectral response of each filter is well characterized. Each NAC is a monochrome pushbroom scanner, providing images with a resolution of 50 cm/pixel from a 50-km orbit. A single NAC image has a swath width of 2.5 km and a length of up to 26 km. The NACs are mounted to acquire side-by-side imaging for a combined swath width of 5 km. The NAC is designed to fully characterize future human and robotic landing sites in terms of topography and hazard risks. The North and South poles will be mapped at a 1-meter scale poleward of 85.5° latitude. Stereo coverage can be provided by pointing the NACs off-nadir. The NACs are also highly linear. Read noise is 71 e- for NAC-L and 74 e- for NAC-R, and the full well capacity is 248,500 e- for NAC-L and 262,500 e- for NAC-R. The focal lengths are 699.6 mm for NAC-L and 701.6 mm for NAC-R; the system MTF is 28% for NAC-L and 26% for NAC-R. The signal-to-noise ratio is at least 46 (terminator scene) and can be higher than 200 (high-sun scene). Both NACs exhibit a stray-light feature, which is caused by out-of-field sources and is of a magnitude of 1-3%. However, as this feature is well understood it can be greatly reduced during ground
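
    The quoted SNR figures combine photon shot noise with the detector read noise. A hedged sketch of that standard CCD noise model, using the NAC-L read noise from the abstract and an illustrative signal level:

    ```python
    import math

    def snr(signal_e, read_noise_e):
        """Signal-to-noise ratio for a detector pixel with Poisson shot
        noise and Gaussian read noise: S / sqrt(S + r^2).
        Dark current and other noise terms are ignored in this sketch."""
        return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

    # With NAC-L's 71 e- read noise, a faint 5041 e- (= 71^2) signal gives
    # SNR = 5041 / sqrt(5041 + 5041) = 71 / sqrt(2), about 50.
    faint_snr = snr(5041.0, 71.0)
    ```

    At high-sun signal levels near full well, the shot-noise term dominates and the SNR approaches sqrt(S), consistent with values above 200.
    
    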

  7. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, the traffic sign recognition is implemented in an originally proposed dual-focal active camera system, in which a telephoto camera serves as an assistant to a wide angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the field of view of the wide angle camera; this image provides enough information for recognition when the resolution of the traffic sign in the wide angle image is too low. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide angle and telephoto cameras. In addition, to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on information from the wide angle camera. For classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
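
    The paper's lighting-invariant color transformation is its own contribution; the general idea can be illustrated with the classic normalized-rg chromaticity, where the overall illumination level cancels out of the channel ratios. This is a generic sketch, not the transformation from the paper.

    ```python
    def rg_chromaticity(r, g, b):
        """Normalized-rg chromaticity: scaling all three channels by the
        same illumination factor leaves (r/s, g/s) unchanged."""
        s = r + g + b
        if s == 0:
            return (1.0 / 3.0, 1.0 / 3.0)  # neutral value for black pixels
        return (r / s, g / s)

    # A red sign pixel under bright and dim light maps to the same point:
    bright = rg_chromaticity(200, 50, 50)
    dim = rg_chromaticity(100, 25, 25)
    ```

    A detector trained on such intensity-normalized features sees a more stable sign appearance across lighting conditions than one trained on raw RGB.
    
    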

  8. Narrow Angle Diversity using ACTS Ka-band Signal with Two USAT Ground Stations

    NASA Technical Reports Server (NTRS)

    Kalu, A.; Emrich, C.; Ventre, J.; Wilson, W.; Acosta, R.

    1998-01-01

    Two ultra small aperture terminal (USAT) ground stations, separated by 1.2 km in a narrow angle diversity configuration, received a continuous Ka-band tone sent from Cleveland Link Evaluation Terminal (LET). The signal was transmitted to the USAT ground stations via NASA's Advanced Communications Technology Satellite (ACTS) steerable beam. Received signal power at the two sites was measured and analyzed. A dedicated datalogger at each site recorded time-of-tip data from tipping bucket rain gauges, providing rain amount and instantaneous rain rate. WSR-88D data was also obtained for the collection period. Eleven events with ground-to-satellite slant-path precipitation and resultant signal attenuation were observed during the data collection period. Fade magnitude and duration were compared at the two sites and diversity gain was calculated. These results exceeded standard diversity gain model predictions by several decibels. Rain statistics from tipping bucket data and from radar data were also compared to signal attenuation. The nature of Florida's subtropical rainfall, specifically its impact on signal attenuation at the sites, was addressed.
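
    Diversity gain quantifies the fade-depth reduction from picking the better of the two sites. The sketch below is a simplified sample-wise estimate; the standard (Hodge-style) definition compares fade depths at equal exceedance probability from each distribution, which is what model predictions refer to.

    ```python
    def diversity_gain_db(fades_a_db, fades_b_db):
        """Mean fade-depth reduction (dB) from selecting, at each
        simultaneous sample, the less-faded of two sites, relative to
        site A alone."""
        joint = [min(a, b) for a, b in zip(fades_a_db, fades_b_db)]
        return sum(a - j for a, j in zip(fades_a_db, joint)) / len(joint)

    # Site A fades 10 and 8 dB while site B fades 4 and 9 dB:
    g = diversity_gain_db([10.0, 8.0], [4.0, 9.0])  # 3.0 dB
    ```

    With 1.2 km separation, convective rain cells often cover only one site at a time, which is why the observed gains can exceed model predictions.
    
    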

  9. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure a complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
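
    The refraction that breaks the pinhole model follows Snell's law at each air/housing/water interface, which is what a per-ray tracer evaluates. A minimal generic sketch (not the paper's simulator API):

    ```python
    import math

    def refract(theta_i_rad, n1, n2):
        """Snell's law n1*sin(i) = n2*sin(t) at a flat interface.
        Returns the transmitted angle in radians, or None when total
        internal reflection occurs."""
        s = n1 * math.sin(theta_i_rad) / n2
        if abs(s) > 1.0:
            return None
        return math.asin(s)

    # A ray at 30 deg in air bends toward the normal on entering water
    # (n ~ 1.333), emerging at roughly 22 deg:
    t = refract(math.radians(30), 1.0, 1.333)
    ```

    Because the bending grows with incidence angle, wide-angle rays at the edge of the FOV deviate most from the pinhole prediction, which is why per-ray modeling matters for hemisphere coverage.
    
    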

  10. MUSIC - Multifunctional stereo imaging camera system for wide angle and high resolution stereo and color observations on the Mars-94 mission

    NASA Astrophysics Data System (ADS)

    Oertel, D.; Jahn, H.; Sandau, R.; Walter, I.; Driescher, H.

    1990-10-01

    Objectives of the multifunctional stereo imaging camera (MUSIC) system to be deployed on the Soviet Mars-94 mission are outlined. A high-resolution stereo camera (HRSC) and wide-angle opto-electronic stereo scanner (WAOSS) are combined in terms of hardware, software, technology aspects, and solutions. Both HRSC and WAOSS are pushbroom instruments containing a single optical system and focal planes with several parallel CCD line sensors. Emphasis is placed on the MUSIC system's stereo capability, its design, mass memory, and data compression. A 1-Gbit memory is divided into two parts: 80 percent for HRSC and 20 percent for WAOSS, while the selected on-line compression strategy is based on macropixel coding and real-time transform coding.

  11. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish intra-rater, intra-session, and inter-rater, reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants age 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable, the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  12. Baffling system for the Wide Angle Camera (WAC) of ROSETTA mission

    NASA Astrophysics Data System (ADS)

    Brunello, Pierfrancesco; Peron, Fabio; Barbieri, Cesare; Fornasier, Sonia

    2000-10-01

    After the experience of the GIOTTO fly-by of comet Halley in 1986, the European Space Agency planned to improve scientific knowledge of these astronomical objects by means of an even more ambitious rendezvous mission with another comet (P/Wirtanen). This mission, named ROSETTA, will run from 2003 to 2013, ending after the comet perihelion phase and also including fly-bys of two main-belt asteroids (140 Siwa and 4979 Otawara). The scientific priority of the mission is the in situ investigation of the cometary nucleus, with the aim of better understanding the formation and composition of planetesimals and their evolution over the last 4.5 billion years. In this context, the authors were involved in the design of the baffling for the Wide Angle Camera (WAC) of the imaging system (OSIRIS) carried on board the spacecraft. Scientific requirements for the WAC are: a large field of view (FOV) of 12° x 12° with a resolution of 100 µrad per pixel, UV response, and a contrast ratio of 10^-4 in order to detect gaseous and dusty features close to the nucleus of the comet. To achieve this performance, a fairly novel class of optical solutions employing off-axis sections of concentric mirrors was explored. Regarding baffling, the peculiar demand was the rejection of stray light generated by the optics for sources within the FOV, since the optical entrance aperture is located at the level of the secondary mirror (instead of the primary, as usual). This paper describes the baffle design and analyzes its performance, calculated by numerical simulation with ray-tracing methods, at different angles of incidence of the light, for sources both outside and inside the field of view.

  13. Reconstruction of truncated TCT and SPECT data from a right-angle dual-camera system for myocardial SPECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tsui, B.M.W.; Frey, E.C.; Lalush, D.S.

    1996-12-31

    We investigated methods to accurately reconstruct 180° truncated TCT and SPECT projection data obtained from a right-angle dual-camera SPECT system for myocardial SPECT with attenuation compensation. The 180° data reconstruction methods would permit substantial savings in transmission data acquisition time. Simulation data from the 3D MCAT phantom and clinical data from large patients were used in the evaluation study. Different transmission reconstruction methods, including the FBP, transmission ML-EM, transmission ML-SA, and BIT algorithms, with and without using the body contour as support, were used in the TCT image reconstructions. The accuracy of both the TCT and attenuation-compensated SPECT images was evaluated for different degrees of truncation and noise levels. We found that using the FBP-reconstructed TCT images resulted in higher count density in the left ventricular (LV) wall of the attenuation-compensated SPECT images. The LV wall count densities obtained using the iteratively reconstructed TCT images with and without support were similar to each other and were more accurate than those using the FBP. However, the TCT images obtained with support show fewer image artifacts than without support. Among the iterative reconstruction algorithms, the ML-SA algorithm provides the most accurate reconstruction but is the slowest. The BIT algorithm is the fastest but shows the most image artifacts. We conclude that accurate attenuation-compensated images can be obtained with truncated 180° data from large patients using a right-angle dual-camera SPECT system.

  14. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even internet commerce. From image sequences of a real-world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This makes it possible to identify all computational trouble spots beforehand and to design reliable and accurate optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
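The pivotal fact here, that the visual angle between two projection rays is invariant under camera rotation, is easy to verify numerically. A small sketch (the function names and example vectors are illustrative, not code from the study):

```python
import math

def visual_angle(u, v):
    """Angle (radians) between two projection rays given as 3-D vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def rotate_z(p, angle):
    """Rotate a 3-D point about the z-axis (a stand-in for any camera rotation)."""
    c, s = math.cos(angle), math.sin(angle)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1], p[2])

u, v = (1.0, 0.0, 1.0), (0.0, 1.0, 1.0)
before = visual_angle(u, v)
after = visual_angle(rotate_z(u, 0.7), rotate_z(v, 0.7))
print(abs(before - after) < 1e-9)  # True: rotation leaves the visual angle unchanged
```

A pure camera translation, by contrast, changes these angles, which is what makes them a usable signal for the heading direction.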

  15. Pinhole Cameras: For Science, Art, and Fun!

    ERIC Educational Resources Information Center

    Button, Clare

    2007-01-01

    A pinhole camera is a camera without a lens. A tiny hole replaces the lens, and light is allowed to come in for a short amount of time by means of a hand-operated shutter. The pinhole allows only a very narrow beam of light to enter, which reduces confusion due to scattered light on the film. This results in an image that is focused, reversed, and…

  16. Auto-converging stereo cameras for 3D robotic tele-operation

    NASA Astrophysics Data System (ADS)

    Edmondson, Richard; Aycock, Todd; Chenault, David

    2012-06-01

    Polaris Sensor Technologies has developed a Stereovision Upgrade Kit for the TALON robot to provide enhanced depth perception to the operator. This kit previously required the TALON Operator Control Unit to be equipped with the optional touchscreen interface to allow operator control of the camera convergence angle. This adjustment allowed for optimal camera convergence independent of the distance from the camera to the object being viewed. Polaris has recently improved the performance of the stereo camera by implementing an automatic convergence algorithm in a field-programmable gate array in the camera assembly. This algorithm uses scene content to automatically adjust the camera convergence angle, freeing the operator to focus on the task rather than on adjusting the vision system. The auto-convergence capability has been demonstrated on both visible zoom cameras and longwave infrared microbolometer stereo pairs.
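For context, the convergence (toe-in) angle that makes both optical axes meet at the viewed object follows directly from the stereo baseline and the object distance. A toy sketch of that geometry (the 6 cm baseline and 1.5 m distance are illustrative values, not TALON kit specifications):

```python
import math

def convergence_angle_deg(baseline_m, distance_m):
    """Total toe-in angle (degrees) between two camera axes aimed at a point
    `distance_m` ahead of the midpoint of a `baseline_m` camera separation."""
    return math.degrees(2.0 * math.atan2(baseline_m / 2.0, distance_m))

# 6 cm baseline converging on an object 1.5 m away
print(round(convergence_angle_deg(0.06, 1.5), 2))  # 2.29 (degrees)
```

The angle shrinks as the object recedes, which is why a fixed convergence setting is only optimal at one distance and an automatic adjustment is useful.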

  17. Wide angle and narrow-band asymmetric absorption in visible and near-infrared regime through lossy Bragg stacks

    PubMed Central

    Shu, Shiwei; Zhan, Yawen; Lee, Chris; Lu, Jian; Li, Yang Yang

    2016-01-01

    Absorbers are important components of various optical devices. Here we report a novel type of asymmetric absorber for the visible and near-infrared spectrum based on lossy Bragg stacks. The lossy Bragg stacks can achieve near-perfect absorption on one side and high reflection on the other within narrow bands (several nm) around the resonance wavelengths, whereas they display almost identical absorption/reflection responses over the rest of the spectrum. Meanwhile, this interesting wavelength-selective asymmetric absorption behavior persists over wide angles, does not depend on polarization, and can be ascribed to the lossy characteristics of the Bragg stacks. Moreover, interesting Fano resonances with easily tailorable peak profiles can be realized using the lossy Bragg stacks. PMID:27251768

  18. 8. VAL CAMERA CAR, CLOSEUP VIEW OF 'FLARE' OR TRAJECTORY ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    8. VAL CAMERA CAR, CLOSE-UP VIEW OF 'FLARE' OR TRAJECTORY CAMERA ON SLIDING MOUNT. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. Miniature Wide-Angle Lens for Small-Pixel Electronic Camera

    NASA Technical Reports Server (NTRS)

    Mouroulis, Pantazis; Blazejewski, Edward

    2009-01-01

    A proposed wide-angle lens would be especially well suited for an electronic camera in which the focal plane is occupied by an image sensor that has small pixels. The design of the lens is intended to satisfy requirements for compactness, high image quality, and reasonably low cost, while addressing issues peculiar to the operation of small-pixel image sensors. Hence, this design is expected to enable the development of a new generation of compact, high-performance electronic cameras. The example lens shown has a 60-degree field of view and a relative aperture (f-number) of 3.2. The main issues affecting the design are also discussed.

  20. Cassini Camera Contamination Anomaly: Experiences and Lessons Learned

    NASA Technical Reports Server (NTRS)

    Haemmerle, Vance R.; Gerhard, James H.

    2006-01-01

    We discuss the contamination 'haze' anomaly for the Cassini Narrow Angle Camera (NAC), one of two optical telescopes that comprise the Imaging Science Subsystem (ISS). Cassini is a Saturn orbiter with a 4-year nominal mission. The incident occurred in 2001, five months after the Jupiter encounter during the Cruise phase and, ironically, at the resumption of planned maintenance decontamination cycles. The degraded optical performance was first identified by the Instrument Operations Team with the first ISS Saturn imaging six weeks later. A distinct haze of varying size from image to image marred the images of Saturn. A photometric star calibration of the Pleiades, 4 days after the incident, showed stars with halos. Analysis showed that while the halo's intensity was only 1 - 2% of the intensity of the central peak of a star, the halo contained 30 - 70% of its integrated flux. This condition would impact science return. In a review of our experiences, we examine the contamination control plan, discuss the analysis of the limited data available, and describe the one-year campaign to remove the haze from the camera. After several long, conservative heating activities and interim analysis of their results, the contamination problem as measured by the camera's point spread function was essentially back to its pre-anomaly size and at a point where further treatment would carry more risk. We stress the importance of flexibility in operations and instrument design, the need for early in-flight instrument calibration, and continual monitoring of instrument performance.

  1. Dynamic calibration of pan-tilt-zoom cameras for traffic monitoring.

    PubMed

    Song, Kai-Tai; Tai, Jen-Chao

    2006-10-01

    Pan-tilt-zoom (PTZ) cameras have been widely used in recent years for monitoring and surveillance applications. These cameras provide flexible view selection as well as a wider observation range. This makes them suitable for vision-based traffic monitoring and enforcement systems. To employ PTZ cameras for image measurement applications, one first needs to calibrate the camera to obtain meaningful results. For instance, the accuracy of estimating vehicle speed depends on the accuracy of camera calibration and that of vehicle tracking results. This paper presents a novel calibration method for a PTZ camera overlooking a traffic scene. The proposed approach requires no manual operation to select the positions of special features. It automatically uses a set of parallel lane markings and the lane width to compute the camera parameters, namely, focal length, tilt angle, and pan angle. Image processing procedures have been developed for automatically finding parallel lane markings. Interesting experimental results are presented to validate the robustness and accuracy of the proposed method.
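A calibration like this typically begins by locating the vanishing point where the parallel lane markings meet in the image; focal length, tilt, and pan then follow from that point and the known lane width. A minimal sketch of the vanishing-point step only (line endpoints are illustrative; this is not the paper's full algorithm):

```python
def vanishing_point(l1, l2):
    """Intersection of two image lines, each given as ((x1, y1), (x2, y2))."""
    (x1, y1), (x2, y2) = l1
    (x3, y3), (x4, y4) = l2
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    if d == 0:
        raise ValueError("lines are parallel in the image as well")
    a = x1 * y2 - y1 * x2   # cross product of each line's endpoints
    b = x3 * y4 - y3 * x4
    return ((a * (x3 - x4) - (x1 - x2) * b) / d,
            (a * (y3 - y4) - (y1 - y2) * b) / d)

# Two lane markings converging toward a point above the top of the image
vp = vanishing_point(((-50, 200), (-25, 50)), ((50, 200), (25, 50)))
# both lines pass through (0, -100)
```

Parallel world lines meet at a single image point under perspective projection, so this intersection encodes the camera's orientation relative to the road.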

  2. Prediction of Viking lander camera image quality

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Burcher, E. E.; Jobson, D. J.; Wall, S. D.

    1976-01-01

    Formulations are presented that permit prediction of image quality as a function of camera performance, surface radiance properties, and lighting and viewing geometry. Predictions made for a wide range of surface radiance properties reveal that image quality depends strongly on proper camera dynamic range command and on favorable lighting and viewing geometry. Proper camera dynamic range commands depend mostly on the surface albedo that will be encountered. Favorable lighting and viewing geometries depend mostly on lander orientation with respect to the diurnal sun path over the landing site, and tend to be independent of surface albedo and illumination scattering function. Side lighting with low sun elevation angles (10 to 30 deg) is generally favorable for imaging spatial details and slopes, whereas high sun elevation angles are favorable for measuring spectral reflectances.

  3. Phase angle, frailty and mortality in older adults.

    PubMed

    Wilhelm-Leen, Emilee R; Hall, Yoshio N; Horwitz, Ralph I; Chertow, Glenn M

    2014-01-01

    Frailty is a multidimensional phenotype that describes declining physical function and a vulnerability to adverse outcomes in the setting of physical stress such as illness or hospitalization. Phase angle is a composite measure of tissue resistance and reactance measured via bioelectrical impedance analysis (BIA). Whether phase angle is associated with frailty and mortality in the general population is unknown. Objective: to evaluate associations among phase angle, frailty, and mortality. Design: population-based survey, the Third National Health and Nutrition Examination Survey (1988-1994). Participants: 4,667 persons aged 60 and older. Frailty was defined according to a set of criteria derived from a definition previously described and validated. Narrow phase angle (the lowest quintile) was associated with a four-fold higher odds of frailty among women and a three-fold higher odds of frailty among men, adjusted for age, sex, race-ethnicity and comorbidity. Over a 12-year follow-up period, the adjusted relative hazard for mortality associated with narrow phase angle was 2.4 (95 % confidence interval [95 % CI] 1.8 to 3.1) in women and 2.2 (95 % CI 1.7 to 2.9) in men. Narrow phase angle was significantly associated with mortality even among participants with little or no comorbidity. Analyses of BIA and frailty were cross-sectional; BIA was not measured serially, and incident frailty during follow-up was not assessed. Participants examined at home were excluded from analysis because they did not undergo BIA. Narrow phase angle is associated with frailty and mortality independent of age and comorbidity.
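Phase angle in BIA is conventionally the arctangent of reactance over resistance, expressed in degrees. A minimal sketch (the 50 kHz example values are illustrative, not data from the survey):

```python
import math

def phase_angle_deg(resistance_ohm, reactance_ohm):
    """BIA phase angle in degrees: arctan(reactance / resistance)."""
    return math.degrees(math.atan2(reactance_ohm, resistance_ohm))

# Plausible whole-body values at 50 kHz: R = 500 ohm, Xc = 50 ohm
print(round(phase_angle_deg(500.0, 50.0), 2))  # 5.71
```

Lower reactance (less cell-membrane capacitance) or higher resistance both narrow the phase angle, which is the direction associated with frailty in the study above.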

  4. Stellar Occultations in the Coma of Comet 67P/Churyumov-Gerasimenko Observed by the OSIRIS Camera System

    NASA Astrophysics Data System (ADS)

    Moissl, Richard; Kueppers, Michael

    2016-10-01

    In this paper we present the results of an analysis of a large part of the existing image data from the OSIRIS camera system onboard the Rosetta spacecraft, in which stars of sufficient brightness (down to a limiting magnitude of 6) were observed through the coma of Comet 67P/Churyumov-Gerasimenko ("C-G"). Over the course of the Rosetta main mission, the coma of the comet underwent large changes in density and structure, owing to the changing insolation along the orbit of C-G. We report on the changes of the stellar signals in the wavelength ranges covered by the filters of the OSIRIS Narrow-Angle (NAC) and Wide-Angle (WAC) cameras. Acknowledgements: OSIRIS was built by a consortium led by the Max-Planck-Institut für Sonnensystemforschung, Göttingen, Germany, in collaboration with CISAS, University of Padova, Italy, the Laboratoire d'Astrophysique de Marseille, France, the Instituto de Astrofísica de Andalucía, CSIC, Granada, Spain, the Scientific Support Office of the European Space Agency, Noordwijk, The Netherlands, the Instituto Nacional de Técnica Aeroespacial, Madrid, Spain, the Universidad Politécnica de Madrid, Spain, the Department of Physics and Astronomy of Uppsala University, Sweden, and the Institut für Datentechnik und Kommunikationsnetze der Technischen Universität Braunschweig, Germany.

  5. The gonial angle stripper: an instrument for the treatment of prominent gonial angle.

    PubMed

    Kyutoku, S; Yanagida, A; Kusumoto, K; Ogawa, Y

    1994-12-01

    In the Orient, a prominent gonial angle, so-called benign masseteric hypertrophy, is rather common and considered unattractive. Therefore, its surgical correction is one of the most popular forms of facial skeletal contouring. For accurate and safe osteotomy of the mandibular angle region, a gonial angle stripper was specially invented. It has a small projection that will ease identification of the osteotomy line in a narrow operative field. The tool has been clinically used in eight patients to prove its usefulness, especially for a posteriorly developed mandibular angle.

  6. Detection of pointing errors with CMOS-based camera in intersatellite optical communications

    NASA Astrophysics Data System (ADS)

    Yu, Si-yuan; Ma, Jing; Tan, Li-ying

    2005-01-01

    For very high data rates, intersatellite optical communications hold a potential performance edge over microwave communications. The acquisition and tracking problem is critical because of the narrow transmit beam. In some systems a single array detector performs both the spatial acquisition and tracking functions to detect pointing errors, so both a wide field of view and a high update rate are required. Past systems tended to employ CCD-based cameras with complex readout arrangements, but the additional complexity reduces the applicability of the array-based tracking concept. With the development of CMOS arrays, CMOS-based cameras can employ the single-array-detector concept. The area-of-interest feature of a CMOS-based camera allows a PAT system to read out only a specified portion of the array; the maximum allowed frame rate increases as the size of the area of interest decreases, under certain conditions. A commercially available CMOS camera with 105 fps @ 640×480 is employed in our PAT simulation system, in which only part of the pixels are actually used. Beam angles varying within the field of view can be detected after the light passes through a Cassegrain telescope and an optical focusing system. Spot pixel values (8 bits per pixel) read out from the CMOS sensor are transmitted to a DSP subsystem via an IEEE 1394 bus, and pointing errors can be computed by the centroid equation. Tests showed that: (1) 500 fps @ 100×100 is available in acquisition when the field of view is 1 mrad; and (2) 3k fps @ 10×10 is available in tracking when the field of view is 0.1 mrad.
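The centroid equation mentioned above is simply the intensity-weighted mean of the spot's pixel coordinates. A minimal sketch in plain Python (the real system computes this on the DSP subsystem; the toy spot values are illustrative):

```python
def spot_centroid(pixels):
    """Intensity-weighted centroid (x, y) of a 2-D spot image,
    given as a list of rows of 8-bit pixel values."""
    total = sx = sy = 0.0
    for y, row in enumerate(pixels):
        for x, value in enumerate(row):
            total += value
            sx += x * value
            sy += y * value
    if total == 0:
        raise ValueError("no signal in the readout window")
    return sx / total, sy / total

# A symmetric 3x3 spot centers on the middle pixel
print(spot_centroid([[0, 10, 0],
                     [10, 40, 10],
                     [0, 10, 0]]))  # (1.0, 1.0)
```

Because the centroid uses all pixel intensities, it localizes the beam to a fraction of a pixel, and shrinking the area of interest around it is what allows the higher tracking frame rates quoted above.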

  7. Early direct-injection, low-temperature combustion of diesel fuel in an optical engine utilizing a 15-hole, dual-row, narrow-included-angle nozzle.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gehrke, Christopher R.; Radovanovic, Michael S.; Milam, David M.

    2008-04-01

    Low-temperature combustion of diesel fuel was studied in a heavy-duty, single-cylinder optical engine employing a 15-hole, dual-row, narrow-included-angle nozzle (10 holes × 70° and 5 holes × 35°) with 103-µm-diameter orifices. This nozzle configuration provided the spray targeting necessary to contain the direct-injected diesel fuel within the piston bowl for injection timings as early as 70° before top dead center. Spray-visualization movies, acquired using a high-speed camera, show that impingement of liquid fuel on the piston surface can result when the in-cylinder temperature and density at the time of injection are sufficiently low. Seven single- and two-parameter sweeps around a 4.82-bar gross indicated mean effective pressure load point were performed to map the sensitivity of the combustion and emissions to variations in injection timing, injection pressure, equivalence ratio, simulated exhaust-gas recirculation, intake temperature, intake boost pressure, and load. High-speed movies of natural luminosity were acquired by viewing through a window in the cylinder wall and through a window in the piston to provide quasi-3D information about the combustion process. These movies revealed that advanced combustion phasing resulted in intense pool fires within the piston bowl, after the end of significant heat release. These pool fires are a result of fuel films created when the injected fuel impinged on the piston surface. The emissions results showed a strong correlation with pool-fire activity. Smoke and NOx emissions rose steadily as pool-fire intensity increased, whereas HC and CO showed a dramatic increase with near-zero pool-fire activity.

  8. Wide-angle imaging system with fiberoptic components providing angle-dependent virtual material stops

    NASA Technical Reports Server (NTRS)

    Vaughan, Arthur H. (Inventor)

    1993-01-01

    A strip-imaging wide-angle optical system is provided. The optical system is provided with a 'virtual' material stop to avoid aberrational effects inherent in wide-angle optical systems. The optical system includes a spherical mirror section for receiving light from a 180 deg strip or arc of a target image. Light received by the spherical mirror section is reflected to a frustoconical mirror section for subsequent re-reflection to a row of optical fibers. Each optical fiber transmits a portion of the received light to a detector. The optical system exploits the narrow cone of acceptance associated with optical fibers to substantially eliminate vignetting effects inherent in wide-angle systems. Further, the optical system exploits the narrow cone of acceptance of the optical fibers to substantially limit spherical aberration. The optical system is ideally suited for any application in which a 180 deg strip image needs to be detected, and is particularly well adapted for use in hostile environments such as planetary exploration.

  9. Performance Characteristics For The Orbiter Camera Payload System's Large Format Camera (LFC)

    NASA Astrophysics Data System (ADS)

    Mollberg, Bernard H.

    1981-11-01

    The Orbiter Camera Payload System, the OCPS, is an integrated photographic system which is carried into Earth orbit as a payload in the Shuttle Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. The primary design objective for the LFC was to maximize all system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment.

  10. Application of narrow-band television to industrial and commercial communications

    NASA Technical Reports Server (NTRS)

    Embrey, B. C., Jr.; Southworth, G. R.

    1974-01-01

    The development of narrow-band television systems for use in space is presented. Applications of the technology to future spacecraft requirements are discussed, along with narrow-band television's influence in stimulating development within the industry. The transfer of the technology into industrial and commercial communications is described. Major application areas include: (1) medicine; (2) education; (3) remote sensing; (4) traffic control; and (5) weather observation. Applications in data processing, image enhancement, and information retrieval are provided by the combination of the TV camera and the computer.

  11. The canopy camera

    Treesearch

    Harry E. Brown

    1962-01-01

    The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...

  12. Autonomous pedestrian localization technique using CMOS camera sensors

    NASA Astrophysics Data System (ADS)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method needs specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the other, rear shoe. Our simulation results show that a pedestrian walking through a shopping mall wearing this device can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
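With the baseline between the two rear-shoe cameras known, each pair of LOS angles fixes a marker position by planar triangulation. A simplified 2-D sketch (the 10 cm baseline and marker position are illustrative, not the paper's hardware geometry):

```python
import math

def triangulate(baseline_m, theta1, theta2):
    """2-D marker position from bearings (radians from the +x axis) measured
    by two cameras at (0, 0) and (baseline_m, 0)."""
    # Range from camera 1, by the law of sines in the camera-camera-marker triangle
    t = baseline_m * math.sin(theta2) / math.sin(theta2 - theta1)
    return t * math.cos(theta1), t * math.sin(theta1)

# Marker 1 m ahead, slightly left of the center of a 10 cm baseline
theta1 = math.atan2(1.0, 0.05)    # bearing from the camera at (0, 0)
theta2 = math.atan2(1.0, -0.05)   # bearing from the camera at (0.1, 0)
x, y = triangulate(0.1, theta1, theta2)
print(round(x, 3), round(y, 3))  # 0.05 1.0
```

Note that the bearing difference shrinks with distance, so angle-measurement noise degrades range accuracy quickly; the short stride-to-stride distances in this application keep that geometry favorable.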

  13. 5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    5. VAL CAMERA CAR, DETAIL OF HOIST AT SIDE OF BRIDGE AND ENGINE CAR ON TRACKS, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  14. Contemporary Approach to the Diagnosis and Management of Primary Angle-Closure Disease.

    PubMed

    Razeghinejad, M Reza; Myers, Jonathan S

    2018-05-16

    The primary angle-closure disease spectrum varies from a narrow angle to advanced glaucoma. A variety of imaging technologies may assist the clinician in determining the pathophysiology and diagnosis of primary angle closure, but gonioscopy remains a mainstay of clinical evaluation. Laser iridotomy effectively eliminates the pupillary block component of angle closure; however, studies show that in many patients the iridocorneal angle remains narrow from underlying anatomic issues, and increasing lens size often leads to further narrowing over time. Recent studies have further characterized the role of the lens in angle-closure disease, and cataract or clear lens extraction is increasingly used earlier in its management. As a first surgical step in angle-closure glaucoma, lens extraction alone often effectively controls the pressure with less risk of complications than concurrent or stand-alone glaucoma surgery, but may not be sufficient in more advanced or severe disease. We provide a comprehensive review of primary angle-closure disease nomenclature, imaging, and current laser and surgical management. Copyright © 2018. Published by Elsevier Inc.

  15. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  16. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  17. 3-D Flow Visualization with a Light-field Camera

    NASA Astrophysics Data System (ADS)

    Thurow, B.

    2012-12-01

    Light-field cameras have received attention recently due to their ability to acquire photographs that can be computationally refocused after they have been acquired. In this work, we describe the development of a light-field camera system for 3D visualization of turbulent flows. The camera developed in our lab, also known as a plenoptic camera, uses an array of microlenses mounted next to an image sensor to resolve both the position and angle of light rays incident upon the camera. For flow visualization, the flow field is seeded with small particles that follow the fluid's motion and are imaged using the camera and a pulsed light source. The tomographic MART algorithm is then applied to the light-field data in order to reconstruct a 3D volume of the instantaneous particle field. 3D, 3C velocity vectors are then determined from a pair of 3D particle fields using conventional cross-correlation algorithms. As an illustration of the concept, 3D/3C velocity measurements of a turbulent boundary layer produced on the wall of a conventional wind tunnel are presented. Future experiments are planned to use the camera to study the influence of wall permeability on the 3-D structure of the turbulent boundary layer. [Figure captions: a schematic illustrating the concept of a plenoptic camera, in which each pixel records both the position and angle of light rays entering the camera, enabling computational refocusing after acquisition; and an instantaneous 3D velocity field of a turbulent boundary layer determined from light-field data captured by a plenoptic camera.]

  18. 9. COMPLETED ROLLING CAMERA CAR ON RAILROAD TRACK AND BRIDGE ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    9. COMPLETED ROLLING CAMERA CAR ON RAILROAD TRACK AND BRIDGE LOOKING WEST, APRIL 26, 1948. (ORIGINAL PHOTOGRAPH IN POSSESSION OF DAVE WILLIS, SAN DIEGO, CALIFORNIA.) - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  19. Acquisition and visualization techniques for narrow spectral color imaging.

    PubMed

    Neumann, László; García, Rafael; Basa, János; Hegedüs, Ramón

    2013-06-01

    This paper introduces a new approach in narrow-band imaging (NBI). Existing NBI techniques generate images by selecting discrete bands over the full visible spectrum or an even wider spectral range. In contrast, here we perform the sampling with filters covering a tight spectral window. This image acquisition method, named narrow spectral imaging, can be particularly useful when optical information is only available within a narrow spectral window, such as in the case of deep-water transmittance, which constitutes the principal motivation of this work. In this study we demonstrate the potential of the proposed photographic technique on non-underwater scenes recorded under controlled conditions. To this end, three multilayer narrow bandpass filters were employed, transmitting at the bluish wavelengths of 440, 456, and 470 nm, respectively. Since the differences among images captured in such a narrow spectral window can be extremely small, both image acquisition and visualization require a novel approach. First, high-bit-depth images were acquired with the multilayer narrow-band filters either placed in front of the illumination or mounted on the camera lens. Second, a color-mapping method is proposed with which the input data can be transformed onto the entire display color gamut with a continuous and perceptually nearly uniform mapping, while ensuring optimally high information content for human perception.

  20. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    USGS Publications Warehouse

    Hobbs, Michael T.; Brehme, Cheryl S.

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  1. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates.

    PubMed

    Hobbs, Michael T; Brehme, Cheryl S

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing.

  2. An improved camera trap for amphibians, reptiles, small mammals, and large invertebrates

    PubMed Central

    2017-01-01

    Camera traps are valuable sampling tools commonly used to inventory and monitor wildlife communities but are challenged to reliably sample small animals. We introduce a novel active camera trap system enabling the reliable and efficient use of wildlife cameras for sampling small animals, particularly reptiles, amphibians, small mammals and large invertebrates. It surpasses the detection ability of commonly used passive infrared (PIR) cameras for this application and eliminates problems such as high rates of false triggers and high variability in detection rates among cameras and study locations. Our system, which employs a HALT trigger, is capable of coupling to digital PIR cameras and is designed for detecting small animals traversing small tunnels, narrow trails, small clearings and along walls or drift fencing. PMID:28981533

  3. Voyager spacecraft images of Jupiter and Saturn

    NASA Technical Reports Server (NTRS)

    Birnbaum, M. M.

    1982-01-01

    The Voyager imaging system is described, noting that it is made up of a narrow-angle and a wide-angle TV camera, each in turn consisting of optics, a filter wheel and shutter assembly, a vidicon tube, and an electronics subsystem. The narrow-angle camera has a focal length of 1500 mm; its field of view is 0.42 deg and its focal ratio is f/8.5. For the wide-angle camera, the focal length is 200 mm, the field of view 3.2 deg, and the focal ratio of f/3.5. Images are exposed by each camera through one of eight filters in the filter wheel on the photoconductive surface of a magnetically focused and deflected vidicon having a diameter of 25 mm. The vidicon storage surface (target) is a selenium-sulfur film having an active area of 11.14 x 11.14 mm; it holds a frame consisting of 800 lines with 800 picture elements per line. Pictures of Jupiter, Saturn, and their moons are presented, with short descriptions given of the area being viewed.
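    The quoted optics figures are internally consistent: the field of view follows from the focal length and the 11.14 mm active target width via FOV = 2 arctan(w / 2f). A quick check:

```python
import math

def fov_deg(target_mm, focal_mm):
    # Full field of view subtended by the active target width.
    return 2 * math.degrees(math.atan(target_mm / (2 * focal_mm)))

print(round(fov_deg(11.14, 1500), 2))  # narrow-angle: 0.43 deg (quoted 0.42)
print(round(fov_deg(11.14, 200), 2))   # wide-angle: 3.19 deg (quoted 3.2)
```

    The small discrepancy on the narrow-angle value is within rounding of the quoted specifications.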

  4. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  5. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  6. 3D bubble reconstruction using multiple cameras and space carving method

    NASA Astrophysics Data System (ADS)

    Fu, Yucheng; Liu, Yang

    2018-07-01

    An accurate measurement of bubble shape and size is of significant value in understanding the behavior of bubbles in many engineering applications. Past studies usually use one or two cameras to estimate bubble volume and surface area, among other parameters. The 3D bubble shape and rotation angle are generally not available in these studies. To overcome this challenge and obtain more detailed information on individual bubbles, a 3D imaging system consisting of four high-speed cameras is developed in this paper, and the space carving method is used to reconstruct the 3D bubble shape from the recorded high-speed images taken from different view angles. The proposed method can reconstruct the bubble surface with minimal assumptions. A benchmarking test is performed in a 3 cm × 1 cm rectangular channel with stagnant water. The results show that the newly proposed method can measure the bubble volume with an error of less than 2% compared with the syringe reading. The conventional two-camera system has an error of around 10%, and the one-camera system an error greater than 25%. The visualization of a rising 3D bubble demonstrates the wall's influence on bubble rotation angle and aspect ratio, which also explains the large error in the single-camera measurement.
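    Space carving, as used above, retains only those voxels whose projections fall inside the object's silhouette in every camera view. A hedged sketch follows; the orthographic toy cameras and spherical test object are illustrative assumptions, not the paper's calibrated four-camera setup:

```python
import numpy as np

def space_carve(grid_pts, cameras):
    """Keep only the voxels whose projection lands inside every
    camera's binary silhouette (the visual-hull approximation)."""
    keep = np.ones(len(grid_pts), dtype=bool)
    for project, silhouette in cameras:
        uv = project(grid_pts).astype(int)          # (N, 2) pixel coords
        u, v = uv[:, 0], uv[:, 1]
        inside = (0 <= u) & (u < silhouette.shape[1]) & \
                 (0 <= v) & (v < silhouette.shape[0])
        hit = np.zeros(len(grid_pts), dtype=bool)
        hit[inside] = silhouette[v[inside], u[inside]]
        keep &= hit                                  # carve away misses
    return grid_pts[keep]

# Toy demo: a ball of radius 0.5 viewed by two orthographic "cameras".
res = 64
ax = np.linspace(-1, 1, 21)
grid = np.array(np.meshgrid(ax, ax, ax)).reshape(3, -1).T

def ortho(i, j):
    # Orthographic projection that keeps world axes i and j.
    return lambda p: np.floor((p[:, [i, j]] + 1) / 2 * (res - 1))

uu, vv = np.meshgrid(np.linspace(-1, 1, res), np.linspace(-1, 1, res))
disk = (uu**2 + vv**2) <= 0.5**2    # the ball's silhouette in both views
hull = space_carve(grid, [(ortho(0, 1), disk), (ortho(1, 2), disk)])
```

    With only two views the carved hull is the intersection of two cylinders, a superset of the true ball; adding views (the paper uses four) tightens the hull toward the actual bubble surface.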

  7. Reductions in injury crashes associated with red light camera enforcement in oxnard, california.

    PubMed

    Retting, Richard A; Kyrychenko, Sergey Y

    2002-11-01

    This study estimated the impact of red light camera enforcement on motor vehicle crashes in one of the first US communities to employ such cameras: Oxnard, California. Crash data were analyzed for Oxnard and for 3 comparison cities. Changes in crash frequencies were compared for Oxnard and the control cities and for signalized and nonsignalized intersections by means of a generalized linear regression model. Overall, crashes at signalized intersections throughout Oxnard were reduced by 7% and injury crashes were reduced by 29%. Right-angle crashes, those most associated with red light violations, were reduced by 32%; right-angle crashes involving injuries were reduced by 68%. Because red light cameras can be a permanent component of the transportation infrastructure, crash reductions attributed to camera enforcement should be sustainable.

  8. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.

  9. Sublimation of icy aggregates in the coma of comet 67P/Churyumov-Gerasimenko detected with the OSIRIS cameras on board Rosetta

    NASA Astrophysics Data System (ADS)

    Gicquel, A.; Vincent, J.-B.; Agarwal, J.; A'Hearn, M. F.; Bertini, I.; Bodewits, D.; Sierks, H.; Lin, Z.-Y.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Koschny, D.; Rickman, H.; Keller, H. U.; Barucci, M. A.; Bertaux, J.-L.; Besse, S.; Cremonese, G.; Da Deppo, V.; Davidsson, B.; Debei, S.; Deller, J.; De Cecco, M.; Frattin, E.; El-Maarry, M. R.; Fornasier, S.; Fulle, M.; Groussin, O.; Gutiérrez, P. J.; Gutiérrez-Marquez, P.; Güttler, C.; Höfner, S.; Hofmann, M.; Hu, X.; Hviid, S. F.; Ip, W.-H.; Jorda, L.; Knollenberg, J.; Kovacs, G.; Kramm, J.-R.; Kührt, E.; Küppers, M.; Lara, L. M.; Lazzarin, M.; Moreno, J. J. Lopez; Lowry, S.; Marzari, F.; Masoumzadeh, N.; Massironi, M.; Moreno, F.; Mottola, S.; Naletto, G.; Oklay, N.; Pajola, M.; Pommerol, A.; Preusker, F.; Scholten, F.; Shi, X.; Thomas, N.; Toth, I.; Tubiana, C.

    2016-11-01

    Beginning in 2014 March, the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) cameras began capturing images of the nucleus and coma (gas and dust) of comet 67P/Churyumov-Gerasimenko using both the wide angle camera (WAC) and the narrow angle camera (NAC). The many observations taken since July of 2014 have been used to study the morphology, location, and temporal variation of the comet's dust jets. We analysed the dust monitoring observations shortly after the southern vernal equinox on 2015 May 30 and 31 with the WAC at the heliocentric distance Rh = 1.53 AU, where it is possible to observe that the jet rotates with the nucleus. We found that the decline of brightness as a function of the distance of the jet is much steeper than the background coma, which is a first indication of sublimation. We adapted a model of sublimation of icy aggregates and studied the effect as a function of the physical properties of the aggregates (composition and size). The major finding of this paper was that through the sublimation of the aggregates of dirty grains (radius a between 5 and 50 μm) we were able to completely reproduce the radial brightness profile of a jet beyond 4 km from the nucleus. To reproduce the data, we needed to inject a number of aggregates between 8.5 × 10¹³ and 8.5 × 10¹⁰ for a = 5 and 50 μm, respectively, or an initial mass of H2O ice around 22 kg.

  10. Note: Simple hysteresis parameter inspector for camera module with liquid lens

    NASA Astrophysics Data System (ADS)

    Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung

    2010-05-01

    A method to inspect the hysteresis parameter of a camera module is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than that of a single lens only. Because variation in focal length influences image quality, we propose using the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection takes only 20 s to complete. Compared with other instruments, this inspection method is thus more suitable for integration into mass production lines for online quality assurance.

  11. Combined ab interno trabeculotomy and lens extraction: a novel management option for combined uveitic and chronic narrow angle raised intraocular pressure

    PubMed Central

    Lin, Siying; Gupta, Bhaskar; Rossiter, Jonathan

    2016-01-01

    Minimally invasive glaucoma surgery is a developing area that has the potential to replace traditional glaucoma surgery, with its known risk profile, but at present there are no randomised controlled data to validate its use. We report on a case where sequential bilateral combined ab interno trabeculotomy and lens extraction surgery was performed on a 45-year-old woman with combined uveitic and chronic narrow angle raised intraocular pressure. Maximal medical management alone could not control the intraocular pressure. At 12-month follow-up, the patient had achieved stable intraocular pressure in both eyes on a combination of topical ocular antiglaucomatous and steroid therapies. This case demonstrates the effectiveness of trabecular meshwork ablation via ab interno trabeculotomy in a case of complex mixed mechanism glaucoma. PMID:26833953

  12. Multi-Angle View of the Canary Islands

    NASA Technical Reports Server (NTRS)

    2000-01-01

    A multi-angle view of the Canary Islands in a dust storm, 29 February 2000. At left is a true-color image taken by the Multi-angle Imaging SpectroRadiometer (MISR) instrument on NASA's Terra satellite. This image was captured by the MISR camera looking at a 70.5-degree angle to the surface, ahead of the spacecraft. The middle image was taken by the MISR downward-looking (nadir) camera, and the right image is from the aftward 70.5-degree camera. The images are reproduced using the same radiometric scale, so variations in brightness, color, and contrast represent true variations in surface and atmospheric reflectance with angle. Windblown dust from the Sahara Desert is apparent in all three images, and is much brighter in the oblique views. This illustrates how MISR's oblique imaging capability makes the instrument a sensitive detector of dust and other particles in the atmosphere. Data for all channels are presented in a Space Oblique Mercator map projection to facilitate their co-registration. The images are about 400 km (250 miles)wide, with a spatial resolution of about 1.1 kilometers (1,200 yards). North is toward the top. MISR was built and is managed by NASA's Jet Propulsion Laboratory, Pasadena, CA, for NASA's Office of Earth Science, Washington, DC. The Terra satellite is managed by NASA's Goddard Space Flight Center, Greenbelt, MD. JPL is a division of the California Institute of Technology.

  13. Optical design of space cameras for automated rendezvous and docking systems

    NASA Astrophysics Data System (ADS)

    Zhu, X.

    2018-05-01

    Visible cameras are essential components of a space automated rendezvous and docking (AR&D) system, which is utilized in many space missions including crewed or robotic spaceship docking, on-orbit satellite servicing, and autonomous landing and hazard avoidance. Cameras are ubiquitous devices in modern times, with countless lens designs focused on high resolution and color rendition. In comparison, space AR&D cameras, while not required to have extremely high resolution or color rendition, impose some unique requirements on lenses. Fixed lenses with no moving parts, and separate lenses for the narrow and wide fields of view (FOV), are normally used in order to meet high reliability requirements. Cemented lens elements are usually avoided due to the wide temperature swings and outgassing requirements of the space environment. The lenses should be designed with exceptional stray-light performance and minimal lens flare, given the intense sunlight and lack of atmospheric scattering in space. Furthermore, radiation-resistant glasses should be considered to prevent glass darkening from space radiation. Neptec has designed and built a narrow-FOV (NFOV) lens and a wide-FOV (WFOV) lens for an AR&D visible camera system. The lenses are designed using the ZEMAX program; the stray-light performance and the lens baffles are simulated using the TracePro program. This paper discusses general requirements for space AR&D camera lenses and the specific measures taken for the lenses to meet space environmental requirements.

  14. Stereo Cameras for Clouds (STEREOCAM) Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Romps, David; Oktem, Rusen

    2017-10-31

    The three pairs of stereo camera setups aim to provide synchronized and stereo-calibrated time series of images that can be used for 3D cloud mask reconstruction. Each camera pair is positioned at approximately 120 degrees from the other pairs, with a 17°-19° pitch angle from the ground, and at 5-6 km distance from the U.S. Department of Energy (DOE) Central Facility at the Atmospheric Radiation Measurement (ARM) Climate Research Facility Southern Great Plains (SGP) observatory to cover the region from northeast, northwest, and southern views. Images from both cameras of the same stereo setup can be paired together to obtain 3D reconstruction by triangulation. 3D reconstructions from the ring of three stereo pairs can be combined to generate a 3D mask from surrounding views. This handbook delivers all stereo reconstruction parameters of the cameras necessary to make 3D reconstructions from the stereo camera images.
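    The triangulation step mentioned above can be illustrated with the midpoint method: finding the point closest, in the least-squares sense, to the two back-projected pixel rays. This is a generic sketch with hypothetical ray origins and directions, not the handbook's actual calibration parameters:

```python
import numpy as np

def triangulate_midpoint(o1, d1, o2, d2):
    """Midpoint of the shortest segment between two viewing rays
    (origin o, direction d): a simple two-view triangulation."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    b = o2 - o1
    a = d1 @ d2                      # cosine of the angle between rays
    # Least-squares ray parameters minimizing |(o1+t1*d1) - (o2+t2*d2)|.
    t1 = (b @ d1 - (b @ d2) * a) / (1.0 - a * a)
    t2 = ((b @ d1) * a - b @ d2) / (1.0 - a * a)
    return 0.5 * ((o1 + t1 * d1) + (o2 + t2 * d2))

# Hypothetical rays from two cameras at (-1,0,0) and (1,0,0), both
# aimed at a point at (0,0,5); the rays intersect there exactly.
p = triangulate_midpoint(np.array([-1.0, 0.0, 0.0]), np.array([1.0, 0.0, 5.0]),
                         np.array([1.0, 0.0, 0.0]), np.array([-1.0, 0.0, 5.0]))
print(np.allclose(p, [0.0, 0.0, 5.0]))  # → True
```

    With noisy real detections the two rays are skew, and the returned midpoint is the natural estimate of the cloud point.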

  15. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
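    The quoted non-repeat interval of slightly more than 136 years is consistent with a time code whose seconds field is a 32-bit counter; this is an inference for illustration, not a documented detail of the proprietary Geo-TimeCode format:

```python
SECONDS_PER_YEAR = 365.25 * 24 * 3600      # Julian year, in seconds
wrap_years = 2**32 / SECONDS_PER_YEAR      # span of a 32-bit seconds counter
print(round(wrap_years, 1))  # → 136.1
```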

  16. Camera calibration for multidirectional flame chemiluminescence tomography

    NASA Astrophysics Data System (ADS)

    Wang, Jia; Zhang, Weiguang; Zhang, Yuhong; Yu, Xun

    2017-04-01

    Flame chemiluminescence tomography (FCT), which combines computerized tomography theory with multidirectional chemiluminescence emission measurements, can realize instantaneous three-dimensional (3-D) diagnostics for flames with high spatial and temporal resolution. One critical step of FCT is recording the projections with multiple cameras from different view angles. High-accuracy reconstruction requires that the extrinsic parameters (positions and orientations) and intrinsic parameters (especially the image distances) of the cameras be accurately calibrated first. Taking the focus effect of the camera into account, a modified camera calibration method is presented for FCT, and a 3-D calibration pattern was designed to solve for the parameters. The precision of the method was evaluated by reprojecting feature points to the cameras using the calibration results. The maximum root mean square error is 1.42 pixels for the feature points' position and 0.0064 mm for the image distance. An FCT system with 12 cameras was calibrated by the proposed method and the 3-D CH* intensity of a propane flame was measured. The results showed that the FCT system provides reasonable reconstruction accuracy using the cameras' calibration results.
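    The precision metric used above, the root mean square distance between reprojected and observed feature points, can be computed as follows (a generic sketch; the point coordinates are made up for illustration):

```python
import numpy as np

def rms_reprojection_error(projected, observed):
    """RMS distance (in pixels) between reprojected feature points
    and their observed image locations."""
    d = np.asarray(projected, float) - np.asarray(observed, float)
    return np.sqrt(np.mean(np.sum(d**2, axis=1)))

# Hypothetical observed points and reprojections each off by 1 pixel.
obs = np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]])
proj = obs + [[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]]
print(rms_reprojection_error(proj, obs))  # → 1.0
```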

  17. Miranda

    NASA Image and Video Library

    1999-08-24

    One wide-angle and eight narrow-angle camera images of Miranda, taken by NASA Voyager 2, were combined in this view. The controlled mosaic was transformed to an orthographic view centered on the south pole.

  18. Combined ab interno trabeculotomy and lens extraction: a novel management option for combined uveitic and chronic narrow angle raised intraocular pressure.

    PubMed

    Lin, Siying; Gupta, Bhaskar; Rossiter, Jonathan

    2016-02-01

    Minimally invasive glaucoma surgery is a developing area that has the potential to replace traditional glaucoma surgery, with its known risk profile, but at present there are no randomised controlled data to validate its use. We report on a case where sequential bilateral combined ab interno trabeculotomy and lens extraction surgery was performed on a 45-year-old woman with combined uveitic and chronic narrow angle raised intraocular pressure. Maximal medical management alone could not control the intraocular pressure. At 12-month follow-up, the patient had achieved stable intraocular pressure in both eyes on a combination of topical ocular antiglaucomatous and steroid therapies. This case demonstrates the effectiveness of trabecular meshwork ablation via ab interno trabeculotomy in a case of complex mixed mechanism glaucoma. 2016 BMJ Publishing Group Ltd.

  19. Perfect narrow band absorber for sensing applications.

    PubMed

    Luo, Shiwen; Zhao, Jun; Zuo, Duluo; Wang, Xinbing

    2016-05-02

    We design and numerically investigate a perfect narrow-band absorber based on a metal-metal-dielectric-metal structure consisting of periodic metallic nanoribbon arrays. The absorber presents an ultra-narrow absorption band of 1.11 nm with nearly perfect absorption of over 99.9% in the infrared region. For oblique incidence, the absorber shows absorption of more than 95% over a wide range of incident angles from 0 to 50°. The influence of the structural parameters on this performance is investigated. The structure shows high sensing performance, with a high sensitivity of 1170 nm/RIU and a large figure of merit of 1054. The proposed structure has great potential as a biosensor.
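    The quoted figures are mutually consistent if the figure of merit is defined, as is common for refractive-index sensors, as the sensitivity divided by the resonance linewidth (an assumption; the paper's exact definition is not reproduced here):

```python
sensitivity = 1170   # spectral shift per refractive index unit, nm/RIU
fwhm = 1.11          # absorption linewidth, nm
fom = sensitivity / fwhm
print(round(fom))  # → 1054
```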

  20. Angle Performance on Optima XE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Jonathan; Satoh, Shu

    2011-01-07

    Angle control on high-energy implanters is important due to shrinking device dimensions and sensitivity to channeling at high beam energies. On Optima XE, beam-to-wafer angles are controlled in both the horizontal and vertical directions. In the horizontal direction, the beam angle is measured through a series of narrow slits, and any angle adjustment is made by steering the beam with the corrector magnet. In the vertical direction, the beam angle is measured through a high aspect ratio mask, and any angle adjustment is made by slightly tilting the wafer platen during implant. Using a sensitive channeling condition, we were able to quantify the angle repeatability of Optima XE. By quantifying the sheet resistance sensitivity to both horizontal and vertical angle variation, the total angle variation was calculated as 0.04° (1σ). Implants were run over a five-week period, with all of the wafers selected from a single boule, in order to control for any crystal cut variation.

  1. Iridotomy to slow progression of angle-closure glaucoma

    PubMed Central

    Le, Jimmy T; Rouse, Benjamin; Gazzard, Gus

    2016-01-01

    This is the protocol for a review and there is no abstract. The objectives are as follows: The primary objective is to assess the role of iridotomy-compared with observation-in the prevention of visual field loss for individuals who have primary angle closure or primary angle-closure glaucoma in at least one eye. We will also examine the role of iridotomy in the prevention of elevated intraocular pressure (IOP) in individuals with narrow angles (primary angle-closure suspect) in at least one eye. PMID:27551238

  2. Omni-Directional Viewing-Angle Switching through Control of the Beam Divergence Angle in a Liquid Crystal Panel

    NASA Astrophysics Data System (ADS)

    Baek, Jong-In; Kim, Ki-Han; Kim, Jae Chang; Yoon, Tae-Hoon

    2010-01-01

    This paper proposes a method of omni-directional viewing-angle switching by controlling the beam divergence angle (BDA) in a liquid crystal (LC) panel. The LCs, aligned randomly by in-cell polymer structures, diffuse the collimated backlight for the bright state of the wide viewing-angle mode. We align the LCs homogeneously by applying an in-plane field for the narrow viewing-angle mode. By doing this, the scattering is significantly reduced, so the small BDA is maintained as the light passes through the LC layer. The dark state can be obtained by aligning the LCs homeotropically with a vertical electric field. We experimentally demonstrated omni-directional switching of the viewing angle without an additional panel or backlighting system.

  3. Interference-induced angle-independent acoustical transparency

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Qi, Lehua; Yu, Gaokun, E-mail: gkyu@ouc.edu.cn; Wang, Ning

    2014-12-21

    It is revealed that Fano-like interference leads to extraordinary acoustic transmission through a slab metamaterial of thickness much smaller than the wavelength, with each unit cell consisting of a Helmholtz resonator and a narrow subwavelength slit. More importantly, both theoretical analysis and experimental measurement show that angle-independent acoustical transparency can be realized by grafting a Helmholtz resonator and a quarter-wave resonator to the wall of a narrow subwavelength slit in each unit cell of a slit array. The observed phenomenon results from the interferences between the waves propagating in the slit, those re-radiated by the Helmholtz resonator, and those re-radiated by the quarter-wave resonator. The proposed design may find applications in designing angle-independent acoustical filters and controlling the phase of the transmitted waves.

  4. What convention is used for the illumination and view angles?

    Atmospheric Science Data Center

    2014-12-08

    ... Azimuth angles are measured clockwise from the direction of travel to local north. For both the Sun and cameras, azimuth describes the ... to the equator, because of its morning equator crossing time. Additionally, the difference in view and solar azimuth angle will be near ...

  5. Acapulco, Mexico taken with electronic still camera

    NASA Image and Video Library

    1995-10-29

    STS073-E-5275 (3 Nov. 1995) --- Resort City of Acapulco appears in this north-looking view, photographed from the Earth-orbiting space shuttle Columbia with the Electronic Still Camera (ESC). The airport lies on a narrow neck of land between the sea and a large coastal lagoon. This mission marks the first time NASA has released in mid-flight electronically-downlinked color images that feature geographic subject matter.

  6. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
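The equisolid-angle projection named in this record maps a zenith angle θ to an image radius r = 2f·sin(θ/2), so a modeled sun direction can be compared with the detected sun pixel. A minimal sketch of that forward projection (the intrinsics f_px, cx, cy are hypothetical; a real calibration would also fit decentering and lens distortion terms):

```python
import math

def sun_to_pixel(zenith_deg, azimuth_deg, f_px=500.0, cx=960.0, cy=960.0):
    """Project a sun direction onto an equisolid-angle fisheye image plane.

    r = 2 * f * sin(theta / 2), where theta is the zenith angle; azimuth is
    measured clockwise from image 'up'. The intrinsics here are illustrative.
    """
    theta = math.radians(zenith_deg)
    phi = math.radians(azimuth_deg)
    r = 2.0 * f_px * math.sin(theta / 2.0)
    return cx + r * math.sin(phi), cy - r * math.cos(phi)

u, v = sun_to_pixel(0.0, 0.0)  # sun at zenith lands at the image center
```

Calibration then amounts to adjusting the intrinsics until projected sun positions match the detected ones over a day of imagery.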

  7. The Orbiter camera payload system's large-format camera and attitude reference system

    NASA Technical Reports Server (NTRS)

    Schardt, B. B.; Mollberg, B. H.

    1985-01-01

    The Orbiter camera payload system (OCPS) is an integrated photographic system carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a large-format camera (LFC), a precision wide-angle cartographic instrument capable of producing high-resolution stereophotography of great geometric fidelity in multiple base-to-height ratios. A secondary and supporting system to the LFC is the attitude reference system (ARS), a dual-lens stellar camera array (SCA) and camera support structure. The SCA is a 70 mm film system that is rigidly mounted to the LFC lens support structure and, through the simultaneous acquisition of two star fields with each earth viewing LFC frame, makes it possible to precisely determine the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high-precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with rocket launch conditions and the on-orbit environment. The full OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on Oct. 5, 1984, as a major payload aboard the STS-41G mission.

  8. Evaluation of the anterior chamber angle in Asian Indian eyes by ultrasound biomicroscopy and gonioscopy.

    PubMed

    Kaushik, Sushmita; Jain, Rajeev; Pandav, Surinder Singh; Gupta, Amod

    2006-09-01

To compare the ultrasound biomicroscopic measurement of the anterior chamber angle in Asian Indian eyes with the angle width estimated by gonioscopy. Patients with open and closed angles attending a glaucoma clinic were recruited for the study. Temporal quadrants of the angles of patients were categorized by gonioscopy as Grade 0 to Grade 4, using Shaffer's classification. These angles were quantified by ultrasound biomicroscopy (UBM) using the following biometric characteristics: angle opening distance at 250 μm (AOD 250) and 500 μm (AOD 500) from the scleral spur, and trabecular meshwork-ciliary process distance (TCPD). The angles were further segregated as "narrow angles" (Shaffer's Grade 2 or less) and "open angles" (Shaffer's Grade 3 and 4). The UBM measurements were computed in each case and analyzed in relation to the gonioscopic angle evaluation. One hundred and sixty-three eyes of 163 patients were analyzed. One hundred and six eyes had "narrow angles" and 57 eyes had "open angles" on gonioscopy. There was a significant difference among the mean UBM measurements of each angle grade estimated by gonioscopy (P < 0.001). The Pearson correlation coefficient between all UBM parameters and gonioscopy grades was significant at the 0.01 level. The mean AOD 250, AOD 500 and TCPD in narrow angles were 58±49 μm, 102±84 μm and 653±124 μm respectively, while they were 176±47 μm, 291±62 μm and 883±94 μm in eyes with open angles (P < 0.001). The angle width estimated by gonioscopy correlated significantly with the angle dimensions measured by UBM. Gonioscopy, though a subjective test, is a reliable method for estimation of the angle width.

  9. Angle imaging: Advances and challenges

    PubMed Central

    Quek, Desmond T L; Nongpiur, Monisha E; Perera, Shamira A; Aung, Tin

    2011-01-01

    Primary angle closure glaucoma (PACG) is a major form of glaucoma in large populous countries in East and South Asia. The high visual morbidity from PACG is related to the destructive nature of the asymptomatic form of the disease. Early detection of anatomically narrow angles is important and the subsequent prevention of visual loss from PACG depends on an accurate assessment of the anterior chamber angle (ACA). This review paper discusses the advantages and limitations of newer ACA imaging technologies, namely ultrasound biomicroscopy, Scheimpflug photography, anterior segment optical coherence tomography and EyeCam, highlighting the current clinical evidence comparing these devices with each other and with clinical dynamic indentation gonioscopy, the current reference standard. PMID:21150037

  10. Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows

    NASA Astrophysics Data System (ADS)

    Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.

    2016-10-01

    A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.
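The mirror pair in this setup effectively gives the single camera two virtual viewpoints, so particle depth follows from two-view triangulation; for a rectified pair this reduces to Z = f·b/d. A minimal sketch with illustrative numbers (the actual baseline and focal length are not given in the record):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Rectified two-view triangulation: Z = f * b / d.

    The two mirror-projected views act as virtual cameras with an assumed
    baseline b (meters) and focal length f (pixels); values are illustrative.
    """
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(10.0, 1000.0, 0.05)  # 10 px disparity -> 5.0 m
```

Smaller disparities correspond to more distant particles, which is why depth accuracy degrades as the effective baseline between the mirror views shrinks.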

  11. Integrated calibration between digital camera and laser scanner from mobile mapping system for land vehicles

    NASA Astrophysics Data System (ADS)

    Zhao, Guihua; Chen, Hong; Li, Xingquan; Zou, Xiaoliang

The paper presents the concepts of lever arm and boresight angle, the design requirements for calibration sites, and an integrated method for calibrating the boresight angles of a digital camera and a laser scanner. Taking test data collected by Applanix's LandMark system as an example, the camera calibration method stacks three consecutive stereo images and applies OTF-Calibration using ground control points. The boresight angles of the laser scanner are calibrated with both manual and automatic methods using ground control points. Integrated calibration between the digital camera and the laser scanner is introduced to improve the systematic precision of the two sensors. Analysis of the measured differences between ground control points and their corresponding image points in sequential images shows object positions to be within about 15 cm in relative error and 20 cm in absolute error. Comparison of ground control points with their corresponding laser point clouds shows errors of less than 20 cm. These experimental results indicate that the mobile mapping system is an efficient and reliable system for rapidly generating high-accuracy, high-density road spatial data.

  12. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions that could leave an object in partial or full view in one camera when the same object is fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection is dependent on the speed of the object as well as variations in direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  13. Omnidirectional narrow optical filters for circularly polarized light in a nanocomposite structurally chiral medium.

    PubMed

    Avendaño, Carlos G; Palomares, Laura O

    2018-04-20

    We consider the propagation of electromagnetic waves throughout a nanocomposite structurally chiral medium consisting of metallic nanoballs randomly dispersed in a structurally chiral material whose dielectric properties can be represented by a resonant effective uniaxial tensor. It is found that an omnidirectional narrow pass band and two omnidirectional narrow band gaps are created in the blue optical spectrum for right and left circularly polarized light, as well as narrow reflection bands for right circularly polarized light that can be controlled by varying the light incidence angle and the filling fraction of metallic inclusions.

  14. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, which has been estimated from the light-field image, and the metric object distance. These two methods are compared to a well known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, and thus finding stereo correspondences is enhanced.
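The "Kalman-like" update of a depth estimate carrying a variance amounts to inverse-variance weighting of two observations. A minimal sketch with illustrative values (not the paper's actual parameterization):

```python
def fuse_depth(mu1, var1, mu2, var2):
    """Inverse-variance (Kalman-like) fusion of two depth estimates.

    Each estimate carries a variance; the fused estimate weights each
    observation by its precision, and the fused variance always shrinks.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    mu = (w1 * mu1 + w2 * mu2) / (w1 + w2)
    var = 1.0 / (w1 + w2)
    return mu, var

# Fuse a confident estimate (var 0.5) with a noisier one (var 1.0).
mu, var = fuse_depth(2.0, 0.5, 2.4, 1.0)
```

Repeating this update as each additional micro-image contributes an estimate yields the probabilistic depth map described above.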

  15. Augmented reality glass-free three-dimensional display with the stereo camera

    NASA Astrophysics Data System (ADS)

    Pang, Bo; Sang, Xinzhu; Chen, Duo; Xing, Shujun; Yu, Xunbo; Yan, Binbin; Wang, Kuiru; Yu, Chongxiu

    2017-10-01

An improved method for Augmented Reality (AR) glass-free three-dimensional (3D) display, based on a stereo camera and a lenticular lens array presenting parallax content from different angles, is proposed. Compared with the previous implementation of AR techniques based on a two-dimensional (2D) panel display with only one viewpoint, the proposed method can realize glass-free 3D display of virtual objects and the real scene with 32 virtual viewpoints. Accordingly, viewers can get abundant 3D stereo information from different viewing angles based on binocular parallax. Experimental results show that this improved method based on the stereo camera can realize AR glass-free 3D display, and both the virtual objects and the real scene have realistic and obvious stereo performance.

  16. Testing of the Apollo 15 Metric Camera System.

    NASA Technical Reports Server (NTRS)

    Helmering, R. J.; Alspaugh, D. H.

    1972-01-01

    Description of tests conducted (1) to assess the quality of Apollo 15 Metric Camera System data and (2) to develop production procedures for total block reduction. Three strips of metric photography over the Hadley Rille area were selected for the tests. These photographs were utilized in a series of evaluation tests culminating in an orbitally constrained block triangulation solution. Results show that film deformations up to 25 and 5 microns are present in the mapping and stellar materials, respectively. Stellar reductions can provide mapping camera orientations with an accuracy that is consistent with the accuracies of other parameters in the triangulation solutions. Pointing accuracies of 4 to 10 microns can be expected for the mapping camera materials, depending on variations in resolution caused by changing sun angle conditions.

  17. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes the image feature matching very difficult (or impossible) and the camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, the image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the

  18. Evaluation of modified portable digital camera for screening of diabetic retinopathy.

    PubMed

    Chalam, Kakarla V; Brar, Vikram S; Keshavamurthy, Ravi

    2009-01-01

To describe a portable wide-field noncontact digital camera for posterior segment photography. The digital camera has a compound lens consisting of two optical elements (a 90-dpt and a 20-dpt lens) attached to a 7.2-megapixel camera. White-light-emitting diodes are used to illuminate the fundus and reduce source reflection. The camera is set to candlelight mode, the optical zoom is standardized to ×2.4, and the focus is manually set to 3.0 m. The new technique provides quality wide-angle digital images of the retina (60 degrees) in patients with dilated pupils, at a fraction of the cost of established digital fundus photography. The modified digital camera is a useful alternative technique to acquire fundus images and provides a tool for screening posterior segment conditions, including diabetic retinopathy, in a variety of clinical settings.
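The compound lens can be analyzed with the thin-lens combination formula P = P1 + P2 − d·P1·P2; with the elements in contact (d = 0) the 90-dpt and 20-dpt powers simply add. A sketch (the element separation is not stated in the record, so d = 0 is an assumption):

```python
def combined_power(p1_dpt, p2_dpt, d_m=0.0):
    """Effective power (diopters) of two thin lenses separated by d meters:
    P = P1 + P2 - d * P1 * P2. With d = 0 (in contact), powers simply add."""
    return p1_dpt + p2_dpt - d_m * p1_dpt * p2_dpt

p = combined_power(90.0, 20.0)   # 110 dpt if the elements are in contact
focal_mm = 1000.0 / p            # ~9.1 mm effective focal length
```

Even a small separation changes the result noticeably (at d = 10 mm the effective power drops to 92 dpt), which is why the assumed d matters.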

  19. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution. PMID:25237898

  20. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult scenario because of the short focal length of the camera and its wide angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images up to 12 Mp and video up to 8 Mp resolution.

  1. Angle performance on optima MDxt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    David, Jonathan; Kamenitsa, Dennis

    2012-11-06

Angle control on medium current implanters is important due to the high angle-sensitivity of typical medium current implants, such as halo implants. On the Optima MDxt, beam-to-wafer angles are controlled in both the horizontal and vertical directions. In the horizontal direction, the beam angle is measured through six narrow slits, and any angle adjustment is made by electrostatically steering the beam, while cross-wafer beam parallelism is adjusted by changing the focus of the electrostatic parallelizing lens (P-lens). In the vertical direction, the beam angle is measured through a high aspect ratio mask, and any angle adjustment is made by slightly tilting the wafer platen prior to implant. A variety of tests were run to measure the accuracy and repeatability of the Optima MDxt's angle control. SIMS profiles of a high energy, channeling sensitive condition show both the cross-wafer angle uniformity and the small-angle resolution of the system. Angle repeatability was quantified by running a channeling sensitive implant as a regular monitor over a seven month period and measuring the sheet resistance-to-angle sensitivity. Even though crystal cut error was not controlled for in this case, when attributing all Rs variation to angle changes, the overall angle repeatability was measured as 0.16° (1σ). A separate angle repeatability test involved running a series of V-curve tests over a four month period using low crystal cut wafers selected from the same boule. The results of this test showed the angle repeatability to be <0.1° (1σ).

  2. Solar System Portrait - 60 Frame Mosaic

    NASA Technical Reports Server (NTRS)

    1990-01-01

The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever 'portrait' of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size.

  3. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Technical Reports Server (NTRS)

    1990-01-01

This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time, with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The 'rays' around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaiced into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun

  4. Plenoptic particle image velocimetry with multiple plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Thurow, Brian S.

    2018-07-01

Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single camera configuration, allowing the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, covering the volumetric calibration and the reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the 'cigar'-like elongation associated with the single camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single and two camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally on a ring vortex and comparisons are drawn from the four presented reconstruction algorithms, where it was found that MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability. Filtered refocusing is able to produce the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.

  5. Method for shaping and aiming narrow beams. [sonar mapping and target identification

    NASA Technical Reports Server (NTRS)

    Heyser, R. C. (Inventor)

    1981-01-01

A sonar method and apparatus is described which utilizes a linear frequency chirp in a transmitter/receiver having a correlator to synthesize a narrow beamwidth pattern from otherwise broadbeam transducers when there is relative velocity between the transmitter/receiver and the target. The chirp is produced in a generator with bandwidth B and duration T chosen so that the time-bandwidth product TB is increased for a narrower angle. A replica of the chirp produced in the generator is time delayed and Doppler shifted for use as a reference in the receiver for correlation of received chirps from targets. This reference is Doppler shifted to select targets preferentially, thereby not only synthesizing a narrow beam but also aiming the beam in azimuth and elevation.
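A linear chirp of bandwidth B over duration T has instantaneous frequency sweeping from f0 to f0 + B, i.e. phase φ(t) = 2π(f0·t + B·t²/(2T)), and time-bandwidth product TB. A minimal sampling sketch with illustrative parameters (not values from the patent):

```python
import math

def linear_chirp(f0_hz, bandwidth_hz, duration_s, sample_rate_hz):
    """Sample a linear frequency chirp: instantaneous frequency sweeps from
    f0 to f0 + B over duration T, so phase(t) = 2*pi*(f0*t + B*t^2/(2*T))."""
    n = int(duration_s * sample_rate_hz)
    return [math.cos(2.0 * math.pi * (f0_hz * t + bandwidth_hz * t * t / (2.0 * duration_s)))
            for t in (i / sample_rate_hz for i in range(n))]

B, T = 2000.0, 0.05
tb = B * T                                    # time-bandwidth product
samples = linear_chirp(1000.0, B, T, 16000.0)
```

Increasing either B or T raises TB, which after correlation against the delayed, Doppler-shifted replica sharpens the synthesized beam.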

  6. CCD Camera Lens Interface for Real-Time Theodolite Alignment

    NASA Technical Reports Server (NTRS)

    Wake, Shane; Scott, V. Stanley, III

    2012-01-01

    Theodolites are a common instrument in the testing, alignment, and building of various systems ranging from a single optical component to an entire instrument. They provide a precise way to measure horizontal and vertical angles. They can be used to align multiple objects in a desired way at specific angles. They can also be used to reference a specific location or orientation of an object that has moved. Some systems may require a small margin of error in position of components. A theodolite can assist with accurately measuring and/or minimizing that error. The technology is an adapter for a CCD camera with lens to attach to a Leica Wild T3000 Theodolite eyepiece that enables viewing on a connected monitor, and thus can be utilized with multiple theodolites simultaneously. This technology removes a substantial part of human error by relying on the CCD camera and monitors. It also allows image recording of the alignment, and therefore provides a quantitative means to measure such error.

  7. A state observer for using a slow camera as a sensor for fast control applications

    NASA Astrophysics Data System (ADS)

    Gahleitner, Reinhard; Schagerl, Martin

    2013-03-01

    This contribution concerns a problem that often arises in vision-based control when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, which demands a camera with a high frame rate of up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and to make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
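The multirate idea behind such an observer can be sketched with a toy plant: predict the state at every fast control step from the model, and apply a measurement correction only when a slow camera frame arrives. The double-integrator plant, observer gain, and rates below are hypothetical stand-ins, not the paper's design.

```python
import numpy as np

# Minimal multirate observer sketch: fast model prediction, slow camera
# correction.  Plant, gain, and rates are illustrative assumptions.
dt = 0.001                      # control sample time: 1 kHz
frame_every = 20                # camera frame every 20 steps (50 fps)

A = np.array([[1.0, dt], [0.0, 1.0]])   # double integrator (position, velocity)
C = np.array([[1.0, 0.0]])              # camera measures position only
L = np.array([[0.4], [4.0]])            # hand-tuned observer gain

x = np.array([[1.0], [0.0]])    # true state
xhat = np.array([[0.0], [0.0]]) # observer estimate, deliberately wrong

for k in range(2000):
    x = A @ x                   # plant evolves at the fast rate
    xhat = A @ xhat             # observer predicts at the fast rate
    if k % frame_every == 0:    # a slow camera frame is available
        y = C @ x               # position extracted by image processing
        xhat = xhat + L @ (y - C @ xhat)

err = float(np.linalg.norm(x - xhat))
print(err)
```

Between frames the estimate runs open-loop on the model; the periodic correction keeps the slow-rate error dynamics stable, so the estimate converges despite the camera being 20 times slower than the loop.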

  8. Optical Enhancement of Exoskeleton-Based Estimation of Glenohumeral Angles

    PubMed Central

    Cortés, Camilo; Unzueta, Luis; de los Reyes-Guzmán, Ana; Ruiz, Oscar E.; Flórez, Julián

    2016-01-01

    In Robot-Assisted Rehabilitation (RAR) the accurate estimation of the patient limb joint angles is critical for assessing therapy efficacy. In RAR, the use of classic motion capture systems (MOCAPs) (e.g., optical and electromagnetic) to estimate the Glenohumeral (GH) joint angles is hindered by the exoskeleton body, which causes occlusions and magnetic disturbances. Moreover, the exoskeleton posture does not accurately reflect limb posture, as their kinematic models differ. To address the said limitations in posture estimation, we propose installing the cameras of an optical marker-based MOCAP in the rehabilitation exoskeleton. Then, the GH joint angles are estimated by combining the estimated marker poses and exoskeleton Forward Kinematics. Such a hybrid system prevents problems related to marker occlusions, reduced camera detection volume, and imprecise joint angle estimation due to the kinematic mismatch of the patient and exoskeleton models. This paper presents the formulation, simulation, and accuracy quantification of the proposed method with simulated human movements. In addition, a sensitivity analysis of the method accuracy to marker position estimation errors, due to system calibration errors and marker drifts, has been carried out. The results show that, even with significant errors in the marker position estimation, method accuracy is adequate for RAR. PMID:27403044

  9. The role of mental rotation and memory scanning on the performance of laparoscopic skills: a study on the effect of camera rotational angle.

    PubMed

    Conrad, J; Shah, A H; Divino, C M; Schluender, S; Gurland, B; Shlasko, E; Szold, A

    2006-03-01

    The rotational angle of the laparoscopic image relative to the true horizon has an unknown influence on performance in laparoscopic procedures. This study evaluates the effect of increasing rotational angle on surgical performance. Surgical residents (group 1) (n = 6) and attending surgeons (group 2) (n = 4) were tested on two laparoscopic skills. The tasks consisted of passing a suture through an aperture and laparoscopic knot tying. These tasks were assessed at 15-degree intervals between 0 degrees and 90 degrees, over three consecutive repetitions. Each participant's performance was evaluated based on the time required to complete the tasks and the number of errors incurred. There was an increasing deterioration in suturing performance as the degree of image rotation was increased. Participants showed a statistically significant 20-120% progressive increase in time to completion of the tasks (p = 0.004), with error rates increasing from 10% to 30% (p = 0.04) as the angle increased from 0 degrees to 90 degrees. Knot-tying performance similarly showed a decrease that was evident in the less experienced surgeons (p = 0.02) but with no obvious effect on the advanced laparoscopic surgeons. When evaluated independently and as a group, both novice and experienced laparoscopic surgeons showed significant prolongation of the suturing tasks, with increased errors, as the rotational angle increased. The knot-tying task shows that experienced surgeons may be able to overcome rotational effects to some extent. This is consistent with results from cognitive neuroscience research evaluating the processing of directional information in spatial motor tasks. It appears that these tasks utilize the time-consuming processes of mental rotation and memory scanning. Optimal performance during laparoscopic procedures requires that the rotation of the camera, and thus the image, be kept to a minimum to maintain a stable horizon. New technology that corrects the

  10. Generation of tunable narrow-band surface-emitted terahertz radiation in periodically poled lithium niobate.

    PubMed

    Weiss, C; Torosyan, G; Avetisyan, Y; Beigang, R

    2001-04-15

    Generation of tunable narrow-band terahertz (THz) radiation perpendicular to the surface of periodically poled lithium niobate by optical rectification of femtosecond pulses is reported. The generated THz radiation can be tuned by use of different poling periods and different observation angles, limited only by the available bandwidth of the pump pulse. Typical bandwidths were 50-100 GHz, depending on the collection angle and the number of periods involved.

  11. Payload topography camera of Chang'e-3

    NASA Astrophysics Data System (ADS)

    Yu, Guo-Bin; Liu, En-Hai; Zhao, Ru-Jin; Zhong, Jie; Zhou, Xiang-Dong; Zhou, Wu-Lin; Wang, Jin; Chen, Yuan-Pei; Hao, Yong-Jie

    2015-11-01

    Chang'e-3 was China's first soft-landing lunar probe that achieved a successful roving exploration on the Moon. A topography camera functioning as the lander's “eye” was one of the main scientific payloads installed on the lander. It was composed of a camera probe, an electronic component that performed image compression, and a cable assembly. Its exploration mission was to obtain optical images of the lunar topography in the landing zone for investigation and research. It also observed rover movement on the lunar surface and finished taking pictures of the lander and rover. After starting up successfully, the topography camera obtained static images and video of rover movement from different directions, 360° panoramic pictures of the lunar surface around the lander from multiple angles, and numerous pictures of the Earth. All images of the rover, lunar surface, and the Earth were clear, and those of the Chinese national flag were recorded in true color. This paper describes the exploration mission, system design, working principle, quality assessment of image compression, and color correction of the topography camera. Finally, test results from the lunar surface are provided to serve as a reference for scientific data processing and application.

  12. Radiometric stability of the Multi-angle Imaging SpectroRadiometer (MISR) following 15 years on-orbit

    NASA Astrophysics Data System (ADS)

    Bruegge, Carol J.; Val, Sebastian; Diner, David J.; Jovanovic, Veljko; Gray, Ellyn; Di Girolamo, Larry; Zhao, Guangyu

    2014-09-01

    The Multi-angle Imaging SpectroRadiometer (MISR) has successfully operated on the EOS/Terra spacecraft since 1999. It consists of nine cameras pointing from nadir to 70.5° view angle, with four spectral channels per camera. Specifications call for a radiometric uncertainty of 3% absolute and 1% relative to the other cameras. To accomplish this, MISR utilizes an on-board calibrator (OBC) to measure camera response changes. Once every two months the two Spectralon panels are deployed to direct solar light into the cameras. Six photodiode sets measure the illumination level, which is compared to MISR raw digital numbers to determine the radiometric gain coefficients used in Level 1 data processing. Although panel stability is not required, there has been little detectable change in panel reflectance, attributed to careful preflight handling techniques. The cameras themselves have degraded in radiometric response by 10% since launch, but calibration updates using the detector-based scheme have compensated for these drifts and allowed the radiance products to meet accuracy requirements. Validation using Sahara desert observations shows that there has been a drift of ~1% in the reported nadir-view radiance over a decade, common to all spectral bands.
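Per channel, the detector-based calibration step described above reduces to fitting camera digital numbers (DN) against the photodiode-measured radiances and inverting that fit for later scenes. A toy version with synthetic numbers (not MISR data, and ignoring nonlinearity and per-pixel flatfielding):

```python
import numpy as np

# Toy gain calibration: fit DN = gain * radiance + offset from on-board
# calibrator points, then convert a later scene DN back to radiance.
# All numbers are synthetic illustrations.
radiance = np.array([10.0, 25.0, 40.0, 55.0, 70.0])   # photodiode radiances
dn = np.array([125.0, 305.0, 485.0, 665.0, 845.0])    # raw camera DN

# Least-squares linear fit of DN against radiance.
gain, offset = np.polyfit(radiance, dn, 1)

# Invert the fit to calibrate a later scene measurement.
scene_dn = 425.0
scene_radiance = (scene_dn - offset) / gain
print(round(float(gain), 3), round(float(scene_radiance), 2))
```

Tracking `gain` over repeated calibrator deployments is what lets a drifting camera response (the 10% degradation noted above) be compensated in Level 1 processing.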

  13. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charged coupling device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable to each other on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  14. Use and validation of mirrorless digital single light reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single light reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charged coupling device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable to each other on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  15. Lens and Camera Arrays for Sky Surveys and Space Surveillance

    NASA Astrophysics Data System (ADS)

    Ackermann, M.; Cox, D.; McGraw, J.; Zimmer, P.

    2016-09-01

    In recent years, a number of sky survey projects have chosen to use arrays of commercial cameras coupled with commercial photographic lenses to enable low-cost, wide-area observation. Projects such as SuperWASP, FAVOR, RAPTOR, Lotis, PANOPTES, and DragonFly rely on multiple cameras with commercial lenses to image wide areas of the sky each night. The sensors are usually commercial astronomical charge coupled devices (CCDs) or digital single-lens reflex (DSLR) cameras, while the lenses are large-aperture, high-end consumer items intended for general photography. While much of this equipment is very capable and relatively inexpensive, this approach comes with a number of significant limitations that reduce the sensitivity and overall utility of the image data. The most frequently encountered limitations include lens vignetting, narrow spectral bandpass, and a relatively large point spread function. Understanding these limits helps to assess the utility of the data and identify areas where advanced optical designs could significantly improve survey performance.

  16. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User's Head Movement.

    PubMed

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-08-31

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest.

  17. ARC-1990-AC79-7127

    NASA Image and Video Library

    1990-02-14

    Range: 4 billion miles from Earth, at 32 degrees to the ecliptic. P-36057C This color image of the Sun, Earth, and Venus is one of the first, and maybe only, images that show our solar system from such a vantage point. The image is a portion of a wide-angle image containing the Sun and the region of space where the Earth and Venus were at the time, with narrow-angle cameras centered on each planet. The wide-angle image was taken with the camera's darkest filter, a methane absorption band, and the shortest possible exposure, one two-hundredth of a second, to avoid saturating the camera's vidicon tube with scattered sunlight. The Sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system, yet it is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the Sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright, burned-out image with multiple reflections from the optics of the camera. The rays around the Sun are a diffraction pattern of the calibration lamp, which is mounted in front of the wide-angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce the color image. The violet, green, and blue filters were used, with exposure times of 0.72, 0.48, and 0.72 seconds for Earth, and 0.36, 0.24, and 0.36 seconds for Venus. The images also show long linear streaks resulting from scattering of sunlight off parts of the camera and its shade.

  18. RaptorX-Angle: real-value prediction of protein backbone dihedral angles through a hybrid method of clustering and deep learning.

    PubMed

    Gao, Yujuan; Wang, Sheng; Deng, Minghua; Xu, Jinbo

    2018-05-08

    Protein dihedral angles provide a detailed description of protein local conformation. Predicted dihedral angles can be used to narrow down the conformational space of the whole polypeptide chain significantly, thus aiding protein tertiary structure prediction. However, direct angle prediction from sequence alone is challenging. In this article, we present a novel method (named RaptorX-Angle) to predict real-valued angles by combining clustering and deep learning. Tested on a subset of PDB25 and the targets in the latest two Critical Assessment of protein Structure Prediction (CASP) experiments, our method outperforms the existing state-of-the-art method SPIDER2 in terms of Pearson Correlation Coefficient (PCC) and Mean Absolute Error (MAE). Our result also shows an approximately linear relationship between the real prediction errors and our estimated bounds. That is, the real prediction error can be well approximated by our estimated bounds. Our study provides an alternative and more accurate prediction of dihedral angles, which may facilitate protein structure prediction and functional study.

  19. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  20. Effect of primary iris and ciliary body cyst on anterior chamber angle in patients with shallow anterior chamber*

    PubMed Central

    Wang, Bing-hong; Yao, Yu-feng

    2012-01-01

    Objective: To evaluate the prevalence of primary iris and/or ciliary body cysts in eyes with shallow anterior chamber and their effect on the narrowing of the anterior chamber angle. Methods: Among the general physical check-up population, subjects with shallow anterior chambers, as judged by van Herick technique, were recruited for further investigation. Ultrasound biomicroscope (UBM) was used to detect and measure the cysts located in the iris and/or ciliary body, the anterior chamber depth (ACD), the angle opening distance at 500 μm (AOD500), and the trabecular-iris angle (TIA). A-scan ultrasonography was used to measure the ocular biometry, including lens thickness, axial length, lens/axial length factor (LAF), and relative lens position (RLP). The effect of the cyst on narrowing the corresponding anterior chamber angle and the entire angle was evaluated by the UBM images, ocular biometry, and gonioscopic grading. The eye with unilateral cyst was compared with the eye without the cyst for further analysis. Results: Among the 727 subjects with shallow anterior chamber, primary iris and ciliary body cysts were detected in 250 (34.4%) patients; among them 96 (38.4%) patients showed unilateral single cyst, 21 (8.4%) patients had unilateral double cysts, and 42 (16.8%) patients manifested unilateral multiple and multi-quadrants cysts. Plateau iris configuration was found in 140 of 361 (38.8%) eyes with cysts. The mean size of total cysts was (0.6547±0.2319) mm. In evaluation of the effect of the cyst size and location on narrowing the corresponding angle to their position, the proportion of the cysts causing corresponding angle narrowing or closure among the cysts larger than 0.8 mm (113/121, 93.4%) was found to be significantly higher than that of the cysts smaller than 0.8 mm (373/801, 46.6%), and a significant higher proportion was also found in the cysts located at iridociliary sulcus (354/437, 81.0%) than in that at the pars plicata (131/484, 27.1%). In

  1. Mars Orbiter Camera Views the 'Face on Mars' - Best View from Viking

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Shortly after midnight Sunday morning (5 April 1998 12:39 AM PST), the Mars Orbiter Camera (MOC) on the Mars Global Surveyor (MGS) spacecraft successfully acquired a high resolution image of the 'Face on Mars' feature in the Cydonia region. The image was transmitted to Earth on Sunday, and retrieved from the mission computer data base Monday morning (6 April 1998). The image was processed at the Malin Space Science Systems (MSSS) facility 9:15 AM and the raw image immediately transferred to the Jet Propulsion Laboratory (JPL) for release to the Internet. The images shown here were subsequently processed at MSSS.

    The picture was acquired 375 seconds after the spacecraft's 220th close approach to Mars. At that time, the 'Face', located at approximately 40.8° N, 9.6° W, was 275 miles (444 km) from the spacecraft. The 'morning' sun was 25° above the horizon. The picture has a resolution of 14.1 feet (4.3 meters) per pixel, making it ten times higher resolution than the best previous image of the feature, which was taken by the Viking Mission in the mid-1970s. The full image covers an area 2.7 miles (4.4 km) wide and 25.7 miles (41.5 km) long.

    This Viking Orbiter image is one of the best Viking pictures of the area Cydonia where the 'Face' is located. Marked on the image are the 'footprint' of the high resolution (narrow angle) Mars Orbiter Camera image and the area seen in enlarged views (dashed box). See PIA01440-1442 for these images in raw and processed form.

    Malin Space Science Systems and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  2. ARC-1986-A86-7011

    NASA Image and Video Library

    1986-01-14

    Range: 2.52 million kilometers (1.56 million miles) P-29481B/W Voyager 2 returned this photograph with all nine known Uranus rings visible, from a 15-second exposure through the narrow-angle camera. The rings are quite dark and very narrow. The most prominent and outermost of the nine, Epsilon, is seen at top. The next three in toward Uranus, called Delta, Gamma, and Eta, are much fainter and narrower than the Epsilon ring. Then come the Beta and Alpha rings, and finally the innermost grouping, known simply as the 4, 5, and 6 rings. The last three are very faint and are at the limit of detection for the Voyager camera. Uranus' rings range in width from about 100 km (60 mi.) at the widest part of the Epsilon ring to only a few kilometers for most of the others. This image was processed to enhance narrow features; the bright dots are imperfections on the camera detector. The resolution scale is about 50 km (30 mi.).

  3. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform an image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems must. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling shutter operation of the CMOS camera. The fan beam lasers were turned on/off to illuminate narrow zones on the target in good correspondence with the exposure lines during the rolling of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
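The ROI-based scan can be mimicked in a few lines: read out one row band at a time while only that band is illuminated, then tile the bands into a full frame. The sensor size and strip count below are arbitrary stand-ins, not the paper's hardware configuration.

```python
import numpy as np

# Toy model of the ROI scan: five row bands are read out one at a time
# (only the laser-illuminated band contributes), then tiled together.
rows, cols, strips = 100, 120, 5
scene = np.random.default_rng(0).random((rows, cols))  # stand-in target
band = rows // strips

assembled = np.zeros_like(scene)
for i in range(strips):
    r0, r1 = i * band, (i + 1) * band
    # Reading only this ROI while its band is lit is what rejects
    # backscatter originating outside the illuminated line.
    assembled[r0:r1, :] = scene[r0:r1, :]

print(np.array_equal(assembled, scene))
```

In the real system the contrast gain comes from the synchronization: light scattered by water outside the active band never reaches the rows being exposed, which the full-frame (nonscanning) capture cannot avoid.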

  4. Saturnian Snowman

    NASA Image and Video Library

    2015-10-15

    NASA's Cassini spacecraft spied this tight trio of craters as it approached Saturn's icy moon Enceladus for a close flyby on Oct. 14, 2015. The craters, located at high northern latitudes, are sliced through by thin fractures -- part of a network of similar cracks that wrap around the snow-white moon. The image was taken with the Cassini spacecraft narrow-angle camera on Oct. 14, 2015, at a distance of approximately 6,000 miles (10,000 kilometers) from Enceladus, using a spectral filter which preferentially admits wavelengths of ultraviolet light centered at 338 nanometers. Image scale is 197 feet (60 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20011

  5. CMOS image sensor with organic photoconductive layer having narrow absorption band and proposal of stack type solid-state image sensors

    NASA Astrophysics Data System (ADS)

    Takada, Shunji; Ihama, Mikio; Inuiya, Masafumi

    2006-02-01

    Digital still cameras overtook film cameras in the Japanese market in 2000 in terms of sales volume owing to their versatile functions. However, the image-capturing capabilities such as sensitivity and latitude of color films are still superior to those of digital image sensors. In this paper, we attribute the cause for the high performance of color films to their multi-layered structure, and propose solid-state image sensors with stacked organic photoconductive layers having narrow absorption bands on CMOS read-out circuits.

  6. A Wide-Angle Camera for the Mobile Asteroid Surface Scout (MASCOT) on Hayabusa-2

    NASA Astrophysics Data System (ADS)

    Schmitz, N.; Koncz, A.; Jaumann, R.; Hoffmann, H.; Jobs, D.; Kachlicki, J.; Michaelis, H.; Mottola, S.; Pforte, B.; Schroeder, S.; Terzer, R.; Trauthan, F.; Tschentscher, M.; Weisse, S.; Ho, T.-M.; Biele, J.; Ulamec, S.; Broll, B.; Kruselburger, A.; Perez-Prieto, L.

    2014-04-01

    JAXA's Hayabusa-2 mission, an asteroid sample return mission, is scheduled for launch in December 2014, for a rendezvous with the C-type asteroid 1999 JU3 in 2018. MASCOT, the Mobile Asteroid Surface Scout [1], is a small lander, designed to deliver ground truth for the orbiter remote measurements, support the selection of sampling sites, and provide context for the returned samples. MASCOT's main objective is to investigate the landing site's geomorphology, the internal structure, texture and composition of the regolith (dust, soil and rocks), and the thermal, mechanical, and magnetic properties of the surface. MASCOT comprises a payload of four scientific instruments: camera, radiometer, magnetometer and hyper-spectral microscope. The camera (MASCOT CAM) was designed and built by DLR's Institute of Planetary Research, together with Airbus DS Germany.

  7. Empirical Study on Designing of Gaze Tracking Camera Based on the Information of User’s Head Movement

    PubMed Central

    Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung

    2016-01-01

    Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous research implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience, and interest. PMID:27589768

  8. Optical performance analysis of plenoptic camera systems

    NASA Astrophysics Data System (ADS)

    Langguth, Christin; Oberdörster, Alexander; Brückner, Andreas; Wippermann, Frank; Bräuer, Andreas

    2014-09-01

    Adding an array of microlenses in front of the sensor transforms the capabilities of a conventional camera to capture both spatial and angular information within a single shot. This plenoptic camera is capable of obtaining depth information and providing it for a multitude of applications, e.g. artificial re-focusing of photographs. Without the need of active illumination it represents a compact and fast optical 3D acquisition technique with reduced effort in system alignment. Since the extent of the aperture limits the range of detected angles, the observed parallax is reduced compared to common stereo imaging systems, which results in a decreased depth resolution. Besides, the gain of angular information implies a degraded spatial resolution. This trade-off requires a careful choice of the optical system parameters. We present a comprehensive assessment of possible degrees of freedom in the design of plenoptic systems. Utilizing a custom-built simulation tool, the optical performance is quantified with respect to particular starting conditions. Furthermore, a plenoptic camera prototype is demonstrated in order to verify the predicted optical characteristics.

  9. Solar System Portrait - View of the Sun, Earth and Venus

    NASA Image and Video Library

    1996-09-13

    This color image of the sun, Earth and Venus was taken by the Voyager 1 spacecraft Feb. 14, 1990, when it was approximately 32 degrees above the plane of the ecliptic and at a slant-range distance of approximately 4 billion miles. It is the first -- and may be the only -- time that we will ever see our solar system from such a vantage point. The image is a portion of a wide-angle image containing the sun and the region of space where the Earth and Venus were at the time with two narrow-angle pictures centered on each planet. The wide-angle was taken with the camera's darkest filter (a methane absorption band), and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large in the sky as seen from Voyager's perspective at the edge of the solar system but is still eight million times brighter than the brightest star in Earth's sky, Sirius. The image of the sun you see is far larger than the actual dimension of the solar disk. The result of the brightness is a bright burned out image with multiple reflections from the optics in the camera. The "rays" around the sun are a diffraction pattern of the calibration lamp which is mounted in front of the wide angle lens. The two narrow-angle frames containing the images of the Earth and Venus have been digitally mosaicked into the wide-angle image at the appropriate scale. These images were taken through three color filters and recombined to produce a color image. The violet, green and blue filters were used; exposure times were, for the Earth image, 0.72, 0.48 and 0.72 seconds, and for the Venus frame, 0.36, 0.24 and 0.36, respectively. 
Although the planetary pictures were taken with the narrow-angle camera (1500 mm focal length) and were not pointed directly at the sun, they show the effects of the glare from the nearby sun, in the form of long linear streaks resulting from the scattering of sunlight off parts of the camera and its sun shade.

  10. Reflectance characteristics of the Viking lander camera reference test charts

    NASA Technical Reports Server (NTRS)

    Wall, S. D.; Burcher, E. E.; Jabson, D. J.

    1975-01-01

    Reference test charts provide radiometric, colorimetric, and spatial resolution references for the Viking lander cameras on Mars. Reflectance measurements of these references are described, including the absolute bidirectional reflectance of the radiometric references and the relative spectral reflectance of both radiometric and colorimetric references. Results show that the bidirectional reflectance of the radiometric references is Lambertian to within ±7% for incidence angles between 20 deg and 60 deg, and that their spectral reflectance is constant with wavelength to within ±5% over the spectral range of the cameras. Estimated accuracy of the measurements is ±0.05 in relative spectral reflectance.
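
    The ±7% Lambertian criterion above amounts to checking that the measured bidirectional reflectance stays within a fractional tolerance of its mean across incidence angles. A minimal sketch of such a check, with made-up measurement values (not Viking data):

```python
# Check whether a set of reflectance measurements is "Lambertian to
# within a tolerance": for a Lambertian surface the bidirectional
# reflectance is independent of incidence angle.

def is_lambertian(reflectances, tol=0.07):
    """True if all values lie within a fractional tolerance of the mean."""
    mean = sum(reflectances) / len(reflectances)
    return all(abs(r - mean) / mean <= tol for r in reflectances)

# Illustrative measurements at incidence angles of 20-60 deg
print(is_lambertian([0.40, 0.41, 0.39, 0.42]))  # True: within +/-7% of mean
print(is_lambertian([0.40, 0.50, 0.39, 0.42]))  # False: one outlier
```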

  11. MESSENGER Reveals Mercury in New Detail

    NASA Image and Video Library

    2008-01-16

    As NASA's MESSENGER spacecraft approached Mercury on January 14, 2008, the Narrow-Angle Camera on the Mercury Dual Imaging System (MDIS) instrument captured this view of the planet's rugged, cratered landscape illuminated obliquely by the Sun.

  12. A view of the ET camera on STS-112

    NASA Technical Reports Server (NTRS)

    2002-01-01

    KENNEDY SPACE CENTER, FLA. - A closeup view of the camera mounted on the external tank of Space Shuttle Atlantis. The color video camera mounted to the top of Atlantis' external tank will provide a view of the front and belly of the orbiter and a portion of the solid rocket boosters (SRBs) and external tank during the launch of Atlantis on mission STS-112. It will offer the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. The camera will be turned on fifteen minutes prior to launch and will show the orbiter and solid rocket boosters on the launch pad. The video will be downlinked from the external tank during flight to several NASA data-receiving sites and then relayed to the live television broadcast. The camera is expected to operate for about 15 minutes following liftoff. At liftoff, viewers will see the shuttle clearing the launch tower and, at two minutes after liftoff, see the right SRB separate from the external tank. When the external tank separates from Atlantis about eight minutes into the flight, the camera is expected to continue its live feed for about six more minutes although NASA may be unable to pick up the camera's signal because the tank may have moved out of range.

  13. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with a 65° field-of-view (1.1 mrad/pixel) and high-resolution (85 µrad/pixel) monoscopic "zoom" images with a 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission, as well as to provide multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally, the High Resolution Camera (HRC) can be used for high-resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  14. Stray light lessons learned from the Mars reconnaissance orbiter's optical navigation camera

    NASA Astrophysics Data System (ADS)

    Lowman, Andrew E.; Stauder, John L.

    2004-10-01

    The Optical Navigation Camera (ONC) is a technical demonstration slated to fly on NASA's Mars Reconnaissance Orbiter in 2005. Conventional navigation methods have reduced accuracy in the days immediately preceding Mars orbit insertion. The resulting uncertainty in spacecraft location limits rover landing sites to relatively safe areas, away from interesting features that may harbor clues to past life on the planet. The ONC will provide accurate navigation on approach for future missions by measuring the locations of the satellites of Mars relative to background stars. Because Mars will be a bright extended object just outside the camera's field of view, stray light control at small angles is essential. The ONC optomechanical design was analyzed by stray light experts and appropriate baffles were implemented. However, stray light testing revealed significantly higher levels of light than expected at the most critical angles. The primary error source proved to be the interface between ground glass surfaces (and the paint that had been applied to them) and the polished surfaces of the lenses. This paper will describe troubleshooting and correction of the problem, as well as other lessons learned that affected stray light performance.

  15. Observation of Planetary Motion Using a Digital Camera

    ERIC Educational Resources Information Center

    Meyn, Jan-Peter

    2008-01-01

    A digital SLR camera with a standard lens (50 mm focal length, f/1.4) on a fixed tripod is used to obtain photographs of the sky which contain stars up to 8^m apparent magnitude. The angle of view is large enough to ensure visual identification of the photograph with a large sky region in a stellar map. The resolution is sufficient to…

  16. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigating Earth's surface. To gather reliable data, it is vital that atmospheric corrections are accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques to implement a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction, calibration, and testing of the resulting camera system are explored. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting the web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full
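
    The ELM (empirical line method) correction mentioned above fits a per-band linear relation between sensor digital numbers (DN) and ground reflectance using calibration targets of known reflectance. A minimal sketch with made-up target values (the DN and reflectance numbers are illustrative only):

```python
# Empirical line method sketch: fit reflectance = gain*DN + offset from
# calibration targets, then invert scene DNs to surface reflectance.

def elm_fit(dns, reflectances):
    """Least-squares line through (DN, reflectance) calibration pairs."""
    n = len(dns)
    mx = sum(dns) / n
    my = sum(reflectances) / n
    gain = sum((x - mx) * (y - my) for x, y in zip(dns, reflectances)) / \
           sum((x - mx) ** 2 for x in dns)
    offset = my - gain * mx
    return gain, offset

# Hypothetical dark and bright targets: DN 20 -> 0.05, DN 120 -> 0.55
gain, offset = elm_fit([20, 120], [0.05, 0.55])
# A mid-scene DN of 70 then maps to ~0.30 reflectance
print(round(gain * 70 + offset, 2))
```

    With more than two targets the same least-squares fit averages out target measurement error, which is why the dissertation varies the number of calibration targets.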

  17. Solar System Portrait - 60 Frame Mosaic

    NASA Image and Video Library

    1996-09-13

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever portrait of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. 
Earth was a crescent only 0.12 pixel in size.

  18. MESSENGER Departs Mercury

    NASA Image and Video Library

    2008-01-30

    After NASA's MESSENGER spacecraft completed its successful flyby of Mercury, the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS), took these images of the receding planet. This is a frame from an animation.

  19. Still from Red Spot Movie

    NASA Image and Video Library

    2000-11-21

    This image is one of seven from the narrow-angle camera on NASA's Cassini spacecraft assembled as a brief movie of cloud movements on Jupiter. The smallest features visible are about 500 kilometers (about 300 miles) across.

  20. Functional range of movement of the hand: declination angles to reachable space.

    PubMed

    Pham, Hai Trieu; Pathirana, Pubudu N; Caelli, Terry

    2014-01-01

    The measurement of the range of hand joint movement is an essential part of clinical practice and rehabilitation. Current methods use three finger joint declination angles of the metacarpophalangeal, proximal interphalangeal and distal interphalangeal joints. In this paper we propose an alternate form of measurement for finger movement. Using the notion of reachable space instead of declination angles has significant advantages. Firstly, it provides a visual and quantifiable method that therapists, insurance companies and patients can easily use to understand the functional capabilities of the hand. Secondly, it eliminates the redundant declination angle constraints. Finally, reachable space, defined by a set of reachable fingertip positions, can be measured and constructed by using a modern camera such as the Creative Senz3D or built-in hand gesture sensors such as the Leap Motion Controller. Use of cameras or optical-type sensors for this purpose has considerable benefits, such as eliminating or minimizing therapist error and allowing non-contact measurement, in addition to valuable time savings for the clinician. A comparison between using declination angles and reachable space was made based on Hume's experiment on functional range of movement to demonstrate the efficiency of this new approach.
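
    The link between the two representations can be illustrated with planar forward kinematics: the three declination angles (MCP, PIP, DIP) determine a fingertip position, and sweeping the angles over their ranges traces out the reachable space. The segment lengths below are illustrative values, not clinical data.

```python
import math

# Fingertip position from three joint declination angles via planar
# forward kinematics; declination angles accumulate along the finger.

def fingertip(angles_deg, lengths=(4.5, 2.5, 2.0)):
    """Return (x, y) of the fingertip in cm for MCP/PIP/DIP angles in degrees."""
    x = y = 0.0
    cumulative = 0.0
    for a, l in zip(angles_deg, lengths):
        cumulative += math.radians(a)
        x += l * math.cos(cumulative)
        y -= l * math.sin(cumulative)  # flexion moves the tip downward
    return x, y

# Fully extended finger: the tip lies at the sum of the segment lengths
x, y = fingertip((0, 0, 0))
print(round(x, 1), round(y, 1))  # 9.0 0.0
```

    Collecting fingertip positions over a grid of angle triples gives the set of reachable points that a depth camera would measure directly, without reconstructing the individual joint angles.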

  1. Foot Placement Modification for a Biped Humanoid Robot with Narrow Feet

    PubMed Central

    Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies foot placement according to the robot's attitude angle. If the robot tends to fall down, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to keep the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. Verification of the proposed method is conducted through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with 30 mm breadth feet. Moreover, WABIAN-2R mounted on a human-like foot mechanism mimicking the human foot's arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion. PMID:24592154

  2. Foot placement modification for a biped humanoid robot with narrow feet.

    PubMed

    Hashimoto, Kenji; Hattori, Kentaro; Otani, Takuya; Lim, Hun-Ok; Takanishi, Atsuo

    2014-01-01

    This paper describes a walking stabilization control for a biped humanoid robot with narrow feet. Most humanoid robots have larger feet than human beings to maintain their stability during walking. If a robot's feet are as narrow as a human's, it is difficult to realize a stable walk using conventional stabilization controls. The proposed control modifies foot placement according to the robot's attitude angle. If the robot tends to fall down, the foot angle is modified about the roll axis so that the swing foot contacts the ground horizontally, and the foot-landing point is also shifted laterally to keep the robot from falling to the outside. To reduce the foot-landing impact, a virtual compliance control is applied to the vertical axis and the roll and pitch axes of the foot. Verification of the proposed method is conducted through experiments with the biped humanoid robot WABIAN-2R. WABIAN-2R realized knee-bent walking with 30 mm breadth feet. Moreover, WABIAN-2R mounted on a human-like foot mechanism mimicking the human foot's arch structure realized stable walking with knee-stretched, heel-contact, and toe-off motion.

  3. A Three-Line Stereo Camera Concept for Planetary Exploration

    NASA Technical Reports Server (NTRS)

    Sandau, Rainer; Hilbert, Stefan; Venus, Holger; Walter, Ingo; Fang, Wai-Chi; Alkalai, Leon

    1997-01-01

    This paper presents a low-weight stereo camera concept for planetary exploration. The camera uses three CCD lines within the image plane of one single objective. Some of the main features of the camera include: focal length 90 mm, FOV 18.5 deg, IFOV 78 µrad, convergence angles ±10 deg, radiometric dynamics 14 bit, weight 2 kg, and power consumption 12.5 W. From an orbit altitude of 250 km the ground pixel size is 20 m x 20 m and the swath width is 82 km. The CCD line data are buffered in the camera-internal mass memory of 1 Gbit. After performing radiometric correction and application-dependent preprocessing, the data are compressed and ready for downlink. Due to the aggressive application of advanced technologies in the area of microelectronics and innovative optics, the low mass and power budgets of 2 kg and 12.5 W are achieved while still maintaining high performance. The design of the proposed light-weight camera is also general-purpose enough to be applicable to other planetary missions such as the exploration of Mars, Mercury, and the Moon. Moreover, it is an example of excellent international collaboration on advanced technology concepts developed at DLR, Germany, and NASA's Jet Propulsion Laboratory, USA.
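
    The quoted geometry is internally consistent and easy to check: the ground pixel size is the IFOV times the altitude, and the swath width follows from the full field of view (flat-ground, nadir-looking approximation):

```python
import math

# Consistency check of the figures above: 78 urad IFOV at 250 km gives
# ~19.5 m ground pixels (quoted as 20 m), and an 18.5 deg FOV gives
# ~81 km swath (quoted as 82 km).

altitude_m = 250e3
ifov_rad = 78e-6
fov_deg = 18.5

ground_pixel = altitude_m * ifov_rad                          # ~19.5 m
swath = 2 * altitude_m * math.tan(math.radians(fov_deg / 2))  # ~81 km
print(round(ground_pixel, 1), round(swath / 1e3), "km")
```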

  4. Anterior segment parameters as predictors of intraocular pressure reduction after phacoemulsification in eyes with open-angle glaucoma.

    PubMed

    Hsia, Yen C; Moghimi, Sasan; Coh, Paul; Chen, Rebecca; Masis, Marisse; Lin, Shan C

    2017-07-01

    To evaluate intraocular pressure (IOP) change after cataract surgery in eyes with open-angle glaucoma (OAG) and its relationship to angle and anterior segment parameters measured by anterior segment optical coherence tomography (AS-OCT). University of California, San Francisco, California, USA. Prospective case series. Eyes were placed into a narrow-angle group or open-angle group based on gonioscopy grading. Biometric parameters were measured using AS-OCT (Visante) preoperatively, and IOP 4 months after surgery was obtained. The IOP change and its relationship to AS-OCT parameters were evaluated. Eighty-one eyes of 69 patients were enrolled. The mean age of the patients was 76.8 years. The preoperative IOP was 15.02 mm Hg on 1.89 glaucoma medications. The average mean deviation of the preoperative visual field was -4.58 dB. The mean IOP reduction was 2.1 mm Hg (12.8%) from a preoperative mean of 15.0 mm Hg. The IOP reduction was significantly greater in eyes with narrow angles than in eyes with open angles (20.4% versus 8.0%) (P = .002). In multivariate analysis, preoperative IOP (β = -0.53, P < .001, R² = 0.40), angle-opening distance at 500 µm (β = 5.83, P = .02, R² = 0.45), angle-opening distance at 750 µm (β = 5.82, P = .001, R² = 0.52), and lens vault (β = -0.002, P = .009, R² = 0.47) were associated with IOP reduction postoperatively. In eyes with OAG, IOP reduction after cataract surgery was greater in eyes with narrower angles. Preoperative IOP, angle-opening distance, and lens vault were predictors of IOP reduction. Copyright © 2017 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.

  5. Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.

    PubMed

    Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John

    2018-01-01

    In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected after the needle's hub overlaps the tip in the camera. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smartphone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.

  6. Ridges and Cliffs on Mercury Surface

    NASA Image and Video Library

    2008-01-20

    A complex history of geological evolution is recorded in this frame from the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS) instrument, taken during NASA's MESSENGER close flyby of Mercury on January 14, 2008.

  7. The Tactile Vision Substitution System: Applications in Education and Employment

    ERIC Educational Resources Information Center

    Scadden, Lawrence A.

    1974-01-01

    The Tactile Vision Substitution System converts the visual image from a narrow-angle television camera to a tactual image on a 5-inch square, 100-point display of vibrators placed against the abdomen of the blind person. (Author)

  8. A LEGO Mindstorms Brewster angle microscope

    NASA Astrophysics Data System (ADS)

    Fernsler, Jonathan; Nguyen, Vincent; Wallum, Alison; Benz, Nicholas; Hamlin, Matthew; Pilgram, Jessica; Vanderpoel, Hunter; Lau, Ryan

    2017-09-01

    A Brewster Angle Microscope (BAM) built from a LEGO Mindstorms kit, additional LEGO bricks, and several standard optics components, is described. The BAM was built as part of an undergraduate senior project and was designed, calibrated, and used to image phospholipid, cholesterol, soap, and oil films on the surface of water. A BAM uses p-polarized laser light reflected off a surface at the Brewster angle, which ideally yields zero reflectivity. When a film of different refractive index is added to the surface a small amount of light is reflected, which can be imaged in a microscope camera. Films of only one molecule (approximately 1 nm) thick, a monolayer, can be observed easily in the BAM. The BAM was used in a junior-level Physical Chemistry class to observe phase transitions of a monolayer and the collapse of a monolayer deposited on the water surface in a Langmuir trough. Using a photometric calculation, students observed a change in thickness of a monolayer during a phase transition of 7 Å, which was accurate to within 1 Å of the value determined by more advanced methods. As supplementary material, we provide a detailed manual on how to build the BAM, software to control the BAM and camera, and image processing software.
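
    The Brewster angle the instrument relies on follows directly from the two refractive indices: θ_B = arctan(n₂/n₁) for light passing from medium 1 (air) into medium 2 (water); at this angle the reflectivity for p-polarized light ideally vanishes, so any monolayer shows up against a dark background.

```python
import math

# Brewster angle: theta_B = arctan(n2 / n1). For the air-water interface
# used in the BAM described above, n1 = 1.0 and n2 ~ 1.333.

def brewster_angle_deg(n1: float, n2: float) -> float:
    return math.degrees(math.atan2(n2, n1))

print(round(brewster_angle_deg(1.0, 1.333), 1))  # 53.1 (degrees)
```

    This is why the laser and camera arms of the microscope are fixed at roughly 53° from the vertical above the water surface.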

  9. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    the nadir camera and the GPS/IMU data, an initial orientation correction and radial correction were calculated. With this approach, the whole project was calculated and calibrated in one step. During the iteration process the radial and tangential parameters were switched on individually for the camera heads, and after that the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to get stability between the single camera heads. Determining the lever arms of the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. Having prepared all these previous steps, you get a highly accurate sensor that enables fully automated data extraction with rapid updates of your existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.

  10. Dual-mode switching of a liquid crystal panel for viewing angle control

    NASA Astrophysics Data System (ADS)

    Baek, Jong-In; Kwon, Yong-Hoan; Kim, Jae Chang; Yoon, Tae-Hoon

    2007-03-01

    The authors propose a method to control the viewing angle of a liquid crystal (LC) panel using dual-mode switching. To realize both wide viewing angle (WVA) characteristics and narrow viewing angle (NVA) characteristics with a single LC panel, the authors use two different dark states. The LC layer can be aligned homogeneously parallel to the transmission axis of the bottom polarizer for WVA dark state operation, while it can be aligned vertically for NVA dark state operation. The authors demonstrated that viewing angle control can be achieved with a single panel without any loss of contrast at the front.

  11. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. In our work, the details of our modified plenoptic cameras and image processing algorithms will be introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.

  12. Neptune Great Dark Spot in High Resolution

    NASA Image and Video Library

    1999-08-30

    This photograph shows the last face-on view of the Great Dark Spot that Voyager will make with the narrow-angle camera. The image was shuttered 45 hours before closest approach, at a distance of 2.8 million kilometers (1.7 million miles). The smallest structures that can be seen are on the order of 50 kilometers (31 miles). The image shows feathery white clouds that overlie the boundary of the dark and light blue regions. The pinwheel (spiral) structure of both the dark boundary and the white cirrus suggests a storm system rotating counterclockwise. Periodic small-scale patterns in the white cloud, possibly waves, are short-lived and do not persist from one Neptunian rotation to the next. This color composite was made from the clear and green filters of the narrow-angle camera. http://photojournal.jpl.nasa.gov/catalog/PIA00052

  13. Enabling High Fidelity Measurements of Energy and Pitch Angle for Escaping Energetic Ions with a Fast Ion Loss Detector

    NASA Astrophysics Data System (ADS)

    Chaban, R.; Pace, D. C.; Marcy, G. R.; Taussig, D.

    2016-10-01

    Energetic ion losses must be minimized in burning plasmas to maintain fusion power, and existing tokamaks provide access to energetic ion parameter regimes that are relevant to burning machines. A new Fast Ion Loss Detector (FILD) probe on the DIII-D tokamak has been optimized to resolve beam ion losses across a range of 30 - 90 keV in energy and 40° to 80° in pitch angle, thereby providing valuable measurements during many different experiments. The FILD is a magnetic spectrometer; once inserted into the tokamak, the magnetic field allows energetic ions to pass through a collimating aperture and strike a scintillator plate that is imaged by a wide view camera and narrow view photomultiplier tubes (PMTs). The design involves calculating scintillator strike patterns while varying probe geometry. Calculated scintillator patterns are then used to design an optical system that allows adjustment of the focus regions for the 1 MS/s resolved PMTs. A synthetic diagnostic will be used to determine the energy and pitch angle resolution that can be attained in DIII-D experiments. Work supported in part by US DOE under the Science Undergraduate Laboratory Internship (SULI) program and under DE-FC02-04ER54698.

  14. Research on Geometric Calibration of Spaceborne Linear Array Whiskbroom Camera

    PubMed Central

    Sheng, Qinghong; Wang, Qi; Xiao, Hui; Wang, Qing

    2018-01-01

    The geometric calibration of a spaceborne thermal-infrared camera with high spatial resolution and wide coverage can set benchmarks for providing accurate geographical coordinates for the retrieval of land surface temperature. Using linear array whiskbroom Charge-Coupled Device (CCD) arrays to image the Earth can yield thermal-infrared images of large breadth with high spatial resolution. Focusing on the whiskbroom characteristics of equal time intervals and unequal angles, the present study proposes a spaceborne linear-array-scanning imaging geometric model, whilst calibrating temporal system parameters and whiskbroom angle parameters. With the help of the YG-14 (China's first satellite equipped with high-spatial-resolution thermal-infrared cameras), imagery of Anyang and Taiyuan, China, is used to conduct a geometric calibration experiment and a verification test, respectively. Results have shown that the plane positioning accuracy without ground control points (GCPs) is better than 30 pixels and the plane positioning accuracy with GCPs is better than 1 pixel. PMID:29337885

  15. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, up to 12 separated regions with a region size of 160 × 160 pixels were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.

  16. ARC-1986-A86-7024

    NASA Image and Video Library

    1986-01-24

    P-29508BW Range: 1.12 million kilometers (690,000 miles) This clear-filter view of the Uranian rings delta, gamma, eta, beta and alpha (from top) was taken with Voyager 2's narrow-angle camera and clearly illustrates the broad outer component and narrow inner component of the eta ring, which orbits Uranus at a radius of some 47,000 km (29,000 mi). The broad component is considerably more transparent than the dense, narrow inner eta component, as well as the other narrow rings shown. Resolution here is about 10 km (6 mi).

  17. Reflecting on Icy Rhea

    NASA Image and Video Library

    2009-11-03

    Bright sunlight on Rhea shows off the cratered surface of Saturn's second-largest moon in this image captured by NASA's Cassini orbiter. The image was taken in visible light with the Cassini spacecraft's narrow-angle camera on Sept. 21, 2009.

  18. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric by using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings, and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings. At the same time, it is very time-consuming to label people manually in images from surveillance videos: in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras, so directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting on insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned jointly with a shared regularization term to alleviate over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
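    The quantity being learned here is a Mahalanobis distance between feature vectors; a minimal sketch (the multi-task coupling is only indicated in the comments, and the per-pair matrices are assumptions):

    ```python
    import numpy as np

    def mahalanobis_sq(x, y, M):
        """Squared Mahalanobis distance d^2 = (x - y)^T M (x - y),
        where M is a learned positive semi-definite matrix.
        In the multi-task setting each camera pair k would use its
        own M_k, regularized toward a shared metric (illustrative)."""
        d = np.asarray(x, float) - np.asarray(y, float)
        return float(d @ M @ d)

    # With M = identity this reduces to the squared Euclidean distance.
    ```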

  19. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera takes images of objects important for understanding traffic conditions from a long distance and shows them to the driver, the driver can understand traffic conditions earlier. To take clear images of distant objects, the focal length of the camera must be long. When the focal length is long, however, the on-vehicle camera does not have a large enough field of view to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must combine a long focal length with a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera with a controlled shooting direction. We adopt a non-wearable detection method to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. We confirmed by experiments that the proposed system takes images in the direction of the subject's gaze.

  20. Limbus Impact on Off-angle Iris Degradation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Karakaya, Mahmut; Barstow, Del R; Santos-Villalobos, Hector J

    The accuracy of iris recognition depends on the quality of data capture and is negatively affected by several factors such as angle, occlusion, and dilation. Off-angle iris recognition is a new research focus in biometrics that tries to address several issues including corneal refraction, complex 3D iris texture, and blur. In this paper, we present an additional significant challenge that degrades the performance of off-angle iris recognition systems, called the limbus effect. The limbus is the region at the border of the cornea where the cornea joins the sclera. The limbus is a semitransparent tissue that occludes a side portion of the iris plane. The amount of occluded iris texture on the side nearest the camera increases as the image acquisition angle increases. Without considering the role of the limbus effect, it is difficult to design an accurate off-angle iris recognition system. To the best of our knowledge, this is the first work that investigates the limbus effect in detail from a biometrics perspective. Based on results from real images and simulated experiments with real iris texture, the limbus effect increases the Hamming distance score between frontal and off-angle iris images by 0.05 to 0.2, depending upon the limbus height.

  1. Mid-wave infrared narrow bandwidth guided mode resonance notch filter.

    PubMed

    Zhong, Y; Goldenfeld, Z; Li, K; Streyer, W; Yu, L; Nordin, L; Murphy, N; Wasserman, D

    2017-01-15

    We have designed, fabricated, and characterized a guided mode resonance notch filter operating in the technologically vital mid-wave infrared (MWIR) region of the electromagnetic spectrum. The filter provides a bandstop at λ≈4.1 μm, with a 12 dB extinction on resonance. In addition, we demonstrate a high transmission background (>80%), less than 6% transmission on resonance, and an ultra-narrow bandwidth transmission notch (10 cm⁻¹). Our filter is optically characterized using angle- and polarization-dependent Fourier transform infrared spectroscopy, and simulated using rigorous coupled-wave analysis (RCWA) with excellent agreement between simulations and our experimental results. Using our RCWA simulations, we are able to identify the optical modes associated with the transmission dips of our filter. The presented structure offers a potential route toward narrow-band laser filters in the MWIR.
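    The quoted figures can be cross-checked: an extinction in dB converts to a transmitted fraction as T = 10^(−dB/10), so the 12 dB on-resonance extinction corresponds to roughly 6.3% transmission, close to the sub-6% figure also quoted. A one-line sketch of the conversion:

    ```python
    def extinction_db_to_transmission(db):
        """Convert an extinction in dB to a transmitted fraction:
        T = 10 ** (-dB / 10)."""
        return 10 ** (-db / 10)
    ```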

  2. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
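    Orienting 360° images starts from the equirectangular camera model, which maps each pixel to a viewing direction on the sphere; a minimal sketch (the axis convention is an assumption, and real pipelines add calibration refinements):

    ```python
    import math

    def equirect_to_ray(u, v, width, height):
        """Map an equirectangular pixel (u, v) of a 360-degree image
        to a unit viewing direction. u spans longitude -pi..pi,
        v spans latitude pi/2..-pi/2 (top to bottom)."""
        lon = (u / width - 0.5) * 2.0 * math.pi
        lat = (0.5 - v / height) * math.pi
        x = math.cos(lat) * math.sin(lon)
        y = math.sin(lat)
        z = math.cos(lat) * math.cos(lon)
        return (x, y, z)
    ```

    The image center maps to the forward direction (0, 0, 1); bundle adjustment for spherical images optimizes camera poses over rays produced this way.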

  3. Contact Angle Measurements Using a Simplified Experimental Setup

    ERIC Educational Resources Information Center

    Lamour, Guillaume; Hamraoui, Ahmed; Buvailo, Andrii; Xing, Yangjun; Keuleyan, Sean; Prakash, Vivek; Eftekhari-Bafrooei, Ali; Borguet, Eric

    2010-01-01

    A basic and affordable experimental apparatus is described that measures the static contact angle of a liquid drop in contact with a solid. The image of the drop is made with a simple digital camera by taking a picture that is magnified by an optical lens. The profile of the drop is then processed with ImageJ free software. The ImageJ contact…

  4. Example of Weathering And Sun Angle

    NASA Image and Video Library

    1996-12-12

    The letter 'B' or perhaps the figure '8' appears to have been etched into the Mars rock at the left edge of this picture taken yesterday by NASA's Viking 1 Lander. It is believed to be an illusion caused by weathering processes and the angle of the sun as it illuminated the scene for the spacecraft camera. The object at lower left is the housing containing the surface sampler scoop. http://photojournal.jpl.nasa.gov/catalog/PIA00386

  5. Anterior Segment Morphology in Primary Angle Closure Glaucoma using Ultrasound Biomicroscopy

    PubMed Central

    Balakrishna, Nagalla

    2017-01-01

    Aim To evaluate the configuration of the anterior chamber angle quantitatively and study the morphological changes in the eye with ultrasound biomicroscopy (UBM) in primary angle closure glaucoma (PACG) patients after laser peripheral iridotomy (LPI). Materials and methods A total of 185 eyes of 185 PACG patients post-LPI and 126 eyes of 126 normal subjects were included in this prospective study. All subjects underwent complete ophthalmic evaluation, A-scan biometry, and UBM. The anterior segment and angle parameters were measured quantitatively and compared in both groups using Student’s t-test. Results The PACG patients had shorter axial length, shallower central anterior chamber depth (ACD), and anteriorly located lens when compared with normal subjects. Trabecular iris angle (TIA) was significantly narrow (5.73 ± 7.76°) in patients with PACG when compared with normal subjects (23.75 ± 9.38°). The angle opening distance at 500 µm from scleral spur (AOD 500), trabecular-ciliary process distance (TCPD), iris-ciliary process distance (ICPD), and iris-zonule distance (IZD) were significantly shorter in patients with PACG than in normal subjects (p < 0.0001). The iris lens angle (ILA), scleral-iris angle (SIA), and scleral-ciliary process angle (SCPA) were significantly narrower in patients with PACG than in normal subjects (p < 0.0001). The iris-lens contact distance (ILCD) was greater in the PACG group than in normal subjects (p = 0.001). Plateau iris was seen in 57/185 (30.8%) of the eyes. Anteriorly positioned ciliary processes were seen in 130/185 (70.3%) of eyes. Conclusion In PACG patients, persistent appositional angle closure is common even after LPI, which could be due to anterior rotation of the ciliary body, plateau iris, and overcrowding of the anterior segment due to shorter axial length and relative anterior lens position. How to cite this article: Mansoori T, Balakrishna N. Anterior Segment Morphology in Primary Angle Closure Glaucoma

  6. A multi-camera system for real-time pose estimation

    NASA Astrophysics Data System (ADS)

    Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin

    2007-04-01

    This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates upon the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
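    The spherical-head projection argument can be sketched in its simplest form: a landmark on the head sphere at radius R projects to x = R·sin(yaw), so yaw can be recovered by an arcsine. This is an illustrative reduction, not the authors' full triangle-angle equations:

    ```python
    import math

    def yaw_from_projection(x_landmark, x_head_center, head_radius_px):
        """Estimate yaw (degrees) from the horizontal offset of a
        mid-plane facial landmark relative to the head center in the
        image, under a spherical head model: dx = R * sin(yaw).
        Landmark choice and pixel-space radius are assumptions."""
        dx = x_landmark - x_head_center
        ratio = max(-1.0, min(1.0, dx / head_radius_px))
        return math.degrees(math.asin(ratio))
    ```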

  7. Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout

    NASA Technical Reports Server (NTRS)

    Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.

    1997-01-01

    The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve the direct manipulation of the robot by a human operator, when high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task by using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on data collected are presented.

  8. InfraCAM (trade mark): A Hand-Held Commercial Infrared Camera Modified for Spaceborne Applications

    NASA Technical Reports Server (NTRS)

    Manitakos, Daniel; Jones, Jeffrey; Melikian, Simon

    1996-01-01

    In 1994, Inframetrics introduced the InfraCAM(TM), a high resolution hand-held thermal imager. As the world's smallest, lightest and lowest power PtSi based infrared camera, the InfraCAM is ideal for a wide range of industrial, non destructive testing, surveillance and scientific applications. In addition to numerous commercial applications, the light weight and low power consumption of the InfraCAM make it extremely valuable for adaptation to space borne applications. Consequently, the InfraCAM has been selected by NASA Lewis Research Center (LeRC) in Cleveland, Ohio, for use as part of the DARTFire (Diffusive and Radiative Transport in Fires) space borne experiment. In this experiment, a solid fuel is ignited in a low gravity environment. The combustion period is recorded by both visible and infrared cameras. The infrared camera measures the emission from polymethyl methacrylate (PMMA) and combustion products in six distinct narrow spectral bands. Four cameras successfully completed all qualification tests at Inframetrics and at NASA Lewis. They are presently being used for ground based testing in preparation for space flight in the fall of 1995.

  9. Exploring the Moon at High-Resolution: First Results From the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Robinson, Mark; Hiesinger, Harald; McEwen, Alfred; Jolliff, Brad; Thomas, Peter C.; Turtle, Elizabeth; Eliason, Eric; Malin, Mike; Ravine, A.; Bowman-Cisneros, Ernest

    The Lunar Reconnaissance Orbiter (LRO) spacecraft was launched on an Atlas V 401 rocket from the Cape Canaveral Air Force Station Launch Complex 41 on June 18, 2009. After spending four days in Earth-Moon transit, the spacecraft entered a three month commissioning phase in an elliptical 30×200 km orbit. On September 15, 2009, LRO began its planned one-year nominal mapping mission in a quasi-circular 50 km orbit. A multi-year extended mission in a fixed 30×200 km orbit is optional. The Lunar Reconnaissance Orbiter Camera (LROC) consists of a Wide Angle Camera (WAC) and two Narrow Angle Cameras (NACs). The WAC is a 7-color push-frame camera, which images the Moon at 100 and 400 m/pixel in the visible and UV, respectively, while the two NACs are monochrome narrow-angle linescan imagers with 0.5 m/pixel spatial resolution. LROC was specifically designed to address two of the primary LRO mission requirements and six other key science objectives, including 1) assessment of meter- and smaller-scale features in order to select safe sites for potential lunar landings near polar resources and elsewhere on the Moon; 2) acquisition of multi-temporal synoptic 100 m/pixel images of the poles during every orbit to unambiguously identify regions of permanent shadow and permanent or near permanent illumination; 3) meter-scale mapping of regions with permanent or near-permanent illumination of polar massifs; 4) repeat observations of potential landing sites and other regions to derive high resolution topography; 5) global multispectral observations in seven wavelengths to characterize lunar resources, particularly ilmenite; 6) a global 100-m/pixel basemap with incidence angles (60° -80° ) favorable for morphological interpretations; 7) sub-meter imaging of a variety of geologic units to characterize their physical properties, the variability of the regolith, and other key science questions; 8) meter-scale coverage overlapping with Apollo-era panoramic images (1-2 m/pixel) to document

  10. Angle dependent defect modes in a photonic crystal filter doped by high and low temperature superconductor defects

    NASA Astrophysics Data System (ADS)

    Sreejith K., P.; Mathew, Vincent

    2018-05-01

    We have theoretically investigated the incident angle dependent defect modes in a dual channel photonic crystal filter composed of high and low temperature superconductor defects. It is observed that the defect mode wavelength can be significantly tuned by the incident angle for both polarizations. The angle-sensitive defect mode property is particularly applicable to designing narrow band transmission filters.

  11. Effectiveness of Variable-Gain Kalman Filter Based on Angle Error Calculated from Acceleration Signals in Lower Limb Angle Measurement with Inertial Sensors

    PubMed Central

    Watanabe, Takashi

    2013-01-01

    The wearable sensor system developed by our group, which measured lower limb angles using a Kalman-filtering-based method, was suggested to be useful in the evaluation of gait function for rehabilitation support. However, it was desirable to reduce variations in its measurement errors. In this paper, a variable-Kalman-gain method based on the angle error calculated from acceleration signals is proposed to improve measurement accuracy. The proposed method was tested against a fixed-gain Kalman filter and a variable-Kalman-gain method based on acceleration magnitude used in previous studies. First, in angle measurement during treadmill walking, the proposed method measured lower limb angles with the highest accuracy, improving foot inclination angle measurement significantly while improving shank and thigh inclination angles slightly. The variable-gain method based on acceleration magnitude was not effective for our Kalman filter system. Then, in angle measurement of a rigid body model, the proposed method showed measurement accuracy similar to or higher than results seen in other studies that used markers of a camera-based motion measurement system fixed on a rigid plate together with a sensor or on the sensor directly. The proposed method was found to be effective in angle measurement with inertial sensors. PMID:24282442
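    A minimal sketch of the variable-gain idea, written as a one-dimensional complementary-filter update rather than the authors' full Kalman filter; the gain law shrinking with the accelerometer-derived angle error is an illustrative assumption:

    ```python
    def fuse_angle(angle, gyro_rate, accel_angle, dt, base_gain=0.1):
        """One update step: predict the angle with the gyro rate,
        then correct toward the accelerometer-derived angle, trusting
        the accelerometer less as its disagreement (the 'angle error'
        cue) grows."""
        predicted = angle + gyro_rate * dt
        error = accel_angle - predicted
        gain = base_gain / (1.0 + abs(error))  # variable gain
        return predicted + gain * error
    ```

    With zero disagreement the update reduces to pure gyro integration; with large disagreement the correction is deliberately damped.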

  12. Boiling Visualization and Critical Heat Flux Phenomena In Narrow Rectangular Gap

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J. J. Kim; Y. H. Kim; S. J. Kim

    2004-12-01

    An experimental study was performed to investigate the pool boiling critical heat flux (CHF) on one-dimensional inclined rectangular channels with narrow gaps by changing the orientation of a copper test heater assembly. In a pool of saturated water at atmospheric pressure, the test parameters included gap sizes of 1, 2, 5, and 10 mm and surface orientation angles from the downward-facing position (180 degrees) to the vertical position (90 degrees).

  13. Effect of camera angulation on adaptation of CAD/CAM restorations.

    PubMed

    Parsell, D E; Anderson, B C; Livingston, H M; Rudd, J I; Tankersley, J D

    2000-01-01

    A significant concern with computer-assisted design/computer-assisted manufacturing (CAD/CAM)-produced prostheses is the accuracy of adaptation of the restoration to the preparation. The objective of this study is to determine the effect of operator-controlled camera misalignment on restoration adaptation. A CEREC 2 CAD/CAM unit (Sirona Dental Systems, Bensheim, Germany) was used to capture the optical impressions and machine the restorations. A Class I preparation was used as the standard preparation for optical impressions. Camera angles along the mesio-distal and buccolingual alignment were varied from the ideal orientation. Occlusal marginal gaps and sample height, width, and length were measured and compared to preparation dimensions. For clinical correlation, clinicians were asked to take optical impressions of mesio-occlusal preparations (Class II) on all four second molar sites, using a patient simulator. On the adjacent first molar occlusal surfaces, a preparation was machined such that camera angulation could be calculated from information taken from the optical impression. Degree of tilt and plane of tilt were compared to the optimum camera positions for those preparations. One-way analysis of variance and Dunnett C post hoc testing (alpha = 0.01) revealed little significant degradation in fit with camera angulation. Only the apical length fit was significantly degraded by excessive angulation. The CEREC 2 CAD/CAM system was found to be relatively insensitive to operator-induced errors attributable to camera misalignments of less than 5 degrees in either the buccolingual or the mesiodistal plane. The average camera tilt error generated by clinicians for all sites was 1.98 +/- 1.17 degrees.

  14. Vision and spectroscopic sensing for joint tracing in narrow gap laser butt welding

    NASA Astrophysics Data System (ADS)

    Nilsen, Morgan; Sikström, Fredrik; Christiansson, Anna-Karin; Ancona, Antonio

    2017-11-01

    The automated laser beam butt welding process is sensitive to positioning the laser beam with respect to the joint because a small offset may result in detrimental lack of sidewall fusion. This problem is even more pronounced in case of narrow gap butt welding, where most of the commercial automatic joint tracing systems fail to detect the exact position and size of the gap. In this work, a dual vision and spectroscopic sensing approach is proposed to trace narrow gap butt joints during laser welding. The system consists of a camera with suitable illumination and matched optical filters and a fast miniature spectrometer. An image processing algorithm of the camera recordings has been developed in order to estimate the laser spot position relative to the joint position. The spectral emissions from the laser induced plasma plume have been acquired by the spectrometer, and based on the measurements of the intensities of selected lines of the spectrum, the electron temperature signal has been calculated and correlated to variations of process conditions. The individual performances of these two systems have been experimentally investigated and evaluated offline by data from several welding experiments, where artificial abrupt as well as gradual deviations of the laser beam out of the joint were produced. Results indicate that a combination of the information provided by the vision and spectroscopic systems is beneficial for development of a hybrid sensing system for joint tracing.
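    The electron temperature calculation from selected line intensities is typically a two-line Boltzmann estimate; a minimal sketch assuming LTE, with illustrative line parameters rather than the paper's selected lines:

    ```python
    import math

    K_B_EV = 8.617e-5  # Boltzmann constant, eV/K

    def electron_temp_two_line(I1, I2, line1, line2):
        """Two-line Boltzmann estimate of electron temperature (K)
        from relative line intensities, each line given as
        (wavelength_nm, g, A, E_upper_eV). Uses I ~ (gA/lambda)
        * exp(-E/kT), so kT = (E2 - E1) / ln(I1 g2 A2 l1 / I2 g1 A1 l2)."""
        lam1, g1, A1, E1 = line1
        lam2, g2, A2, E2 = line2
        ratio = (I1 * g2 * A2 * lam1) / (I2 * g1 * A1 * lam2)
        return (E2 - E1) / (K_B_EV * math.log(ratio))
    ```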

  15. Plume propagation direction determination with SO2 cameras

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich

    2017-03-01

    SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50%. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles. Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux
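    A back-of-the-envelope version of the distance-error argument (a simplified sketch of the scaling, not the paper's error model): pixels measure angles, so the physical transect length scales with the assumed camera-plume distance, and if the plume speed is also derived from image motion it scales with distance too.

    ```python
    def flux_error_percent(assumed_km, true_km, velocity_from_images=True):
        """Relative SO2-flux error from a wrong camera-plume distance.
        Transect length scales with distance; with image-derived plume
        speed the flux scales with distance squared, otherwise linearly
        (illustrative scaling assumption)."""
        power = 2 if velocity_from_images else 1
        return ((assumed_km / true_km) ** power - 1.0) * 100.0
    ```

    Under this sketch, a 25% distance overestimate already yields a flux error above 50% when the velocity is image-derived, consistent with the magnitude quoted in the abstract.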

  16. Colors of active regions on comet 67P

    NASA Astrophysics Data System (ADS)

    Oklay, N.; Vincent, J.-B.; Sierks, H.; Besse, S.; Fornasier, S.; Barucci, M. A.; Lara, L.; Scholten, F.; Preusker, F.; Lazzarin, M.; Pajola, M.; La Forgia, F.

    2015-10-01

    The OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) scientific imager (Keller et al. 2007) has been successfully delivering images of comet 67P/Churyumov-Gerasimenko from both its wide angle camera (WAC) and narrow angle camera (NAC) since the arrival of ESA's Rosetta spacecraft at the comet. Both cameras are equipped with filters covering the wavelength range of about 200 nm to 1000 nm. The comet nucleus is mapped with different combinations of the filters at resolutions up to 15 cm/px. Besides the determination of the surface morphology in great detail (Thomas et al. 2015), such high resolution images provided us a means to unambiguously link some activity in the coma to a series of pits on the nucleus surface (Vincent et al. 2015).

  17. 2D Measurements of the Balmer Series in Proto-MPEX using a Fast Visible Camera Setup

    NASA Astrophysics Data System (ADS)

    Lindquist, Elizabeth G.; Biewer, Theodore M.; Ray, Holly B.

    2017-10-01

    The Prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device with densities up to 10²⁰ m⁻³ and temperatures up to 20 eV. Broadband spectral measurements show the visible emission spectra are solely due to the Balmer lines of deuterium. Monochromatic and RGB color Sanstreak SC1 Edgertronic fast visible cameras capture high speed video of plasmas in Proto-MPEX. The color camera is equipped with a long pass 450 nm filter and an internal Bayer filter to view the Dα line at 656 nm on the red channel and the Dβ line at 486 nm on the blue channel. The monochromatic camera has a 434 nm narrow bandpass filter to view the Dγ intensity. In the setup, a 50/50 beam splitter is used so both cameras image the same region of the plasma discharge. Camera images were aligned to each other by viewing a grid, ensuring 1 pixel registration between the two cameras. A uniform intensity calibrated white light source was used to perform a pixel-to-pixel relative and an absolute intensity calibration for both cameras. Python scripts combined the dual camera data, rendering the Dα, Dβ, and Dγ intensity ratios. Observations from Proto-MPEX discharges will be presented. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
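    The ratio-rendering step for the co-registered, calibrated images might look like the following numpy sketch; the calibration factors and the divide-by-zero floor are illustrative assumptions, not the experiment's values:

    ```python
    import numpy as np

    def line_ratio(img_a, img_b, cal_a=1.0, cal_b=1.0, floor=1e-6):
        """Pixel-by-pixel intensity ratio of two co-registered,
        intensity-calibrated camera images (e.g. D-alpha / D-gamma).
        cal_a/cal_b are absolute-calibration factors (placeholders)."""
        a = np.asarray(img_a, float) * cal_a
        b = np.asarray(img_b, float) * cal_b
        return a / np.maximum(b, floor)  # guard dark pixels
    ```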

  18. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    The Earth's digital elevation data, which affect space camera imaging, were prepared, and the imaging was analyzed. Based on the image-motion velocity matching error required by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation in an image motion compensation model that includes satellite attitude changes, orbital angular rate changes, latitude, longitude, and orbital inclination changes. Elevation information for the Earth's surface is then read from SRTM data. The Earth elevation map produced for aerospace electronic cameras is compressed and spliced, so that elevation data can be retrieved from flash memory according to the latitude and longitude of the shooting point. If the requested point falls between two stored data points, linear interpolation is used, which better accommodates the changing terrain of rugged mountains and hills. Finally, the deviant framework and camera controller are used to test the character of deviant angle errors, and a TDI CCD camera simulation system with a material-point-to-imaging-point correspondence model is used to analyze the imaging's MTF and mutual-correlation similarity measure; the simulation system adds the accumulated horizontal and vertical offsets by which TDI CCD imaging exceeds the corresponding pixel, to simulate camera imaging when satellite attitude stability changes. This process is practical: it can effectively control the camera memory space while achieving very good precision in matching the TDI CCD camera to the required image motion velocity and imaging.
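    The interpolated lookup described above can be sketched as bilinear interpolation on a regular latitude/longitude grid; the grid layout, origin, and spacing conventions here are illustrative assumptions:

    ```python
    def bilinear_elevation(grid, lat, lon, lat0, lon0, step):
        """Look up elevation at (lat, lon) from a regular grid with
        origin (lat0, lon0) and spacing `step` degrees, interpolating
        linearly between the four surrounding stored points."""
        fi = (lat - lat0) / step
        fj = (lon - lon0) / step
        i, j = int(fi), int(fj)
        di, dj = fi - i, fj - j
        z00, z01 = grid[i][j], grid[i][j + 1]
        z10, z11 = grid[i + 1][j], grid[i + 1][j + 1]
        return (z00 * (1 - di) * (1 - dj) + z01 * (1 - di) * dj
                + z10 * di * (1 - dj) + z11 * di * dj)
    ```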

  19. Effect of Impingement Angle on landfalling Atmospheric River precipitation efficiency

    NASA Astrophysics Data System (ADS)

    Mehran, A.; Cao, Q.; Wang, K.; Cannon, F.; Ralph, M.; Lettenmaier, D. P.

    2017-12-01

    Atmospheric Rivers (ARs) along the western coast of North America in wintertime are associated with heavy winter precipitation and most flood events. ARs are narrow, elongated, synoptic jets of water vapor that transport moisture from the eastern Pacific to the North Pacific coast of North America. The lowest levels of the atmosphere account for almost 75% of the water vapor transport through these rivers. The combination of high integrated water vapor in AR events and strong upslope winds results in heavy orographic precipitation in regions where the narrow AR jets make landfall. We analyzed 19 years (1997-2015) of landfalling ARs over a transect along the U.S. West Coast consisting of two river basins, from coastal Washington and Northern California (the Chehalis basin and the Russian River basin), to highlight the impact of the impingement angle on precipitation rainout efficiency. We used water vapor data from the Climate Forecast System Reanalysis (CFSR) on AR dates to calculate the impingement angle and the associated total amount of water vapor. Rainout efficiency is defined and calculated as the fraction of the total water vapor that precipitates over each basin. Our results show that extreme AR events whose impingement angle is orthogonal to the basin exposure have greater rainout efficiency.
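The two derived quantities can be sketched as follows, under an assumed coast-orientation convention (the function names and the normal-vector convention are illustrative, not the study's code):

```python
import numpy as np

def impingement_angle(ivt_u, ivt_v, coast_azimuth_deg):
    """Angle (deg) between the IVT vector and the coast-normal direction.

    ivt_u, ivt_v      -- eastward / northward integrated vapor transport
    coast_azimuth_deg -- coastline orientation, degrees clockwise from north

    0 deg means the AR jet hits the coast orthogonally (maximum upslope forcing).
    """
    normal = np.deg2rad(coast_azimuth_deg + 90.0)     # landward coast normal
    n = np.array([np.sin(normal), np.cos(normal)])    # (east, north) components
    ivt = np.array([ivt_u, ivt_v])
    cosang = np.dot(ivt, n) / np.linalg.norm(ivt)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rainout_efficiency(precipitated_vapor, total_vapor):
    """Fraction of the incoming water vapor that precipitates over the basin."""
    return precipitated_vapor / total_vapor

# Purely eastward transport onto a north-south coastline is orthogonal:
ang = impingement_angle(250.0, 0.0, 0.0)   # kg m^-1 s^-1, N-S coastline
```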

  20. Junocam: Juno's Outreach Camera

    NASA Astrophysics Data System (ADS)

    Hansen, C. J.; Caplinger, M. A.; Ingersoll, A.; Ravine, M. A.; Jensen, E.; Bolton, S.; Orton, G.

    2017-11-01

    Junocam is a wide-angle camera designed to capture the unique polar perspective of Jupiter offered by Juno's polar orbit. Junocam's four-color images include the best spatial resolution ever acquired of Jupiter's cloudtops. Junocam will look for convective clouds and lightning in thunderstorms and derive the heights of the clouds. Junocam will support Juno's radiometer experiment by identifying any unusual atmospheric conditions such as hotspots. Junocam is on the spacecraft explicitly to reach out to the public and share the excitement of space exploration. The public is an essential part of our virtual team: amateur astronomers will supply ground-based images for use in planning, the public will weigh in on which images to acquire, and the amateur image processing community will help process the data.

  1. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation must be realized for our retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software is written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.
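The coordinate transformation and sub-pixel interpolation can be sketched with a common log-polar model of a retina-like layout; the actual sensor's pixel distribution may differ, and every name here is illustrative:

```python
import numpy as np

def bilinear(img, x, y):
    """Sub-pixel bilinear sample of a 2-D image at (x, y)."""
    x0, y0 = int(np.floor(x)), int(np.floor(y))
    tx, ty = x - x0, y - y0
    return ((1 - ty) * ((1 - tx) * img[y0, x0] + tx * img[y0, x0 + 1])
            + ty * ((1 - tx) * img[y0 + 1, x0] + tx * img[y0 + 1, x0 + 1]))

def logpolar_sample(img, cx, cy, n_rings, n_sectors, r_min, r_max):
    """Sample a Cartesian image on a log-polar (retina-like) grid.

    Ring radii grow geometrically from r_min to r_max, mimicking the
    space-variant pixel distribution of a retina-like sensor.
    """
    out = np.empty((n_rings, n_sectors))
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = 2 * np.pi * np.arange(n_sectors) / n_sectors
    for i, r in enumerate(radii):
        for j, a in enumerate(angles):
            out[i, j] = bilinear(img, cx + r * np.cos(a), cy + r * np.sin(a))
    return out

# Sampling a uniform image returns the uniform value everywhere:
img = np.full((64, 64), 7.0)
samples = logpolar_sample(img, 32.0, 32.0, 4, 8, 2.0, 20.0)
```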

  2. Bilateral acute angle closure glaucoma after hyperopic LASIK correction

    PubMed Central

    Osman, Essam A.; Alsaleh, Ahmed A.; Al Turki, Turki; AL Obeidan, Saleh A.

    2009-01-01

    Acute angle closure glaucoma is an unexpected complication following laser in situ keratomileusis (LASIK). We report a 49-year-old woman who presented to the emergency department with acute glaucoma in both eyes soon after LASIK correction. The diagnosis was made on detailed clinical history and examination, slit lamp examination, intraocular pressure measurement, and gonioscopy. Laser iridotomy in both eyes succeeded in controlling the attack and normalizing the intraocular pressure (IOP) over more than 6 months of follow-up. Prophylactic laser iridotomy is essential for narrow angle patients before LASIK surgery if refractive laser surgery is indicated. PMID:23960863

  3. A remote camera operation system using a marker attached cap

    NASA Astrophysics Data System (ADS)

    Kawai, Hironori; Hama, Hiromitsu

    2005-12-01

    In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximately obtained by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with only one video camera. Three markers are set up on the cap; three is the minimum number needed to calculate the tilt angle of the head. The more markers are used, the more robust the system becomes to occlusion, and the wider the tolerated moving range of the head. The markers must not lie on any common three-dimensional straight line. To compensate for the markers' color change due to illumination conditions, the threshold for marker extraction is decided adaptively using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system: tilt and pan angles of the head could be calculated with enough accuracy for use.
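One way the tilt and pan angles could be recovered from the overhead marker view is sketched below, assuming a front marker plus two side markers and metrically rectified image coordinates (a simplified stand-in for the paper's method; all names are invented):

```python
import numpy as np

def head_pan_tilt(front, left, right, d_front):
    """Estimate head pan and tilt from an overhead view of three cap markers.

    front, left, right -- 2-D image positions (mm, metrically rectified)
    d_front            -- true distance from the left-right midpoint to the
                          front marker (mm), known from the cap geometry

    Pan is the in-plane direction of the front marker; tilt follows from the
    foreshortening of d_front as seen from above.
    """
    mid = (np.asarray(left, float) + np.asarray(right, float)) / 2.0
    v = np.asarray(front, float) - mid
    pan = np.degrees(np.arctan2(v[1], v[0]))
    # the overhead camera observes d_front * cos(tilt)
    ratio = np.clip(np.linalg.norm(v) / d_front, 0.0, 1.0)
    tilt = np.degrees(np.arccos(ratio))
    return pan, tilt

# Level head, front marker 100 mm ahead of the ear line, facing along +x:
pan, tilt = head_pan_tilt((100.0, 0.0), (0.0, -40.0), (0.0, 40.0), 100.0)
```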

  4. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

    This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
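An objective function and scheduler of the kind described can be sketched as follows; the weights, distance model, and greedy assignment are illustrative assumptions, not the paper's exact formulation:

```python
import math

def capture_score(dist, pan_angle_deg, face_visible,
                  d_opt=8.0, w_dist=1.0, w_angle=1.0, w_face=2.0):
    """Score the expected quality of a PTZ capture of one subject.

    Higher is better: near the optimal standoff distance, a small pan angle
    to the subject, and a visible face all raise the score.  (Weights and
    the distance model are invented for illustration.)
    """
    s_dist = math.exp(-((dist - d_opt) / d_opt) ** 2)
    s_angle = max(math.cos(math.radians(pan_angle_deg)), 0.0)
    return w_dist * s_dist + w_angle * s_angle + w_face * (1.0 if face_visible else 0.0)

def assign_cameras(cameras, targets, score):
    """Greedy camera-to-target assignment maximizing the total capture score."""
    pairs = sorted(((score(c, t), c, t) for c in cameras for t in targets),
                   reverse=True)
    used_c, used_t, out = set(), set(), {}
    for s, c, t in pairs:
        if c not in used_c and t not in used_t:
            out[c] = t
            used_c.add(c)
            used_t.add(t)
    return out
```

In a full system the score would be recomputed each frame from the tracker output, and the assignment re-run whenever a hand-off is cheaper than holding the current pairing.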

  5. Using a digital video camera to examine coupled oscillations

    NASA Astrophysics Data System (ADS)

    Greczylo, T.; Debowska, E.

    2002-07-01

    In our previous paper (Debowska E, Jakubowicz S and Mazur Z 1999 Eur. J. Phys. 20 89-95), thanks to the use of an ultrasound distance sensor, experimental verification of the solution of Lagrange equations for longitudinal oscillations of the Wilberforce pendulum was shown. In this paper the sensor and a digital video camera were used to monitor and measure the changes of both the pendulum's coordinates (vertical displacement and angle of rotation) simultaneously. The experiments were performed with the aid of the integrated software package COACH 5. Fourier analysis in Microsoft® Excel 97 was used to find normal modes in each case of the measured oscillations. Comparison of the results with those presented in our previous paper (as given above) leads to the conclusion that a digital video camera is a powerful tool for measuring coupled oscillations of a Wilberforce pendulum. The most important conclusion is that a video camera is able to do something more than merely register interesting physical phenomena - it can be used to perform measurements of physical quantities at an advanced level.
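The Fourier step can be reproduced with numpy instead of a spreadsheet; a minimal sketch, assuming uniformly sampled records (names are illustrative):

```python
import numpy as np

def dominant_frequencies(signal, dt, n_peaks=2):
    """Return the strongest spectral frequencies (Hz) of a sampled signal.

    Mirrors the Fourier-analysis step used to pick out the two normal
    modes from the pendulum's displacement and rotation records.
    """
    sig = np.asarray(signal, dtype=float)
    sig = sig - sig.mean()                 # drop the DC component
    spec = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), dt)
    order = np.argsort(spec)[::-1][:n_peaks]
    return sorted(freqs[order])

# A beat signal with two close modes at 1.0 Hz and 1.2 Hz:
t = np.arange(0, 50, 0.01)
z = np.sin(2 * np.pi * 1.0 * t) + 0.8 * np.sin(2 * np.pi * 1.2 * t)
f1, f2 = dominant_frequencies(z, 0.01)
```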

  6. Mission Report on the Orbiter Camera Payload System (OCPS) Large Format Camera (LFC) and Attitude Reference System (ARS)

    NASA Technical Reports Server (NTRS)

    Mollberg, Bernard H.; Schardt, Bruton B.

    1988-01-01

    The Orbiter Camera Payload System (OCPS) is an integrated photographic system which is carried into earth orbit as a payload in the Space Transportation System (STS) Orbiter vehicle's cargo bay. The major component of the OCPS is a Large Format Camera (LFC), a precision wide-angle cartographic instrument that is capable of producing high resolution stereo photography of great geometric fidelity in multiple base-to-height (B/H) ratios. A secondary, supporting system to the LFC is the Attitude Reference System (ARS), which is a dual lens Stellar Camera Array (SCA) and camera support structure. The SCA is a 70-mm film system which is rigidly mounted to the LFC lens support structure and which, through the simultaneous acquisition of two star fields with each earth-viewing LFC frame, makes it possible to determine precisely the pointing of the LFC optical axis with reference to the earth nadir point. Other components complete the current OCPS configuration as a high precision cartographic data acquisition system. The primary design objective for the OCPS was to maximize system performance characteristics while maintaining a high level of reliability compatible with Shuttle launch conditions and the on-orbit environment. The full-up OCPS configuration was launched on a highly successful maiden voyage aboard the STS Orbiter vehicle Challenger on October 5, 1984, as a major payload aboard mission STS 41-G. This report documents the system design, the ground testing, the flight configuration, and an analysis of the results obtained during the Challenger mission STS 41-G.

  7. Still from Processed Movie of Zonal Jets

    NASA Image and Video Library

    2000-11-21

    This image is one frame from a movie clip of cloud motions on Jupiter, from the side of the planet opposite to the Great Red Spot. It was taken in the first week of October 2000 by the narrow-angle camera on NASA Cassini spacecraft.

  8. Neptune Through a Clear Filter

    NASA Image and Video Library

    1999-07-25

    On July 23, 1989, NASA Voyager 2 spacecraft took this picture of Neptune through a clear filter on its narrow-angle camera. The image on the right has a latitude and longitude grid added for reference. Neptune's Great Dark Spot is visible on the left.

  9. Calibration Procedures in Mid Format Camera Setups

    NASA Astrophysics Data System (ADS)

    Pivnicka, F.; Kemper, G.; Geissler, S.

    2012-07-01

    A growing number of mid-format cameras are used for aerial surveying projects. To achieve a reliable and geometrically precise result in the photogrammetric workflow, awareness of the sensitive parts is important. The use of direct referencing systems (GPS/IMU), the mounting on a stabilizing camera platform, and the specific characteristics of mid-format cameras make a professional setup with various calibration and misalignment operations necessary. An important part is a proper camera calibration. Aerial images over a well-designed test field with 3D structures and/or different flight altitudes enable the determination of calibration values in the Bingo software; it will be demonstrated how such a calibration can be performed. The direct referencing device must be mounted to the camera in a solid and reliable way. Beside the mechanical work, especially in mounting the camera next to the IMU, two lever arms have to be measured to mm accuracy: the lever arm from the GPS antenna to the IMU's calibrated center, and the lever arm from the IMU center to the camera projection center. The measurement with a total station is not a difficult task, but the definition of the correct centers and the need to use rotation matrices can cause serious accuracy problems. The benefit of small- and medium-format cameras is that smaller aircraft can be used; for that, a gyro-based stabilized platform is recommended. As a consequence, the IMU must be mounted next to the camera on the stabilizer. The advantage is that the IMU can be used to control the platform; the problem is that the IMU-to-GPS-antenna lever arm is floating. In fact, an additional data stream, the motion values of the stabilizer, must be processed to correct the floating lever-arm distances. If the post-processing of the GPS/IMU data, taking the floating levers into account, delivers the expected result, the lever arms between the IMU and the camera can be applied.
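The lever-arm bookkeeping can be sketched with rotation matrices as below; the frame conventions and names are assumptions for illustration, not the paper's processing chain:

```python
import numpy as np

def antenna_position(imu_pos, R_body_to_nav, lever_body, gimbal_R=None):
    """GPS antenna position from the IMU position and a body-frame lever arm.

    R_body_to_nav -- IMU attitude as a 3x3 rotation matrix (body -> navigation)
    lever_body    -- measured IMU-to-antenna offset in the body frame (m)
    gimbal_R      -- optional extra rotation of the stabilizer platform,
                     accounting for the 'floating' lever arm described above
    """
    lever = np.asarray(lever_body, dtype=float)
    if gimbal_R is not None:
        lever = gimbal_R @ lever
    return np.asarray(imu_pos, dtype=float) + R_body_to_nav @ lever

# 90 deg yaw: a 1 m forward (x) lever arm swings onto the +y axis
yaw90 = np.array([[0.0, -1.0, 0.0],
                  [1.0,  0.0, 0.0],
                  [0.0,  0.0, 1.0]])
p = antenna_position([0.0, 0.0, 0.0], yaw90, [1.0, 0.0, 0.0])
```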

  10. New narrow baryons and dibaryons observed in inelastic pp scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatischeff, B.; Willis, N.; Comets, M. P.

    Several narrow exotic baryonic states have recently been observed at 1004, 1044, and possibly 1094 MeV, from the study of the pp → pπ⁺X reaction at different energies (T_p = 1520, 1805, and 2100 MeV) and angles from 0° up to 17° (lab). The small widths, a few MeV, indicate a possible interpretation in terms of multiquark baryons or baryonic resonances. A phenomenological mass formula for two clusters of quarks predicts masses quite close to the experimental ones.

  11. Thermal design and simulation of an attitude-varied space camera

    NASA Astrophysics Data System (ADS)

    Wang, Chenjie; Yang, Wengang; Feng, Liangjie; Li, XuYang; Wang, Yinghao; Fan, Xuewu; Wen, Desheng

    2015-10-01

    An attitude-varied space camera changes attitude continually while it is working; its large-angle attitude changes over short times lead to significant changes in heat flux. Moreover, the complicated inner heat sources, other payloads, and the satellite platform also introduce thermal coupling effects on the space camera. For a space camera located on a two-dimensional rotating platform, a detailed thermal design is accomplished by means of thermal isolation, thermal transmission, and temperature compensation, etc. The ultimate simulation cases of both high and low temperature are then chosen, considering the obscuration by the satellite platform and other payloads, together with the heat flux analysis of the light entrance and the radiator surface of the camera. NEVADA and SindaG are used to establish the simulation model of the camera, and the analysis is carried out. The results indicate that, under both passive and active thermal control, the temperature of the optical components is 20 ± 1 °C, with radial and axial temperature gradients both less than 0.3 °C, while the temperature of the main structural components is 20 ± 2 °C, and the temperature fluctuation of the focal plane assemblies is 3.0-9.5 °C. The simulation shows that the thermal control system can meet the needs of the mission and that the thermal design is efficient and reasonable.

  12. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or only narrow overlapping FOVs in many applications, which poses a major challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras' coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
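The final chaining step, converting all camera coordinate systems into the reference system, can be sketched with homogeneous transforms (names and conventions are assumed, not taken from the paper):

```python
import numpy as np

def to_reference(pairwise):
    """Chain pairwise camera transforms into one reference frame.

    pairwise[i] is a 4x4 transform mapping camera i+1 coordinates into
    camera i coordinates, as produced by the two-camera calibration step;
    camera 0 is the reference.  Returns T[i]: camera i -> reference.
    """
    T = [np.eye(4)]
    for T_step in pairwise:
        T.append(T[-1] @ T_step)
    return T

def translation(t):
    """Build a pure-translation 4x4 homogeneous transform."""
    M = np.eye(4)
    M[:3, 3] = t
    return M

# Two 1 m steps along x accumulate to 2 m in the reference frame:
chain = to_reference([translation([1.0, 0.0, 0.0]),
                      translation([1.0, 0.0, 0.0])])
```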

  13. The role of contact angle on unstable flow formation during infiltration and drainage in wettable porous media

    NASA Astrophysics Data System (ADS)

    Wallach, Rony; Margolis, Michal; Graber, Ellen R.

    2013-10-01

    The impact of contact angle on 2-D spatial and temporal water-content distribution during infiltration and drainage was experimentally studied. The 0.3-0.5 mm fraction of a quartz dune sand was treated and turned subcritically repellent (contact angle of 33°, 48°, 56°, and 75° for S33, S48, S56, and S75, respectively). The media were packed uniformly in transparent flow chambers and water was supplied to the surface as a point source at different rates (1-20 ml/min). A sequence of gray-value images was taken by CCD camera during infiltration and subsequent drainage; gray values were converted to volumetric water content by water volume balance. Narrow and long plumes with water accumulation behind the downward moving wetting front (tip) and negative water gradient above it (tail) developed in the S56 and S75 media during infiltration at lower water application rates. The plumes became bulbous with spatially uniform water-content distribution as water application rates increased. All plumes in these media propagated downward at a constant rate during infiltration and did not change their shape during drainage. In contrast, regular plume shapes were observed in the S33 and S48 media at all flow rates, and drainage profiles were nonmonotonic with a transition plane at the depth that water reached during infiltration. Given that the studied media have similar pore-size distributions, the conclusion is that imbibition hindered by the nonzero contact angle induced pressure buildup at the wetting front (dynamic water-entry value) that controlled the plume shape and internal water-content distribution during infiltration and drainage.

  14. Comparing Laser Peripheral Iridotomy to Cataract Extraction in Narrow Angle Eyes Using Anterior Segment Optical Coherence Tomography

    PubMed Central

    Melese, Ephrem; Peterson, Jeffrey R.; Feldman, Robert M.; Baker, Laura A.; Bell, Nicholas P.; Chuang, Alice Z.

    2016-01-01

    Purpose To evaluate the changes in anterior chamber angle (ACA) parameters in primary angle closure (PAC) spectrum eyes before and after cataract extraction (CE) and compare to the changes after laser peripheral iridotomy (LPI) using anterior segment optical coherence tomography (ASOCT). Methods Twenty-eight PAC spectrum eyes of 18 participants who underwent CE and 34 PAC spectrum eyes of 21 participants who underwent LPI were included. ASOCT images with 3-dimensional mode angle analysis scans were taken with the CASIA SS-1000 (Tomey Corp., Nagoya, Japan) before and after CE or LPI. Mixed-effect model analysis was used to 1) compare best-corrected visual acuity, intraocular pressure, and ACA parameters before and after CE; 2) identify and estimate the effects of potential contributing factors affecting changes in ACA parameters; and 3) compare CE and LPI treatment groups. Results The increase in average angle parameters (TISA750 and TICV750) was significantly greater after CE than LPI. TICV750 increased by 102% (2.114 [±1.203] μL) after LPI and by 174% (4.546 [± 1.582] μL) after CE (P < 0.001). Change of TICV750 in the CE group was significantly affected by age (P = 0.002), race (P = 0.006), and intraocular lens power (P = 0.037). Conclusions CE results in greater anatomic changes in the ACA than LPI in PAC spectrum eyes. ASOCT may be used to follow anatomic changes in the angle after intervention. PMID:27606482

  15. Adjustable-Viewing-Angle Endoscopic Tool for Skull Base and Brain Surgery

    NASA Technical Reports Server (NTRS)

    Bae, Youngsam; Liao, Anna; Manohara, Harish; Shahinian, Hrayr

    2008-01-01

    The term Multi-Angle and Rear Viewing Endoscopic tooL (MARVEL) denotes an auxiliary endoscope, now undergoing development, that a surgeon would use in conjunction with a conventional endoscope to obtain additional perspective. The role of the MARVEL in endoscopic brain surgery would be similar to the role of a mouth mirror in dentistry. Such a tool is potentially useful for in-situ planetary geology applications for the close-up imaging of unexposed rock surfaces in cracks or those not in the direct line of sight. A conventional endoscope provides mostly a frontal view, that is, a view along its longitudinal axis and, hence, along a straight line extending from the opening through which it is inserted. The MARVEL could be inserted through the same opening as the conventional endoscope, but could be adjusted to provide a view from almost any desired angle. The MARVEL camera image would be displayed on the same monitor as the conventional endoscopic image, as an inset within the conventional endoscopic image. For example, while viewing a tumor from the front in the conventional endoscopic image, the surgeon could simultaneously view the tumor from the side or the rear in the MARVEL image, and could thereby gain additional visual cues that would aid in precise three-dimensional positioning of surgical tools to excise the tumor. Indeed, a side or rear view through the MARVEL could be essential in a case in which the object of surgical interest was not visible from the front. The conceptual design of the MARVEL exploits the surgeon's familiarity with endoscopic surgical tools. The MARVEL would include a miniature electronic camera and miniature radio transmitter mounted on the tip of a surgical tool derived from an endo-scissor (see figure). The inclusion of the radio transmitter would eliminate the need for wires, which could interfere with manipulation of this and other surgical tools.
The handgrip of the tool would be connected to a linkage similar to

  16. Neptune

    NASA Image and Video Library

    1999-07-25

    This image of Neptune was taken through the clear filter of the narrow-angle camera on July 16, 1989 by NASA Voyager 2 spacecraft. The image was processed by computer to show the newly resolved dark oval feature embedded in the middle of the dusky south

  17. Geomorphologic mapping of the lunar crater Tycho and its impact melt deposits

    NASA Astrophysics Data System (ADS)

    Krüger, T.; van der Bogert, C. H.; Hiesinger, H.

    2016-07-01

    Using SELENE/Kaguya Terrain Camera and Lunar Reconnaissance Orbiter Camera (LROC) data, we produced a new, high-resolution (10 m/pixel), geomorphological and impact melt distribution map for the lunar crater Tycho. The distal ejecta blanket and crater rays were investigated using LROC wide-angle camera (WAC) data (100 m/pixel), while the fine-scale morphologies of individual units were documented using high resolution (∼0.5 m/pixel) LROC narrow-angle camera (NAC) frames. In particular, Tycho shows a large coherent melt sheet on the crater floor, melt pools and flows along the terraced walls, and melt pools on the continuous ejecta blanket. The crater floor of Tycho exhibits three distinct units, distinguishable by their elevation and hummocky surface morphology. The distribution of impact melt pools and ejecta, as well as topographic asymmetries, support the formation of Tycho as an oblique impact from the W-SW. The asymmetric ejecta blanket, significantly reduced melt emplacement uprange, and the depressed uprange crater rim at Tycho suggest an impact angle of ∼25-45°.

  18. Surface compositional variation on the comet 67P/Churyumov-Gerasimenko by OSIRIS data

    NASA Astrophysics Data System (ADS)

    Barucci, M. A.; Fornasier, S.; Feller, C.; Perna, D.; Hasselmann, H.; Deshapriya, J. D. P.; Fulchignoni, M.; Besse, S.; Sierks, H.; Forgia, F.; Lazzarin, M.; Pommerol, A.; Oklay, N.; Lara, L.; Scholten, F.; Preusker, F.; Leyrat, C.; Pajola, M.; Osiris-Rosetta Team

    2015-10-01

    Since the Rosetta mission arrived at comet 67P/Churyumov-Gerasimenko (67P/C-G) in July 2014, the comet nucleus has been mapped by both the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System, [1]) NAC (Narrow Angle Camera) and WAC (Wide Angle Camera), acquiring a huge quantity of surface images in different wavelength bands, under variable illumination conditions and spatial resolutions, and producing the most detailed maps of a comet nucleus surface at the highest spatial resolution. 67P/C-G's nucleus shows an irregular bi-lobed shape of complex morphology, with terrains showing intricate features [2, 3] and a surface that is heterogeneous at different scales.

  19. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  20. Optimized fan-shaped chiral metamaterial as an ultrathin narrow-band circular polarizer at visible frequencies

    NASA Astrophysics Data System (ADS)

    He, Yizhuo; Wang, Xinghai; Ingram, Whitney; Ai, Bin; Zhao, Yiping

    2018-04-01

    Chiral metamaterials have a great ability to manipulate the circular polarizations of light, which can be utilized to build ultrathin circular polarizers. Here we build a narrow-band circular polarizer at visible frequencies based on plasmonic fan-shaped chiral nanostructures. In order to achieve the best optical performance, we systematically investigate how different fabrication factors affect the chiral optical response of the fan-shaped chiral nanostructures, including the incident angle of vapor deposition, the nanostructure thickness, and post-deposition annealing. The optimized fan-shaped nanostructures show two narrow bands for different circular polarizations, with maximum extinction ratios of 7.5 and 6.9 located at wavelengths of 687 nm and 774 nm, respectively.

  1. A novel camera localization system for extending three-dimensional digital image correlation

    NASA Astrophysics Data System (ADS)

    Sabato, Alessandro; Reddy, Narasimha; Khan, Sameer; Niezrecki, Christopher

    2018-03-01

    The monitoring of civil, mechanical, and aerospace structures is important, especially as these systems approach or surpass their design life. Often, Structural Health Monitoring (SHM) relies on sensing techniques for condition assessment. Advancements achieved in camera technology and optical sensors have made three-dimensional (3D) Digital Image Correlation (DIC) a valid technique for extracting structural deformations and geometry profiles. Prior to making stereophotogrammetry measurements, a calibration has to be performed to obtain the vision system's extrinsic and intrinsic parameters. That is, the position of the cameras relative to each other (i.e. separation distance, camera angle, etc.) must be determined. Typically, cameras are placed on a rigid bar to prevent any relative motion between the cameras. This constraint limits the utility of the 3D-DIC technique, especially as it is applied to monitor large-sized structures and from various fields of view. In this preliminary study, the design of a multi-sensor system is proposed to extend 3D-DIC's capability and allow for easier calibration and measurement. The suggested system relies on a MEMS-based Inertial Measurement Unit (IMU) and a 77 GHz radar sensor for measuring the orientation and relative distance of the stereo cameras. The feasibility of the proposed combined IMU-radar system is evaluated through laboratory tests, demonstrating its ability to determine the cameras' position in space for performing accurate 3D-DIC calibration and measurements.
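How IMU and radar readings could fix the stereo extrinsics is sketched below; the baseline-direction input is an assumption standing in for the mounting geometry, and all names are illustrative rather than the paper's formulation:

```python
import numpy as np

def stereo_extrinsics(R1, R2, baseline_m, baseline_dir_cam1):
    """Relative pose of camera 2 in camera 1's frame from IMU + radar data.

    R1, R2            -- each camera's orientation (world -> camera), from its IMU
    baseline_m        -- camera separation measured by the radar sensor (m)
    baseline_dir_cam1 -- unit vector toward camera 2, expressed in camera 1's
                         frame (assumed known from the mounting geometry)
    """
    R_rel = R1 @ R2.T            # rotation taking camera-2 axes to camera-1 axes
    t = baseline_m * np.asarray(baseline_dir_cam1, dtype=float)
    return R_rel, t

# Identical orientations, 1.5 m radar-measured baseline along camera 1's x axis:
R, t = stereo_extrinsics(np.eye(3), np.eye(3), 1.5, [1.0, 0.0, 0.0])
```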

  2. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
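For the idealized case of a vertical pan axis with no lens offsets, the pointing solution reduces to two arctangents; a simplified sketch, not the paper's full closed-form solution with non-parallel frames and a decentered optical axis:

```python
import numpy as np

def pan_tilt_to_target(target, cam_origin):
    """Closed-form pan/tilt angles (deg) that center a 3-D target.

    Simplifying assumptions: the pan axis is vertical (z), the tilt axis is
    horizontal, and the optical axis passes through the pan/tilt intersection.
    """
    d = np.asarray(target, dtype=float) - np.asarray(cam_origin, dtype=float)
    pan = np.degrees(np.arctan2(d[1], d[0]))
    tilt = np.degrees(np.arctan2(d[2], np.hypot(d[0], d[1])))
    return pan, tilt

# Target 45 deg to the left and level with the mast head:
pan, tilt = pan_tilt_to_target([1.0, 1.0, 0.0], [0.0, 0.0, 0.0])
```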

  3. Astronaut John Young in shadow of Lunar Module behind ultraviolet camera

    NASA Image and Video Library

    1972-04-22

    AS16-114-18439 (22 April 1972) --- Astronaut Charles M. Duke Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, commander, during the mission's second extravehicular activity (EVA). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.

  4. Measurement of the surface wavelength distribution of narrow-band radiation by a colorimetric method

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kraiskii, A V; Mironova, T V; Sultanov, T T

    2010-09-10

    A method is suggested for determining the wavelength of narrow-band light from a digital photograph of a radiating surface. The digital camera used should be appropriately calibrated. The accuracy of the wavelength measurement is better than 1 nm. The method was tested on the yellow doublet of the mercury spectrum and on the adjacent continuum of the incandescent lamp radiation spectrum. By means of the suggested method, the homogeneity of holographic sensor swelling was studied in stationary and transient cases. (laser applications and other topics in quantum electronics)
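The calibrated lookup can be sketched as a monotonic interpolation; the calibration curve below is invented for illustration, while the real one would come from the camera calibration described above:

```python
import numpy as np

def wavelength_from_ratio(ratio, calib_ratios, calib_wavelengths):
    """Look up a narrow-band wavelength from a color-channel ratio.

    calib_ratios / calib_wavelengths form a monotonic calibration curve
    measured with known sources; the query is linearly interpolated.
    """
    return float(np.interp(ratio, calib_ratios, calib_wavelengths))

# Illustrative calibration: red/green ratio rises with wavelength
ratios = [0.2, 0.5, 1.0, 2.0]
waves = [540.0, 560.0, 580.0, 600.0]
wl = wavelength_from_ratio(0.75, ratios, waves)
```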

  5. Wide-Field Optic for Autonomous Acquisition of Laser Link

    NASA Technical Reports Server (NTRS)

    Page, Norman A.; Charles, Jeffrey R.; Biswas, Abhijit

    2011-01-01

    An innovation reported in Two-Camera Acquisition and Tracking of a Flying Target, NASA Tech Briefs, Vol. 32, No. 8 (August 2008), p. 20, used a commercial fish-eye lens and an electronic imaging camera for initially locating objects, with subsequent handover to an actuated narrow-field camera. But this operated against a dark-sky background. An improved solution involves an optical design based on custom optical components for the wide-field optical system that directly addresses the key limitations in acquiring a laser signal from a moving source such as an aircraft or a spacecraft. The first challenge was to increase the light collection entrance aperture diameter, which was approximately 1 mm in the first prototype. The new design presented here increases this entrance aperture diameter to 4.2 mm, which is equivalent to a more than 16 times larger collection area. One of the trades made in realizing this improvement was to restrict the field of view to +80 deg. elevation and 360 deg. azimuth. This trade stems from practical considerations: laser beam propagation over the excessively high air mass in the line of sight (LOS) at low elevation angles results in vulnerability to severe atmospheric turbulence and attenuation. An additional benefit of the new design is that the large entrance aperture is maintained even at large off-axis angles when the optic is pointed at zenith. The second critical limitation, implementing spectral filtering in the design, was tackled by collimating the light prior to focusing it onto the focal plane. This allows the placement of the narrow spectral filter in the collimated portion of the beam. For the narrow-band spectral filter to function properly, it is necessary to adequately control the range of incident angles at which received light intercepts the filter. When this angle is restricted via collimation, narrower spectral filtering can be implemented.
The collimated beam (and the filter) must be relatively large to

  6. An ordinary camera in an extraordinary location: Outreach with the Mars Webcam

    NASA Astrophysics Data System (ADS)

    Ormston, T.; Denis, M.; Scuka, D.; Griebel, H.

    2011-09-01

    The European Space Agency's Mars Express mission was launched in 2003 and was Europe's first mission to Mars. On-board was a small camera designed to provide ‘visual telemetry’ of the separation of the Beagle-2 lander. After achieving its goal it was shut down while the primary science mission of Mars Express got underway. In 2007 this camera was reactivated by the flight control team of Mars Express for the purpose of providing public education and outreach—turning it into the ‘Mars Webcam’. The camera is a small, 640×480 pixel colour CMOS camera with a wide-angle 30°×40° field of view. This makes it very similar in almost every way to the average home PC webcam. The major difference is that this webcam is not in an average location but is instead in orbit around Mars. On a strict basis of non-interference with the primary science activities, the camera is turned on to provide unique wide-angle views of the planet below. A highly automated process ensures that the observations are scheduled on the spacecraft and then uploaded to the internet as rapidly as possible. There is no intermediate stage, so that visitors to the Mars Webcam blog serve as ‘citizen scientists’. Full raw datasets and processing instructions are provided along with a mechanism to allow visitors to comment on the blog. Members of the public are encouraged to use this in either a personal or an educational context and work with the images. We then take their excellent work and showcase it back on the blog. We even apply techniques developed by them to improve the data and webcam experience for others. The accessibility and simplicity of the images also makes the data ideal for educational use, especially as educational projects can then be showcased on the site as inspiration for others. The oft-neglected target audience of space enthusiasts is also important as this allows them to participate as part of an interplanetary instrument team. This paper will cover the history of the

  7. Development of compact Compton camera for 3D image reconstruction of radioactive contamination

    NASA Astrophysics Data System (ADS)

    Sato, Y.; Terasaka, Y.; Ozawa, S.; Nakamura Miyamura, H.; Kaburagi, M.; Tanifuji, Y.; Kawabata, K.; Torii, T.

    2017-11-01

    The Fukushima Daiichi Nuclear Power Station (FDNPS), operated by Tokyo Electric Power Company Holdings, Inc., went into meltdown after the large tsunami caused by the Great East Japan Earthquake of March 11, 2011. Very large amounts of radionuclides were released from the damaged plant. Radiation distribution measurements inside FDNPS buildings are indispensable for executing decommissioning tasks in the reactor buildings. We have developed a compact Compton camera to measure the distribution of radioactive contamination inside the FDNPS buildings in three dimensions (3D). The total weight of the Compton camera is less than 1.0 kg. The gamma-ray sensor of the Compton camera employs Ce-doped GAGG (Gd3Al2Ga3O12) scintillators coupled with a multi-pixel photon counter. Angular correction of the detection efficiency of the Compton camera was conducted. Moreover, we developed a 3D back-projection method using the multi-angle data measured with the Compton camera. We successfully observed 3D radiation images of the two 137Cs radioactive sources, and the image of the 9.2 MBq source appeared stronger than that of the 2.7 MBq source.

  8. Feasibility study for the application of the large format camera as a payload for the Orbiter program

    NASA Technical Reports Server (NTRS)

    1978-01-01

    The large format camera (LFC) designed as a 30 cm focal length cartographic camera system that employs forward motion compensation in order to achieve the full image resolution provided by its 80 degree field angle lens is described. The feasibility of application of the current LFC design to deployment in the orbiter program as the Orbiter Camera Payload System was assessed and the changes that are necessary to meet such a requirement are discussed. Current design and any proposed design changes were evaluated relative to possible future deployment of the LFC on a free flyer vehicle or in a WB-57F. Preliminary mission interface requirements for the LFC are given.

  9. Variance-reduction normalization technique for a compton camera system

    NASA Astrophysics Data System (ADS)

    Kim, S. M.; Lee, J. S.; Kim, J. H.; Seo, H.; Kim, C. H.; Lee, C. S.; Lee, S. J.; Lee, M. C.; Lee, D. S.

    2011-01-01

    For an artifact-free dataset, pre-processing (known as normalization) is needed to correct the inherent non-uniformity of detection in a Compton camera, which consists of scattering and absorbing detectors. The overall detection efficiency depends on the non-uniform detection efficiencies of the scattering and absorbing detectors, the different incidence angles onto the detector surfaces, and the geometry of the two detectors. The correction factor for each detected position pair, referred to as the normalization coefficient, is expressed as a product of factors representing the various variations. The variance-reduction technique (VRT), a normalization method, was studied for a Compton camera. For the VRT, Compton list-mode data of a planar uniform source at 140 keV were generated with the GATE simulation tool. The projection data of a cylindrical software phantom were normalized with normalization coefficients determined from the non-uniformity map, and then reconstructed by an ordered-subset expectation-maximization algorithm. The coefficients of variation and percent errors of the 3-D reconstructed images showed that the VRT applied to the Compton camera provides enhanced image quality and an increased recovery rate of uniformity in the reconstructed image.
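    The factorized normalization can be sketched as follows; the efficiency factors, array sizes, and flat geometry term below are assumptions for illustration, not the paper's actual model:

```python
import numpy as np

# Normalization coefficient for each (scatterer pixel, absorber pixel)
# pair modeled as a product of factors; all numbers here are invented.
rng = np.random.default_rng(1)
eff_scat = rng.uniform(0.8, 1.2, 16)       # scatterer pixel efficiencies
eff_abs = rng.uniform(0.8, 1.2, 16)        # absorber pixel efficiencies
geom = np.ones((16, 16))                   # geometric factor (flat here)
norm = eff_scat[:, None] * eff_abs[None, :] * geom

# Counts from a uniform source inherit the non-uniformity; dividing by
# the normalization coefficients removes it down to Poisson noise.
counts = rng.poisson(1000.0 * norm).astype(float)
corrected = counts / norm

cv = lambda a: a.std() / a.mean()          # coefficient of variation
print(round(cv(counts), 3), round(cv(corrected), 3))
```

    The drop in the coefficient of variation after division mirrors the recovery of uniformity reported for the VRT.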

  10. Theodolite with CCD Camera for Safe Measurement of Laser-Beam Pointing

    NASA Technical Reports Server (NTRS)

    Crooke, Julie A.

    2003-01-01

    The simple addition of a charge-coupled-device (CCD) camera to a theodolite makes it safe to measure the pointing direction of a laser beam. The present state of the art requires this to be a custom addition because theodolites are manufactured without CCD cameras as standard or even optional equipment. A theodolite is an alignment telescope equipped with mechanisms to measure the azimuth and elevation angles to the sub-arcsecond level. When measuring the angular pointing direction of a Class II laser with a theodolite, one could place a calculated amount of neutral-density (ND) filters in front of the theodolite's telescope. One could then safely view and measure the laser's boresight looking through the theodolite's telescope without great risk to one's eyes. This method for a Class II visible-wavelength laser is not acceptable to even consider attempting for a Class IV laser, and is not applicable for an infrared (IR) laser. If one chooses insufficient attenuation or forgets to use the filters, then looking at the laser beam through the theodolite could cause instant blindness. The CCD camera is already commercially available. It is a small, inexpensive, black-and-white CCD circuit-board-level camera. An interface adaptor was designed and fabricated to mount the camera onto the eyepiece of the specific theodolite's viewing telescope. Other equipment needed for operation of the camera are power supplies, cables, and a black-and-white television monitor. The picture displayed on the monitor is equivalent to what one would see when looking directly through the theodolite. Again, the additional advantage afforded by a cheap black-and-white CCD camera is that it is sensitive to infrared as well as to visible light. Hence, one can use the camera coupled to a theodolite to measure the pointing of an infrared as well as a visible laser.

  11. An effective rectification method for lenselet-based plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Jin, Jing; Cao, Yiwei; Cai, Weijia; Zheng, Wanlu; Zhou, Ping

    2016-10-01

    The lenselet-based plenoptic camera has recently drawn considerable attention in the field of computational photography. The additional information inherent in the light field allows a wide range of applications, but some preliminary processing of the raw image is necessary before further operations. In this paper, an effective method is presented for the rotation rectification of the raw image. The rotation is caused by the imperfect positioning of the micro-lens array relative to the sensor plane in commercially available Lytro plenoptic cameras. The key to our method is locating the center of each micro-lens image, which is projected by a micro-lens. Because of vignetting, the pixel values at the centers of the micro-lens images are higher than those at the peripheries. A mask is applied to probe the micro-lens image to locate the center area by finding the local maximum response. The error of the center-coordinate estimate is corrected, and the angle of rotation is computed via a subsequent line fitting. The algorithm is performed on two images captured by different Lytro cameras. The angles of rotation are -0.3600° and -0.0621° respectively, and the rectified raw image is useful and reliable for further operations, such as extraction of the sub-aperture images. The experimental results demonstrate that our method is efficient and accurate.
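    The line-fit step that recovers the rotation angle can be sketched on synthetic micro-lens centers; the center spacing, offset, and noise level below are assumed for illustration:

```python
import numpy as np

# Synthetic row of micro-lens image centers on a grid rotated by a
# small angle; a line fit through the centers recovers the rotation.
true_deg = -0.36                            # e.g. the -0.3600 deg case
xs = np.arange(0.0, 3000.0, 15.0)           # assumed center spacing (px)
ys = np.tan(np.radians(true_deg)) * xs + 200.0
ys += np.random.default_rng(0).normal(0.0, 0.05, xs.size)  # locate noise

slope = np.polyfit(xs, ys, 1)[0]            # least-squares line fit
est_deg = np.degrees(np.arctan(slope))
print(round(est_deg, 2))                    # -> -0.36
```

    With ~200 centers spanning the sensor, even sub-pixel noise in center location leaves the fitted angle accurate to well under a hundredth of a degree.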

  12. Calibration of the VENµS super-spectral camera

    NASA Astrophysics Data System (ADS)

    Topaz, Jeremy; Sprecher, Tuvia; Tinto, Francesc; Echeto, Pierre; Hagolle, Olivier

    2017-11-01

    A high-resolution super-spectral camera is being developed by Elbit Systems in Israel for the joint CNES-Israel Space Agency satellite VENμS (Vegetation and Environment monitoring on a New Micro-Satellite). This camera will have 12 narrow spectral bands in the visible/NIR region and will give images with 5.3 m resolution from an altitude of 720 km, with an orbit that allows a two-day revisit interval for a number of selected sites distributed over some two-thirds of the earth's surface. The swath width will be 27 km at this altitude. To ensure the high radiometric and geometric accuracy needed to fully exploit such multiple data sampling, careful attention is given in the design to maximizing characteristics such as signal-to-noise ratio (SNR), spectral band accuracy, stray light rejection, inter-band pixel-to-pixel registration, etc. For the same reasons, accurate calibration of all the principal characteristics is essential, and this presents some major challenges. The methods planned to achieve the required level of calibration are presented following a brief description of the system design. A fuller description of the system design is given in [2], [3] and [4].

  13. 3. Elevation view of entire midsection using ultrawide angle lens. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. Elevation view of entire midsection using ultrawide angle lens. Note opened south doors and closed north doors. The following photo WA-203-C-4 is similar except the camera position was moved right to include the slope of the south end. - Puget Sound Naval Shipyard, Munitions Storage Bunker, Naval Ammunitions Depot, South of Campbell Trail, Bremerton, Kitsap County, WA

  14. Effects of Implantable Collamer Lens V4c Placement on Iridocorneal Angle Measurements by Fourier-Domain Optical Coherence Tomography.

    PubMed

    Fernández-Vigo, José Ignacio; Macarro-Merino, Ana; Fernández-Vigo, Cristina; Fernández-Vigo, José Ángel; Martínez-de-la-Casa, José María; Fernández-Pérez, Cristina; García-Feijóo, Julián

    2016-02-01

    To assess by Fourier-domain optical coherence tomography (FDOCT) the changes produced in iridocorneal angle measurements in patients undergoing Visian Implantable Collamer Lens (ICL) V4c (STAAR Surgical AG) placement. Prospective interventional case series. In 50 eyes of 25 myopic subjects consecutively scheduled for ICL implant, FDOCT (RTVue; Optovue Inc) iridocorneal angle measurements were made before and 1 and 3 months after surgery. Trabecular-iris angle (TIA) and angle opening distance 500 μm anterior to the scleral spur (AOD500) were compared among the nasal, temporal, and inferior quadrants, and correlations with ocular variables including lens vault were examined. Preoperative TIA was 48.7 ± 8.7, 48.2 ± 8.7, and 48.7 ± 9.3 degrees for the nasal, temporal, and inferior quadrants, with no differences (P = 1.000). Following ICL implant, corresponding values fell to 31.2 ± 11.5, 30.0 ± 10.7, and 29.7 ± 8.1 degrees at 1 month postsurgery, indicating angle narrowing of 34%-42%, and to 30.6 ± 12.3, 30.1 ± 11.9, and 29.8 ± 12.3 degrees, respectively, at 3 months postsurgery. Angle measurements did not differ between 1 month and 3 months postsurgery (P = .481). In 8 eyes, iridotrabecular contact attributable to surgery was observed. One month after surgery, vault measurements correlated with TIA (R = -.309; P = .048). Six variables were identified as predictors of TIA at 1 month postsurgery (R² = .907). Although considerable angle narrowing was detected 1 month after ICL V4c implant, this narrowing remained stable at 3 months postsurgery. Factors predictive of TIA could serve to identify suitable candidates for ICL placement. Copyright © 2016 Elsevier Inc. All rights reserved.

  15. Image quality testing of assembled IR camera modules

    NASA Astrophysics Data System (ADS)

    Winters, Daniel; Erichsen, Patrik

    2013-10-01

    Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors and readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and the proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle, etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces, and, last but not least, the suitability for fully automated measurements in mass production.

  16. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  17. Steering Dynamics of Tilting Narrow Track Vehicle with Passive Front Wheel Design

    NASA Astrophysics Data System (ADS)

    TAN, Jeffrey Too Chuan; ARAKAWA, Hiroki; SUDA, Yoshihiro

    2016-09-01

    In recent years, the narrow track vehicle has emerged as a potential candidate for the next generation of urban transportation, being greener and more space-efficient. Vehicle body tilting has been a symbolic characteristic of such vehicles, with the purpose of maintaining stability with the narrow track body. However, the coordination between active steering and vehicle tilting requires considerable driving skill to achieve effective stability. In this work, we propose an alternative steering method with a passive front wheel that mechanically follows the vehicle body tilting. The objective of this paper is to investigate the steering dynamics of the vehicle under various design parameters of the passive front wheel. Modeling of a three-wheel tilting narrow track vehicle and multibody dynamics simulations were conducted to study the effects of two important front wheel design parameters, i.e., caster angle and trail, on the vehicle's steering dynamics: steering response time, turning radius, steering stability, and resilience to external disturbance. From the results of the simulation studies, we have verified the relationships of these two front wheel design parameters to the vehicle's steering dynamics.

  18. Alpha and Omega

    NASA Image and Video Library

    2017-11-27

    These two images illustrate just how far Cassini traveled to get to Saturn. On the left is one of the earliest images Cassini took of the ringed planet, captured during the long voyage from the inner solar system. On the right is one of Cassini's final images of Saturn, showing the site where the spacecraft would enter the atmosphere on the following day. In the left image, taken in 2001, about six months after the spacecraft passed Jupiter for a gravity assist flyby, the best view of Saturn using the spacecraft's high-resolution (narrow-angle) camera was on the order of what could be seen using the Earth-orbiting Hubble Space Telescope. At the end of the mission (at right), from close to Saturn, even the lower resolution (wide-angle) camera could capture just a tiny part of the planet. The left image looks toward Saturn from 20 degrees below the ring plane and was taken on July 13, 2001 in wavelengths of infrared light centered at 727 nanometers using the Cassini spacecraft narrow-angle camera. The view at right is centered on a point 6 degrees north of the equator and was taken in visible light using the wide-angle camera on Sept. 14, 2017. The view on the left was acquired at a distance of approximately 317 million miles (510 million kilometers) from Saturn. Image scale is about 1,900 miles (3,100 kilometers) per pixel. The view at right was acquired at a distance of approximately 360,000 miles (579,000 kilometers) from Saturn. Image scale is 22 miles (35 kilometers) per pixel. The Cassini spacecraft ended its mission on Sept. 15, 2017. https://photojournal.jpl.nasa.gov/catalog/PIA21353
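    The quoted image scales are consistent with scale = distance × per-pixel angular resolution (IFOV); the IFOV values below are rough assumptions for illustration, not official instrument specifications:

```python
# Image scale = distance x per-pixel angular resolution (IFOV).
# The IFOV values are assumed approximations for illustration only.
NAC_IFOV = 6e-6    # narrow-angle camera, rad/pixel (assumed)
WAC_IFOV = 60e-6   # wide-angle camera, rad/pixel (assumed)

nac_scale = 510_000_000 * NAC_IFOV   # km/pixel at 510 million km
wac_scale = 579_000 * WAC_IFOV       # km/pixel at 579,000 km
print(round(nac_scale), round(wac_scale))  # -> 3060 35
```

    Both results land close to the ~3,100 and 35 km/pixel figures quoted in the caption.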

  19. Normal Q-angle in an adult Nigerian population.

    PubMed

    Omololu, Bade B; Ogunlade, Olusegun S; Gopaldasani, Vinod K

    2009-08-01

    The Q-angle has been studied among the adult Caucasian population with the establishment of reference values. Scientists are beginning to accept the concept of different human races. Physical variability exists between various African ethnic groups and Caucasians as exemplified by differences in anatomic features such as a flat nose compared with a pointed nose, wide rather than narrow faces, and straight rather than curly hair. Therefore, we cannot assume the same Q-angle values will be applicable to Africans and Caucasians. We established a baseline reference value for normal Q-angles among asymptomatic Nigerian adults. The Q-angles of the left and right knees were measured using a goniometer in 477 Nigerian adults (354 males; 123 females) in the supine and standing positions. The mean Q-angles for men were 10.7 degrees +/- 2.2 degrees in the supine position and 12.3 degrees +/- 2.2 degrees in the standing position in the right knee. The left knee Q-angles in men were 10.5 degrees +/- 2.6 degrees in the supine position and 11.7 degrees +/- 2.8 degrees in the standing position. In women, the mean Q-angles for the right knee were 21 degrees +/- 4.8 degrees in the supine position and 22.8 degrees +/- 4.7 degrees in the standing position. The mean Q-angles for the left knee in women were 20.9 degrees +/- 4.6 degrees in the supine position and 22.7 degrees +/- 4.6 degrees in the standing position. We observed a difference in Q-angles in the supine and standing positions for all participants. The Q-angle in adult Nigerian men is comparable to that of adult Caucasian men, but the Q-angle of Nigerian women is greater than that of their Caucasian counterparts.

  20. Tunable polarization plasma channel undulator for narrow bandwidth photon emission

    DOE PAGES

    Rykovanov, S. G.; Wang, J. W.; Kharin, V. Yu.; ...

    2016-09-09

    The theory of a plasma undulator excited by a short intense laser pulse in a parabolic plasma channel is presented. The undulator fields are generated by the laser pulse being incident off-axis and/or at an angle with respect to the channel axis. Linear plasma theory is used to derive the wakefield structure. It is shown that electrons injected into the plasma wakefields experience betatron motion and undulator oscillations. Optimal electron beam injection conditions are derived for minimizing the amplitude of the betatron motion, producing narrow-bandwidth undulator radiation. Polarization control is readily achieved by varying the laser pulse injection conditions.

  1. Retrieving Atmospheric Dust Loading on Mars Using Engineering Cameras and MSL's Mars Hand Lens Imager (MAHLI)

    NASA Astrophysics Data System (ADS)

    Wolfe, C. A.; Lemmon, M. T.

    2015-12-01

    Dust in the Martian atmosphere influences energy deposition, dynamics, and the viability of solar powered exploration vehicles. The Viking, Pathfinder, Spirit, Opportunity, Phoenix, and Curiosity landers and rovers each included the ability to image the Sun with a science camera equipped with a neutral density filter. Direct images of the Sun not only provide the ability to measure extinction by dust and ice in the atmosphere, but also provide a variety of constraints on the Martian dust and water cycles. These observations have been used to characterize dust storms, to provide ground truth sites for orbiter-based global measurements of dust loading, and to help monitor solar panel performance. In the cost-constrained environment of Mars exploration, future missions may omit such cameras, as the solar-powered InSight mission has. We seek to provide a robust capability of determining atmospheric opacity from sky images taken with cameras that have not been designed for solar imaging, such as the engineering cameras onboard Opportunity and the Mars Hand Lens Imager (MAHLI) on Curiosity. Our investigation focuses primarily on the accuracy of a method that determines optical depth values using scattering models that implement the ratio of sky radiance measurements at different elevation angles, but at the same scattering angle. Operational use requires the ability to retrieve optical depth on a timescale useful to mission planning, and with an accuracy and precision sufficient to support both mission planning and validating orbital measurements. We will present a simulation-based assessment of imaging strategies and their error budgets, as well as a validation based on the comparison of direct extinction measurements from archival Navcam, Hazcam, and MAHLI camera data.
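    For the direct-Sun extinction measurements used as ground truth, the retrieval reduces to Beer-Lambert attenuation along the slant path; a minimal sketch, assuming the simple plane-parallel airmass m = 1/cos(zenith):

```python
import math

def optical_depth(measured, top_of_atmosphere, zenith_deg):
    """Beer-Lambert: tau = -ln(I / I0) / m, with airmass m = 1/cos(z)."""
    m = 1.0 / math.cos(math.radians(zenith_deg))
    return -math.log(measured / top_of_atmosphere) / m

# Example: Sun at 40 deg zenith, signal attenuated to 30% of I0.
print(round(optical_depth(0.30, 1.0, 40.0), 3))  # -> 0.922
```

    The sky-radiance-ratio method studied in the paper replaces I0 with a second sky measurement at a different elevation but the same scattering angle, removing the need for an absolute solar reference.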

  2. Camera characterization for all-sky polarization measurements during the 2017 solar eclipse

    NASA Astrophysics Data System (ADS)

    Hashimoto, Taiga; Dahl, Laura M.; Laurie, Seth A.; Shaw, Joseph A.

    2017-08-01

    A solar eclipse provides a rare opportunity to observe skylight polarization during conditions that are fundamentally different than what we see every day. On 21 August 2017 we will measure the skylight polarization during a total solar eclipse in Rexburg, Idaho, USA. Previous research has shown that during totality the sky polarization pattern is altered significantly to become nominally symmetric about the zenith. However, there are still questions remaining about the details of how surface reflectance near the eclipse observation site and optical properties of aerosols in the atmosphere influence the totality sky polarization pattern. We will study how skylight polarization in a solar eclipse changes through each phase and how surface and atmospheric features affect the measured polarization signatures. To accomplish this, fully characterizing the cameras and fisheye lenses is critical. This paper reports measurements that include finding the camera sensitivity and its relationship to the required short exposure times, measuring the camera's spectral response function, mapping the angles of each camera pixel with the fisheye lens, and taking test measurements during daytime and twilight conditions. The daytime polarimetric images were compared to images from an existing all-sky polarization imager and a polarimetric radiative transfer model.

  3. Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

    PubMed

    Lee, Sing Chun; Fuerst, Bernhard; Fotouhi, Javad; Fischer, Marius; Osgood, Greg; Navab, Nassir

    2016-06-01

    This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient's surface without the need to move the C-arm. An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
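    The rigid-transform recovery at the heart of the Iterative Closest Point step can be illustrated with the closed-form Kabsch/SVD solution on synthetic correspondences; this is a generic sketch, not the authors' calibration pipeline:

```python
import numpy as np

# Two point sets related by an unknown rigid transform (toy stand-ins
# for matched CBCT-surface and RGBD-surface points).
rng = np.random.default_rng(2)
P = rng.normal(size=(50, 3))
ang = np.radians(10.0)
R_true = np.array([[np.cos(ang), -np.sin(ang), 0.0],
                   [np.sin(ang),  np.cos(ang), 0.0],
                   [0.0,          0.0,         1.0]])
t_true = np.array([0.1, -0.2, 0.3])
Q = P @ R_true.T + t_true

# Kabsch: center both sets, SVD the cross-covariance, rebuild R and t.
Pc, Qc = P - P.mean(0), Q - Q.mean(0)
U, _, Vt = np.linalg.svd(Pc.T @ Qc)
S = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # no reflection
R = Vt.T @ S @ U.T
t = Q.mean(0) - R @ P.mean(0)

print(np.allclose(R, R_true), np.allclose(t, t_true))  # -> True True
```

    ICP iterates this closed-form solve with re-estimated correspondences; the FPFH descriptors mentioned above supply the initial matches.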

  4. Kinematics of a vertical axis wind turbine with a variable pitch angle

    NASA Astrophysics Data System (ADS)

    Jakubowski, Mateusz; Starosta, Roman; Fritzkowski, Pawel

    2018-01-01

    A computational model for the kinematics of a vertical axis wind turbine (VAWT) is presented. An H-type rotor turbine with a controlled pitch angle is considered. The aim of this solution is to improve the VAWT's productivity. The method discussed belongs to a computational branch based on Blade Element Momentum (BEM) theory. The paper can be regarded as a theoretical basis and an introduction to further studies applying BEM. The obtained torque values show the main advantage of using a variable pitch angle.
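    The kinematic motivation for variable pitch can be illustrated with the standard VAWT angle-of-attack relation α = atan(sin θ / (λ + cos θ)) − pitch; the tip-speed ratio and schedule values in this sketch are assumed for illustration:

```python
import math

# Local angle of attack vs. azimuth theta for tip-speed ratio lam,
# reduced by a commanded pitch angle (all angles in degrees).
def angle_of_attack(theta_deg, lam, pitch_deg):
    th = math.radians(theta_deg)
    alpha = math.atan2(math.sin(th), lam + math.cos(th))
    return math.degrees(alpha) - pitch_deg

# With no pitching at lam = 3 the angle of attack peaks near 19.5 deg,
# well past static stall for many airfoils; a pitch schedule can shave
# that peak and keep the blade attached.
peak = max(angle_of_attack(t, 3.0, 0.0) for t in range(360))
print(round(peak, 2))  # -> 19.47
```

    The analytic maximum, atan(1 / sqrt(λ² − 1)), confirms the ~19.47° peak at λ = 3.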

  5. Is Perceptual Narrowing Too Narrow?

    ERIC Educational Resources Information Center

    Cashon, Cara H.; Denicola, Christopher A.

    2011-01-01

    There is a growing list of examples illustrating that infants are transitioning from having earlier abilities that appear more "universal," "broadly tuned," or "unconstrained" to having later abilities that appear more "specialized," "narrowly tuned," or "constrained." Perceptual narrowing, a well-known phenomenon related to face, speech, and…

  6. Stereoscopic determination of all-sky altitude map of aurora using two ground-based Nikon DSLR cameras

    NASA Astrophysics Data System (ADS)

    Kataoka, R.; Miyoshi, Y.; Shigematsu, K.; Hampton, D.; Mori, Y.; Kubo, T.; Yamashita, A.; Tanaka, M.; Takahei, T.; Nakai, T.; Miyahara, H.; Shiokawa, K.

    2013-09-01

    A new stereoscopic measurement technique is developed to obtain an all-sky altitude map of aurora using two ground-based digital single-lens reflex (DSLR) cameras. Two identical full-color all-sky cameras were set up with an 8 km separation across the Chatanika area in Alaska (Poker Flat Research Range and Aurora Borealis Lodge) to find the localized emission height by maximizing the correlation of the apparent patterns in localized pixels after a geographical coordinate transform. It is found that a typical ray structure of discrete aurora shows a broad altitude distribution above 100 km, while a typical patchy structure of pulsating aurora shows a narrow altitude distribution below 100 km. Because of the portability and low cost of the DSLR camera systems, the new technique may open a unique opportunity not only for scientists but also for night-sky photographers to contribute to auroral science, potentially forming a dense observation network.
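    The height determination can be illustrated with a simplified two-site triangulation, assuming (for illustration only) that both sites and the auroral feature lie in one vertical plane:

```python
import math

# Two sites separated by baseline B (km) view the same feature at
# elevation angles e1 and e2; the height follows from
# h = B / (cot(e1) - cot(e2)).
def emission_height(baseline_km, elev1_deg, elev2_deg):
    cot = lambda d: 1.0 / math.tan(math.radians(d))
    return baseline_km / (cot(elev1_deg) - cot(elev2_deg))

# Feature at 100 km altitude, 60 km downrange of site 1 and 52 km of
# site 2 (8 km baseline): elevations atan(100/60) and atan(100/52).
e1 = math.degrees(math.atan2(100.0, 60.0))
e2 = math.degrees(math.atan2(100.0, 52.0))
print(round(emission_height(8.0, e1, e2), 1))  # -> 100.0
```

    The actual technique generalizes this to full all-sky geometry by correlating patterns across candidate altitudes rather than matching a single feature by hand.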

  7. Rosetta/OSIRIS - Nucleus morphology and activity of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Sierks, Holger; Barbieri, Cesare; Lamy, Philippe; Rickman, Hans; Rodrigo, Rafael; Koschny, Detlef

    2015-04-01

    ESA's Rosetta mission arrived at its target comet, 67P/Churyumov-Gerasimenko, on August 6, 2014, after 10 years of cruise. OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) is the scientific imaging system onboard Rosetta. It comprises a Narrow Angle Camera (NAC) for nucleus surface and dust studies and a Wide Angle Camera (WAC) for wide-field coma investigations. OSIRIS imaged the nucleus and coma of the comet from arrival throughout the mapping phase, PHILAE landing, early escort phase, and close fly-by. This overview paper discusses the surface morphology and activity of the nucleus as seen in gas, dust, and local jets, as well as small-scale structures in the local topography.

  8. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir camera and four inclined cameras, are becoming more and more popular, with the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras themselves. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern have been determined by Pix4Dmapper and have been independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values in the Student test (T-test) of the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial symmetric distortions for the inclined cameras, with a size exceeding 5 μm, even though mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding
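    The radially symmetric distortion discussed above is commonly modeled with even-order polynomial terms about the principal point. A minimal sketch of such a generic Brown-type model follows; the coefficients here are hypothetical, not IGI's calibration values:

```python
def radial_correction(x, y, k1, k2):
    """Radially symmetric distortion about the principal point.

    x, y: image coordinates relative to the principal point (e.g. mm);
    k1, k2: radial coefficients (hypothetical, for illustration only).
    Returns the coordinates scaled by 1 + k1*r^2 + k2*r^4.
    """
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * factor, y * factor
```

    In a self-calibrating bundle adjustment, such coefficients enter as additional parameters; the T-test mentioned above gauges whether each estimated coefficient is significantly different from zero.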

  9. Towards fish-eye camera based in-home activity assessment.

    PubMed

    Bas, Erhan; Erdogmus, Deniz; Ozertem, Umut; Pavel, Misha

    2008-01-01

    Indoors localization, activity classification, and behavioral modeling are increasingly important for surveillance applications including independent living and remote health monitoring. In this paper, we study the suitability of fish-eye cameras (high-resolution CCD sensors with very-wide-angle lenses) for the purpose of monitoring people in indoors environments. The results indicate that these sensors are very useful for automatic activity monitoring and people tracking. We identify practical and mathematical problems related to information extraction from these video sequences and identify future directions to solve these issues.

  10. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    NASA Astrophysics Data System (ADS)

    Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.

    2010-11-01

    We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype Frequency-Domain Streak Camera, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes Frequency-Domain Streak Camera by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency Domain Tomography system with a single spectrometer.

  11. Narrow-Band Organic Photodiodes for High-Resolution Imaging.

    PubMed

    Han, Moon Gyu; Park, Kyung-Bae; Bulliard, Xavier; Lee, Gae Hwang; Yun, Sungyoung; Leem, Dong-Seok; Heo, Chul-Joon; Yagi, Tadao; Sakurai, Rie; Ro, Takkyun; Lim, Seon-Jeong; Sul, Sangchul; Na, Kyoungwon; Ahn, Jungchak; Jin, Yong Wan; Lee, Sangyoon

    2016-10-05

    There are growing opportunities and demands for image sensors that produce higher-resolution images, even in low-light conditions. Increasing the light input area through a 3D architecture within the same pixel size can be an effective solution to address this issue. Organic photodiodes (OPDs) that possess wavelength selectivity can allow for advancements in this regard. Here, we report on novel push-pull D-π-A dyes specially designed for Gaussian-shaped, narrow-band absorption and high photoelectric conversion. These p-type organic dyes work both as a color filter and as a source of photocurrents with linear and fast light responses, high sensitivity, and excellent stability when combined with C60 to form bulk heterojunctions (BHJs). The effectiveness of the OPD incorporating the active color filter was demonstrated by obtaining a full-color image using a camera that contained an organic/Si hybrid complementary metal-oxide-semiconductor (CMOS) color image sensor.

  12. Keyboard before Head Tracking Depresses User Success in Remote Camera Control

    NASA Astrophysics Data System (ADS)

    Zhu, Dingyun; Gedeon, Tom; Taylor, Ken

    In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving the camera control either to automatic control or requiring the operator to switch between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a Pan-Tilt-Zoom (PTZ) camera. The camera was controlled either via a keyboard or via head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that head motion control provided performance comparable to the keyboard, while head flicking was significantly worse. In addition, the sequence in which the three control methods were used is highly significant. It appears that using the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data collected confirms that the worst-performing method was disliked by participants. Surprisingly, use of that worst method as the first control method significantly enhanced performance with the other two control methods.

  13. Addressing challenges of modulation transfer function measurement with fisheye lens cameras

    NASA Astrophysics Data System (ADS)

    Deegan, Brian M.; Denny, Patrick E.; Zlokolica, Vladimir; Dever, Barry; Russell, Laura

    2015-03-01

    Modulation transfer function (MTF) is a well-defined and accepted measure of image sharpness. The slanted-edge test, as defined in ISO 12233, is a standard method of calculating MTF and is widely used for lens alignment and auto-focus algorithm verification. However, a number of challenges should be considered when measuring MTF in cameras with fisheye lenses. Due to trade-offs related to Petzval curvature, flatness of the image plane is difficult to achieve in fisheye lenses. It is therefore critical to be able to accurately measure sharpness throughout the entire image, particularly for lens alignment. One challenge for fisheye lenses is that, because of the radial distortion, the slanted edges will have different angles depending on their location within the image and on the distortion profile of the lens. Previous work in the literature indicates that MTF measurements are robust for angles between 2 and 10 degrees; outside this range, MTF measurements become unreliable. Also, the slanted edge itself will be curved by the lens distortion, causing further measurement problems. This study summarises the difficulties in using MTF for sharpness measurement in fisheye lens cameras, and proposes mitigations and alternative methods.
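    The position dependence of the edge angle can be illustrated by distorting a straight slanted edge with a one-coefficient radial model and refitting its local angle. This is a hedged sketch with hypothetical parameters, not the authors' procedure:

```python
import numpy as np

def distorted_edge_angle(x0, slope_deg, k1, span=0.05):
    """Local apparent angle of a straight slanted edge after radial
    distortion (single-coefficient model; k1 is hypothetical).

    x0: horizontal position of the edge segment (normalized coordinates),
    slope_deg: true edge angle from vertical, span: segment half-length.
    """
    t = np.linspace(-span, span, 101)
    x = x0 + t * np.sin(np.radians(slope_deg))
    y = t * np.cos(np.radians(slope_deg))
    r2 = x**2 + y**2
    xd, yd = x * (1 + k1 * r2), y * (1 + k1 * r2)
    # Fit a line to the distorted segment; return its angle from vertical
    slope = np.polyfit(yd, xd, 1)[0]
    return np.degrees(np.arctan(slope))
```

    At the image center the angle is unchanged, while off-axis the same physical edge is measured at a different slant, which can push it out of the reliable 2 to 10 degree range noted above.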

  14. Robotic Camera Assistance and Its Benefit in 1033 Traditional Laparoscopic Procedures: Prospective Clinical Trial Using a Joystick-guided Camera Holder.

    PubMed

    Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter

    2014-11-01

    Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare robotic assistance to human assistance. 1033 consecutive laparoscopic procedures were performed assisted by the SoloAssist®. Failures and aborts were documented, and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of the 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse viewing angles. One case of a short circuit was reported. An emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons prefer robotic to human assistance, mostly because of the steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures; emergency shutdown was necessary in only three cases, and some minor weak spots were identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable, and further development is desirable, but it cannot fully replace a human assistant.

  15. Quantification of Finger-Tapping Angle Based on Wearable Sensors

    PubMed Central

    Djurić-Jovičić, Milica; Jovičić, Nenad S.; Roby-Brami, Agnes; Popović, Mirjana B.; Kostić, Vladimir S.; Djordjević, Antonije R.

    2017-01-01

    We propose a novel, simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index finger. We propose a simplified description of finger tapping that uses a single angle describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks mimicking impaired patterns. The obtained tapping angles were compared with results from a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnosis. Along with its hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems. PMID:28125051

  16. Quantification of Finger-Tapping Angle Based on Wearable Sensors.

    PubMed

    Djurić-Jovičić, Milica; Jovičić, Nenad S; Roby-Brami, Agnes; Popović, Mirjana B; Kostić, Vladimir S; Djordjević, Antonije R

    2017-01-25

    We propose a novel, simple method for quantitative and qualitative finger-tapping assessment based on miniature inertial sensors (3D gyroscopes) placed on the thumb and index finger. We propose a simplified description of finger tapping that uses a single angle describing rotation around a dominant axis. The method was verified on twelve subjects, who performed various tapping tasks mimicking impaired patterns. The obtained tapping angles were compared with results from a motion capture camera system, demonstrating excellent accuracy. The root-mean-square (RMS) error between the two sets of data is, on average, below 4°, and the intraclass correlation coefficient is, on average, greater than 0.972. Data obtained by the proposed method may be used together with scores from clinical tests to enable a better diagnosis. Along with its hardware simplicity, this makes the proposed method a promising candidate for use in clinical practice. Furthermore, our definition of the tapping angle can be applied to all tapping assessment systems.
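    The single-angle description amounts to integrating the gyroscope rate about the dominant rotation axis. A minimal sketch of that integration and of the RMS comparison against a reference trace (illustrative, not the authors' code):

```python
import numpy as np

def tapping_angle(gyro_rate_dps, fs):
    """Tapping angle (deg) from the angular rate (deg/s) about the
    dominant rotation axis, by cumulative trapezoidal integration.

    gyro_rate_dps: sampled rate signal; fs: sampling frequency (Hz).
    """
    dt = 1.0 / fs
    rate = np.asarray(gyro_rate_dps, dtype=float)
    increments = (rate[1:] + rate[:-1]) * 0.5 * dt  # trapezoid areas
    return np.concatenate(([0.0], np.cumsum(increments)))

def rms_error(a, b):
    """Root-mean-square difference between two angle traces (deg)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(np.sqrt(np.mean((a - b) ** 2)))
```

    In practice the dominant axis would first be identified (e.g. from the principal component of the 3D rate), and gyroscope bias would be removed before integration to limit drift.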

  17. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications differed from those of present-day cameras, and cameras using all kinds of components, with different arrangements of components and different appearances, were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated into camera fabrication, making it possible to build HDTV cameras by methods similar to those of the present system. In addition, more efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanical parts, and software for both HDTV cameras and cameras that operate in the present system.

  18. Investigating at the Moon With new Eyes: The Lunar Reconnaissance Orbiter Mission Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Hiesinger, H.; Robinson, M. S.; McEwen, A. S.; Turtle, E. P.; Eliason, E. M.; Jolliff, B. L.; Malin, M. C.; Thomas, P. C.

    The Lunar Reconnaissance Orbiter Mission Camera (LROC) H. Hiesinger (1,2), M.S. Robinson (3), A.S. McEwen (4), E.P. Turtle (4), E.M. Eliason (4), B.L. Jolliff (5), M.C. Malin (6), and P.C. Thomas (7) (1) Brown Univ., Dept. of Geological Sciences, Providence RI 02912, Harald_Hiesinger@brown.edu, (2) Westfaelische Wilhelms-University, (3) Northwestern Univ., (4) LPL, Univ. of Arizona, (5) Washington Univ., (6) Malin Space Science Systems, (7) Cornell Univ. The Lunar Reconnaissance Orbiter (LRO) mission is scheduled for launch in October 2008 as a first step to return humans to the Moon by 2018. The main goals of the Lunar Reconnaissance Orbiter Camera (LROC) are to: 1) assess meter- and smaller-scale features for safety analyses for potential lunar landing sites near polar resources, and elsewhere on the Moon; and 2) acquire multi-temporal images of the poles to characterize the polar illumination environment (100 m scale), identifying regions of permanent shadow and permanent or near permanent illumination over a full lunar year. In addition, LROC will return six high-value datasets such as 1) meter-scale maps of regions of permanent or near permanent illumination of polar massifs; 2) high resolution topography through stereogrammetric and photometric stereo analyses for potential landing sites; 3) a global multispectral map in 7 wavelengths (300-680 nm) to characterize lunar resources, in particular ilmenite; 4) a global 100-m/pixel basemap with incidence angles (60-80 degree) favorable for morphologic interpretations; 5) images of a variety of geologic units at sub-meter resolution to investigate physical properties and regolith variability; and 6) meter-scale coverage overlapping with Apollo Panoramic images (1-2 m/pixel) to document the number of small impacts since 1971-1972, to estimate hazards for future surface operations. LROC consists of two narrow-angle cameras (NACs) which will provide 0.5-m scale panchromatic images over a 5-km swath, a wide-angle

  19. Dependence of astigmatism, far-field pattern, and spectral envelope width on active layer thickness of gain guided lasers with narrow stripe geometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mamine, T.

    1984-06-15

    The effects of active layer thickness on the astigmatism, the angle of far-field pattern width parallel to the junction, and the spectral envelope width of a gain guided laser with a narrow stripe geometry have been investigated analytically and experimentally. It is concluded that a large level of astigmatism, a narrow far-field pattern width, and a rapid convergence of the spectral envelope width are inherent to the gain guided lasers with thin active layers.

  20. TIFR Near Infrared Imaging Camera-II on the 3.6 m Devasthal Optical Telescope

    NASA Astrophysics Data System (ADS)

    Baug, T.; Ojha, D. K.; Ghosh, S. K.; Sharma, S.; Pandey, A. K.; Kumar, Brijesh; Ghosh, Arpan; Ninan, J. P.; Naik, M. B.; D’Costa, S. L. A.; Poojary, S. S.; Sandimani, P. R.; Shah, H.; Krishna Reddy, B.; Pandey, S. B.; Chand, H.

    Tata Institute of Fundamental Research (TIFR) Near Infrared Imaging Camera-II (TIRCAM2) is a closed-cycle Helium cryo-cooled imaging camera equipped with a Raytheon 512×512 pixels InSb Aladdin III Quadrant focal plane array (FPA) having sensitivity to photons in the 1-5 μm wavelength band. In this paper, we present the performance of the camera on the newly installed 3.6 m Devasthal Optical Telescope (DOT) based on the calibration observations carried out during 2017 May 11-14 and 2017 October 7-31. After the preliminary characterization, the camera has been released to the Indian and Belgian astronomical community for science observations since 2017 May. The camera offers a field-of-view (FoV) of ~86.5″×86.5″ on the DOT with a pixel scale of 0.169″. The seeing at the telescope site in the near-infrared (NIR) bands is typically sub-arcsecond, with the best seeing of ~0.45″ realized in the NIR K-band on 2017 October 16. The camera is found to be capable of deep observations in the J, H and K bands, comparable to other 4 m class telescopes available worldwide. Another highlight of this camera is the observational capability for sources up to Wide-field Infrared Survey Explorer (WISE) W1-band (3.4 μm) magnitudes of 9.2 in the narrow L-band (nbL; λcen ~3.59 μm). Hence, the camera could be a good complementary instrument to observe the bright nbL-band sources that are saturated in the Spitzer-Infrared Array Camera (IRAC) ([3.6] ≲ 7.92 mag) and the WISE W1-band ([3.4] ≲ 8.1 mag). Sources with strong polycyclic aromatic hydrocarbon (PAH) emission at 3.3 μm are also detected. Details of the observations and estimated parameters are presented in this paper.

  1. Determination of Turning Characteristics of an Airship by Means of a Camera Obscura

    NASA Technical Reports Server (NTRS)

    Crowley, J. W., Jr.; Freeman, R. G.

    1925-01-01

    This investigation was carried out by the National Advisory Committee at Langley Field for the purpose of determining the adaptability of the camera obscura to the securing of turning characteristics of airships, and also of obtaining some of those characteristics of the C-7 airship. The method consisted in flying the airship in circling flight over a camera obscura and photographing it at known time intervals. The results show that the method used is highly satisfactory and that for the particular maneuver employed the turning diameter is 1,240 feet, corresponding to a turning coefficient of 6.4, and that the position of zero angle of yaw is at the nose of the airship.

  2. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over a 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensors during radiometric response calibration, to eliminate the influence of the focusing effect on uniform light from an integrating sphere. The linearity range of the radiometric response, non-linearity response characteristics, sensitivity, and dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the objective luminance more faithfully. This compensates for the limitation of stitched images made realistic only through smoothing. The dynamic range limitation of a single image sensor with a wide-angle lens can be resolved by using multiple cameras that cover a large field of view; the dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second. PMID:27077857
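    The vignetting correction implied above is typically a flat-field division against a uniform-illumination reference frame. A minimal sketch of such a generic correction (not the paper's calibration pipeline; names are illustrative):

```python
import numpy as np

def flat_field_correct(image, flat, dark):
    """Vignetting/flat-field correction: (I - D) / (F - D), rescaled by the
    mean flat level so corrected values stay near the original range.

    image: raw frame; flat: frame of a uniformly lit target (e.g. from an
    integrating sphere); dark: dark frame or dark level.
    """
    img = np.asarray(image, dtype=float)
    gain = np.asarray(flat, dtype=float) - dark  # per-pixel sensitivity map
    return (img - dark) * (gain.mean() / np.maximum(gain, 1e-9))
```

    Applying the correction to the flat frame itself yields a uniform image, which is a convenient sanity check; per-channel response curves would be applied before this step to linearize the R, G, and B signals.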

  3. Wide-angle Spectrally Selective Perfect Absorber by Utilizing Dispersionless Tamm Plasmon Polaritons

    PubMed Central

    Xue, Chun-hua; Wu, Feng; Jiang, Hai-tao; Li, Yunhui; Zhang, Ye-wen; Chen, Hong

    2016-01-01

    We theoretically investigate a wide-angle, spectrally selective absorber utilizing dispersionless Tamm plasmon polaritons (TPPs) under TM polarization. TPPs are resonant tunneling effects occurring at the interface between one-dimensional photonic crystals (1DPCs) and a metal slab, and their dispersion properties are essentially determined by those of the 1DPCs. Our investigations show that dispersionless TPPs can be excited in 1DPCs containing hyperbolic metamaterials (HMMs) on a metal substrate. Based on dispersionless TPPs, electromagnetic waves penetrate into the metal substrate and are absorbed entirely by the lossy metal, exhibiting narrow-band, wide-angle perfect absorption for TM polarization. Our results exhibit nearly perfect absorption, with a value over 98%, for angles of incidence from 0 to 80 degrees. PMID:27991565

  4. II-VI Narrow-Bandgap Semiconductors for Optoelectronics

    NASA Astrophysics Data System (ADS)

    Baker, Ian

    The field of narrow-gap II-VI materials is dominated by the compound semiconductor mercury cadmium telluride, (Hg1-x Cd x Te or MCT), which supports a large industry in infrared detectors, cameras and infrared systems. It is probably true to say that HgCdTe is the third most studied semiconductor after silicon and gallium arsenide. Hg1-x Cd x Te is the material most widely used in high-performance infrared detectors at present. By changing the composition x the spectral response of the detector can be made to cover the range from 1 μm to beyond 17 μm. The advantages of this system arise from a number of features, notably: close lattice matching, high optical absorption coefficient, low carrier generation rate, high electron mobility and readily available doping techniques. These advantages mean that very sensitive infrared detectors can be produced at relatively high operating temperatures. Hg1-x Cd x Te multilayers can be readily grown in vapor-phase epitaxial processes. This provides the device engineer with complex doping and composition profiles that can be used to further enhance the electro-optic performance, leading to low-cost, large-area detectors in the future. The main purpose of this chapter is to describe the applications, device physics and technology of II-VI narrow-bandgap devices, focusing on HgCdTe but also including Hg1-x Mn x Te and Hg1-x Zn x Te. It concludes with a review of the research and development programs into third-generation infrared detector technology (so-called GEN III detectors) being performed in centers around the world.

  5. Image dissector camera system study

    NASA Technical Reports Server (NTRS)

    Howell, L.

    1984-01-01

    Various aspects of a rendezvous and docking system using an image dissector detector, as compared to a GaAs detector, are discussed. Investigation into a gimbaled scanning system is also covered, and the measured video response curves from the image dissector camera are presented. Rendezvous will occur at ranges greater than 100 meters; the maximum range considered was 1000 meters. During docking, the range, range rate, angle, and angle rate to each reflector on the satellite must be measured. Docking range will be from 3 to 100 meters. The system consists of a CW laser diode transmitter and an image dissector receiver. The transmitter beam is amplitude modulated with three sine wave tones for ranging. The beam is coaxially combined with the receiver beam. Mechanical deflection of the transmitter beam, ±10 degrees in both X and Y, can be accomplished before or after it is combined with the receiver beam. The receiver will have a field-of-view (FOV) of 20 degrees and an instantaneous field-of-view (IFOV) of two milliradians (mrad) and will be electronically scanned in the image dissector. The increase in performance obtained from the GaAs photocathode is not needed to meet the present performance requirements.
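    Ranging with amplitude-modulated tones works by measuring the modulation phase lag of the returned light. A sketch of the basic round-trip relation (illustrative, not the study's implementation; multiple tones of different frequencies are used to extend the unambiguous range):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_phase(phase_rad, tone_hz):
    """Range from the phase lag of an amplitude-modulation tone.

    A target at range R delays the round-trip modulation by
    phi = 2 * pi * f * (2R / c), so R = c * phi / (4 * pi * f).
    Unambiguous only within half the modulation wavelength c / (2f).
    """
    return C * phase_rad / (4.0 * math.pi * tone_hz)
```

    A low-frequency tone resolves the range ambiguity coarsely while higher-frequency tones refine the measurement, which is why a set of three sine-wave tones is attractive.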

  6. Development of a stiffness-angle law for simplifying the measurement of human hair stiffness.

    PubMed

    Jung, I K; Park, S C; Lee, Y R; Bin, S A; Hong, Y D; Eun, D; Lee, J H; Roh, Y S; Kim, B M

    2018-04-01

    This research examines the effect of caffeine absorption on hair stiffness. To test hair stiffness, we have developed an evaluation method that is not only accurate but also inexpensive. Our evaluation method culminated in a model, called the Stiffness-Angle Law, which describes the elastic properties of hair and can be widely applied to the development of hair care products. Small molecules (≤500 g mol⁻¹) such as caffeine can be absorbed into hair. A common shampoo containing 4% caffeine was formulated and applied to hair 10 times, after which the hair stiffness was measured. The caffeine absorption of the treated hair was observed using Fourier-transform infrared spectroscopy (FTIR) with a focal plane array (FPA) detector. Our evaluation method for measuring hair stiffness consists of a regular camera and a support for single strands of hair. After attaching the hair to the support, the bending angle of the hair was observed with a camera and measured. Then, the hair strand was weighed. The stiffness of the hair was calculated based on our proposed Stiffness-Angle Law using three variables: the bending angle, the weight of the hair, and the distance the hair was pulled across the support. The caffeine absorption was confirmed by FTIR analysis: the concentration of amide bonds in the hair clearly increased due to caffeine absorption. After caffeine was absorbed into the hair, the bending angle and weight of the hair changed. Applying these measured changes to the Stiffness-Angle Law, it was confirmed that the hair stiffness increased by 13.2% due to caffeine absorption. The theoretical results using the Stiffness-Angle Law agree with the visual examinations of hair exposed to caffeine and also with the known results on hair stiffness from a previous report. Our evaluation method, combined with our proposed Stiffness-Angle Law, provides an accurate and inexpensive technique for measuring the bending stiffness of human hair.

  7. Astronaut Charles M. Duke, Jr., in shadow of Lunar Module behind ultraviolet camera

    NASA Technical Reports Server (NTRS)

    1972-01-01

    Astronaut Charles M. Duke, Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, mission commander, during the mission's second extravehicular activity (EVA-2). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (lm) 'Orion' to explore the Descartes highlands landing site on the Moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (csm) 'Casper' in lunar orbit.

  8. Modelling of the outburst on July 29th , 2015 observed with OSIRIS in the southern hemisphere of comet 67P/Churyumov-Gerasimenko

    NASA Astrophysics Data System (ADS)

    Gicquel, Adeline; Vincent, Jean-Baptiste; Sierks, Holger; Rose, Martin; Agarwal, Jessica; Deller, Jakob; Guettler, Carsten; Hoefner, Sebastian; Hofmann, Marc; Hu, Xuanyu; Kovacs, Gabor; Oklay Vincent, Nilda; Shi, Xian; Tubiana, Cecilia; Barbieri, Cesare; Lamy, Phylippe; Rodrigo, Rafael; Koschny, Detlef; Rickman, Hans; OSIRIS Team

    2016-10-01

    Images of the nucleus and the coma (gas and dust) of comet 67P/Churyumov-Gerasimenko have been acquired by the OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) camera system since March 2014, using both the wide angle camera (WAC) and the narrow angle camera (NAC). We are using the NAC camera to study the bright outburst observed on July 29th, 2015 in the southern hemisphere. The NAC covers wavelengths between 250 and 1000 nm with a combination of 12 filters. Its high spatial resolution is needed to localize the source point of the outburst on the surface of the nucleus. At the time of the observations, the heliocentric distance was 1.25 AU and the distance between the spacecraft and the comet was 126 km. We aim to understand the physics leading to such outgassing: is the jet associated with the outburst controlled by the micro-topography, or by suddenly exposed ice? We are using the Direct Simulation Monte Carlo (DSMC) method to study the gas flow close to the nucleus. The goal of the DSMC code is to reproduce the opening angle of the jet and to constrain the outgassing ratio between the outburst source and the local region. The results of this model will be compared to the images obtained with the NAC camera.

  9. Fundamentals of in Situ Digital Camera Methodology for Water Quality Monitoring of Coast and Ocean

    PubMed Central

    Goddijn-Murphy, Lonneke; Dailloux, Damien; White, Martin; Bowers, Dave

    2009-01-01

    Conventional digital cameras, the Nikon Coolpix885® and the SeaLife ECOshot®, were used as in situ optical instruments for water quality monitoring. Measured response spectra showed that these digital cameras are basically three-band radiometers. The response values in the red, green and blue bands, quantified by RGB values of digital images of the water surface, were comparable to measurements of irradiance levels at red, green and cyan/blue wavelengths of water-leaving light. Different systems were deployed to capture upwelling light from below the surface while eliminating direct surface reflection. Relationships between RGB ratios of water surface images and water quality parameters were found to be consistent with previous measurements using more traditional narrow-band radiometers. This paper focuses on the method that was used to acquire digital images, derive RGB values and relate measurements to water quality parameters. Field measurements were obtained in Galway Bay, Ireland, and in the Southern Rockall Trough in the North Atlantic, where both yellow substance and chlorophyll concentrations were successfully assessed using the digital camera method. PMID:22346729
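
    The band-ratio idea described above can be sketched in a few lines: average the R, G, B response over a region of the water-surface image and form ratios between the bands. This is only an illustration of the approach, assuming an image already loaded as an H×W×3 RGB array; the region bounds, synthetic pixel values and ratio names are illustrative placeholders, not the study's calibration.

    ```python
    import numpy as np

    def rgb_band_ratios(image, region):
        """Average R, G, B over a region of interest and return band ratios."""
        r0, r1, c0, c1 = region
        roi = image[r0:r1, c0:c1].reshape(-1, 3).astype(float)
        r, g, b = roi.mean(axis=0)
        return {"G/R": g / r, "B/G": b / g, "B/R": b / r}

    # Synthetic greenish "water" patch for illustration only.
    img = np.zeros((100, 100, 3), dtype=np.uint8)
    img[...] = (40, 90, 70)  # R, G, B values of the patch
    ratios = rgb_band_ratios(img, (10, 90, 10, 90))
    print(ratios["G/R"])  # 2.25 for this synthetic patch
    ```

    In the paper's setting, such ratios would then be regressed against in-water measurements (e.g. yellow substance or chlorophyll concentration) exactly as with a narrow-band radiometer.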

  10. Modified slanted-edge method for camera modulation transfer function measurement using nonuniform fast Fourier transform technique

    NASA Astrophysics Data System (ADS)

    Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin

    2018-01-01

    The ISO 12233 slanted-edge method incurs errors when the fast Fourier transform (FFT) is used in camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for images with noise at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
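
    The slanted-edge pipeline (project pixels onto the edge normal to get nonuniform ESF samples, differentiate to a line spread function, Fourier transform to the MTF) can be sketched as below. This is a hedged illustration, not the paper's implementation: where the paper applies a NUFFT to the nonuniform samples, this sketch simply resamples them onto a uniform grid by interpolation before an ordinary FFT; all function names and parameters are ours.

    ```python
    import numpy as np

    def slanted_edge_mtf(img, angle_deg, oversample=4):
        """Estimate an MTF from a slanted-edge image; angle_deg is the edge slant."""
        h, w = img.shape
        yy, xx = np.mgrid[0:h, 0:w]
        th = np.radians(angle_deg)
        # Signed distance of each pixel center from the edge line through the
        # image center; these distances are NOT uniformly spaced.
        d = (xx - w / 2) * np.cos(th) + (yy - h / 2) * np.sin(th)
        order = np.argsort(d.ravel())
        dist = d.ravel()[order]
        val = img.ravel()[order].astype(float)
        # Stand-in for the NUFFT step: resample the nonuniform ESF samples
        # onto a uniform, oversampled grid before the ordinary FFT.
        grid = np.arange(dist.min(), dist.max(), 1.0 / oversample)
        esf = np.interp(grid, dist, val)
        lsf = np.gradient(esf)
        mtf = np.abs(np.fft.rfft(lsf * np.hanning(lsf.size)))
        return mtf / mtf[0]

    # Ideal step edge slanted by arctan(0.1); the MTF is 1 at zero frequency.
    h, w = 64, 64
    yy, xx = np.mgrid[0:h, 0:w]
    edge = (((xx - w / 2) + 0.1 * (yy - h / 2)) > 0).astype(float)
    m = slanted_edge_mtf(edge, np.degrees(np.arctan(0.1)))
    print(m[0])  # 1.0
    ```

    The paper's point is that the interpolation step above is exactly where tilt-angle error leaks in; replacing it with an NUFFT evaluates the spectrum directly on the nonuniform sample positions.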

  11. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range-Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions, and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10-image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.
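
    The averaging strategy above can be illustrated with a minimal simulation: estimate the range to a flat surface by averaging a central pixel region over N frames, and observe that the error in a measured 1 mm shift shrinks roughly as 1/sqrt(N). The noise level and region size here are illustrative assumptions, not the camera's actual specifications.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def measure_shift(true_shift_mm, n_frames, sigma_mm=3.0, region=(9, 9)):
        """Estimate a surface shift from noisy simulated TOF range frames."""
        def mean_range(depth_mm):
            frames = depth_mm + sigma_mm * rng.standard_normal((n_frames, *region))
            return frames.mean()
        # Difference between the averaged ranges before and after the shift.
        return mean_range(1000.0 + true_shift_mm) - mean_range(1000.0)

    errors_1 = [abs(measure_shift(1.0, 1) - 1.0) for _ in range(200)]
    errors_100 = [abs(measure_shift(1.0, 100) - 1.0) for _ in range(200)]
    print(np.mean(errors_1) > np.mean(errors_100))  # True: more frames, less error
    ```

    The same Monte Carlo loop extends naturally to the study's other variables (region size, distance, incidence angle) by making `sigma_mm` a function of those parameters.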

  12. Structured light system calibration method with optimal fringe angle.

    PubMed

    Li, Beiwen; Zhang, Song

    2014-11-20

    For structured light system calibration, one popular approach is to treat the projector as an inverse camera. This is usually performed by projecting horizontal and vertical sequences of patterns to establish a one-to-one mapping between camera points and projector points. However, for a well-designed system, either the horizontal or the vertical fringe images are insensitive to depth variation and thus yield inaccurate mapping. As a result, the calibration accuracy is jeopardized if a conventional calibration method is used. To address this limitation, this paper proposes a novel calibration method based on optimal fringe angle determination. Experiments demonstrate that our calibration approach can increase the measurement accuracy by up to 38% compared to the conventional calibration method, with a calibration volume of 300(H) mm × 250(W) mm × 500(D) mm.

  13. Evidence for Narrow Baryon Resonances in Inelastic pp Scattering

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tatischeff, B.; Willis, N.; Comets, M.P.

    The reaction pp → pπ⁺N has been studied at three energies (T_p = 1520, 1805, and 2100 MeV) and six angles from 0° up to 17° (laboratory). Several narrow states have been observed in missing-mass spectra at 1004, 1044, and 1094 MeV. Their widths are typically 1 order of magnitude smaller than the widths of N* or Δ. Possible biases are discussed. These masses are in good agreement with those calculated within a simple phenomenological mass formula based on color magnetic interaction between two colored quark clusters. © 1997 The American Physical Society

  14. Narrow-field-of-view bathymetrical lidar: theory and field test

    NASA Astrophysics Data System (ADS)

    Feygels, Viktor I.; Wright, C. Wayne; Kopilevich, Yuri I.; Surkov, Alexey I.

    2003-11-01

    The purpose of this paper is to derive a reliable theory to predict the performance of a narrow-FOV bathymetric lidar; a fundamental discrepancy between theoretical estimates and experimental results was the inspiration for the work presented here. Meeting oceanographic mapping requirements is a critically important goal for littoral laser bathymetry. In contrast to traditional airborne lidar systems, which are optimized for recovering signals from the deepest possible waters, this challenge may be met with a radical narrowing of the lidar transmit beam and receiver field of view (FOV), as employed in EAARL (Experimental Advanced Airborne Research Lidar, NASA). In this paper we discuss a theoretical analysis, carried out on the basis of a sophisticated "multiple-forward-scattering and single-backscattering" model for lidar return signals, that allows a quantitative estimation of the advantages of a narrow-FOV system over traditional bathymetric lidars (SHOALS-400, SHOALS-100, LADS Mk II) when used in clear shallow-water cases. Some of those advantages are: an increase in bottom definition (or reduced false-alarm probability) due to the enhanced contrast of the bottom return over the background backscatter from the water column; an enhancement in depth measurement accuracy resulting from the narrower bottom return pulse width; a reduction of post-surface return effects in the lidar photomultiplier detector due to a more rapid decay of water-column backscatter; and greatly improved rejection of ambient light, permitting lidar operations at all solar zenith angles and flight directions. The model computations make it possible to estimate the maximal operational depth for the system under consideration through the statistical theory of detectability; these computations depend on the prevailing seawater optical properties and lidar parameters. The theoretical predictions are compared with results obtained in a field test of the EAARL system carried out in the Florida Keys.

  15. Frequency-Domain Streak Camera and Tomography for Ultrafast Imaging of Evolving and Channeled Plasma Accelerator Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming

    2010-11-04

    We demonstrate a prototype Frequency-Domain Streak Camera (FDSC) that can capture the picosecond time evolution of a plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency-Domain Holography (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency-Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact FDT system with a single spectrometer.

  16. Effect of EMIC Wave Normal Angle Distribution on Relativistic Electron Scattering in Outer RB

    NASA Technical Reports Server (NTRS)

    Khazanov, G. V.; Gamayunov, K. V.

    2007-01-01

    We present the equatorial and bounce-averaged pitch angle diffusion coefficients for scattering of relativistic electrons by the H+ mode of EMIC waves. Both model (prescribed) and self-consistent distributions over the wave normal angle are considered. The main results of our calculation can be summarized as follows: First, in comparison with field-aligned waves, the intermediate and highly oblique waves reduce the pitch angle range subject to diffusion, and strongly suppress the scattering rate for low energy electrons (E less than 2 MeV). Second, for electron energies greater than 5 MeV, the |n| = 1 resonances operate only in a narrow region at large pitch angles, and despite their greatest contribution in the case of field-aligned waves, cannot cause electron diffusion into the loss cone. For those energies, oblique waves at |n| greater than 1 resonances are more effective, extending the range of pitch angle diffusion down to the loss cone boundary and increasing diffusion at small pitch angles by orders of magnitude.

  17. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, which was held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft. path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for the line following. The line-following algorithm images two windows and locates the line centroid in each; with the knowledge that these points lie on the ground plane, a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates is established. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
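
    The steering geometry described above reduces to two quantities computed from the two window centroids. As a sketch (assuming the image-to-ground mapping has already been applied, so both centroids are given in ground coordinates relative to the robot centroid at the origin; names and the coordinate convention are ours, not the paper's):

    ```python
    import math

    def line_angle_and_distance(p_near, p_far):
        """p_near, p_far: (x, y) ground coordinates of the two window centroids."""
        dx, dy = p_far[0] - p_near[0], p_far[1] - p_near[1]
        # Heading error of the line relative to the robot's forward (y) axis.
        angle = math.atan2(dx, dy)
        # Perpendicular distance from the origin to the line through both points.
        dist = abs(p_far[0] * p_near[1] - p_near[0] * p_far[1]) / math.hypot(dx, dy)
        return math.degrees(angle), dist

    # Line parallel to the robot's forward axis, offset 0.5 m to the right.
    angle, dist = line_angle_and_distance((0.5, 1.0), (0.5, 2.0))
    print(round(angle, 1), round(dist, 2))  # 0.0 0.5
    ```

    A steering controller can then drive both the heading error and the offset toward zero, switching input cameras whenever one loses the line.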

  18. Visual imaging control systems of the Mariner to Jupiter and Saturn spacecraft

    NASA Technical Reports Server (NTRS)

    Larks, L.

    1979-01-01

    The design and fabrication of optical systems for the Mariner Jupiter-Saturn (Voyager) mission is described. Because of the long distances of these planets from the Sun, the spacecraft was designed without solar panels, with electricity generated on board by radioisotope thermoelectric generators (RTGs). The presence of RTGs and Jupiter's radiation environment required that the optical systems be fabricated out of radiation-stabilized materials. A narrow-angle and a wide-angle camera are located on the spacecraft scan platform, with the narrow-angle lens a modification of the Mariner 10 lens. The optical system is described, noting that the lens was modified by moving the aperture correctors forward and placing a spider-mounted secondary mirror in the original back surface of the second aperture corrector. The wide-angle lens was made of cerium-doped, radiation-stabilized optical glass with the greatest blue transmittance, which would be resistant to RTG and Jupiter radiation.

  19. Controlling the angle range in acoustic low-frequency forbidden transmission in solid-fluid superlattice

    NASA Astrophysics Data System (ADS)

    Zhang, Sai; Xu, Bai-qiang; Cao, Wenwu

    2018-03-01

    We have investigated low-frequency forbidden transmission (LFT) of acoustic waves with frequencies lower than the first Bragg bandgap in a solid-fluid superlattice (SFSL). LFT is formed when a planar acoustic wave impinges on the interface of an SFSL within a certain angle range. However, for an SFSL composed of metallic material and water, the angle range of LFT is extremely narrow, which restricts its practical applications. The variation characteristics of the angle range have been comprehensively studied here by the control variable method. The results suggest that the filling ratio, layer number, wave velocity, and mass density of the constituent materials have a significant impact on the angle range. Based on our results, an effective strategy for obtaining LFT with a broad angle range is provided, which will be useful for potential applications of LFT in various devices, such as low-frequency filters and subwavelength one-way diodes.

  20. Regolith Gardening Caused by Recent Lunar Impacts Observed by the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.

    2016-12-01

    Temporal observations by the Lunar Reconnaissance Orbiter Camera (LROC) Narrow Angle Camera (NAC) enable us to map and measure the spatial distribution of ejecta as well as quantify faint distal zones that may be the result of early-stage jetting caused by meteoroid impacts. These detailed before-and-after observations enable the examination of surface reflectance changes as well as the analysis of nearby features (i.e., highly degraded craters, secondary craters, and new or spatially shifted boulders). In addition, NAC temporal pairs reveal numerous areas where the regolith has been churned and modified. These features, which we refer to as splotches, are most likely caused by small secondary impacts, given their high population near recent impact events [Robinson et al., 2015]. Using over 14,000 NAC temporal pairs, we identified over 47,000 splotches and quantified their spatial coverage and rate of formation. Based on the observed size-frequency distribution, our models indicate that 99% of the entire lunar surface is modified by splotches 1 m in diameter and larger over a period of 8.1×10^4 years. These splotches have the potential to churn the upper few cm of regolith, influencing the local surface roughness and ultimately the surface reflectance observed from orbit. This new churning rate estimate is consistent with previous analysis of regolith properties within drive core samples acquired during the Apollo missions; these cores reveal that the upper 2 cm was rapidly and continuously modified over periods of <=10^5 years [Fruchter et al., 1977]. Overall, the examination of LROC NAC temporal pairs enables detailed studies of the impact process on a scale that exceeds laboratory experiments. Continued collection of NAC temporal pairs during the LRO Cornerstone Mission and future extended missions will aid in the discovery of new, larger impact craters and other contemporary surface changes. References: Fruchter et al. (1977), Proc. Lunar Planet Sci. Conf. 8th.

  1. A simulation of orientation dependent, global changes in camera sensitivity in ECT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bieszk, J.A.; Hawman, E.G.; Malmin, R.E.

    1984-01-01

    ECT promises the abilities to: 1) observe radioisotope distributions in a patient without the summation of overlying activity reducing contrast, and 2) measure these distributions quantitatively to assess organ function further and more accurately. Ideally, camera-based ECT systems should have a performance that is independent of camera orientation or gantry angle. This study is concerned with ECT quantitation errors that can arise from angle-dependent variations of camera sensitivity. Using simulated phantoms representative of heart and liver sections, the effects of sensitivity changes on reconstructed images were assessed both visually and quantitatively based on ROI sums. The sinogram for each test image was simulated with 128 linear digitization points and 180 angular views. The global orientation-dependent sensitivity was modelled by applying an angular sensitivity dependence to the sinograms of the test images. Four sensitivity variations were studied: amplitudes of 0% (as a reference), 5%, 10%, and 25% with a cos θ dependence, as well as a cos 2θ dependence with a 5% amplitude. Simulations were done with and without Poisson noise to: 1) determine trends in the quantitative effects as a function of the magnitude of the variation, and 2) see how these effects are manifested in studies having statistics comparable to clinical cases. For the most realistic sensitivity variation (cos θ, 5% amplitude), the ROIs chosen in the present work indicated changes of <0.5% in the noiseless case and <5% for the case with Poisson noise. The effects of statistics appear to dominate any effects due to global, sinusoidal, orientation-dependent sensitivity changes in the cases studied.
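
    The modulation step described above is straightforward to sketch: an angular sensitivity variation of amplitude a is applied to a simulated sinogram (128 radial bins × 180 views) by scaling each view by 1 + a·cos θ. This is our own minimal illustration of the described setup, with a placeholder uniform sinogram rather than the heart/liver phantoms of the study.

    ```python
    import numpy as np

    n_bins, n_views = 128, 180
    theta = np.radians(np.arange(n_views))       # one view per degree
    sinogram = np.ones((n_bins, n_views))        # placeholder uniform sinogram
    a = 0.05                                     # 5% amplitude, cos(theta) dependence
    modulated = sinogram * (1.0 + a * np.cos(theta))[np.newaxis, :]
    print(round(modulated[:, 0].mean(), 3))  # 1.05 at theta = 0
    ```

    Reconstructing `modulated` with a standard filtered back-projection and comparing ROI sums against the unmodulated reference reproduces the kind of comparison the study performs.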

  2. Up Close to Mimas

    NASA Technical Reports Server (NTRS)

    2005-01-01

    During its approach to Mimas on Aug. 2, 2005, the Cassini spacecraft narrow-angle camera obtained multi-spectral views of the moon from a range of 228,000 kilometers (142,500 miles).

    This image is a narrow angle clear-filter image which was processed to enhance the contrast in brightness and sharpness of visible features.

    Herschel crater, a 140-kilometer-wide (88-mile) impact feature with a prominent central peak, is visible in the upper right of this image.

    This image was obtained when the Cassini spacecraft was above 25 degrees south, 134 degrees west latitude and longitude. The Sun-Mimas-spacecraft angle was 45 degrees and north is at the top.

    The Cassini-Huygens mission is a cooperative project of NASA, the European Space Agency and the Italian Space Agency. The Jet Propulsion Laboratory, a division of the California Institute of Technology in Pasadena, manages the mission for NASA's Science Mission Directorate, Washington, D.C. The Cassini orbiter and its two onboard cameras were designed, developed and assembled at JPL. The imaging operations center is based at the Space Science Institute in Boulder, Colo.

    For more information about the Cassini-Huygens mission visit http://saturn.jpl.nasa.gov . The Cassini imaging team homepage is at http://ciclops.org .

  3. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  4. Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera

    NASA Astrophysics Data System (ADS)

    Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.

    2016-08-01

    Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, will dramatically reduce image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands in the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications which require high resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution using a single modified DSLR camera in conjunction with image processing techniques, which effectively improves the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Processed data using our proposed method shows significant visibility improvements compared with other existing solutions.

  5. Formulation of image quality prediction criteria for the Viking lander camera

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Jobson, D. J.; Taylor, E. J.; Wall, S. D.

    1973-01-01

    Image quality criteria are defined and mathematically formulated for the prediction computer program which is to be developed for the Viking lander imaging experiment. The general objective of broad-band (black and white) imagery to resolve small spatial details and slopes is formulated as the detectability of a right-circular cone with the surface properties of the surrounding terrain. The general objective of narrow-band (color and near-infrared) imagery to observe spectral characteristics is formulated as the minimum detectable albedo variation. The general goal to encompass, but not exceed, the range of the scene radiance distribution within a single, commandable, camera dynamic range setting is also considered.

  6. Proposal of a Budget-Friendly Camera Holder for Endoscopic Ear Surgery.

    PubMed

    Ozturan, Orhan; Yenigun, Alper; Aksoy, Fadlullah; Ertas, Burak

    2018-01-01

    Endoscopic ear surgery (EES) is an increasingly preferred technique in the otologic community. It offers excellent visualization of anatomical structures, both directly and around corners with variable-angled telescopes. It also provides reduced operative morbidity, as surgical interventions can be performed with less invasive approaches. Operative preparation and setup time and the cost of an endoscopy system are lower compared with surgical microscopes. On the other hand, the main disadvantage of EES is that the surgery has to be performed with 1 single hand. This is certainly restrictive for an ear surgeon who has been operating with 2 hands under otologic microscopic views for years, and it requires a learning period and perseverance. Holding the endoscope by a second surgeon is not executable because of insufficient surgical space. Endoscope/camera holders have been developed for those who need the comfort and convenience afforded by double-handed microscopic ear surgery. An ideal endoscope holder should be easy to set up, easily controlled, provide a variety of angled views, allow the surgeon to operate with 2 hands, and be budget-friendly. In this article, a commercially available 11-inch magic arm camera holder is proposed by the authors for use in EES due to its versatile, convenient, and budget-friendly features. It allows 2-handed EES through existing technology and is affordable for surgeons looking for a low-cost and practical solution.

  7. Visualization of explosion phenomena using a high-speed video camera with an uncoupled objective lens by fiber-optic

    NASA Astrophysics Data System (ADS)

    Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo

    2008-11-01

    Visualization of explosion phenomena is very important and essential to evaluate the performance of explosive effects. The phenomena, however, generate blast waves and fragments from cases, so we must protect our visualizing equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable in order to be able to use the camera, a Shimadzu Hypervision HPV-1, for tests in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to the images taken by the camera with the lens directly coupled to the camera head. It was confirmed that this system is very useful for the visualization of dangerous events, e.g., at an explosion site, and for visualizations at angles that would be unachievable under normal circumstances.

  8. [Chamber Angle Assessment in Clinical Practice - A Comparison between Optical Coherence Tomography and Gonioscopy].

    PubMed

    Mösler, M P; Werner, J U; Lang, G K

    2015-07-01

    In glaucoma, the structures of the anterior chamber are important for classification, therapy, progression and prognosis. In this context, anterior segment optical coherence tomography (AS-OCT) is gaining relevance. This study compares AS-OCT with gonioscopy in the diagnostic performance of chamber angle (CA) assessment. 104 consecutive subjects with glaucoma underwent AS-OCT imaging using the Visante OCT. Results were compared to gonioscopic grading from patient history using the Shaffer system. In addition, anterior chamber depth (ACD) assessment using slit-lamp examination was evaluated as a prognostic factor for chamber angle width (CAW) and verified by AS-OCT measurement. Average CAW was 29° (AS-OCT). 17% of the CAs that were "wide" in gonioscopy (variance 5-55°) showed a "narrow" CA in AS-OCT. 35% of the CAs that were "narrow" in gonioscopy (variance 0-39°) showed a "wide" CA in AS-OCT. ACD assessment using slit-lamp examination is a good predictor for CAW; in this context the technique provides informative value equal to gonioscopy, and in cases of "wide" ACDs it is even superior. The critical ACD for an increased risk of angle closure is 2.4 mm. Concerning the critical ACD (<2.4 mm), the technique made it possible to estimate whether patients were in the crucial range or not. Average ACD was 2.7 mm (AS-OCT). A strong correlation (correlation coefficient 0.83) between ACD and CAW was observed: a variation of 1 mm in the ACD leads to a change of 18.9° in the CAW. All patients with angle-closure glaucoma were below this threshold, and 74% of patients with critical ACD had "narrow" (AS-OCT) CAs. In routine clinical practice with inexperienced residents, or in circumstances that make gonioscopy difficult or impossible, optical coherence tomography is an effective alternative to the gold standard and is to some extent even superior. Georg Thieme Verlag KG Stuttgart · New York.

  9. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is capable of locating the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images and is reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph-matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
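
    The range-from-parallax step described above follows the standard triangulation relation Z = f·B/d. As a hedged sketch (the baseline and focal length here are illustrative values, not the camera's actual geometry):

    ```python
    # Triangulated range from the disparity of a projected spot: Z = f * B / d,
    # with focal length f in pixels, baseline B in meters, disparity d in pixels.
    def range_from_disparity(disparity_px, focal_px=1000.0, baseline_m=0.1):
        """Return the triangulated range in meters for a spot's disparity."""
        return focal_px * baseline_m / disparity_px

    z = range_from_disparity(100.0)
    print(z)  # 1.0
    ```

    With these illustrative numbers, a 0.1-pixel disparity error at 100 px changes the range by about 1 mm at 1 meter, consistent in scale with the sub-pixel spot localization the abstract describes.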

  10. Synthesis and characterization of mesoporous ZnS with narrow size distribution of small pores

    NASA Astrophysics Data System (ADS)

    Nistor, L. C.; Mateescu, C. D.; Birjega, R.; Nistor, S. V.

    2008-08-01

    Pure, nanocrystalline cubic ZnS forming a stable mesoporous structure was synthesized at room temperature by a non-toxic surfactant-assisted liquid-liquid reaction, in the 9.5-10.5 pH range. The appearance of an X-ray diffraction (XRD) peak in the region of very small angles (~2°) reveals the presence of a porous material with a narrow pore size distribution but an irregular arrangement of the pores, a so-called wormhole or sponge-like material. The analysis of the wide-angle XRD diffractograms shows the building blocks to be ZnS nanocrystals with cubic structure and an average diameter of 2 nm. Transmission electron microscopy (TEM) investigations confirm the XRD results: ZnS crystallites of 2.5 nm with cubic (blende) structure are the building blocks of the pore walls, with pore sizes from 1.9 to 2.5 nm and a broader size distribution for samples with smaller pores. Textural measurements (N2 adsorption-desorption isotherms) confirm the presence of mesoporous ZnS with a narrow range of small pore sizes. The relatively low surface area of around 100 m2/g is attributed to some remaining organic molecules, which fill the smallest pores. Their presence, confirmed by IR spectroscopy, seems to be responsible for the high stability of the resulting mesoporous ZnS as well.

  11. Can orbital angle morphology distinguish dogs from wolves?

    PubMed

    Janssens, Luc; Spanoghe, Inge; Miller, Rebecca; Van Dongen, Stefan

    For more than a century, the orbital angle (OA) has been studied by many authors to distinguish dog skulls from their progenitor, the wolf. In early studies, the angle was reported to be different between dogs (49°-55°) and wolves (39°-46°). This clear difference was, however, questioned in a more recent Scandinavian study that showed some overlap. It is clear that in all studies several methodological issues were unexplored or unclear and that group sizes and the variety of breeds and wolf subspecies were small. Archaeological dog skulls had also not been studied. Our goal was to test larger and more varied groups and add archaeological samples, as they are an evolutionary stage between wolves and modern dogs. We also tested the influence of measuring methods, intra- and inter-reliability, angle symmetry, the influence of variations in skull position and the possibility of measuring and comparing this angle on 3D CT scan images. Our results indicate that there is about 50% overlap between the angle range in wolves and modern dogs. However, skulls with a very narrow orbital angle were only found in wolves and those with a very wide angle only in dogs. Archaeological dogs have a mean angle very close to that of the wolves. Symmetry is highest in wolves and lowest in archaeological dogs. The measuring method is very reliable, for both inter- and intra-reliability (0.99-0.97), and most skull position changes have no statistical influence on the angle measured. Three-dimensional CT scan images can be used to measure the OA, but the angles differ from direct measurement and cannot be used for comparison. The evolutionary change in dog skulls responsible for the wider OA compared to wolf skulls is mainly the lateralisation of the zygomatic process of the frontal bone. Our conclusion is that the orbital angle can be used as an additional morphological measuring method to discern wolves from recent and archaeological dogs. Angles above 60° are certainly from recent dogs. Angles

  12. Retrieval of Garstang's emission function from all-sky camera images

    NASA Astrophysics Data System (ADS)

    Kocifaj, Miroslav; Solano Lamphar, Héctor Antonio; Kundracik, František

    2015-10-01

    The emission function from ground-based light sources predetermines the skyglow features to a large extent, while most mathematical models that are used to predict the night sky brightness require the information on this function. The radiant intensity distribution on a clear sky is experimentally determined as a function of zenith angle using the theoretical approach published only recently in MNRAS, 439, 3405-3413. We have made the experiments in two localities in Slovakia and Mexico by means of two digital single lens reflex professional cameras operating with different lenses that limit the system's field-of-view to either 180° or 167°. The purpose of using two cameras was to identify variances between two different apertures. Images are taken at different distances from an artificial light source (a city) with the intention of determining the ratio of zenith radiance relative to horizontal irradiance. Subsequently, the information on the fraction of the light radiated directly into the upward hemisphere (F) is extracted. The results show that inexpensive devices can properly identify the upward emissions with adequate reliability as long as the clear sky radiance distribution is dominated by the largest ground-based light source. Highly unstable turbidity conditions can also make the parameter F difficult or even impossible to retrieve. Measurements at low elevation angles should be avoided due to a potentially parasitic effect of direct light emissions from luminaires surrounding the measuring site.

  13. Measuring high-resolution sky luminance distributions with a CCD camera.

    PubMed

    Tohsing, Korntip; Schrempf, Michael; Riechelmann, Stefan; Schilke, Holger; Seckmeyer, Gunther

    2013-03-10

    We describe how sky luminance can be derived from a newly developed hemispherical sky imager (HSI) system. The system contains a commercial compact charge coupled device (CCD) camera equipped with a fish-eye lens. The projection of the camera system has been found to be nearly equidistant. The luminance from the high dynamic range images has been calculated and then validated with luminance data measured by a CCD array spectroradiometer. The deviation between both datasets is less than 10% for cloudless and completely overcast skies, and differs by no more than 20% for all sky conditions. The global illuminance derived from the HSI pictures deviates by less than 5% and 20% under cloudless and cloudy skies, respectively, for solar zenith angles less than 80°. This system is therefore capable of measuring sky luminance with high spatial and temporal resolution: more than a million pixels, every 20 s.
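    The "nearly equidistant" projection the authors report makes the pixel-to-sky mapping linear: radial distance from the image centre is proportional to zenith angle. A minimal sketch of that mapping (the horizon radius in pixels is an illustrative value, not from the paper):

```python
def pixel_zenith_deg(r_px, r_horizon_px, fov_deg=180.0):
    """Zenith angle of a pixel in an equidistant ('f-theta') fisheye image.

    r_px:          radial distance of the pixel from the image centre
    r_horizon_px:  radius of the horizon circle (zenith angle fov/2)
    """
    return (fov_deg / 2.0) * (r_px / r_horizon_px)

zenith = pixel_zenith_deg(0, 1000)       # image centre looks at the zenith
mid = pixel_zenith_deg(500, 1000)        # halfway out -> 45 degrees
horizon = pixel_zenith_deg(1000, 1000)   # horizon circle -> 90 degrees
```

    Real lenses deviate slightly from this ideal, so a per-lens radial calibration would normally replace the linear factor.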

  14. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image guided surgery, there is a need for new NIR sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems, which are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom and measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate to create a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue with controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR sensitive consumer cameras for potential use in image guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  15. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  16. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  17. Validation of the Microsoft Kinect® camera system for measurement of lower extremity jump landing and squatting kinematics.

    PubMed

    Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher

    2016-01-01

    Cost effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluation of injury risk. Ten healthy participants completed three trials of a drop jump, overhead squat, and single leg squat task. Peak hip and knee kinematics were assessed using an 8 camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values were improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high injury risk populations.
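    The agreement statistic in this record is the intraclass correlation coefficient. The authors presumably used standard statistical software; a pure-Python ICC(2,1) (two-way random effects, single measures) sketch with invented data shows the computation:

```python
def icc_2_1(x, y):
    """Two-way random-effects, single-measures ICC(2,1) for two raters.

    x, y: paired measurements of the same subjects by two systems
    (e.g. Kinect vs. marker-based capture). A toy agreement check,
    not the authors' exact statistical pipeline.
    """
    n, k = len(x), 2
    grand = (sum(x) + sum(y)) / (n * k)
    row_means = [(a + b) / k for a, b in zip(x, y)]
    col_means = [sum(x) / n, sum(y) / n]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((v - grand) ** 2 for v in x + y)
    ss_err = ss_total - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # between-subjects mean square
    msc = ss_cols / (k - 1)             # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

perfect = icc_2_1([10.0, 20.0, 30.0, 40.0], [10.0, 20.0, 30.0, 40.0])
# A constant offset (systematic bias) lowers ICC(2,1), since the
# "agreement" form penalizes rater mean differences.
offset = icc_2_1([1.0, 2.0, 3.0, 4.0], [2.0, 3.0, 4.0, 5.0])
```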

  18. Integration of multispectral face recognition and multi-PTZ camera automated surveillance for security applications

    NASA Astrophysics Data System (ADS)

    Chen, Chung-Hao; Yao, Yi; Chang, Hong; Koschan, Andreas; Abidi, Mongi

    2013-06-01

    Due to increasing security concerns, a complete security system should consist of two major components, a computer-based face-recognition system and a real-time automated video surveillance system. A computer-based face-recognition system can be used in gate access control for identity authentication. In recent studies, multispectral imaging and fusion of multispectral narrow-band images in the visible spectrum have been employed and proven to enhance recognition performance over conventional broad-band images, especially when the illumination changes. Thus, we present an automated method that specifies the optimal spectral ranges under the given illumination. Experimental results verify the consistent performance of our algorithm via the observation that an identical set of spectral band images is selected under all tested conditions. Our discovery can be practically used for a new customized sensor design associated with given illuminations for improved face recognition performance over conventional broad-band images. In addition, once a person is authorized to enter a restricted area, we still need to continuously monitor his/her activities for the sake of security. Because pan-tilt-zoom (PTZ) cameras are capable of covering a panoramic area and maintaining high resolution imagery for real-time behavior understanding, research on automated surveillance systems with multiple PTZ cameras has become increasingly important. Most existing algorithms require prior knowledge of the intrinsic parameters of the PTZ camera to infer the relative positioning and orientation among multiple PTZ cameras. To overcome this limitation, we propose a novel mapping algorithm that derives the relative positioning and orientation between two PTZ cameras based on a unified polynomial model. This reduces the dependence on knowledge of the intrinsic parameters of the PTZ cameras and their relative positions. Experimental results demonstrate that our proposed algorithm presents substantially
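    The record's "unified polynomial model" fits polynomial coefficients to observed correspondences between two cameras' pan/tilt readings. As a hedged stand-in (degree, data, and function name are all illustrative, not from the paper), a pure-Python least-squares polynomial fit via the normal equations:

```python
def fit_poly(xs, ys, deg):
    """Least-squares polynomial fit via normal equations, in pure Python.

    A toy stand-in for a polynomial model mapping one PTZ camera's pan
    readings onto another's; returns [c0, c1, ...] for c0 + c1*x + ...
    """
    m = deg + 1
    # Normal equations A^T A c = A^T y for the Vandermonde matrix A
    ata = [[float(sum(x ** (i + j) for x in xs)) for j in range(m)]
           for i in range(m)]
    aty = [float(sum(y * x ** i for x, y in zip(xs, ys))) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(ata[r][col]))
        ata[col], ata[piv] = ata[piv], ata[col]
        aty[col], aty[piv] = aty[piv], aty[col]
        for r in range(col + 1, m):
            f = ata[r][col] / ata[col][col]
            for c in range(col, m):
                ata[r][c] -= f * ata[col][c]
            aty[r] -= f * aty[col]
    coeffs = [0.0] * m
    for r in range(m - 1, -1, -1):
        s = aty[r] - sum(ata[r][c] * coeffs[c] for c in range(r + 1, m))
        coeffs[r] = s / ata[r][r]
    return coeffs

# Recover an exact quadratic relation (pan_B = 1 + 2*pan_A^2) from
# four synthetic correspondences.
c = fit_poly([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 9.0, 19.0], 2)
```

    The appeal of such a mapping is exactly what the abstract claims: it is fitted from observed correspondences alone, so no intrinsic camera parameters are needed.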

  19. a Uav-Based Low-Cost Stereo Camera System for Archaeological Surveys - Experiences from Doliche (turkey)

    NASA Astrophysics Data System (ADS)

    Haubeck, K.; Prinz, T.

    2013-08-01

    The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - under the condition that an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - at the same time implicate the problem that two single aerial images do not always meet the required overlap to use them for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g., the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM directly depends on the UAV flight altitude.
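    The closing point - that a short stereo base makes DTM accuracy depend directly on flight altitude - follows from the first-order stereo error model, dZ = Z² · dp / (f · B). All numbers below are illustrative, not from the survey:

```python
def depth_error(alt_m, baseline_m, focal_px, match_err_px):
    """First-order stereo height error: dZ = Z^2 * dp / (f * B).

    alt_m:        flying height Z above terrain (meters)
    baseline_m:   stereo base B of the rig (meters) -- assumed 0.2 m
    focal_px:     focal length f in pixels -- assumed value
    match_err_px: image matching error dp (pixels)
    """
    return alt_m ** 2 * match_err_px / (focal_px * baseline_m)

# Doubling the flight altitude quadruples the expected height error.
e30 = depth_error(30.0, 0.2, 3000, 0.5)
e60 = depth_error(60.0, 0.2, 3000, 0.5)
```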

  20. Sky-radiance gradient measurements at narrow bands in the visible.

    PubMed

    Winter, E M; Metcalf, T W; Stotts, L B

    1995-07-01

    Accurate calibrated measurements of the radiance of the daytime sky were made in narrow bands in the visible portion of the spectrum. These measurements were made over several months and were tabulated in a sun-referenced coordinate system. The radiance as a function of wavelength at angles ranging from 5 to 90 deg was plotted. A best-fit inverse power law shows inverse-linear behavior of the radiance versus wavelength near the Sun (5 deg) and a slope approaching inverse fourth power far from the Sun (60 deg). This behavior fits a Mie-scattering interpretation near the Sun and a Rayleigh-scattering interpretation away from the Sun. The results are also compared with LOWTRAN models.
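    The reported slopes are power-law indices in log-log space; for a pure power law, two (wavelength, radiance) samples determine the index exactly. A sketch with synthetic Rayleigh-like data (the wavelengths are illustrative):

```python
import math

def power_law_index(l1, r1, l2, r2):
    """Index n in R ~ lambda^n from two (wavelength, radiance) samples."""
    return math.log(r2 / r1) / math.log(l2 / l1)

# Synthetic Rayleigh-like sky: radiance falling as lambda^-4 between
# 450 nm and 650 nm recovers an index of -4.
n = power_law_index(450.0, 1.0, 650.0, (450.0 / 650.0) ** 4)
```

    With real, noisy multi-band data one would instead fit the slope of log-radiance against log-wavelength by least squares.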

  1. Fast auto-acquisition tomography tilt series by using HD video camera in ultra-high voltage electron microscope.

    PubMed

    Nishi, Ryuji; Cao, Meng; Kanaji, Atsuko; Nishida, Tomoki; Yoshida, Kiyokazu; Isakozawa, Shigeto

    2014-11-01

    The ultra-high voltage electron microscope (UHVEM) H-3000, with the world's highest acceleration voltage of 3 MV, can observe remarkable three-dimensional microstructures of microns-thick samples [1]. Acquiring a tilt series for electron tomography is laborious work and thus an automatic technique is highly desired. We proposed the Auto-Focus system using image Sharpness (AFS) [2,3] for UHVEM tomography tilt series acquisition. In the method, five images with different defocus values are first acquired and their image sharpness is calculated. The sharpness values are then fitted to a quasi-Gaussian function to decide the best focus value [3]. Defocused images acquired by the slow-scan CCD (SS-CCD) camera (Hitachi F486BK) are of high quality, but one minute is taken for the acquisition of five defocused images. In this study, we introduce a high-definition video camera (HD video camera; Hamamatsu Photonics K. K. C9721S) for fast acquisition of images [4]. It is an analog camera, but the camera image is captured by a PC and the effective image resolution is 1280×1023 pixels. This resolution is lower than that of the SS-CCD camera of 4096×4096 pixels. However, the HD video camera captures one image in only 1/30 second. In exchange for the faster acquisition, the S/N of the images is low. To improve the S/N, 22 captured frames are integrated so that the sharpness of each image is sufficient to keep the fitting error low. As a countermeasure against the low resolution, we selected a large defocus step, typically five times the manual defocus step, to discriminate between the differently defocused images. By using the HD video camera for the autofocus process, the time consumed by each autofocus procedure was reduced to about six seconds. It took one second to correct the image position, and the total correction time was seven seconds, shorter by one order of magnitude than with the SS-CCD camera. When we used the SS-CCD camera for final image capture, it took 30 seconds to record one tilt image.
We can obtain a tilt
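    The quasi-Gaussian fit of the five sharpness values can be approximated in closed form: the log of a Gaussian is a parabola, so fitting a parabola to the log-sharpness of the peak sample and its two neighbours yields the vertex directly. This simplified stand-in (synthetic data, not the authors' exact fit) sketches the idea:

```python
import math

def best_focus(defoci, sharpness):
    """Best-focus estimate from an equally spaced focus series.

    Takes the peak sharpness sample and its two neighbours, fits a
    parabola to their log-sharpness, and returns the vertex position.
    """
    i = max(range(1, len(defoci) - 1), key=lambda k: sharpness[k])
    h = defoci[i] - defoci[i - 1]
    y0, y1, y2 = (math.log(sharpness[k]) for k in (i - 1, i, i + 1))
    # Vertex of the parabola through three equally spaced points
    return defoci[i] + 0.5 * h * (y0 - y2) / (y0 - 2 * y1 + y2)

# Synthetic Gaussian focus curve peaking at +0.3 of a defocus step
d = [-2.0, -1.0, 0.0, 1.0, 2.0]
s = [math.exp(-(x - 0.3) ** 2) for x in d]
focus = best_focus(d, s)
```

    Integrating 22 noisy HD frames before computing sharpness, as the authors do, keeps the three log-sharpness values stable enough for this kind of fit.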

  2. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  3. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
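    The trade-off students explore - smaller pinholes sharpen geometric blur but worsen diffraction - has a classic optimum. A sketch using Rayleigh's rule of thumb, d ≈ 1.9 √(f λ) (the 50 mm pinhole-to-sensor distance and 550 nm wavelength are illustrative choices):

```python
import math

def optimal_pinhole_d(focal_m, wavelength_m):
    """Rayleigh's rule-of-thumb diameter for the sharpest pinhole image.

    focal_m:      pinhole-to-sensor distance (meters)
    wavelength_m: light wavelength (meters)
    """
    return 1.9 * math.sqrt(focal_m * wavelength_m)

# ~0.3 mm for a 50 mm "focal length" in green light
d = optimal_pinhole_d(0.05, 550e-9)
```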

  4. A new Brewster angle microscope

    NASA Astrophysics Data System (ADS)

    Lheveder, C.; Hénon, S.; Mercier, R.; Tissot, G.; Fournet, P.; Meunier, J.

    1998-03-01

    We present a new Brewster angle microscope for the study of very thin layers, as thin as monolayers, using a custom-made objective. This objective avoids the drawbacks of the models existing at the present time. Its optical axis is perpendicular to the studied layer and consequently gives an image in focus over the whole plane, in contrast to the existing models, which give images in focus only along a narrow strip. The objective allows one to obtain images with good resolution (better than 1 μm) without scanning the surface, at the video frequency, allowing dynamic studies. A large frontal distance associated with a very large aperture is obtained by using a large lens at the entrance of the objective.
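    Brewster angle microscopy illuminates the interface at the angle where p-polarized reflection from the bare surface vanishes, tan θ_B = n2/n1, so only the monolayer reflects. For an air-water interface:

```python
import math

def brewster_deg(n1, n2):
    """Brewster angle (degrees): tan(theta_B) = n2 / n1."""
    return math.degrees(math.atan(n2 / n1))

# Air (n = 1.0) over water (n = 1.33): about 53 degrees
theta_water = brewster_deg(1.0, 1.33)
```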

  5. Mitigation of Angle Tracking Errors Due to Color Dependent Centroid Shifts in SIM-Lite

    NASA Technical Reports Server (NTRS)

    Nemati, Bijan; An, Xin; Goullioud, Renaud; Shao, Michael; Shen, Tsae-Pyng; Wehmeier, Udo J.; Weilert, Mark A.; Wang, Xu; Werne, Thomas A.; Wu, Janet P.; hide

    2010-01-01

    The SIM-Lite astrometric interferometer will search for Earth-size planets in the habitable zones of nearby stars. In this search the interferometer will monitor the astrometric position of candidate stars relative to nearby reference stars over the course of a 5 year mission. The elemental measurement is the angle between a target star and a reference star. This is a two-step process, in which the interferometer will each time need to use its controllable optics to align the starlight in the two arms with each other and with the metrology beams. The sensor for this alignment is an angle tracking CCD camera. Various constraints in the design of the camera subject it to systematic alignment errors when observing a star of one spectrum compared with a star of a different spectrum. This effect is called a Color Dependent Centroid Shift (CDCS) and has been studied extensively with SIM-Lite's SCDU testbed. Here we describe results from the simulation and testing of this error in the SCDU testbed, as well as effective ways that it can be reduced to acceptable levels.
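    The angle-tracking measurement rests on sub-pixel centroiding of a star image. A minimal 1-D sketch (invented pixel profiles) shows why a spectrum-dependent asymmetry in the point-spread function shifts the measured centroid, which is the essence of CDCS:

```python
def centroid(pixels):
    """Intensity-weighted centroid of a 1-D spot profile, in pixel units."""
    total = float(sum(pixels))
    return sum(i * v for i, v in enumerate(pixels)) / total

sym = centroid([1, 4, 9, 4, 1])    # symmetric PSF: centroid at the centre
skew = centroid([1, 4, 9, 5, 2])   # asymmetric PSF (e.g. chromatic
                                   # aberration): centroid shifts right
```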

  6. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia. The purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors. The present method is a refined version of the calibration method developed to solve the Leonid calibration problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used).
To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to

  7. Single-camera displacement field correlation method for centrosymmetric 3D dynamic deformation measurement

    NASA Astrophysics Data System (ADS)

    Zhao, Jiaye; Wen, Huihui; Liu, Zhanwei; Rong, Jili; Xie, Huimin

    2018-05-01

    Three-dimensional (3D) deformation measurements are a key issue in experimental mechanics. In this paper, a displacement field correlation (DFC) method to measure centrosymmetric 3D dynamic deformation using a single camera is proposed for the first time. When 3D deformation information is collected by a camera at a tilted angle, the measured displacement fields are coupling fields of both the in-plane and out-of-plane displacements. The features of the coupling field are analysed in detail, and a decoupling algorithm based on DFC is proposed. The 3D deformation to be measured can be inverted and reconstructed using only one coupling field. The accuracy of this method was validated by a high-speed impact experiment that simulated an underwater explosion. The experimental results show that the approach proposed in this paper can be used in 3D deformation measurements with higher sensitivity and accuracy, and is especially suitable for high-speed centrosymmetric deformation. In addition, this method avoids the non-synchronisation problem associated with using a pair of high-speed cameras, as is common in 3D dynamic measurements.

  8. Increased horizontal viewing zone angle of a hologram by resolution redistribution of a spatial light modulator.

    PubMed

    Takaki, Yasuhiro; Hayashi, Yuki

    2008-07-01

    The narrow viewing zone angle is one of the problems associated with electronic holography. We propose a technique that enables the ratio of horizontal and vertical resolutions of a spatial light modulator (SLM) to be altered. This technique increases the horizontal resolution of an SLM several times, so that the horizontal viewing zone angle is also increased several times. An SLM illuminated by a slanted point light source array is imaged by a 4f imaging system in which a horizontal slit is located on the Fourier plane. We show that the horizontal resolution was increased four times and that the horizontal viewing zone angle was increased approximately four times.
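    The link between SLM resolution and viewing zone follows from the grating equation: θ = 2 arcsin(λ / 2p), where p is the pixel pitch. A sketch (the 532 nm wavelength and 8 μm pitch are assumptions, not values from the paper) shows why quadrupling the horizontal resolution, i.e. quartering the effective pitch, roughly quadruples the angle:

```python
import math

def viewing_zone_deg(pixel_pitch_m, wavelength_m=532e-9):
    """Viewing-zone angle of an SLM hologram: theta = 2*asin(lambda/(2p))."""
    return 2 * math.degrees(math.asin(wavelength_m / (2 * pixel_pitch_m)))

base = viewing_zone_deg(8e-6)      # native pitch: ~3.8 degrees
boosted = viewing_zone_deg(2e-6)   # 4x horizontal resolution: ~15.3 degrees
```

    At small angles arcsin is nearly linear, which is why the measured increase is "approximately" rather than exactly four times.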

  9. Evaluation of anterior chamber angle under dark and light conditions in angle closure glaucoma: An anterior segment OCT study.

    PubMed

    Masoodi, Habibeh; Jafarzadehpur, Ebrahim; Esmaeili, Alireza; Abolbashari, Fereshteh; Ahmadi Hosseini, Seyed Mahdi

    2014-08-01

    To evaluate changes of nasal and temporal anterior chamber angle (ACA) in subjects with angle closure glaucoma using Spectralis AS-OCT (SAS-OCT) under dark and light conditions. Based on dark-room gonioscopy, 24 subjects with open angles and 86 with narrow angles participated in this study. The nasal and temporal angle opening distance at 500 μm anterior to the scleral spur (AOD500) and the nasal and temporal ACA were measured using SAS-OCT in light and dark conditions. In both groups, ACA and AOD500 in nasal and temporal quadrants were significantly greater in light compared to dark (all with p=0.000). The AOD500 and ACA were significantly higher nasally than temporally under both measured conditions in the 2 groups, except the ACA and AOD500 of the normal group measured in light. The difference between nasal and temporal in dark (29.07 ± 65.71 μm for AOD500 and 5.7 ± 4.07° for ACA) was greater than under the light condition (24.86 ± 79.85 μm for AOD500 and 2.09 ± 7.21° for ACA), but the difference was only significant for ACA (p=0.000). The correlation analysis showed a negative correlation between AOD500 and pupil diameter in temporal and nasal quadrants (both with p=0.000). While the temporal AOD500 difference correlated with spherical equivalent and with temporal and nasal gonioscopy, the nasal AOD500 difference correlated with IOP and with temporal and nasal gonioscopy. Clinically important changes in ACA structure could be detected with SAS-OCT in nasal and temporal quadrants under different illumination intensities. The results could help in improving examination conditions for a better and more accurate assessment of individuals with angle closure glaucoma. Copyright © 2014 British Contact Lens Association. Published by Elsevier Ltd. All rights reserved.

  10. Spectral methods to detect cometary minerals with OSIRIS on board Rosetta

    NASA Astrophysics Data System (ADS)

    Oklay, N.; Vincent, J.-B.; Sierks, H.

    2013-09-01

    Comet 67P/Churyumov-Gerasimenko is going to be observed by the OSIRIS scientific imager (Keller et al. 2007) on board ESA's spacecraft Rosetta with a combination of 12 filters in the wavelength range of 250-1000 nm for the narrow angle camera (NAC) and 14 filters in the wavelength range of 240-720 nm for the wide angle camera (WAC). NAC filters are suited to surface composition studies, while WAC filters are designed for gas and radical emission studies. In order to investigate the composition of the comet surface from the observed images, we need to understand how to detect different minerals and which compositional information can be derived from the NAC filters. Therefore, the most common cometary silicates, e.g., enstatite and forsterite, are investigated together with two hydrated silicates (serpentine and smectite) for the determination of the spectral methods. Laboratory data of those selected minerals are collected from the RELAB database (http://www.planetary.brown.edu/relabdocs/relab.htm) and absolute spectra of the minerals observed by OSIRIS NAC filters are calculated. Due to the limited spectral range of the laboratory data, the Far-UV and Neutral density filters of the NAC are excluded from this analysis. The NAC filters considered in this study are represented in Table 1 and the number of collected laboratory data is presented in Table 2. Detection and separation of the minerals will allow us to study not only the surface composition but also composition changes observed due to cometary activity during the mission.
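    Computing the "absolute spectra of the minerals observed by OSIRIS NAC filters" amounts to a transmission-weighted average of the laboratory reflectance over each filter bandpass. A toy sketch (the filter curve and spectrum below are invented, not the real OSIRIS or RELAB data):

```python
def band_average(spectrum, band):
    """Transmission-weighted mean reflectance seen through one filter.

    spectrum: {wavelength_nm: reflectance}
    band:     {wavelength_nm: filter transmission}
    Missing spectrum samples are treated as zero reflectance.
    """
    num = sum(spectrum.get(w, 0.0) * t for w, t in band.items())
    return num / sum(band.values())

# A spectrally flat 10% reflector looks the same through any filter.
flat = {w: 0.10 for w in range(600, 701, 10)}
filt = {640: 0.2, 650: 1.0, 660: 0.2}  # toy narrow-band filter
r = band_average(flat, filt)
```

    Mineral discrimination then reduces to comparing these band-averaged values (or their ratios) across the 12 NAC filters.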

  11. Matching the best viewing angle in depth cameras for biomass estimation based on poplar seedling geometry.

    PubMed

    Andújar, Dionisio; Fernández-Quintanilla, César; Dorado, José

    2015-06-04

    In energy crops for biomass production, a proper plant structure is important to optimize wood yields. A precise crop characterization in early stages may contribute to the choice of proper cropping techniques. This study assesses the potential of the Microsoft Kinect for Windows v.1 sensor to determine the best viewing angle of the sensor to estimate the plant biomass based on poplar seedling geometry. Kinect Fusion algorithms were used to generate a 3D point cloud from the depth video stream. The sensor was mounted in different positions facing the tree in order to obtain depth (RGB-D) images from different angles. Individuals of two different ages, i.e., one month and one year old, were scanned. Four different viewing angles were compared: top view (0°), 45° downwards view, front view (90°) and ground upwards view (-45°). The ground-truth used to validate the sensor readings consisted of a destructive sampling in which the height, leaf area and biomass (dry weight basis) were measured in each individual plant. The depth image models agreed well with 45°, 90° and -45° measurements in one-year-old poplar trees. Good correlations (0.88 to 0.92) between dry biomass and the area measured with the Kinect were found. In addition, plant height was accurately estimated with an error of a few centimeters. The comparison between different viewing angles revealed that top views showed poorer results because the top leaves occluded the rest of the tree. However, the other views led to good results. Conversely, small poplars showed better correlations with actual parameters from the top view (0°). Therefore, although the Microsoft Kinect for Windows v.1 sensor provides good opportunities for biomass estimation, the viewing angle must be chosen taking into account the developmental stage of the crop and the desired parameters. The results of this study indicate that Kinect is a promising tool for a rapid canopy characterization, i.e., for estimating crop biomass

  12. The Narrow-Line Region of Narrow-Line Seyfert 1 Galaxies

    NASA Astrophysics Data System (ADS)

    Rodríguez-Ardila, A.; Binette, Luc; Pastoriza, Miriani G.; Donzelli, Carlos J.

    2000-08-01

    This work studies the optical emission-line properties and physical conditions of the narrow-line region (NLR) of seven narrow-line Seyfert 1 galaxies (NLS1's) for which high signal-to-noise ratio spectroscopic observations were available. The resolution is 340 km s-1 (at Hα) over the wavelength interval 3700-9500 Å, enabling us to separate the broad and narrow components of the permitted emission lines. Our results show that the flux carried out by the narrow component of Hβ is, on average, 50% of the total line flux. As a result, the [O III] λ5007/Hβ ratio emitted in the NLR varies from 1 to 5, instead of the universally adopted value of 10. This has strong implications for the required spectral energy distribution that ionizes the NLR gas. Photoionization models that consider a NLR composed of a combination of matter-bounded and ionization-bounded clouds are successful at explaining the low [O III] λ5007/Hβ ratio and the weakness of low-ionization lines of NLS1's. Variation of the relative proportion of these two types of clouds nicely reproduces the dispersion of narrow-line ratios found among the NLS1 sample. Assuming similar physical model parameters for both NLS1's and the normal Seyfert 1 galaxy NGC 5548, we show that the observed differences in emission-line ratios between these two groups of galaxies can be explained, to a first approximation, in terms of the shape of the input ionizing continuum. Narrow emission-line ratios of NLS1's are better reproduced by a steep power-law continuum in the EUV-soft X-ray region, with spectral index α~-2. Flatter spectral indices (α~-1.5) match the observed line ratios of NGC 5548 but are unable to provide a good match to the NLS1 ratios. This result is consistent with ROSAT observations of NLS1's, which show that these objects are characterized by steeper power-law indices than those of Seyfert 1 galaxies with strong broad optical lines. Based on observations made at CASLEO. Complejo Astronómico El Leoncito

  13. An evaluation of Winnipeg's photo enforcement safety program: results of time series analyses and an intersection camera experiment.

    PubMed

    Vanlaar, Ward; Robertson, Robyn; Marcoux, Kyla

    2014-01-01

    The objective of this study was to evaluate the impact of Winnipeg's photo enforcement safety program on speeding, i.e., "speed on green", and red-light running behavior at intersections as well as on crashes resulting from these behaviors. ARIMA time series analyses regarding crashes related to red-light running (right-angle crashes and rear-end crashes) and crashes related to speeding (injury crashes and property damage only crashes) occurring at intersections were conducted using monthly crash counts from 1994 to 2008. A quasi-experimental intersection camera experiment was also conducted using roadside data on speeding and red-light running behavior at intersections. These data were analyzed using logistic regression analysis. The time series analyses showed that for crashes related to red-light running, there had been a 46% decrease in right-angle crashes at camera intersections, but that there had also been an initial 42% increase in rear-end crashes. For crashes related to speeding, analyses revealed that the installation of cameras was not associated with increases or decreases in crashes. Results of the intersection camera experiment show that there were significantly fewer red light running violations at intersections after installation of cameras and that photo enforcement had a protective effect on speeding behavior at intersections. However, the data also suggest photo enforcement may be less effective in preventing serious speeding violations at intersections. Overall, Winnipeg's photo enforcement safety program had a positive net effect on traffic safety. Results from both the ARIMA time series and the quasi-experimental design corroborate one another. However, the protective effect of photo enforcement is not equally pronounced across different conditions so further monitoring is required to improve the delivery of this measure. Results from this study as well as limitations are discussed. Copyright © 2013 Elsevier Ltd. All rights reserved.
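
    The intervention analysis described above can be sketched in simplified form. The study fitted ARIMA models; the snippet below instead uses a plain log-linear regression with a step regressor (a deliberate simplification), and all numbers are synthetic.

```python
import numpy as np

def intervention_effect(counts, t0):
    """Percent change in monthly counts after an intervention at month
    t0, from a log-linear fit with intercept, trend, and step terms.
    (A simplification of the ARIMA intervention models in the study.)"""
    n = len(counts)
    t = np.arange(n, dtype=float)
    step = (t >= t0).astype(float)              # 0 pre-camera, 1 post
    X = np.column_stack([np.ones(n), t, step])
    y = np.log(np.asarray(counts, dtype=float))
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return (np.exp(beta[2]) - 1.0) * 100.0      # % change from the step

# Synthetic series: right-angle crashes drop ~46% after month 60.
rng = np.random.default_rng(0)
counts = np.concatenate([rng.poisson(20.0, 60),
                         rng.poisson(20.0 * 0.54, 60)]) + 1
eff = intervention_effect(counts, 60)
print(round(eff, 1))    # a negative percentage, near the injected drop
```

    A real replication would model the count nature of the data and serial correlation directly, as the ARIMA approach in the study does.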

  14. [Study on the Spectral Characteristics of the Narrow-Band Filter in SHS].

    PubMed

    Luo, Hai-yan; Shi, Hai-liang; Li, Zhi-wei; Li, Shuang; Xiong, Wei; Hong, Jin

    2015-04-01

    The spectral response of a spatial heterodyne spectrometer (SHS) is determined by the spectral properties of its narrow-band filter. As discussed in previous studies, the symmetric heterodyned interferogram of high-frequency waves modulated by the SHS, together with undersampling, leads to spectral aliasing, in which true and ghost spectra overlap. Because the coated narrow-band filter deviates from its theoretical specification, a broadened spectral response and a shift of the center wavelength appear, and operating at the theoretical Littrow wavelength therefore reduces the effective wavelength range of the SHS. According to the measured filter curve, a new wavenumber of zero spatial frequency can be set with a tunable laser, which makes it straightforward to reduce the spectral aliasing distortion. The results show that the effective bandwidth is exploited to the maximum extent by adjusting the grating rotation angle to change the Littrow wavelength of the basic frequency; the usable spectral region increased from the original 12.9 nm to 14.9 nm.

  15. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
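
    The twice-per-rev error described above appears as a periodic ridge in the auto-covariance of the attitude error series. A minimal illustration with synthetic numbers (the 90-minute orbit period, amplitudes, and sampling rate are assumptions for the sketch, not GRACE values):

```python
import numpy as np

def autocov(x):
    """Biased sample auto-covariance of a series (zero-meaned first)."""
    x = x - x.mean()
    n = len(x)
    return np.correlate(x, x, mode="full")[n - 1:] / n

# Synthetic inter-camera angle error (arcsec): a twice-per-orbit
# oscillation with once-per-day amplitude modulation, plus noise.
rng = np.random.default_rng(1)
t = np.arange(3 * 24 * 60)          # 3 days, sampled once per minute
orbit = 90.0                        # assumed orbit period in minutes
sig = (3.0 * (1 + 0.5 * np.cos(2 * np.pi * t / 1440))
       * np.sin(2 * np.pi * 2 * t / orbit)) \
      + 0.5 * rng.standard_normal(t.size)
c = autocov(sig)
# The first auto-covariance peak sits at the error's period, orbit/2:
lag = int(np.argmax(c[10:120])) + 10
print(lag)                          # minutes; close to orbit/2 = 45
```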

  16. Space infrared telescope facility wide field and diffraction limited array camera (IRAC)

    NASA Technical Reports Server (NTRS)

    Fazio, Giovanni G.

    1988-01-01

    The wide-field and diffraction-limited array camera (IRAC) is capable of two-dimensional photometry in either a wide-field or diffraction-limited mode over the wavelength range from 2 to 30 microns, with a possible extension to 120 microns. A low-doped indium antimonide detector was developed for 1.8 to 5.0 microns, detectors were tested and optimized for the entire 1.8 to 30 micron range, beamsplitters were developed and tested for the 1.8 to 30 micron range, and tradeoff studies of the camera's optical system were performed. Data are presented on the performance of InSb, Si:In, Si:Ga, and Si:Sb array detectors bump-bonded to a multiplexed CMOS readout chip of the source-follower type at SIRTF operating backgrounds (≤ 1 × 10^8 ph/cm^2/s) and temperatures (4 to 12 K). Some results at higher temperatures are also presented for comparison to the SIRTF temperature results. Data are also presented on the performance of IRAC beamsplitters at room temperature at both 0 and 45 deg angle of incidence, and on the performance of the all-reflecting optical system baselined for the camera.

  17. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired, and these values are used to control the pan/tilt system through a modified PID-based recursive controller. Also, by using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through a triangulation method. Using this calculated vertical distance and the pan and tilt angles, the target's real position in the world space can be acquired, and from it the height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person can be identified with these extracted height and stride parameters.
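
    The distance-from-disparity and pan/tilt geometry described above can be sketched as follows. All parameter names and values are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def target_position(disparity_px, focal_px, baseline_m, pan_deg, tilt_deg):
    """Sketch of the geometry: range from stereo disparity, then a
    world-space position from the platform's pan/tilt angles.
    (Parameter names and the simple spherical model are assumptions.)"""
    rng_m = focal_px * baseline_m / disparity_px      # stereo triangulation
    pan, tilt = np.radians([pan_deg, tilt_deg])
    x = rng_m * np.cos(tilt) * np.sin(pan)
    y = rng_m * np.cos(tilt) * np.cos(pan)
    z = rng_m * np.sin(tilt)                          # vertical offset
    return np.array([x, y, z])

# Made-up numbers: aim the pan/tilt at the head, then at the feet.
head = target_position(disparity_px=24, focal_px=800, baseline_m=0.12,
                       pan_deg=10, tilt_deg=8)
feet = target_position(disparity_px=25, focal_px=800, baseline_m=0.12,
                       pan_deg=10, tilt_deg=-18)
height = head[2] - feet[2]
print(round(height, 2))   # estimated height in metres, ~1.74 here
```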

  18. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  19. [Research Award providing funds for a tracking video camera]

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3 m square arena as they navigate singly indoors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is greater than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. This raises many questions for future research that we are now pursuing.

  20. Modified Kelvin Equations for Capillary Condensation in Narrow and Wide Grooves

    NASA Astrophysics Data System (ADS)

    Malijevský, Alexandr; Parry, Andrew O.

    2018-03-01

    We consider the location and order of capillary condensation transitions occurring in deep grooves of width L and depth D. For walls that are completely wet by liquid (contact angle θ = 0) the transition is continuous and its location is not sensitive to the depth of the groove. However, for walls that are partially wet by liquid, where the transition is first order, we show that the pressure at which it occurs is determined by a modified Kelvin equation characterized by an edge contact angle θE describing the shape of the meniscus formed at the top of the groove. The dependence of θE on the groove depth D relies, in turn, on whether corner menisci are formed at the bottom of the groove in the low density gaslike phase. While for macroscopically wide grooves these are always present when θ < 45°, we argue that their formation is inhibited in narrow grooves. This has a number of implications including that the local pinning of the meniscus and location of the condensation transition is different depending on whether the contact angle is greater or less than a universal value θ* ≈ 31°. Our arguments are supported by detailed microscopic density functional theory calculations that show that the modified Kelvin equation remains highly accurate even when L and D are of the order of tens of molecular diameters.
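
    For reference, the classical macroscopic Kelvin equation for condensation in a slit of width L, and its modification with the edge contact angle θE, can be written as below. This is the standard textbook form, offered as a sketch rather than the paper's exact expressions:

```latex
% Classical Kelvin equation for capillary condensation in a slit of
% width L: gamma is the surface tension, theta the contact angle, and
% rho_l, rho_g the coexisting liquid and gas densities.
\delta\mu \;=\; \mu_{\mathrm{sat}} - \mu_{\mathrm{cc}}
          \;\approx\; \frac{2\gamma\cos\theta}{(\rho_l-\rho_g)\,L}
% Modified Kelvin equation for a finite-depth groove: the equilibrium
% contact angle is replaced by the edge contact angle theta_E of the
% meniscus pinned at the groove mouth.
\delta\mu \;\approx\; \frac{2\gamma\cos\theta_E}{(\rho_l-\rho_g)\,L}
```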

  1. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
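
    Sensor dynamic range in dB is conventionally 20·log10 of the linear irradiance ratio; converting the figures quoted above is a one-liner (assuming the paper uses that convention):

```python
import math

def db_to_ratio(db):
    """Dynamic range in dB -> linear irradiance ratio (20*log10 rule)."""
    return 10 ** (db / 20)

print(round(db_to_ratio(51.3)))    # CMOS sensor alone: ~367:1
print(round(db_to_ratio(82.06)))   # CAOS-CMOS camera: ~12700:1
```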

  2. Development of Camera Model and Geometric Calibration/validation of Xsat IRIS Imagery

    NASA Astrophysics Data System (ADS)

    Kwoh, L. K.; Huang, X.; Tan, W. J.

    2012-07-01

    XSAT, launched on 20 April 2011, is the first micro-satellite designed and built in Singapore. It orbits the Earth at an altitude of 822 km in a sun-synchronous orbit. The satellite carries a multispectral camera IRIS with three spectral bands - 0.52~0.60 μm for Green, 0.63~0.69 μm for Red and 0.76~0.89 μm for NIR at 12 m resolution. In the design of the IRIS camera, the three bands are acquired by three lines of CCDs (NIR, Red and Green). These CCDs are physically separated in the focal plane and their first pixels are not absolutely aligned. The micro-satellite platform is also not stable enough to allow co-registration of the 3 bands with a simple linear transformation. In the camera model developed, this platform instability was compensated with 3rd- to 4th-order polynomials for the satellite's roll, pitch and yaw attitude angles. With the camera model, camera parameters such as the band-to-band separations, the alignment of the CCDs relative to each other, and the focal length of the camera can be validated or calibrated. The results of calibration with more than 20 images showed that the band-to-band along-track separations agreed well with the pre-flight values provided by the vendor (0.093° and 0.046° for the NIR vs. red and the green vs. red CCDs respectively). The cross-track alignments were 0.05 pixel and 5.9 pixel for the NIR vs. red and green vs. red CCDs respectively. The focal length was found to be shorter by about 0.8%. This was attributed to the lower temperature at which XSAT is currently operating. With the calibrated parameters and the camera model, a geometric level 1 multispectral image with RPCs can be generated and, if required, orthorectified imagery can also be produced.

  3. Maximum likelihood estimation in calibrating a stereo camera setup.

    PubMed

    Muijtjens, A M; Roos, J M; Arts, T; Hasman, A

    1999-02-01

    Motion and deformation of the cardiac wall may be measured by following the positions of implanted radiopaque markers in three dimensions, using two x-ray cameras simultaneously. Typically, calibration of the position measurement system is obtained by registration of the images of a calibration object containing 10-20 radiopaque markers at known positions. Unfortunately, an accidental change of the position of a camera after calibration requires complete recalibration. Alternatively, redundant information in the measured image positions of stereo pairs can be used for calibration. Thus, a separate calibration procedure can be avoided. In the current study a model is developed that describes the geometry of the camera setup by five dimensionless parameters. Maximum likelihood (ML) estimates of these parameters were obtained in an error analysis. It is shown that the ML estimates can be found by application of a nonlinear least squares procedure. Compared to the standard unweighted least squares procedure, the ML method resulted in more accurate estimates without noticeable bias. The accuracy of the ML method was investigated in relation to the object aperture. The reconstruction problem appeared well conditioned as long as the object aperture is larger than 0.1 rad. The angle between the two viewing directions appeared to be the parameter most likely to cause major inaccuracies in the reconstruction of the 3-D positions of the markers. Hence, attempts to improve the robustness of the method should primarily focus on reduction of the error in this parameter.
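
    The reduction of the ML estimate to a nonlinear least squares problem can be illustrated with a generic Gauss-Newton loop. The toy residual below is not the paper's five-parameter camera model; it only shows the shape of the solver:

```python
import numpy as np

def gauss_newton(residual, jac, p0, iters=20):
    """Generic Gauss-Newton iteration for nonlinear least squares,
    the kind of solver the calibration above reduces to. (Illustrative;
    the paper's camera model is not reproduced here.)"""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r, J = residual(p), jac(p)
        p = p - np.linalg.solve(J.T @ J, J.T @ r)   # normal equations
    return p

# Toy problem: fit y = a*exp(b*x) to noiseless data with a=2, b=-1.5.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)
res = lambda p: p[0] * np.exp(p[1] * x) - y
jac = lambda p: np.column_stack([np.exp(p[1] * x),
                                 p[0] * x * np.exp(p[1] * x)])
a, b = gauss_newton(res, jac, p0=[1.0, 0.0])
print(round(a, 3), round(b, 3))   # recovers 2.0 -1.5
```

    In practice a damped variant (Levenberg-Marquardt) is the safer default for poorly conditioned calibration problems.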

  4. Angle of sky light polarization derived from digital images of the sky under various conditions.

    PubMed

    Zhang, Wenjing; Cao, Yu; Zhang, Xuanzhe; Yang, Yi; Ning, Yu

    2017-01-20

    Skylight polarization is used for navigation by some birds and insects. Skylight polarization also has potential for human navigation applications. Its advantages include relative immunity from interference and the absence of error accumulation over time. However, there are presently few examples of practical applications for polarization navigation technology. The main reason is its weak robustness during cloudy weather conditions. In this paper, the real-time measurement of the sky light polarization pattern across the sky has been achieved with a wide field of view camera. The images were processed under a new reference coordinate system to clearly display the symmetrical distribution of angle of polarization with respect to the solar meridian. A new algorithm for the extraction of the image axis of symmetry is proposed, in which the real-time azimuth angle between the camera and the solar meridian is accurately calculated. Our experimental results under different weather conditions show that polarization navigation has high accuracy, is strongly robust, and performs well during fog and haze, clouds, and strong sunlight.
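
    The angle-of-polarization computation underlying such measurements is commonly done from Stokes parameters estimated with four polarizer orientations. A minimal sketch, not the paper's own processing chain or reference frame:

```python
import numpy as np

def angle_of_polarization(I0, I45, I90, I135):
    """Angle of polarization from intensities behind polarizers at
    0/45/90/135 degrees (standard Stokes-parameter estimate)."""
    Q = I0 - I90
    U = I45 - I135
    return 0.5 * np.arctan2(U, Q)     # radians

# Synthetic fully polarized pixel with AoP = 30 degrees (Malus's law):
aop = np.radians(30.0)
angles = np.radians([0.0, 45.0, 90.0, 135.0])
I = 0.5 * (1 + np.cos(2 * (angles - aop)))
aop_est = np.degrees(angle_of_polarization(*I))
print(round(aop_est, 1))   # recovers 30.0
```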

  5. Narrow groove plasmonic nano-gratings for surface plasmon resonance sensing

    PubMed Central

    Dhawan, Anuj; Canva, Michael; Vo-Dinh, Tuan

    2011-01-01

    We present a novel surface plasmon resonance (SPR) configuration based on narrow groove (sub-15 nm) plasmonic nano-gratings such that normally incident radiation can be coupled into surface plasmons without the use of prism-coupling based total internal reflection, as in the classical Kretschmann configuration. This eliminates the angular dependence requirements of SPR-based sensing and allows development of robust miniaturized SPR sensors. Simulations based on Rigorous Coupled Wave Analysis (RCWA) were carried out to numerically calculate the reflectance - from different gold and silver nano-grating structures - as a function of the localized refractive index of the media around the SPR nano-gratings as well as the incident radiation wavelength and angle of incidence. Our calculations indicate substantially higher differential reflectance signals, on localized change of refractive index in the narrow groove plasmonic gratings, as compared to those obtained from conventional SPR-based sensing systems. Furthermore, these calculations allow determination of the optimal nano-grating geometric parameters - i. e. nanoline periodicity, spacing between the nanolines, as well as the height of the nanolines in the nano-grating - for highest sensitivity to localized change of refractive index, as would occur due to binding of a biomolecule target to a functionalized nano-grating surface. PMID:21263620

  6. Test Rover at JPL During Preparation for Mars Rover Low-Angle Selfie

    NASA Image and Video Library

    2015-08-19

    This view of a test rover at NASA's Jet Propulsion Laboratory, Pasadena, California, results from advance testing of arm positions and camera pointings for taking a low-angle self-portrait of NASA's Curiosity Mars rover. This rehearsal in California led to a dramatic Aug. 5, 2015, selfie of Curiosity, online at PIA19807. Curiosity's arm-mounted Mars Hand Lens Imager (MAHLI) camera took 92 component images that were assembled into that mosaic. The rover team positioned the camera lower in relation to the rover body than for any previous full self-portrait of Curiosity. This practice version was taken at JPL's Mars Yard in July 2013, using the Vehicle System Test Bed (VSTB) rover, which has a test copy of MAHLI on its robotic arm. MAHLI was built by Malin Space Science Systems, San Diego. JPL, a division of the California Institute of Technology in Pasadena, manages the Mars Science Laboratory Project for the NASA Science Mission Directorate, Washington. JPL designed and built the project's Curiosity rover. http://photojournal.jpl.nasa.gov/catalog/PIA19810

  7. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

    Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three-dimensional (3D) position of a point that is seen by multiple cameras is a common problem. Camera placement, camera calibration, corresponding pixel determination between the images of different cameras and also the triangulation algorithm affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. Firstly, the 3D trajectories of a flare and of the aircraft that dispenses the flare are generated with simple motion models. Then, we place two virtual ideal pinhole camera models at different locations. Assuming the cameras track the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and the 3D position of the flare, image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we have used two sources of error. The first models the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second noise source models the imperfections of the corresponding pixel determination of the flare between the two cameras. Finally, the 3D position of the flare is estimated from the corresponding pixel indices, the view vectors and the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
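
    The triangulation step can be sketched with the standard closest-point (midpoint) method for two view rays; this is one common choice, not necessarily the authors' exact algorithm:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Midpoint triangulation: find ray parameters t1, t2 minimizing
    |p1 + t1*d1 - (p2 + t2*d2)| and return the midpoint of the two
    closest points. p: camera positions, d: view-ray directions."""
    d1 = d1 / np.linalg.norm(d1)
    d2 = d2 / np.linalg.norm(d2)
    A = np.column_stack([d1, -d2])
    t = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    q1 = p1 + t[0] * d1
    q2 = p2 + t[1] * d2
    return 0.5 * (q1 + q2)

# Noiseless check with made-up geometry: rays meet exactly at the flare.
flare = np.array([100.0, 50.0, 300.0])
c1 = np.array([0.0, 0.0, 0.0])
c2 = np.array([500.0, 0.0, 0.0])
est = triangulate(c1, flare - c1, c2, flare - c2)
print(np.allclose(est, flare))   # True
```

    With noisy view vectors, the two rays no longer intersect and the midpoint residual itself is a useful diagnostic of the placement-dependent error the paper studies.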

  8. Integrated flexible handheld probe for imaging and evaluation of iridocorneal angle

    NASA Astrophysics Data System (ADS)

    Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin

    2015-01-01

    An imaging probe is designed and developed by integrating a miniaturized charge-coupled device (CCD) camera and a light-emitting diode (LED) light source, which enables evaluation of the iridocorneal region inside the eye. The efficiency of the prototype probe instrument is illustrated initially by using not only eye models but also samples such as pig eyes. The proposed methodology and developed scheme are expected to find potential application in iridocorneal angle documentation, glaucoma diagnosis, and follow-up management procedures.

  9. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding

    PubMed Central

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-01-01

    During the complex path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, the existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to get high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. Two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; then the position and pose between the torch and the groove can be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding process. PMID:27649173

  10. A Precise Visual Method for Narrow Butt Detection in Specular Reflection Workpiece Welding.

    PubMed

    Zeng, Jinle; Chang, Baohua; Du, Dong; Hong, Yuxiang; Chang, Shuhe; Zou, Yirong

    2016-09-13

    During the complex path workpiece welding, it is important to keep the welding torch aligned with the groove center using a visual seam detection method, so that the deviation between the torch and the groove can be corrected automatically. However, when detecting the narrow butt of a specular reflection workpiece, the existing methods may fail because of the extremely small groove width and the poor imaging quality. This paper proposes a novel detection method to solve these issues. We design a uniform surface light source to get high signal-to-noise ratio images against the specular reflection effect, and a double-line laser light source is used to obtain the workpiece surface equation relative to the torch. Two light sources are switched on alternately and the camera is synchronized to capture images when each light is on; then the position and pose between the torch and the groove can be obtained nearly at the same time. Experimental results show that our method can detect the groove effectively and efficiently during the welding process. The image resolution is 12.5 μm and the processing time is less than 10 ms per frame. This indicates our method can be applied to real-time narrow butt detection during high-speed welding process.

  11. Faint F Ring and Prometheus

    NASA Image and Video Library

    2016-11-21

    Surface features are visible on Saturn's moon Prometheus in this view from NASA's Cassini spacecraft. Most of Cassini's images of Prometheus are too distant to resolve individual craters, making views like this a rare treat. Saturn's narrow F ring, which makes a diagonal line beginning at top center, appears bright and bold in some Cassini views, but not here. Since the sun is nearly behind Cassini in this image, most of the light hitting the F ring is being scattered away from the camera, making it appear dim. Light-scattering behavior like this is typical of rings comprised of small particles, such as the F ring. This view looks toward the unilluminated side of the rings from about 14 degrees below the ring plane. The image was taken in visible light with the Cassini spacecraft narrow-angle camera on Sept. 24, 2016. The view was acquired at a distance of approximately 226,000 miles (364,000 kilometers) from Prometheus and at a sun-Prometheus-spacecraft, or phase, angle of 51 degrees. Image scale is 1.2 miles (2 kilometers) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA20508

  12. A propagation experiment for modelling high elevation angle land mobile satellite channels

    NASA Technical Reports Server (NTRS)

    Richharia, M.; Evans, B. G.; Butt, G.

    1990-01-01

    This paper summarizes the results of a feasibility study for conducting high elevation angle propagation experiments in the European region for land mobile satellite communication. The study addresses various aspects of a proposed experiment. These include the selection of a suitable source for transmission, possibility of gathering narrow and wide band propagation data in various frequency bands, types of useful data, data acquisition technique, possible experimental configuration, and other experimental details.

  13. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera type distinctions have become somewhat blurred, with a growing presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one which has low-level computer (and software) control of the CCD clocking. These can often be used to satisfy some of the more demanding machine vision tasks, and in some cases with a higher rate of measurements than video cameras. Several of these specific applications are described here, including some which use recently designed CCDs which offer good combinations of parameters such as noise, speed, and resolution. Among the considerations for the choice of camera type in any given application are such effects as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For the computer camera case these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  14. Geometric approach to the design of an imaging probe to evaluate the iridocorneal angle structures

    NASA Astrophysics Data System (ADS)

    Hong, Xun Jie Jeesmond; V. K., Shinoj; Murukeshan, V. M.; Baskaran, M.; Aung, Tin

    2017-06-01

    Photographic imaging methods allow the tracking of anatomical changes in the iridocorneal angle structures and the monitoring of treatment responses over time. In this work, we aim to design an imaging probe to evaluate the iridocorneal angle structures using geometrical optics. We first perform an analytical analysis of light propagation from the anterior chamber of the eye to the exterior medium using Snell's law. This is followed by adopting a strategy to achieve uniform near-field irradiance, by simplifying the complex non-rotationally-symmetric irradiance distribution of LEDs tilted at an angle. The optimization is based on the geometric design considerations of an angled circular ring array of 4 LEDs (or a 2 × 2 square LED array). The design equations give insight into variable parameters such as the illumination angle of the LEDs, the ring array radius, the viewing angle of the LEDs, and the working distance. A micro color CCD video camera that has sufficient resolution to resolve the iridocorneal angle structures at the required working distance is then chosen. The proposed design aspects fulfil the safety requirements recommended by the International Commission on Non-Ionizing Radiation Protection.
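
    The Snell's-law step mentioned above can be illustrated by computing the critical angle at the cornea-air interface, which is why the iridocorneal angle cannot be viewed directly from outside the eye without a contact probe or goniolens. The refractive index below is a typical textbook value, not taken from the paper:

```python
import math

# Snell's law: n1*sin(theta1) = n2*sin(theta2). Total internal
# reflection occurs when sin(theta2) would exceed 1.
n_cornea, n_air = 1.376, 1.000   # assumed typical indices
theta_c = math.degrees(math.asin(n_air / n_cornea))
print(round(theta_c, 1))   # ~46.6 deg: rays from the angle structures
                           # arrive more steeply and are reflected back
```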

  15. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of some faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during the previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to see a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, which is set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5 - 5.6 G in the stationary sensors, and a lens model AF-S ED 24 mm - 1:1.4 in the mobile sensor). All videos were GPS (Global Positioning System) time stamped. For this work we used a data set collected on four RAMMER manual operation days in the 2012 and 2013 campaigns. On Feb. 18th the data set is composed of 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On Feb. 19th a total of 5 flashes was registered by two cameras and 1 flash registered by three cameras. On Feb. 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can generate some imprecision during the optical analysis; therefore this work aims to evaluate the effects of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB

  16. Optimum Projection Angle for Attaining Maximum Distance in a Soccer Punt Kick

    PubMed Central

    Linthorne, Nicholas P.; Patel, Dipesh S.

    2011-01-01

    To produce the greatest horizontal distance in a punt kick the ball must be projected at an appropriate angle. Here, we investigated the optimum projection angle that maximizes the distance attained in a punt kick by a soccer goalkeeper. Two male players performed many maximum-effort kicks using projection angles between 10° and 90°. The kicks were recorded by a video camera at 100 Hz and a 2D biomechanical analysis was conducted to obtain measures of the projection velocity, projection angle, projection height, ball spin rate, and foot velocity at impact. The players' optimum projection angles were calculated by substituting mathematical equations for the relationships between the projection variables into the equations for the aerodynamic flight of a soccer ball. The calculated optimum projection angles were in agreement with the players' preferred projection angles (40° and 44°). In projectile sports, even a small dependence of projection velocity on projection angle is sufficient to produce a substantial shift in the optimum projection angle away from 45°. In the punt kicks studied here, the optimum projection angle was close to 45° because the projection velocity of the ball remained almost constant across all projection angles. This result is in contrast to throwing and jumping for maximum distance, where the projection velocity the athlete is able to achieve decreases substantially with increasing projection angle and so the optimum projection angle is well below 45°. Key points: The optimum projection angle that maximizes the distance of a punt kick by a soccer goalkeeper is about 45°. The optimum projection angle is close to 45° because the projection velocity of the ball is almost the same at all projection angles. This result is in contrast to throwing and jumping for maximum distance, where the optimum projection angle is well below 45° because the projection velocity the athlete is able to achieve decreases substantially with increasing
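    The abstract's central claim, that the optimum angle shifts below 45° only when projection velocity depends on angle, can be sketched with a simple drag-free range model (the paper itself uses an aerodynamic flight model; the velocity functions below are illustrative only):

    ```python
    import math

    def flight_range(v, theta_deg, g=9.81):
        """Horizontal range of a drag-free projectile launched from ground level."""
        theta = math.radians(theta_deg)
        return v * v * math.sin(2.0 * theta) / g

    def optimum_angle(speed_at_angle, angles=range(1, 90)):
        """Brute-force search for the projection angle (deg) maximizing range."""
        return max(angles, key=lambda a: flight_range(speed_at_angle(a), a))

    # Constant projection velocity (as measured for punt kicks): optimum is 45 deg.
    print(optimum_angle(lambda a: 25.0))  # 45

    # Velocity that falls off with angle (as in throwing/jumping): optimum < 45 deg.
    print(optimum_angle(lambda a: 25.0 - 0.1 * a))
    ```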

  17. Narrow band gap amorphous silicon semiconductors

    DOEpatents

    Madan, A.; Mahan, A.H.

    1985-01-10

    Disclosed is a narrow band gap amorphous silicon semiconductor comprising an alloy of amorphous silicon and a band gap narrowing element selected from the group consisting of Sn, Ge, and Pb, with an electron donor dopant selected from the group consisting of P, As, Sb, Bi and N. The process for producing the narrow band gap amorphous silicon semiconductor comprises the steps of forming an alloy comprising amorphous silicon and at least one of the aforesaid band gap narrowing elements in amount sufficient to narrow the band gap of the silicon semiconductor alloy below that of amorphous silicon, and also utilizing sufficient amounts of the aforesaid electron donor dopant to maintain the amorphous silicon alloy as an n-type semiconductor.

  18. A summary of Viking sample-trench analyses for angles of internal friction and cohesions

    NASA Technical Reports Server (NTRS)

    Moore, H. J.; Clow, G. D.; Hutton, R. E.

    1982-01-01

    Analyses of sample trenches excavated on Mars, using a theory for plowing by narrow blades, provide estimates of the angles of internal friction and the cohesions of the Martian surface materials. Angles of internal friction appear to be the same as those of many terrestrial soils because they are generally between 27 degrees and 39 degrees. Drift material, at the Lander 1 site, has a low angle of internal friction (near 18 degrees). All the materials excavated have low cohesions, generally between 0.2 and 10 kPa. The occurrence of cross bedding, layers of crusts, and blocky slabs shows that these materials are heterogeneous and that they contain planes of weakness. The results reported here have significant implications for future landed missions, Martian eolian processes, and interpretation of infrared temperatures.

  19. Enabling Narrow(est) IWA Coronagraphy with STIS BAR5 and BAR10 Occulters

    NASA Astrophysics Data System (ADS)

    Schneider, Glenn; Gaspar, Andras; Debes, John; Gull, Theodore; Hines, Dean; Apai, Daniel; Rieke, George

    2017-09-01

    The Space Telescope Imaging Spectrograph's (STIS) BAR5 coronagraphic occulter was designed to provide high-contrast, visible-light imaging in close (> 0.15") angular proximity to bright point sources. We explored and verified the functionality and utility of the BAR5 occulter. We also investigated, and herein report on, the use of the BAR10 rounded corners as narrow-angle occulters and compare IWA vs. contrast performance for the BAR5, BAR10, and Wedge occulters. With that, we provide recommendations for the most efficacious BAR5 and BAR10 use on-orbit in support of GO science.

  20. A preliminary optical design for the JANUS camera of ESA's space mission JUICE

    NASA Astrophysics Data System (ADS)

    Greggio, D.; Magrin, D.; Ragazzoni, R.; Munari, M.; Cremonese, G.; Bergomi, M.; Dima, M.; Farinato, J.; Marafatto, L.; Viotto, V.; Debei, S.; Della Corte, V.; Palumbo, P.; Hoffmann, H.; Jaumann, R.; Michaelis, H.; Schmitz, N.; Schipani, P.; Lara, L.

    2014-08-01

    JANUS (Jovis, Amorum ac Natorum Undique Scrutator) will be the onboard camera of the ESA JUICE satellite dedicated to the study of Jupiter and its moons, in particular Ganymede and Europa. This optical channel will provide surface maps with a plate scale of 15 microrad/pixel with both narrow- and broad-band filters in the spectral range between 0.35 and 1.05 micrometers, over a field of view of 1.72 × 1.29 degrees. The current optical design is based on a TMA (three-mirror anastigmat) layout, with an on-axis pupil and off-axis field of view. The optical stop is located at the secondary mirror, providing an effective collecting area of 7854 mm2 (100 mm entrance pupil diameter) and allowing simple internal baffling for first-order straylight rejection. The nominal optical performance is close to the diffraction limit and assures a nominal MTF better than 63% over the whole field of view. We describe here the optical design of the camera adopted as baseline, together with the trade-offs that led us to this solution.
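    The quoted numbers can be cross-checked with a little geometry (a sketch; the inferred detector format is an assumption, since the abstract gives no pixel counts):

    ```python
    import math

    # Entrance pupil: 100 mm diameter -> collecting area, matching the quoted 7854 mm^2.
    area_mm2 = math.pi * (100.0 / 2) ** 2
    print(round(area_mm2))  # 7854

    # A 15 microrad/pixel plate scale over a 1.72 x 1.29 deg field implies the
    # detector format (hypothetical consistency check only).
    scale_rad = 15e-6
    px_x = math.radians(1.72) / scale_rad
    px_y = math.radians(1.29) / scale_rad
    print(round(px_x), round(px_y))  # ~2001 x 1501 pixels
    ```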

  1. Compensation method for the influence of angle of view on animal temperature measurement using thermal imaging camera combined with depth image.

    PubMed

    Jiao, Leizi; Dong, Daming; Zhao, Xiande; Han, Pengcheng

    2016-12-01

    In this study, we propose an animal surface temperature measurement method based on a Kinect sensor and an infrared thermal imager to facilitate the screening of animals with febrile diseases. Because of the random motion and small surface temperature variation of animals, the influence of the angle of view on temperature measurement is significant. The method proposed in the present study can compensate for the temperature measurement error caused by the angle of view. First, we analyzed the relationship between measured temperature and angle of view and established a mathematical model for compensating the influence of the angle of view, with a correlation coefficient above 0.99. Second, a fusion method for depth and infrared thermal images was established for synchronous image capture with the Kinect sensor and infrared thermal imager, and the angle of view of each pixel was calculated. According to the experimental results, without compensation, the temperature image measured at an angle of view of 74° to 76° showed a difference of more than 2°C compared with that measured at an angle of view of 0°. After compensation, however, the temperature difference was only 0.03-1.2°C. This method is applicable for real-time compensation of errors caused by the angle of view during temperature measurement with an infrared thermal imager.
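    The abstract does not give the fitted compensation model, so the following is only a hypothetical sketch of the idea, using an invented cosine-power falloff whose constants k and n are illustrative, not the paper's:

    ```python
    import math

    def compensate(measured_temp_c, angle_deg, k=0.06, n=2.0):
        """Hypothetical angle-of-view compensation. Assumed model:
        T_measured = T_true * (1 - k * (1 - cos(angle)^n)); k and n are
        made-up constants chosen so the error at ~75 deg is ~2 C."""
        factor = 1.0 - k * (1.0 - math.cos(math.radians(angle_deg)) ** n)
        return measured_temp_c / factor

    print(compensate(35.0, 0))             # 35.0: no correction head-on
    print(round(compensate(35.0, 75), 1))  # ~2 C correction at a steep angle
    ```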

  2. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called 'Camera-Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engaging a target at a particular location with a particular orientation, is guaranteed when there is camera-space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera-space manipulation by considering a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: forklifts and earth-moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path-dependent' nature to nonholonomic kinematics. This work focuses on the sensor-space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  3. The high resolution stereo camera (HRSC): acquisition of multi-spectral 3D-data and photogrammetric processing

    NASA Astrophysics Data System (ADS)

    Neukum, Gerhard; Jaumann, Ralf; Scholten, Frank; Gwinner, Klaus

    2017-11-01

    At the Institute of Space Sensor Technology and Planetary Exploration of the German Aerospace Center (DLR), the High Resolution Stereo Camera (HRSC) was designed for international missions to the planet Mars. For more than three years an airborne version of this camera, the HRSC-A, has been successfully applied in many flight campaigns and in a variety of different applications. It combines 3D capability and high resolution with multispectral data acquisition. Variable resolutions can be generated depending on the camera control settings. A high-end GPS/INS system, in combination with the multi-angle image information, yields precise, high-frequency orientation data for the acquired image lines. To handle these data, a completely automated photogrammetric processing system has been developed that generates multispectral 3D image products for large areas, with planimetric and height accuracies in the decimeter range. This accuracy has been confirmed by detailed investigations.

  4. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  5. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error-free, so both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera of the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of a video camera from all the available content in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe that this is the first work that tries to use a model of human visual attention for the dynamic selection of the camera view of a multi-camera system. The proposed method has been evaluated in a given scenario and demonstrated its effectiveness with respect to other methods and a manually generated ground truth. Effectiveness was evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.

  6. Use of cameras for monitoring visibility impairment

    NASA Astrophysics Data System (ADS)

    Malm, William; Cismoski, Scott; Prenni, Anthony; Peters, Melanie

    2018-02-01

    Webcams and automated, color photography cameras have been routinely operated in many U.S. national parks and other federal lands as far back as 1988, with a general goal of meeting interpretive needs within the public lands system and communicating effects of haze on scenic vistas to the general public, policy makers, and scientists. Additionally, it would be desirable to extract quantifiable information from these images to document how visibility conditions change over time and space and to further reflect the effects of haze on a scene, in the form of atmospheric extinction, independent of changing lighting conditions due to time of day, year, or cloud cover. Many studies have demonstrated a link between image indexes and visual range or extinction in urban settings where visibility is significantly degraded and where scenes tend to be gray and devoid of color. In relatively clean, clear atmospheric conditions, clouds and lighting conditions can sometimes affect the image radiance field as much or more than the effects of haze. In addition, over the course of many years, cameras have been replaced many times as technology improved or older systems wore out, and therefore camera image pixel density has changed dramatically. It is shown that gradient operators are very sensitive to image resolution while contrast indexes are not. Furthermore, temporal averaging and time of day restrictions allow for developing quantitative relationships between atmospheric extinction and contrast-type indexes even when image resolution has varied over time. Temporal averaging effectively removes the variability of visibility indexes associated with changing cloud cover and weather conditions, and changes in lighting conditions resulting from sun angle effects are best compensated for by restricting averaging to only certain times of the day.
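    The reported insensitivity of contrast indexes, and sensitivity of gradient operators, to image resolution can be illustrated on a synthetic one-dimensional scene (a sketch; the actual indexes used in the study are not specified here):

    ```python
    import math

    def sample_scene(n):
        """Radiance along a scan line of a synthetic sinusoidal scene at n-pixel resolution."""
        return [100.0 + 30.0 * math.sin(2 * math.pi * i / n) for i in range(n)]

    def rms_contrast(px):
        """Standard deviation over mean: a contrast-type index."""
        mean = sum(px) / len(px)
        var = sum((p - mean) ** 2 for p in px) / len(px)
        return math.sqrt(var) / mean

    def mean_gradient(px):
        """Mean absolute pixel-to-pixel difference: a gradient-type index."""
        return sum(abs(b - a) for a, b in zip(px, px[1:])) / (len(px) - 1)

    lo, hi = sample_scene(128), sample_scene(1024)
    # Same scene at 8x the resolution: contrast is essentially unchanged,
    # while the per-pixel gradient shrinks by roughly the resolution ratio.
    print(round(rms_contrast(lo), 3), round(rms_contrast(hi), 3))
    print(round(mean_gradient(lo) / mean_gradient(hi), 1))
    ```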

  7. Saturn's F-Ring

    NASA Technical Reports Server (NTRS)

    2000-01-01

    This narrow-angle camera image of Saturn's F Ring was taken through the Clear filter while at a distance of 6.9 million km from Saturn on 8 November 1980. The brightness variations of this tightly-constrained ring shown here indicate that the ring is less uniform in makeup than the larger rings. JPL managed the Voyager Project for NASA's Office of Space Science

  8. MicroCameras and Photometers (MCP) on board the TARANIS satellite

    NASA Astrophysics Data System (ADS)

    Farges, T.; Hébert, P.; Le Mer-Dachard, F.; Ravel, K.; Gaillac, S.

    2017-12-01

    TARANIS (Tool for the Analysis of Radiations from lightNing and Sprites) is a CNES microsatellite. Its main objective is to study impulsive transfers of energy between the Earth's atmosphere and the space environment. It will be sun-synchronous at an altitude of 700 km and will be launched in 2019 for at least 2 years. Its payload is composed of several electromagnetic instruments covering different wavelengths (from gamma rays to radio waves, including optical). TARANIS instruments are currently in the calibration and qualification phase. The purpose here is to present the MicroCameras and Photometers (MCP) design, to show its performance after its recent characterization, and finally to discuss the scientific objectives and how the MCP observations will address them. The MicroCameras, developed by Sodern, are dedicated to the spatial description of TLEs and their parent lightning. They are able to differentiate sprites and lightning thanks to two narrow bands ([757-767 nm] and [772-782 nm]) that provide simultaneous pairs of images of an event. Simulation results of the differentiation method will be shown. After calibration and tests, the MicroCameras have now been delivered to CNES for integration on the payload. The Photometers, developed by Bertin Technologies, will provide temporal measurements and spectral characteristics of TLEs and lightning. They are key instruments because of their capability to detect TLEs on board and then switch all the instruments of the scientific payload into their high-resolution acquisition mode. The Photometers use four spectral bands ([170-260 nm], [332-342 nm], [757-767 nm] and [600-900 nm]) and have the same field of view as the cameras. The remote-controlled parameters of the on-board TLE detection algorithm have been tuned before launch using the electronic board and simulated or real event waveforms. After calibration, the Photometers are now going through environmental tests. They will be delivered to the CNES for integration on the

  9. Upper wide-angle viewing system for ITER.

    PubMed

    Lasnier, C J; McLean, A G; Gattuso, A; O'Neill, R; Smiley, M; Vasquez, J; Feder, R; Smith, M; Stratton, B; Johnson, D; Verlaan, A L; Heijmans, J A C

    2016-11-01

    The Upper Wide Angle Viewing System (UWAVS) will be installed on five upper ports of ITER. This paper shows major requirements, gives an overview of the preliminary design with reasons for some design choices, examines self-emitted IR light from UWAVS optics and its effect on accuracy, and shows calculations of signal-to-noise ratios for the two-color temperature output as a function of integration time and divertor temperature. Accurate temperature output requires correction for vacuum window absorption vs. wavelength and for self-emitted IR, which requires good measurement of the temperature of the optical components. The anticipated signal-to-noise ratio using presently available IR cameras is adequate for the required 500 Hz frame rate.
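    The two-color temperature output mentioned above rests on ratio pyrometry: under the Wien approximation and a gray-body assumption, the ratio of radiances in two bands fixes the temperature independently of emissivity. A minimal sketch with illustrative band wavelengths, not the actual UWAVS filter set:

    ```python
    import math

    C2 = 1.4388e-2  # second radiation constant, m*K

    def wien_radiance(wavelength_m, temp_k):
        """Spectral radiance up to a constant factor, Wien approximation."""
        return wavelength_m ** -5 * math.exp(-C2 / (wavelength_m * temp_k))

    def two_color_temp(ratio, lam1, lam2):
        """Invert the band ratio L(lam1)/L(lam2) for temperature (gray body)."""
        return C2 * (1 / lam2 - 1 / lam1) / (math.log(ratio) - 5 * math.log(lam2 / lam1))

    lam1, lam2 = 3.9e-6, 4.7e-6   # illustrative mid-IR bands
    t_true = 600.0
    r = wien_radiance(lam1, t_true) / wien_radiance(lam2, t_true)
    print(round(two_color_temp(r, lam1, lam2)))  # 600: exact round trip
    ```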

  10. The Development of using the digital projection method to measure the contact angle of ball screw

    NASA Astrophysics Data System (ADS)

    Chen, Chun-Jen; Jywe, Wenyuh; Liu, Yu-Chun; Jwo, Hsin-Hong

    Ball screws are frequently used to drive or position parts in precision machines, such as machine tools and motorized stages. They are therefore widely used in precision machinery, semiconductor equipment, medical instruments and the aerospace industry. The main parts of a ball screw are the screw, the balls and the nut. The contact angle between the screw, balls and nut affects the performance (including loading and noise) and the lifecycle of a ball screw. If the actual contact angle and the designed contact angle are not the same, the friction between the balls, screw and nut increases, resulting in greater heat generation and a shorter lifecycle. This paper combines a traditional profile projector and a commercial digital camera to build an image-based, noncontact measurement system. It can carry out the contact angle measurement quickly and accurately. Ball screws with three different pitch angles were tested in this paper. The angle resolution of this measurement system is about 0.001 degree and its accuracy is about 0.05 degree.

  11. Calibration Plans for the Multi-angle Imaging SpectroRadiometer (MISR)

    NASA Astrophysics Data System (ADS)

    Bruegge, C. J.; Duval, V. G.; Chrien, N. L.; Diner, D. J.

    1993-01-01

    The EOS Multi-angle Imaging SpectroRadiometer (MISR) will study the ecology and climate of the Earth through acquisition of global multi-angle imagery. The MISR employs nine discrete cameras, each a push-broom imager. Of these, four point forward, four point aft and one views the nadir. Absolute radiometric calibration will be obtained pre-flight using high quantum efficiency (HQE) detectors and an integrating sphere source. After launch, instrument calibration will be provided using HQE detectors in conjunction with deployable diffuse calibration panels. The panels will be deployed at time intervals of one month and used to direct sunlight into the cameras, filling their fields-of-view and providing through-the-optics calibration. Additional techniques will be utilized to reduce systematic errors, and provide continuity as the methodology changes with time. For example, radiation-resistant photodiodes will also be used to monitor panel radiant exitance. These data will be acquired throughout the five-year mission, to maintain calibration in the latter years when it is expected that the HQE diodes will have degraded. During the mission, it is planned that the MISR will conduct semi-annual ground calibration campaigns, utilizing field measurements and higher resolution sensors (aboard aircraft or in-orbit platforms) to provide a check of the on-board hardware. These ground calibration campaigns are limited in number, but are believed to be the key to the long-term maintenance of MISR radiometric calibration.

  12. MOJAVE - XIV. Shapes and opening angles of AGN jets

    NASA Astrophysics Data System (ADS)

    Pushkarev, A. B.; Kovalev, Y. Y.; Lister, M. L.; Savolainen, T.

    2017-07-01

    We present 15 GHz stacked VLBA images of 373 jets associated with active galactic nuclei (AGNs) having at least five observing epochs within a 20 yr time interval 1994-2015 from the Monitoring Of Jets in Active galactic nuclei with VLBA Experiments (MOJAVE) programme and/or its precursor, the 2-cm VLBA Survey. These data are supplemented by 1.4 GHz single-epoch VLBA observations of 135 MOJAVE AGNs to probe larger scale jet structures. The typical jet geometry is found to be close to conical on scales from hundreds to thousands of parsecs, while a number of galaxies show quasi-parabolic streamlines on smaller scales. The true jet geometry in a considerable fraction of AGNs appears only after stacking epochs over several years. The jets with significant radial accelerated motion undergo more active collimation. We have analysed total intensity jet profiles transverse to the local jet ridgeline and derived both apparent and intrinsic opening angles of the flows, with medians of 21.5° and 1.3°, respectively. The Fermi LAT-detected gamma-ray AGNs in our sample have, on average, wider apparent and narrower intrinsic opening angles, and smaller viewing angles, than non-LAT-detected AGNs. We have established a highly significant correlation between the apparent opening angle and gamma-ray luminosity, driven by Doppler beaming and projection effects.
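    The quoted medians alone let one estimate the typical viewing angle via the small-angle projection relation (a sketch; the paper derives per-source viewing angles by other means):

    ```python
    import math

    def intrinsic_opening_angle(apparent_deg, viewing_deg):
        """Small-angle projection relation: alpha_int ~= alpha_app * sin(theta)."""
        return apparent_deg * math.sin(math.radians(viewing_deg))

    # The sample medians (21.5 deg apparent, 1.3 deg intrinsic) imply a typical
    # viewing angle of only a few degrees, consistent with blazar-like alignment.
    theta = math.degrees(math.asin(1.3 / 21.5))
    print(round(theta, 1))  # 3.5
    ```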

  13. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  14. Preclinical imaging of iridocorneal angle and fundus using a modified integrated flexible handheld probe

    PubMed Central

    Hong, Xun Jie Jeesmond; Shinoj, Vengalathunadakal K.; Murukeshan, Vadakke Matham; Baskaran, Mani; Aung, Tin

    2017-01-01

    A flexible handheld imaging probe consisting of a 3 mm × 3 mm charge-coupled device camera, light-emitting diode light sources, and a near-infrared laser source is designed and developed. The imaging probe is designed with specifications to capture iridocorneal angle images and posterior segment images. Light propagation from the anterior chamber of the eye to the exterior is considered analytically using Snell's law. Imaging of the iridocorneal angle region and fundus is performed on ex vivo porcine samples and subsequently on small laboratory animals, such as the New Zealand white rabbit and nonhuman primate, in vivo. The integrated flexible handheld probe demonstrates high repeatability in iridocorneal angle and fundus documentation. The proposed concept and methodology are expected to find potential application in the diagnosis, prognosis, and management of glaucoma. PMID:28413809

  15. The Pluto System At Small Phase Angles

    NASA Astrophysics Data System (ADS)

    Verbiscer, Anne J.; Buie, Marc W.; Binzel, Richard; Ennico, Kimberly; Grundy, William M.; Olkin, Catherine B.; Showalter, Mark Robert; Spencer, John R.; Stern, S. Alan; Weaver, Harold A.; Young, Leslie; New Horizons Science Team

    2016-10-01

    Hubble Space Telescope observations of the Pluto system acquired during the New Horizons encounter epoch (HST Program 13667, M. Buie, PI) span the phase angle range from 0.06 to 1.7 degrees, enabling the measurement and characterization of the opposition effect for Pluto and its satellites at 0.58 microns using HST WFC3/UVIS with the F350LP filter, which has a broadband response and a pivot wavelength of 0.58 microns. At these small phase angles, differences in the opposition effect width and amplitude appear. The small satellites Nix and Hydra both exhibit a very narrow opposition surge, while the considerably larger moon Charon has a broader opposition surge. Microtextural surface properties derived from the shape and magnitude of the opposition surge of each surface contain a record of the collisional history of the system. We combine these small phase angle observations with those made at larger phase angles by the New Horizons Long Range Reconnaissance Imager (LORRI), which also has a broadband response with a pivot wavelength of 0.61 microns, to produce the most complete disk-integrated solar phase curves that we will have for decades to come. Modeling these disk-integrated phase curves generates sets of photometric parameters that will inform spectral modeling of the satellite surfaces as well as terrains on Pluto from spatially resolved New Horizons Ralph Linear Etalon Imaging Spectral Array (LEISA) data from 1.2 to 2.5 microns. Rotationally resolved phase curves of Pluto reveal opposition effects that only appear at phase angles less than 0.1 degree and have widths and amplitudes that are highly dependent on longitude and therefore on Pluto's diverse terrains. The high albedo region informally known as Sputnik Planum dominates the disk-integrated reflectance of Pluto on the New Horizons encounter hemisphere. 
These results lay the groundwork for observations at true opposition in 2018, when the Pluto system will be observable at phase angles so small that

  16. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near-IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.
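    The quoted pixel scales follow approximately from the FOVs and the 1024-pixel detector under assumed lens mappings (a sketch; the actual camera models include distortion terms, so the numbers agree only roughly):

    ```python
    import math

    def ifov_equidistant(fov_deg, pixels):
        """Angular pixel scale (mrad/pixel) for an equidistant (f-theta) mapping."""
        return math.radians(fov_deg) / pixels * 1e3

    def ifov_rectilinear(fov_deg, pixels):
        """Average angular pixel scale (mrad/pixel) for a rectilinear mapping."""
        return 2 * math.tan(math.radians(fov_deg) / 2) / pixels * 1e3

    # Hazcam: 124 deg over 1024 pixels with a fisheye-like mapping -> ~2.1 mrad/pixel.
    print(round(ifov_equidistant(124, 1024), 1))  # 2.1
    # Navcam: 45 deg over 1024 pixels -> ~0.8 mrad/pixel, near the quoted 0.82.
    print(round(ifov_rectilinear(45, 1024), 2))   # 0.81
    ```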

  17. LASER APPLICATIONS AND OTHER TOPICS IN QUANTUM ELECTRONICS: Measurement of the surface wavelength distribution of narrow-band radiation by a colorimetric method

    NASA Astrophysics Data System (ADS)

    Kraiskii, A. V.; Mironova, T. V.; Sultanov, T. T.

    2010-09-01

    A method is suggested for determining the wavelength of narrow-band light from a digital photograph of the radiating surface. The digital camera used should be appropriately calibrated. The accuracy of the wavelength measurement is better than 1 nm. The method was tested on the yellow doublet of the mercury spectrum and on the adjacent continuum of an incandescent lamp's radiation spectrum. Using the suggested method, the homogeneity of holographic sensor swelling was studied in stationary and transient cases.
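    A colorimetric wavelength readout of this kind can be sketched as inverting a camera calibration curve; the hue-versus-wavelength table below is entirely invented for illustration and stands in for the real camera calibration:

    ```python
    # Hypothetical calibration: hue angle (deg) of the camera's RGB response
    # measured at known wavelengths (nm). Values are made up for illustration.
    CAL = [(570.0, 60.0), (575.0, 52.0), (580.0, 45.0), (585.0, 39.0), (590.0, 34.0)]

    def wavelength_from_hue(hue_deg):
        """Linearly interpolate the inverted calibration curve hue -> wavelength."""
        pts = sorted(CAL, key=lambda p: p[1])  # hue decreases with wavelength here
        for (w1, h1), (w2, h2) in zip(pts, pts[1:]):
            if h1 <= hue_deg <= h2:
                t = (hue_deg - h1) / (h2 - h1)
                return w1 + t * (w2 - w1)
        raise ValueError("hue outside calibrated range")

    print(wavelength_from_hue(42.0))  # 582.5, midway between the 45- and 39-deg points
    ```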

  18. Ortho-Rectification of Narrow Band Multi-Spectral Imagery Assisted by Dslr RGB Imagery Acquired by a Fixed-Wing Uas

    NASA Astrophysics Data System (ADS)

    Rau, J.-Y.; Jhan, J.-P.; Huang, C.-Y.

    2015-08-01

    The Miniature Multiple Camera Array (MiniMCA-12) is a frame-based multilens/multispectral sensor composed of 12 lenses with narrow-band filters. Due to its small size and light weight, it is suitable for mounting on an Unmanned Aerial System (UAS) to acquire the high spectral, spatial and temporal resolution imagery used in various remote sensing applications. However, its wavelength range is only 10 nm, which results in low image resolution and signal-to-noise ratio, making the images unsuitable for image matching and digital surface model (DSM) generation. Meanwhile, the spectral correlation among the 12 bands of MiniMCA imagery is low, so it is difficult to perform tie-point matching and aerial triangulation across all bands at the same time. In this study, we thus propose the use of a DSLR camera to assist automatic aerial triangulation of MiniMCA-12 imagery and to produce a higher-spatial-resolution DSM for MiniMCA-12 ortho-image generation. Depending on the maximum payload weight of the UAS, the two sensors can be carried at the same time or individually. In this study, we adopt a fixed-wing UAS carrying a Canon EOS 5D Mark II DSLR camera and a MiniMCA-12 multi-spectral camera. To perform automatic aerial triangulation between the DSLR camera and the MiniMCA-12, we choose one master band from the MiniMCA-12 whose spectral range overlaps with that of the DSLR camera. However, since all lenses of the MiniMCA-12 have different perspective centers and viewing angles, the original 12 channels show a significant band-misregistration effect. Thus, the first issue encountered is to reduce this band misregistration. Since all 12 MiniMCA lenses are frame-based, their spatial offsets are smaller than 15 cm and the images overlap by almost 98%; we thus propose a modified projective transformation (MPT) method together with two systematic error correction procedures to register all 12 bands of imagery in the same image space.
It means that those 12 bands of images acquired at
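    Registering one band onto another with a projective transformation, which is the core of the MPT idea (MPT itself adds modifications and systematic-error corrections not sketched here), amounts to mapping pixel coordinates through a 3x3 matrix; the coefficients below are made up for illustration:

    ```python
    def apply_projective(h, x, y):
        """Map a pixel (x, y) through a 3x3 projective (homography) matrix h,
        given as a flat row-major list of 9 coefficients."""
        w = h[6] * x + h[7] * y + h[8]
        return ((h[0] * x + h[1] * y + h[2]) / w,
                (h[3] * x + h[4] * y + h[5]) / w)

    # Identity leaves pixels unchanged; a small translation models the <15 cm
    # lens offset between MiniMCA bands (coefficients invented for illustration).
    identity = [1, 0, 0, 0, 1, 0, 0, 0, 1]
    shift = [1, 0, 3.5, 0, 1, -2.0, 0, 0, 1]
    print(apply_projective(identity, 100, 200))  # (100.0, 200.0)
    print(apply_projective(shift, 100, 200))     # (103.5, 198.0)
    ```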

  19. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  20. A Survey of PWNe around Narrow-Pulse Gamma-ray Pulsars

    NASA Astrophysics Data System (ADS)

    Romani, Roger

    2010-09-01

    We propose here, on behalf of the Fermi LAT team, ACIS observations of the X-ray counterparts of six unusual gamma-ray pulsars discovered by the LAT. The targets, four seen only in the gamma-rays, two also radio-detected, have unusual single or narrow double pulse profiles, which require particular emission geometries for different pulsar models. By measuring the arcsecond-scale structure of the wind nebula termination shocks of these young (<100 kyr) objects, CXO can pin down the viewing angle and test the pulsar physics. All have known X-ray fluxes and we can also extract spectral and distance estimates needed to interpret the GeV gamma-rays. The survey sample covers a range of ages, spindown powers and expected inclinations, making it a powerful test of pulsar emission models.

  1. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    The Spartan Infrared Camera is a high spatial resolution near-IR imager at SOAR. Spartan has a focal plane consisting of four "

  2. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

    Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e. a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  3. A two-angle far-field microscope imaging technique for spray flows

    NASA Astrophysics Data System (ADS)

    Kourmatzis, Agisilaos; Pham, Phuong X.; Masri, Assaad R.

    2017-03-01

    Backlight imaging is frequently used for the visualization of multiphase flows, where, with appropriate microscope lenses, quantitative information on the spray structure can be attained. However, a key issue resides in the nature of the measurement, which relies on a single viewing angle, preventing imaging of all liquid structures and features, such as those located behind other fragments. This paper presents results from an extensive experimental study that takes a step toward resolving this problem by using a pair of high-speed cameras oriented at 90 degrees to each other and synchronized to two high-speed diode lasers. Both cameras are used with long distance microscope lenses. The images are processed as pairs, allowing identification and classification of the same liquid structure from two perspectives at high temporal (5 kHz) and spatial resolution (∼3 μm). Using a controlled mono-disperse spray, simultaneous, time-resolved visualization of the same spherical object, focused in one plane while de-focused in the other plane 90 degrees to the first, has allowed quantification of the shot-to-shot defocused size measurement error. An extensive error analysis is performed for spheroidal structures imaged from two angles, and the dual-angle technique is extended to measure the volume of non-spherical fragments for the first time, by 'discretising' a fragment into a number of constituent ellipses. Error analysis based on measuring the known volumes of solid arbitrary shapes found volume estimates within ∼11% of the real volume for representative 'ligament-like' shapes. The contribution concludes by applying the ellipsoidal method to a real spray consisting of multiple non-spherical fragments. This extended approach clearly demonstrates potential to yield novel volume-weighted quantities of non-spherical objects in turbulent multiphase flow applications.
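
    The 'discretising a fragment into constituent ellipses' idea can be sketched as a stack of elliptical slices whose semi-axes come from the two orthogonal views. This toy version (not the authors' implementation) assumes the per-slice semi-axes are already measured and simply sums the slice volumes:

```python
import math

def fragment_volume(semi_axes_a, semi_axes_b, dz):
    """Approximate volume from stacked elliptical slices: each slice i
    has area pi*a_i*b_i, with a_i and b_i the semi-axes measured in the
    two orthogonal views, and thickness dz."""
    return sum(math.pi * a * b * dz
               for a, b in zip(semi_axes_a, semi_axes_b))
```

    Feeding it the slice radii of a unit sphere recovers 4π/3 to within the discretisation error, which is a convenient sanity check for the method.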

  4. The SALSA Project - High-End Aerial 3d Camera

    NASA Astrophysics Data System (ADS)

    Rüther-Kindel, W.; Brauchle, J.

    2013-08-01

    The ATISS measurement drone, developed at the University of Applied Sciences Wildau, is an electrically powered motor glider with a maximum take-off weight of 25 kg, including a payload capacity of 10 kg. Two 2.5 kW engines enable ultra-short take-off procedures, and the motor glider design results in a 1 h endurance. The concept of ATISS is based on the idea of strictly separating aircraft and payload functions, which makes ATISS a very flexible research platform for miscellaneous payloads. ATISS is equipped with an autopilot for autonomous flight patterns but remains under permanent pilot control from the ground. On the basis of ATISS, the SALSA project was undertaken, with the aim of integrating a system for digital terrain modelling. Instead of a laser scanner, a new design concept was chosen, based on two synchronized high-resolution digital cameras, one in a fixed nadir orientation and the other in an oblique orientation. Thus, every object on the ground is imaged from different view angles. This new measurement camera system, MACS-TumbleCam, was developed at the German Aerospace Center DLR Berlin-Adlershof especially for the ATISS payload concept. A particular advantage over laser scanning is that instead of a cloud of points, a surface including texture is generated, and a high-end inertial orientation system can be omitted. The first test flights show a ground resolution of 2 cm and a height resolution of 3 cm, which underline the extraordinary capabilities of ATISS and the MACS measurement camera system.

  5. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions observing in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20°C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current) while maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber configured to run several tests verifying the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.
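
    A QE measurement of this kind reduces to comparing the camera's electron rate with the photon rate inferred from the calibrated photodiode. The sketch below is a generic version of that arithmetic, not the MSFC pipeline; the current, responsivity and signal values used in any example are illustrative, not measured values:

```python
H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s

def photon_rate(diode_current_a, responsivity_a_per_w, wavelength_m):
    """Photons per second on the calibrated photodiode: the NIST
    responsivity converts photocurrent to optical power, and dividing
    by the photon energy h*c/lambda gives the rate."""
    power_w = diode_current_a / responsivity_a_per_w
    return power_w * wavelength_m / (H * C)

def quantum_efficiency(signal_dn_per_s, gain_e_per_dn, photons_per_s_per_px):
    """QE = detected electrons per incident photon, per pixel."""
    return signal_dn_per_s * gain_e_per_dn / photons_per_s_per_px
```

    For instance, at Lyman-α (121.6 nm) a 1 nA photocurrent with an assumed 0.1 A/W responsivity corresponds to roughly 6 × 10⁹ photons/s.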

  6. Effects of gap width on droplet transfer behavior in ultra-narrow gap laser welding of high strength aluminum alloys

    NASA Astrophysics Data System (ADS)

    Song, Chaoqun; Dong, Shiyun; Yan, Shixing; He, Jiawu; Xu, Binshi; He, Peng

    2017-10-01

    Ultra-narrow gap laser welding is a novel method for thick high strength aluminum alloy plate for its lower heat input, less deformation and higher efficiency. To obtain a perfect welding quality, it is vital to control the more complex droplet transfer behavior under the influence of ultra-narrow gap groove. This paper reports the effects of gap width of groove on droplet transfer behavior in ultra-narrow gap laser welding of 7A52 aluminum alloy plates by a high speed camera, using an ER 5356 filler wire. The results showed that the gap width had directly effects on droplet transfer mode and droplet shape. The droplet transfer modes were, in order, both-sidewall transfer, single-sidewall transfer, globular droplet transfer and bridging transfer, with different droplet shape and transition period, as the gap width increased from 2 mm to 3.5mm. The effect of gap width on lack of fusion was also studied to analyze the cause for lack of fusion at the bottom and on the sidewall of groove. Finally, with a 2.5 mm U-type parallel groove, a single-pass joint with no lack of fusion and other macro welding defects was successfully obtained in a single-sidewall transfer mode.

  7. The Si/CdTe semiconductor Compton camera of the ASTRO-H Soft Gamma-ray Detector (SGD)

    NASA Astrophysics Data System (ADS)

    Watanabe, Shin; Tajima, Hiroyasu; Fukazawa, Yasushi; Ichinohe, Yuto; Takeda, Shin'ichiro; Enoto, Teruaki; Fukuyama, Taro; Furui, Shunya; Genba, Kei; Hagino, Kouichi; Harayama, Atsushi; Kuroda, Yoshikatsu; Matsuura, Daisuke; Nakamura, Ryo; Nakazawa, Kazuhiro; Noda, Hirofumi; Odaka, Hirokazu; Ohta, Masayuki; Onishi, Mitsunobu; Saito, Shinya; Sato, Goro; Sato, Tamotsu; Takahashi, Tadayuki; Tanaka, Takaaki; Togo, Atsushi; Tomizuka, Shinji

    2014-11-01

    The Soft Gamma-ray Detector (SGD) is one of the instrument payloads onboard ASTRO-H, and will cover a wide energy band (60-600 keV) at a background level 10 times better than instruments currently in orbit. The SGD achieves low background by combining a Compton camera scheme with a narrow field-of-view active shield. The Compton camera in the SGD is realized as a hybrid semiconductor detector system which consists of silicon and cadmium telluride (CdTe) sensors. The design of the SGD Compton camera has been finalized and the final prototype, which has the same configuration as the flight model, has been fabricated for performance evaluation. The Compton camera has overall dimensions of 12 cm×12 cm×12 cm, consisting of 32 layers of Si pixel sensors and 8 layers of CdTe pixel sensors surrounded by 2 layers of CdTe pixel sensors. The detection efficiency of the Compton camera reaches about 15% and 3% for 100 keV and 511 keV gamma rays, respectively. The pixel pitch of the Si and CdTe sensors is 3.2 mm, and the signals from all 13,312 pixels are processed by 208 ASICs developed for the SGD. Good energy resolution is afforded by semiconductor sensors and low noise ASICs, and the obtained energy resolutions with the prototype Si and CdTe pixel sensors are 1.0-2.0 keV (FWHM) at 60 keV and 1.6-2.5 keV (FWHM) at 122 keV, respectively. This results in good background rejection capability due to better constraints on Compton kinematics. Compton camera energy resolutions achieved with the final prototype are 6.3 keV (FWHM) at 356 keV and 10.5 keV (FWHM) at 662 keV, which satisfy the instrument requirements for the SGD Compton camera (better than 2%). Moreover, a low intrinsic background has been confirmed by the background measurement with the final prototype.
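
    The 'constraints on Compton kinematics' mentioned above come from the standard Compton scattering formula: a two-site event with energy deposits E1 (Si scatterer) and E2 (CdTe absorber) fixes the scattering angle. A minimal sketch, assuming the photon is fully absorbed at the second site:

```python
import math

ME_C2 = 511.0  # electron rest energy, keV

def compton_angle_deg(e1_si_kev, e2_cdte_kev):
    """Scattering angle for a two-site event: the photon scatters in Si,
    depositing E1, and is fully absorbed in CdTe with E2, so the incident
    energy is E0 = E1 + E2 and cos(theta) = 1 - me*c^2*(1/E2 - 1/E0)."""
    e0 = e1_si_kev + e2_cdte_kev
    cos_t = 1.0 - ME_C2 * (1.0 / e2_cdte_kev - 1.0 / e0)
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_t))))
```

    Better energy resolution tightens the reconstructed angle, which is why the low-noise ASICs translate directly into background rejection: events whose reconstructed cone is inconsistent with the narrow field of view can be discarded.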

  8. Film cooling performance of a row of dual-fanned holes at various injection angles

    NASA Astrophysics Data System (ADS)

    Li, Guangchao; Wang, Haofeng; Zhang, Wei; Kou, Zhihai; Xu, Rangshu

    2017-10-01

    The film cooling performance of a row of dual-fanned holes with injection angles of 30°, 60° and 90° was experimentally investigated at blowing ratios of 1.0 and 2.0. The dual-fanned hole is a novel shaped hole that has both inlet expansion and outlet expansion. A transient thermochromic liquid crystal technique was used to obtain local values of film cooling effectiveness and heat transfer coefficient. The results show that the injection angle strongly influences the two-dimensional distributions of film cooling effectiveness and heat transfer coefficient. For the small injection angle of 30° and the small blowing ratio of 1.0, only a narrow spanwise region is covered with film. Increasing either the injection angle or the blowing ratio enhances spanwise film diffusion but reduces the local cooling ability far away from the hole. The injection angle also affects the averaged film cooling effectiveness differently at different x/d locations. At injection angles of 30° and 60°, two bands of high heat transfer coefficient appear in the mixing region of the gas and coolant. As the injection angle increases to 90°, the mixing enhances heat transfer near the film hole. The averaged heat transfer coefficient increases with increasing injection angle.
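
    The two quantities central to this study have standard definitions, made explicit in the short sketch below (the adiabatic-wall effectiveness convention is assumed; the paper may normalise differently):

```python
def cooling_effectiveness(t_mainstream, t_adiabatic_wall, t_coolant):
    """Adiabatic film cooling effectiveness:
    eta = (T_inf - T_aw) / (T_inf - T_c). A value of 1 means the wall
    is bathed in pure coolant; 0 means no film protection."""
    return (t_mainstream - t_adiabatic_wall) / (t_mainstream - t_coolant)

def blowing_ratio(rho_coolant, u_coolant, rho_mainstream, u_mainstream):
    """Blowing ratio M = (rho_c * U_c) / (rho_inf * U_inf)."""
    return (rho_coolant * u_coolant) / (rho_mainstream * u_mainstream)
```

    For example, a wall at 450 K between a 600 K mainstream and 300 K coolant has an effectiveness of 0.5.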

  9. Feasibility Study of Utilization of Action Camera, GoPro Hero 4, Google Glass, and Panasonic HX-A100 in Spine Surgery.

    PubMed

    Lee, Chang Kyu; Kim, Youngjun; Lee, Nam; Kim, Byeongwoo; Kim, Doyoung; Yi, Seong

    2017-02-15

    A study of the feasibility of commercially available action cameras for recording video of spine surgery. Recent innovations in wearable action cameras with high-definition video recording enable surgeons to use a camera during an operation easily and at low cost. The purpose of this study is to compare the feasibility, safety, and efficacy of commercially available action cameras in recording spine surgery. There are early reports of medical professionals using Google Glass throughout the hospital, the Panasonic HX-A100 action camera, and GoPro; this study is the first report for spine surgery. Three commercially available cameras were tested: the GoPro Hero 4 Silver, Google Glass, and the Panasonic HX-A100 action camera. A typical spine surgery, posterior lumbar laminectomy and fusion, was selected for video recording. The three cameras were used by one surgeon and video was recorded throughout the operation. The comparison covered human factors, specifications, and video quality. The most convenient and lightweight device to wear and hold throughout the long operation was Google Glass. Regarding image quality, all devices except Google Glass supported HD format, and the GoPro uniquely offers 2.7K or 4K resolution; video resolution was best on the GoPro. Regarding field of view, the GoPro can adjust the point of interest and field of view according to the surgery, and its narrow FOV option was best for recording a video clip to share. Google Glass has further potential through application programs. Connectivity such as Wi-Fi and Bluetooth enables video streaming for an audience, but only Google Glass has a built-in two-way communication feature. Action cameras have the potential to improve patient safety, operator comfort, and procedure efficiency in the field of spinal surgery, and to broadcast a surgery, as the devices and their application programs develop in the future.

  10. Design of motion adjusting system for space camera based on ultrasonic motor

    NASA Astrophysics Data System (ADS)

    Xu, Kai; Jin, Guang; Gu, Song; Yan, Yong; Sun, Zhiyuan

    2011-08-01

    The drift angle is the transverse intersection angle of the image motion vector of a space camera; adjusting this angle reduces its influence on image quality. The ultrasonic motor (USM) is a new type of actuator driven by ultrasonic waves excited in piezoelectric ceramics, with many advantages over conventional electromagnetic motors. In this paper, improvements to the control system of the drift adjusting mechanism are designed. The drift adjusting system is built around the T-60 ultrasonic motor and is composed of the drift adjusting mechanical frame, the ultrasonic motor, the motor driver, a photoelectric encoder and the drift adjusting controller. A TMS320F28335 DSP was adopted as the calculation and control processor, the photoelectric encoder serves as the sensor of the position closed loop, and a voltage driving circuit generates the ultrasonic wave. A mathematical model of the drive circuit of the T-60 ultrasonic motor was built using Matlab modules. To verify the validity of the drift adjusting system, a disturbance source was introduced and a simulation analysis was performed. The motor drive control system for drift adjustment was designed with an improved PID control. The drift angle adjusting system offers small size, simple configuration, high position control precision, fine repeatability, a self-locking property and low power consumption. The results show that the system accomplishes the drift angle adjusting mission well.
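
    The abstract does not specify what the 'improved PID control' modifies, so the sketch below is only a generic discrete PID position loop driving a toy integrator plant; the gains and the plant model are illustrative, not the paper's:

```python
class PID:
    """Generic discrete PID position controller (illustrative gains;
    the paper's improved PID and the motor dynamics are not given)."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Toy plant: the drift angle integrates the commanded rate.
pid = PID(kp=2.0, ki=0.5, kd=0.05, dt=0.01)
angle = 0.0
for _ in range(4000):                 # 40 s of simulated time
    angle += pid.step(1.0, angle) * pid.dt
```

    In the real system the encoder supplies `measured` and the DSP output drives the USM through the voltage driving circuit.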

  11. Robust range estimation with a monocular camera for vision-based forward collision warning system.

    PubMed

    Park, Ki-Yeong; Hwang, Sun-Young

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To handle the variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments.
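
    Once the virtual horizon row is known, range follows from flat-road pinhole geometry. A minimal sketch of that back-end step (the run-time horizon estimation, the paper's actual contribution, is assumed already done, and all parameter values are illustrative):

```python
def range_from_horizon(focal_px, camera_height_m, y_contact_px, y_horizon_px):
    """Flat-road pinhole model: a ground point imaged dy pixels below
    the horizon row lies at distance Z = f * h / dy, where f is the
    focal length in pixels and h the camera height above the road."""
    dy = y_contact_px - y_horizon_px
    if dy <= 0:
        raise ValueError("contact row must lie below the horizon")
    return focal_px * camera_height_m / dy
```

    For example, with f = 1000 px and a camera 1.2 m above the road, a vehicle whose road contact is 60 pixels below the horizon is 20 m away; an error in the horizon row therefore translates directly into a range error, which is why the run-time horizon estimate matters.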

  12. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance
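
    The frame-transformation step described above can be sketched with plain rotation matrices: chain the camera-to-IMU and IMU-to-Earth rotations, then read downtilt and azimuth off the boresight direction in Earth coordinates. The conventions below (ENU axes, azimuth clockwise from north) are assumptions, not necessarily the paper's:

```python
import math
import numpy as np

def camera_to_earth(r_earth_imu, r_imu_cam, v_cam):
    """Express a camera-frame direction in the Earth (ENU) frame by
    chaining the camera-to-IMU and IMU-to-Earth rotations."""
    return r_earth_imu @ (r_imu_cam @ v_cam)

def downtilt_azimuth(boresight_enu):
    """Downtilt = angle below the horizontal plane; azimuth = clockwise
    from north. ENU convention assumed: x = east, y = north, z = up."""
    e, n, u = boresight_enu
    azimuth = math.degrees(math.atan2(e, n)) % 360.0
    downtilt = math.degrees(math.atan2(-u, math.hypot(e, n)))
    return downtilt, azimuth
```

    The paper's refinement amounts to adding extra degrees of freedom between the IMU sensors and optimizing the calibration against a downtilt-plus-azimuth error metric rather than a single rotational error.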

  13. Game of thrown bombs in 3D: using high speed cameras and photogrammetry techniques to reconstruct bomb trajectories at Stromboli (Italy)

    NASA Astrophysics Data System (ADS)

    Gaudin, D.; Taddeucci, J.; Scarlato, P.; Del Bello, E.; Houghton, B. F.; Orr, T. R.; Andronico, D.; Kueppers, U.

    2015-12-01

    Large juvenile bombs and lithic clasts, produced and ejected during explosive volcanic eruptions, follow ballistic trajectories. Of particular interest are: 1) the determination of ejection velocity and launch angle, which give insights into shallow conduit conditions and geometry; 2) particle trajectories, with an eye on trajectory evolution caused by collisions between bombs, as well as the interaction between bombs and ash/gas plumes; and 3) the computation of the final emplacement of bomb-sized clasts, which is important for hazard assessment and risk management. Ground-based imagery from a single camera only allows the reconstruction of bomb trajectories in a plane perpendicular to the line of sight, which may lead to underestimation of bomb velocities and does not allow the directionality of the ejections to be studied. To overcome this limitation, we adapted photogrammetry techniques to reconstruct 3D bomb trajectories from two or three synchronized high-speed video cameras. In particular, we modified existing algorithms to account for the errors that may arise from the very high velocity of the particles and the impossibility of measuring tie points close to the scene. Our method was tested during two field campaigns at Stromboli. In 2014, two high-speed cameras with a 500 Hz frame rate and a ~2 cm resolution were set up ~350 m from the crater, 10° apart and synchronized. The experiment was repeated with similar parameters in 2015, but using three high-speed cameras in order to significantly reduce uncertainties and allow their estimation. Trajectory analyses for tens of bombs at various times allowed for the identification of shifts in the mean directivity and dispersal angle of the jets during the explosions. These time evolutions are also visible on the permanent video-camera monitoring system, demonstrating the applicability of our method to all kinds of explosive volcanoes.
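
    Once a bomb is tracked in 3D, ejection speed and launch angle follow from two consecutive positions under drag-free ballistics. A minimal 2D sketch (frame interval dt; the half-gravity term refers the finite-difference vertical velocity back to the time of the first sample):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def launch_params(p0, p1, dt):
    """Ejection speed (m/s) and launch angle (deg) from two tracked
    (x, z) positions one frame apart, assuming drag-free ballistics.
    Since dz/dt = vz0 - 0.5*G*dt over the interval, adding 0.5*G*dt
    recovers the vertical velocity at p0."""
    vx = (p1[0] - p0[0]) / dt
    vz = (p1[1] - p0[1]) / dt + 0.5 * G * dt
    return math.hypot(vx, vz), math.degrees(math.atan2(vz, vx))
```

    At a 500 Hz frame rate (dt = 2 ms) the gravity correction is small but systematic; neglecting drag is an assumption that the paper's error analysis would have to justify for real clasts.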

  14. An interpretation of the narrow positron annihilation feature from X-ray nova Muscae 1991

    NASA Technical Reports Server (NTRS)

    Chen, Wan; Gehrels, Neil; Cheng, F. H.

    1993-01-01

    The physical mechanism responsible for the narrow redshifted positron annihilation gamma-ray line from the X-ray nova Muscae 1991 is studied. The orbital inclination angle of the system is estimated and its black hole mass is constrained under the assumptions that the annihilation line centroid redshift is purely gravitational and that the line width is due to the combined effect of temperature broadening and disk rotation. The large black hole mass lower limit of 8 solar and the high binary mass ratio it implies raise a serious challenge to theoretical models of the formation and evolution of massive binaries.
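
    The 'purely gravitational' redshift assumption can be illustrated with the Schwarzschild formula. The sketch below is generic; the emission radius used in the example is illustrative, not the paper's fitted value:

```python
G = 6.674e-11     # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8       # speed of light, m/s
M_SUN = 1.989e30  # solar mass, kg

def grav_redshift(mass_kg, radius_m):
    """Schwarzschild gravitational redshift of a photon emitted at
    radius r: 1 + z = (1 - r_s/r)^(-1/2), with r_s = 2*G*M/c^2."""
    r_s = 2.0 * G * mass_kg / C ** 2
    return (1.0 - r_s / radius_m) ** -0.5 - 1.0
```

    For example, emission at 3 Schwarzschild radii gives z = (2/3)^(-1/2) - 1 ≈ 0.22 regardless of the mass; in the paper the measured line centroid shift instead constrains the emission radius, and the line width adds the thermal and rotational broadening constraints.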

  15. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ~4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 × 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.

  16. Experimental Study of Multispectral Characteristics of an Unmanned Aerial Vehicle at Different Observation Angles

    PubMed Central

    Zheng, Haijing; Bai, Tingzhu; Wang, Quanxi; Cao, Fengmei; Shao, Long; Sun, Zhaotian

    2018-01-01

    This study investigates multispectral characteristics of an unmanned aerial vehicle (UAV) at different observation angles by experiment. The UAV and its engine are tested on the ground in the cruise state. Spectral radiation intensities at different observation angles are obtained in the infrared band of 0.9–15 μm by a spectral radiometer. Meanwhile, infrared images are captured separately by long-wavelength infrared (LWIR), mid-wavelength infrared (MWIR), and short-wavelength infrared (SWIR) cameras. Additionally, orientation maps of the radiation area and radiance are obtained. The results suggest that the spectral radiation intensity of the UAV is determined by its exhaust plume and that the main infrared emission bands occur at 2.7 μm and 4.3 μm. At observation angles in the range of 0°–90°, the radiation area of the UAV in MWIR band is greatest; however, at angles greater than 90°, the radiation area in the SWIR band is greatest. In addition, the radiance of the UAV at an angle of 0° is strongest. These conclusions can guide IR stealth technique development for UAVs. PMID:29389880

  17. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to that of traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  18. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  19. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  20. Optimization design of periscope type 3X zoom lens design for a five megapixel cellphone camera

    NASA Astrophysics Data System (ADS)

    Sun, Wen-Shing; Tien, Chuen-Lin; Pan, Jui-Wen; Chao, Yu-Hao; Chu, Pu-Yi

    2016-11-01

    This paper presents a periscope-type 3X zoom lens design for a five-megapixel cellphone camera. The optical system places a right-angle prism in front of the zoom lenses to fold the optical path by 90°, limiting the zoom lens length to 6 mm so that it can be embedded in a mobile phone 6 mm thick. The zoom lens has three groups with six elements. The half field of view varies from 30° to 10.89°, the effective focal length from 3.142 mm to 9.426 mm, and the F-number from 2.8 to 5.13.
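
    The quoted specifications are internally consistent, as a quick check shows: the focal range gives exactly the 3X zoom ratio, and dividing focal length by F-number yields the entrance pupil diameter at each end of the zoom (a derived figure, not stated in the paper):

```python
f_wide, f_tele = 3.142, 9.426    # mm, quoted focal length range
fno_wide, fno_tele = 2.8, 5.13   # quoted F-numbers

zoom_ratio = f_tele / f_wide     # 9.426 / 3.142 = 3.0 exactly
pupil_wide = f_wide / fno_wide   # entrance pupil diameter, mm
pupil_tele = f_tele / fno_tele
```

    The pupil grows from roughly 1.12 mm to 1.84 mm across the zoom, i.e. the design does not hold a constant aperture diameter, which is why the F-number increases by less than the 3X focal ratio.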

  1. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel, three-mirror, telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. Optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  2. Convolutional Neural Network-Based Human Detection in Nighttime Images Using Visible Light Camera Sensors.

    PubMed

    Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung

    2017-05-08

    Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance, and the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk nighttime human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show that the method achieves high-accuracy human detection in a variety of environments and performs well compared to existing methods.

  3. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

    Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals, as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images, and digital imaging devices with many more than three channels (hyperspectral cameras), have recently been developed to produce image spectrophotometers that recover reflectance spectra at individual pixel locations. We compare a linearised RGB camera and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device depends on the colour similarity between samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently of chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides a means to explore other important aspects of colour vision, such as the perception of certain types of camouflage and colour constancy, where multiple narrow-band sensors increase resolution. PMID:25965264
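A minimal sketch of the kind of linear spectral reconstruction from RGB responses the abstract refers to (the spectral sensitivities, training spectra, and ridge regularisation here are all illustrative assumptions, not the authors' method). Metamerism appears as irreducible residual error: distinct spectra can produce identical RGB triplets.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                      # wavelength samples, nm

def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

# Assumed broad-band camera sensitivities (3 x 31).
S = np.stack([gauss(450, 40), gauss(550, 40), gauss(620, 40)])

# Training spectra: smooth nonnegative curves (stand-ins for measured targets).
basis = np.stack([gauss(mu, 60) for mu in np.linspace(420, 680, 8)])
train = np.clip(rng.random((200, 8)) @ basis, 0, None)          # 200 x 31
rgb_train = train @ S.T                                         # 200 x 3 responses

# Ridge-regularised linear reconstruction operator W (3 -> 31).
lam = 1e-3
W = np.linalg.solve(rgb_train.T @ rgb_train + lam * np.eye(3), rgb_train.T @ train)

test_spec = np.clip(rng.random(8) @ basis, 0, None)
recon = (test_spec @ S.T) @ W                                   # reconstructed spectrum
err = np.linalg.norm(recon - test_spec) / np.linalg.norm(test_spec)
print(f"relative reconstruction error: {err:.2f}")
```

The three-channel device can only recover the component of the spectrum that its sensitivities project onto, which is why the paper finds RGB discrimination power depends on colour similarity.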

  4. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and the ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  5. Mounted Video Camera Captures Launch of STS-112, Shuttle Orbiter Atlantis

    NASA Technical Reports Server (NTRS)

    2002-01-01

    A color video camera mounted to the top of the External Tank (ET) provided this spectacular never-before-seen view of the STS-112 mission as the Space Shuttle Orbiter Atlantis lifted off in the afternoon of October 7, 2002. The camera provided views as the orbiter began its ascent until it reached near-orbital speed, about 56 miles above the Earth, including a view of the front and belly of the orbiter, a portion of the Solid Rocket Booster, and ET. The video was downlinked during flight to several NASA data-receiving sites, offering the STS-112 team an opportunity to monitor the shuttle's performance from a new angle. Atlantis carried the S1 Integrated Truss Structure and the Crew and Equipment Translation Aid (CETA) Cart. The CETA is the first of two human-powered carts that will ride along the International Space Station's railway providing a mobile work platform for future extravehicular activities by astronauts. Landing on October 18, 2002, the Orbiter Atlantis ended its 11-day mission.

  6. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  7. Mars Image Collection Mosaic Builder

    NASA Technical Reports Server (NTRS)

    Plesea, Lucian; Hare, Trent

    2008-01-01

    A computer program assembles images from the Mars Global Surveyor (MGS) Mars Observer Camera Narrow Angle (MOCNA) collection to generate a uniform, high-resolution, georeferenced, uncontrolled mosaic image of the Martian surface. At the time of reporting the information for this article, the mosaic covered 7 percent of the Martian surface and contained data from more than 50,000 source images acquired under various light conditions at various resolutions.

  8. Upper wide-angle viewing system for ITER

    DOE PAGES

    Lasnier, C. J.; McLean, A. G.; Gattuso, A.; ...

    2016-08-15

    The Upper Wide Angle Viewing System (UWAVS) will be installed on five upper ports of ITER. This paper presents the major requirements, gives an overview of the preliminary design with reasons for some design choices, examines self-emitted IR light from UWAVS optics and its effect on accuracy, and shows calculations of signal-to-noise ratios for the two-color temperature output as a function of integration time and divertor temperature. Accurate temperature output requires correction for vacuum window absorption vs. wavelength and for self-emitted IR, which requires good measurement of the temperature of the optical components. The anticipated signal-to-noise ratio using presently available IR cameras is adequate for the required 500 Hz frame rate.

  9. A single-layer wide-angle negative-index metamaterial at visible frequencies.

    PubMed

    Burgos, Stanley P; de Waele, Rene; Polman, Albert; Atwater, Harry A

    2010-05-01

    Metamaterials are materials with artificial electromagnetic properties defined by their sub-wavelength structure rather than their chemical composition. Negative-index materials (NIMs) are a special class of metamaterials characterized by an effective negative index that gives rise to such unusual wave behaviour as backwards phase propagation and negative refraction. These extraordinary properties lead to many interesting functions such as sub-diffraction imaging and invisibility cloaking. So far, NIMs have been realized through layering of resonant structures, such as split-ring resonators, and have been demonstrated at microwave to infrared frequencies over a narrow range of angles-of-incidence and polarization. However, resonant-element NIM designs suffer from the limitations of not being scalable to operate at visible frequencies because of intrinsic fabrication limitations, require multiple functional layers to achieve strong scattering and have refractive indices that are highly dependent on angle of incidence and polarization. Here we report a metamaterial composed of a single layer of coupled plasmonic coaxial waveguides that exhibits an effective refractive index of -2 in the blue spectral region with a figure-of-merit larger than 8. The resulting NIM refractive index is insensitive to both polarization and angle-of-incidence over a +/-50 degree angular range, yielding a wide-angle NIM at visible frequencies.

  10. Preliminary optical design of PANIC, a wide-field infrared camera for CAHA

    NASA Astrophysics Data System (ADS)

    Cárdenas, M. C.; Rodríguez Gómez, J.; Lenzen, R.; Sánchez-Blanco, E.

    2008-07-01

    In this paper, we present the preliminary optical design of PANIC (PAnoramic Near Infrared camera for Calar Alto), a wide-field infrared imager for the Calar Alto 2.2 m telescope. The camera optical design is a folded single optical train that images the sky onto the focal plane with a plate scale of 0.45 arcsec per 18 μm pixel. A mosaic of four Teledyne Hawaii-2RG 2k x 2k detectors gives a field of view of 31.9 arcmin x 31.9 arcmin. This cryogenic instrument has been optimized for the Y, J, H and K bands. Special care has been taken in the selection of the standard IR materials used for the optics in order to maximize the instrument throughput and to include the z band. The main challenges of this design are: to produce a well-defined internal pupil which allows reducing the thermal background with a cryogenic pupil stop; the correction of off-axis aberrations due to the large field available; the correction of chromatic aberration because of the wide spectral coverage; and the capability of introducing narrow-band filters (~1%) into the system while minimizing the degradation of the filter passband without a collimated stage in the camera. We show the optomechanical error budget and compensation strategy that allow our as-built design to meet the required performance from an optical point of view. Finally, we demonstrate the flexibility of the design by showing the performance of PANIC at the CAHA 3.5 m telescope.
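A back-of-envelope check on the quoted plate scale (an illustration, not from the paper): a scale of 0.45 arcsec per 18 μm pixel implies the camera's effective focal length via scale[arcsec/mm] = 206265 / f[mm].

```python
pix_scale_arcsec = 0.45   # arcsec per pixel (from the abstract)
pix_size_mm = 0.018       # 18 um pixels

scale_arcsec_per_mm = pix_scale_arcsec / pix_size_mm   # 25 arcsec/mm
f_mm = 206265.0 / scale_arcsec_per_mm                  # ~8251 mm effective focal length

# A 4k x 4k mosaic of 2k x 2k detectors covers ~30.7 arcmin of active area;
# the quoted 31.9 arcmin field presumably also spans the inter-detector gaps.
fov_arcmin = 4096 * pix_scale_arcsec / 60.0
print(f"f = {f_mm:.0f} mm, active-area FOV = {fov_arcmin:.1f} arcmin")
```

The small difference between the active-area estimate and the quoted field of view is consistent with gaps between the four detectors in the mosaic (an assumption here, not stated in the record).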

  11. Optic for industrial endoscope/borescope with narrow field of view and low distortion

    DOEpatents

    Stone, Gary F.; Trebes, James E.

    2005-08-16

    An optic for the imaging optics on the distal end of a flexible fiberoptic endoscope or rigid borescope inspection tool. The image coverage is over a narrow (<20 degrees) field of view with very low optical distortion (<5% pincushion or barrel distortion), compared to the typical <20% distortion. The optic will permit non-contact surface roughness measurements using optical techniques. It will also permit simultaneous collection of selected image-plane data, which can subsequently be optically processed. The image analysis will yield non-contact surface topology data for inspection where access to the surface does not permit mechanical stylus profilometer verification of surface topology. The optic allows a very broad spectral band or range of optical inspection. It is capable of spectroscopic imaging and fluorescence-induced imaging when a scanning illumination source is used. The full field of view for this optic is 10 degrees, compared to the 40-70 degree full-angle field of view of conventional gradient-index (GRIN) lens systems.

  12. Rhea and Her Craters

    NASA Image and Video Library

    2005-01-17

    This Cassini image shows predominantly the impact-scarred leading hemisphere of Saturn's icy moon Rhea (1,528 kilometers, or 949 miles across). The image was taken in visible light with the Cassini spacecraft narrow angle camera on Dec. 12, 2004, at a distance of 2 million kilometers (1.2 million miles) from Rhea and at a Sun-Rhea-spacecraft, or phase, angle of 30 degrees. The image scale is about 12 kilometers (7.5 miles) per pixel. The image has been magnified by a factor of two and contrast enhanced to aid visibility. http://photojournal.jpl.nasa.gov/catalog/PIA06564

  13. Influence of laser beam incidence angle on laser lap welding quality of galvanized steels

    NASA Astrophysics Data System (ADS)

    Mei, Lifang; Yan, Dongbing; Chen, Genyu; Wang, Zhenhui; Chen, Shuixuan

    2017-11-01

    Based on the characteristics of laser welded structural parts of auto bodies, the influence of variation in laser beam incidence angle on the lap welding performance of galvanized auto-body sheets was studied. Lap welding tests were carried out on the galvanized sheets for auto-body application at different laser beam incidence angles by using the optimal welding parameters obtained through orthogonal experiment. The effects of incidence angle variation on seam appearance, cross-sectional shape, joint mechanical properties and microstructure of weldments were analyzed. In addition, the main factors influencing the value of incidence angle were investigated. According to the results, the weld seams had a good appearance as well as a fine and uniform microstructure when the laser beam incidence angle was smaller than the critical incidence angle, and thus they could withstand great tensile and shear loads. Moreover, all tensile-shear specimens were fractured in the base material zone. When the laser beam incidence angle was larger than the critical incidence angle, defects like shrinkage and collapse tended to emerge, resulting in deteriorated weldability of the specimens. Meanwhile, factors like the type and thickness of sheet, weld width as well as inter-sheet gap all had a certain effect on the value of laser beam incidence angle. When the sheet thickness was small and the weld width was narrow, the laser beam incidence angle could be increased appropriately. At the same time, small changes in the inter-sheet gap could greatly impact the value of incidence angle. When the inter-sheet gap was small, the laser beam incidence angle should not be too large.

  14. Angle-resolved reflection spectroscopy of high-quality PMMA opal crystal

    NASA Astrophysics Data System (ADS)

    Nemtsev, Ivan V.; Tambasov, Igor A.; Ivanenko, Alexander A.; Zyryanov, Victor Ya.

    2018-02-01

    PMMA opal crystal was prepared by a simple hybrid method, which includes sedimentation, meniscus formation and evaporation. We investigated three surfaces of this crystal by angle-resolved reflective light spectroscopy and SEM study. The angle-resolved reflective measurements were carried out in the 400-1100 nm range. We identified a high-quality ordered surface region of the crystal. A narrow particle size distribution of the surface has been revealed. The average particle diameter obtained with SEM was nearly 361 nm. The most interesting result was that the reflectivity of the surface reached up to 98% at normal light incidence. By fitting the dependence of the maximum-reflectivity wavelength on angle with the Bragg-Snell law, the wavelength of maximum reflectivity at 0°, the particle diameter and the fill factor were determined. For the best surface, the maximum-reflectivity wavelength at 0° was estimated to be 869 nm. The particle diameter and fill factor were calculated as 372 nm and 0.8715, respectively. The diameter obtained by fitting is in excellent agreement with the particle diameter obtained with SEM. The reflectivity maximum is assumed to increase significantly when increasing the fill factor. We believe that using our simple approach to manufacture PMMA opal crystals will significantly advance the fabrication of high-quality photonic crystal templates and thin films.
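A sketch of the Bragg-Snell estimate behind the reported fit, using the fitted diameter and fill factor from the abstract. The PMMA refractive index (1.49) and the fcc (111) interplanar-spacing relation are assumed literature values, not stated in the record.

```python
import math

# Bragg-Snell law: lambda_max(theta) = 2 * d111 * sqrt(n_eff^2 - sin^2(theta)),
# with d111 = sqrt(2/3) * D for an fcc (111) surface and
# n_eff^2 = f * n_PMMA^2 + (1 - f) * n_air^2 (volume-weighted average).
D_nm = 372.0          # sphere diameter from the fit
fill = 0.8715         # fill factor from the fit
n_pmma, n_air = 1.49, 1.0   # assumed refractive indices

d111 = math.sqrt(2.0 / 3.0) * D_nm
n_eff = math.sqrt(fill * n_pmma**2 + (1.0 - fill) * n_air**2)

lam0 = 2.0 * d111 * n_eff                    # normal incidence (theta = 0)
print(f"lambda_max(0 deg) ~ {lam0:.0f} nm")  # ~873 nm with these assumed constants
```

With these assumed constants the estimate lands within a few nanometers of the reported 869 nm, illustrating why the fitted diameter agrees so well with the SEM value.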

  15. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512x512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 C, the CLASP cameras achieved the low-noise performance requirements (less than or equal to 25 e- read noise and less than or equal to 10 e-/sec/pix dark current), in addition to maintaining a stable gain of approximately 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber configured to run several tests verifying the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  16. Hip rotation angle is associated with frontal plane knee joint mechanics during running.

    PubMed

    Sakaguchi, Masanori; Shimizu, Norifumi; Yanai, Toshimasa; Stefanyshyn, Darren J; Kawakami, Yasuo

    2015-02-01

    Inability to control lower extremity segments in the frontal and transverse planes resulting in large knee abduction angle and increased internal knee abduction impulse has been associated with patellofemoral pain (PFP). However, the influence of hip rotation angles on frontal plane knee joint kinematics and kinetics remains unclear. The purpose of this study was to explore how hip rotation angles are related to frontal plane knee joint kinematics and kinetics during running. Seventy runners participated in this study. Three-dimensional marker positions and ground reaction forces were recorded with an 8-camera motion analysis system and a force plate while subjects ran along a 25-m runway at a speed of 4m/s. Knee abduction, hip rotation and toe-out angles, frontal plane lever arm at the knee, internal knee abduction moment and impulse, ground reaction forces and the medio-lateral distance from the ankle joint center to the center of pressure (AJC-CoP) were quantified. The findings of this study indicate that greater hip external rotation angles were associated with greater toe-out angles, longer AJC-CoP distances, smaller internal knee abduction impulses with shorter frontal plane lever arms and greater knee abduction angles. Thus, there appears to exist a conflict between kinematic and kinetic risk factors of PFP, and hip external rotation angle may be a key factor to control frontal plane knee joint kinematics and kinetics. These results may help provide an appropriate manipulation and/or intervention on running style to reduce the risk of PFP. Copyright © 2014 Elsevier B.V. All rights reserved.

  17. Earth Reflectivity from Deep Space Climate Observatory (DSCOVR) Earth Polychromatic Imaging Camera (EPIC)

    NASA Astrophysics Data System (ADS)

    Song, W.; Knyazikhin, Y.; Wen, G.; Marshak, A.; Yan, G.; Mu, X.; Park, T.; Chen, C.; Xu, B.; Myneni, R. B.

    2017-12-01

    Earth reflectivity, also termed Earth albedo or Earth reflectance, is defined as the fraction of incident solar radiation reflected back to space at the top of the atmosphere. It is a key climate parameter that describes climate forcing and the associated response of the climate system. Satellites are among the most efficient means of measuring Earth reflectivity. Conventional polar-orbiting and geostationary satellites observe the Earth at a specific local solar time or monitor only a specific area of the Earth. For the first time, NASA's Earth Polychromatic Imaging Camera (EPIC) onboard NOAA's Deep Space Climate Observatory (DSCOVR) simultaneously collects radiance data of the entire sunlit Earth at 8 km nadir resolution every 65 to 110 min. It provides reflectivity images in the backscattering direction, with scattering angles between 168° and 176°, at 10 narrow spectral bands in the ultraviolet, visible, and near-infrared (NIR) wavelengths. We estimate the Earth reflectivity using DSCOVR EPIC observations and analyze errors in Earth reflectivity due to the sampling strategy of polar-orbiting Terra/Aqua MODIS and geostationary Goddard Earth Observing System-R series missions. We also provide estimates of contributions from ocean, clouds, land and vegetation to the Earth reflectivity. The graphic abstract shows enhanced RGB EPIC images of the Earth taken on 24 July 2016 at 7:04 GMT and 15:48 GMT. Parallel lines depict a 2330 km wide Aqua MODIS swath. The plot shows diurnal courses of mean Earth reflectance over the Aqua swath (triangles) and the entire image (circles). In this example the relative difference between the mean reflectances is +34% at 7:04 GMT and -16% at 15:48 GMT. Corresponding daily averages are 0.256 (0.044) and 0.231 (0.025). The relative precision, estimated as root mean square relative error, is 17.9% in this example.

  18. Relationship between iris surface features and angle width in Asian eyes.

    PubMed

    Sidhartha, Elizabeth; Nongpiur, Monisha Esther; Cheung, Carol Y; He, Mingguang; Wong, Tien Yin; Aung, Tin; Cheng, Ching-Yu

    2014-10-23

    To examine the associations between iris surface features and anterior chamber angle width in Asian eyes. In this prospective cross-sectional study, we recruited 600 subjects from a large population-based study, the Singapore Epidemiology of Eye Diseases (SEED) study. We obtained standardized digital slit-lamp iris photographs and graded the iris crypts (by number and size), furrows (by number and circumferential extent), and color (higher grade denoting darker iris). Vertical and horizontal cross-sections of the anterior chamber were imaged using anterior segment optical coherence tomography. Angle opening distance (AOD), angle recess area (ARA), and trabecular-iris space area (TISA) were measured using customized software. Associations of the angle width with the iris surface features in the subjects' right eyes were assessed using linear regression analysis. A total of 464 eyes of 464 subjects (mean age: 57.5 ± 8.6 years) had complete and gradable data for crypts and color, and 423 eyes had gradable data for furrows. After adjustment for age, sex, ethnicity, pupil size, and corneal arcus, a higher crypt grade was independently associated with wider AOD750 (β [change in angle width per grade higher] = 0.018, P = 0.023), ARA750 (β = 0.022, P = 0.049), and TISA750 (β = 0.011, P = 0.019), and a darker iris was associated with narrower ARA750 (β = -0.025, P = 0.044) and TISA750 (β = -0.013, P = 0.011). Iris surface features, assessed and measured from slit-lamp photographs, correlated well with anterior chamber angle width; irises with more crypts and lighter color were associated with wider angles. These findings may provide another imaging modality to assess angle closure risk based on iris surface features. Copyright 2014 The Association for Research in Vision and Ophthalmology, Inc.

  19. Substituting the polarizer mechanism with a polarization camera - an experiment to confirm its capability

    NASA Astrophysics Data System (ADS)

    Reginald, Nelson Leslie; Gopalswamy, Natchimuthuk; Guhathakurta, Madhulika; Yashiro, Seiji

    2016-05-01

    Experiments that require polarized brightness measurements have traditionally obtained them by taking three successive images through a polarizer rotated to three well-defined angles. With the advent of the polarization camera, the polarized brightness can be measured from a single image. This eliminates the polarizer and its rotator mechanism, contributing to lower weight, size and power requirements and, importantly, higher temporal resolution. We intend to demonstrate the capabilities of the polarization camera by conducting a field experiment during the total solar eclipse of 21 August 2017 using the Imaging Spectrograph of Coronal Electrons (ISCORE) instrument (Reginald et al., Solar Physics, 2009, 260, 347-361). In this instrumental concept, four K-coronal images taken through four filters centered at 385.0, 398.7, 410.0 and 423.3 nm with a bandpass of 4 nm are expected to allow us to determine the coronal electron temperature and electron speed all around the corona. To determine the K-coronal brightness through each filter, we would have to take three images by rotating a polarizer through three angles for each filter, which is not feasible given the short duration of a total solar eclipse. Therefore, in the past we have assumed the total brightness (F + K) measured by each of the four filters to represent the K-coronal brightness, which is a good approximation in the low solar corona. With the polarization camera, however, we can measure the Stokes polarization parameters on a pixel-by-pixel basis for every image, allowing us to quantify independently the total brightness (K + F) and the polarized brightness (K). In addition to the four filter images that allow us to measure the electron temperature and electron speed, taking one more image without a filter will give us enough information to determine the electron density.
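A sketch of the classical three-angle measurement that a polarization camera replaces with a single frame (the pixel values and 0°/60°/120° angle choice are illustrative assumptions). A polarizer at angle φ transmits M(φ) = ½(I + Q cos 2φ + U sin 2φ), which three angles suffice to invert for the Stokes parameters:

```python
import numpy as np

def stokes_from_three(m0, m60, m120):
    """Invert M(phi) = 0.5*(I + Q*cos(2*phi) + U*sin(2*phi)) at 0/60/120 deg."""
    i = (2.0 / 3.0) * (m0 + m60 + m120)
    q = (2.0 / 3.0) * (2.0 * m0 - m60 - m120)
    u = (2.0 / np.sqrt(3.0)) * (m60 - m120)
    return i, q, u

# Round-trip check with synthetic Stokes parameters.
I_true, Q_true, U_true = 1.0, 0.2, -0.1

def meas(phi_deg):
    p = np.radians(phi_deg)
    return 0.5 * (I_true + Q_true * np.cos(2 * p) + U_true * np.sin(2 * p))

i, q, u = stokes_from_three(meas(0), meas(60), meas(120))
pB = np.hypot(q, u)   # polarized brightness
print(i, q, u, pB)
```

A polarization camera delivers these same I, Q, U values per pixel from one exposure, which is what removes the rotator mechanism and the three-exposure time penalty during an eclipse.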

  20. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  1. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  2. SU-F-J-200: An Improved Method for Event Selection in Compton Camera Imaging for Particle Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mackin, D; Beddar, S; Polf, J

    2016-06-15

    Purpose: The uncertainty in the beam range in particle therapy limits the conformality of the dose distributions. Compton scatter cameras (CC), which measure the prompt gamma rays produced by nuclear interactions in the patient tissue, can reduce this uncertainty by producing 3D images confirming the particle beam range and dose delivery. However, the high intensity and short time windows of the particle beams limit the number of gammas detected. We address this problem by developing a method for filtering gamma ray scattering events from the background by applying the known gamma ray spectrum. Methods: We used a four-stage Compton camera to record in list mode the energy deposition and scatter positions of gammas from a Co-60 source. Each CC stage contained a 4×4 array of CdZnTe crystals. To produce images, we used a back-projection algorithm and four filtering methods: basic, energy windowing, delta energy (ΔE), or delta scattering angle (Δθ). Basic filtering requires events to be physically consistent. Energy windowing requires event energy to fall within a defined range. ΔE filtering selects events with the minimum difference between the measured and a known gamma energy (1.17 and 1.33 MeV for Co-60). Δθ filtering selects events with the minimum difference between the measured scattering angle and the angle corresponding to a known gamma energy. Results: Energy window filtering reduced the FWHM from 197.8 mm for basic filtering to 78.3 mm. ΔE and Δθ filtering achieved the best results, with FWHMs of 64.3 and 55.6 mm, respectively. In general, Δθ filtering selected events with scattering angles < 40°, while ΔE filtering selected events with angles > 60°. Conclusion: Filtering CC events improved the quality and resolution of the corresponding images. ΔE and Δθ filtering produced similar results, but each favored different events.
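An illustrative sketch of the ΔE-style event selection described above, on synthetic list-mode data (not the authors' code; the tolerance and event values are assumptions): keep events whose summed energy deposit lies closest to a known source line, here the Co-60 lines at 1.17 and 1.33 MeV.

```python
import numpy as np

KNOWN_LINES_MEV = np.array([1.17, 1.33])  # Co-60 gamma lines

def delta_e_filter(event_energies_mev, tol_mev=0.05):
    """event_energies_mev: (n_events, n_interactions) energy deposits.
    Returns a boolean mask of events whose total energy lies within
    tol_mev of the nearest known gamma line."""
    total = event_energies_mev.sum(axis=1)
    delta = np.min(np.abs(total[:, None] - KNOWN_LINES_MEV[None, :]), axis=1)
    return delta <= tol_mev

rng = np.random.default_rng(1)
# Synthetic two-interaction events: 50 full-energy deposits of the 1.17 MeV
# line, plus 50 background-like partial deposits.
full = np.column_stack([np.full(50, 0.50), np.full(50, 0.67)])  # sums to 1.17 MeV
partial = rng.uniform(0.1, 0.4, size=(50, 2))                   # sums well below 1.12 MeV
events = np.vstack([full, partial])

mask = delta_e_filter(events)
print(mask.sum(), "of", len(events), "events kept")  # 50 of 100
```

The Δθ variant instead compares the geometric scattering angle (from the recorded interaction positions) against the angle the Compton kinematics predict for a known line energy, so the two filters reject different subsets of the background.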

  3. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, even the multifocus plenoptic camera, the plenoptic camera with arbitrarily arranged microlenses, or the plenoptic camera with different sizes of microlenses. Finally, we verify our method with raw data from Lytro. The experiments show that our method is more automated than previously published methods.

  4. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a 15-perforation-per-frame horizontal pull-across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
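The transport-rate figures above are mutually consistent, which a short arithmetic check confirms. The 0.187-inch perforation pitch is the standard pitch for 65/70 mm film stock; it is an assumption here, since the abstract does not state it.

```python
# Consistency check of the quoted film-transport figures: 15 perforations
# per frame at 24 frames/s should give roughly 336 ft/min.
# The 0.187-inch perforation pitch is an assumed (standard) value.
PERF_PITCH_IN = 0.187
PERFS_PER_FRAME = 15
FPS = 24

inches_per_frame = PERF_PITCH_IN * PERFS_PER_FRAME       # ~2.805 in/frame
feet_per_minute = inches_per_frame * FPS * 60 / 12.0     # in/s -> ft/min
print(round(feet_per_minute, 1))   # 336.6
```

The small difference from the quoted 336 ft/min is within rounding of the pitch value.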

  5. Robust Range Estimation with a Monocular Camera for Vision-Based Forward Collision Warning System

    PubMed Central

    2014-01-01

    We propose a range estimation method for vision-based forward collision warning systems with a monocular camera. To address the variation of the camera pitch angle due to vehicle motion and road inclination, the proposed method estimates the virtual horizon from the size and position of vehicles in the captured image at run time. The proposed method provides robust results even when the road inclination varies continuously on hilly roads or lane markings are not visible on crowded roads. For the experiments, a vision-based forward collision warning system was implemented, and the proposed method is evaluated with video clips recorded in highway and urban traffic environments. Virtual horizons estimated by the proposed method are compared with manually identified horizons, and estimated ranges are compared with measured ranges. Experimental results confirm that the proposed method provides robust results in both highway and urban traffic environments. PMID:24558344
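The reason the virtual horizon matters can be seen from the standard flat-road pinhole geometry: once the horizon row is known, range follows from similar triangles. This is a minimal sketch of that geometry, not the paper's method; the focal length and camera height are hypothetical.

```python
# Flat-road monocular ranging: a point on the road imaged dy pixels
# below the horizon row lies at range f * H / dy, where f is the focal
# length in pixels and H the camera height. Values below are assumed.
def range_from_horizon(y_bottom_px, y_horizon_px, focal_px, cam_height_m):
    """Distance to a road point imaged at row y_bottom_px."""
    dy = y_bottom_px - y_horizon_px
    if dy <= 0:
        raise ValueError("point must lie below the virtual horizon")
    return focal_px * cam_height_m / dy

# Camera 1.2 m above the road, 800 px focal length, vehicle base
# imaged 30 rows below the estimated horizon:
print(round(range_from_horizon(430, 400, 800, 1.2), 1))  # 32.0 (meters)
```

An error of a few pixels in the horizon row shifts dy directly, which is why run-time horizon estimation under pitch variation is the crux of the paper.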

  6. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images; digital still cameras, for example, now offer several megapixels. Video cameras achieve higher frame rates, but their resolution is lower than that of still cameras, so high resolution and high frame rate remain incompatible in cameras currently on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach instead uses sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the camera's utility.

  7. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
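The TSPI triangulation step mentioned above intersects sight lines from two tracking mounts with known positions and azimuth/elevation readings. A minimal sketch follows, using the midpoint of closest approach between the two rays; the azimuth/elevation convention and all numbers are assumptions, not Photo-Sonics' algorithm.

```python
import math

def los_vector(az_rad, el_rad):
    """Unit line-of-sight from azimuth/elevation (azimuth measured in the
    horizontal plane from +x, elevation above it -- an assumed convention)."""
    return (math.cos(el_rad) * math.cos(az_rad),
            math.cos(el_rad) * math.sin(az_rad),
            math.sin(el_rad))

def triangulate(p1, az1, el1, p2, az2, el2):
    """Midpoint of closest approach of the two camera sight lines."""
    d1, d2 = los_vector(az1, el1), los_vector(az2, el2)
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    w0 = [p1[i] - p2[i] for i in range(3)]
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, w0), dot(d2, w0)
    denom = a * c - b * b            # zero only for parallel sight lines
    t1 = (b * e - c * d) / denom
    t2 = (a * e - b * d) / denom
    q1 = [p1[i] + t1 * d1[i] for i in range(3)]
    q2 = [p2[i] + t2 * d2[i] for i in range(3)]
    return tuple((q1[i] + q2[i]) / 2 for i in range(3))

# Two mounts 200 m apart both tracking a target at (100, 50, 30) m:
est = triangulate((0, 0, 0),
                  math.atan2(50, 100), math.atan2(30, math.hypot(100, 50)),
                  (200, 0, 0),
                  math.atan2(50, -100), math.atan2(30, math.hypot(100, 50)))
print([round(v, 1) for v in est])   # [100.0, 50.0, 30.0]
```

With real pointing data the two rays rarely intersect exactly, which is why the midpoint of closest approach (rather than an exact intersection) is the usual estimate.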

  8. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered in the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offer a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from that of the monochrome cameras usually employed in DH, a fact that has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem based on Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.

  9. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  10. Narrow infrasound pulses from lightning; are they of electrostatic or thermal origin?

    NASA Astrophysics Data System (ADS)

    CHUM, Jaroslav; Diendorfer, Gerhard; Šindelářová, Tereza; Baše, Jiří; Hruška, František

    2014-05-01

    Narrow (~1-2 s) infrasound pulses that followed, with ~11 to ~50 s delays, rapid changes of the electrostatic field were observed by a microbarometer array in the Czech Republic during thunderstorm activity. The angles of arrival (azimuth and elevation) were analyzed for selected distinct events. Comparisons of the distances and azimuths of infrasound sources from the center of the microbarometer array with lightning locations determined by the EUCLID lightning detection network show that most of the selected events are most likely associated with intra-cloud (IC) discharges. The preceding rapid changes of the electrostatic field, the potential association of infrasound pulses with IC discharges, and the high elevation angles of arrival for nearby infrasound sources indicate that an electrostatic mechanism is probably responsible for their generation. We discuss the difficulty of distinguishing the relative roles of the thermal and electrostatic mechanisms, and note that none of the published models of the electrostatic production of infrasonic thunder explains the presented observations precisely. A modification of the current models, based on the consideration of at least two charged layers, is suggested. Further theoretical and experimental investigations are, however, needed to better describe the generation mechanism of these infrasound pulses.

  11. The Relative Contribution of Ankle Moment and Trailing Limb Angle to Propulsive Force during Gait

    PubMed Central

    Hsiao, HaoYuan; Knarr, Brian A.; Higginson, Jill S.; Binder-Macleod, Stuart A.

    2014-01-01

    A major factor for increasing walking speed is the ability to increase propulsive force. Although propulsive force has been shown to be related to ankle moment and trailing limb angle, the relative contribution of each factor to propulsive force has never been determined. The primary purpose of this study was to quantify the relative contribution of ankle moment and trailing limb angle to propulsive force for able-bodied individuals walking at different speeds. Twenty able-bodied individuals walked at their self-selected and 120% of self-selected walking speed on the treadmill. Kinematic data were collected using an 8-camera motion-capture system. A model describing the relationship between ankle moment, trailing limb angle, and propulsive force was obtained through quasi-static analysis. Our main findings were that ankle moment and trailing limb angle each contributes linearly to propulsive force, and that the change in trailing limb angle contributes almost twice as much as the change in ankle moment to the increase in propulsive force during speed modulation for able-bodied individuals. Able-bodied individuals preferentially modulate trailing limb angle more than ankle moment to increase propulsive force. Future work will determine if this control strategy can be applied to individuals poststroke. PMID:25498289

  12. Behavioral patterns and in-situ target strength of the hairtail ( Trichiurus lepturus) via coupling of scientific echosounder and acoustic camera data

    NASA Astrophysics Data System (ADS)

    Hwang, Kangseok; Yoon, Eun-A.; Kang, Sukyung; Cha, Hyungkee; Lee, Kyounghoon

    2017-12-01

    The present study focuses on the influence of swimming angle on the target strength (TS) of the hairtail ( Trichiurus lepturus). We measured in-situ TS at 38 and 120 kHz with luring lamps at a fishing ground for jigging boats near the coastal waters of Jeju-do in Korea. The swimming angle and size of hairtails were measured using an acoustic camera. Results showed that the mean preanal length was estimated to be 13.5 cm (SD = 2.7 cm) and the mean swimming tilt angle was estimated to be 43.9° (SD = 17.6°). The mean TS values were -35.7 and -41.2 dB at 38 and 120 kHz, respectively. The results will assist in understanding the influence of swimming angle on the TS of hairtails and, thus, improve the accuracy of biomass estimates.

  13. Schiaparelli Crater Rim and Interior Deposits

    NASA Technical Reports Server (NTRS)

    1998-01-01

    A portion of the rim and interior of the large impact crater Schiaparelli is seen at different resolutions in images acquired October 18, 1997 by the Mars Global Surveyor Orbiter Camera (MOC) and by the Viking Orbiter 1 twenty years earlier. The left image is a MOC wide angle camera 'context' image showing much of the eastern portion of the crater at roughly 1 km (0.6 mi) per picture element. The image is about 390 by 730 km (240 X 450 miles). Shown within the wide angle image is the outline of a portion of the best Viking image (center, 371S53), acquired at a resolution of about 240 m/pixel (790 feet). The area covered is 144 X 144 km (89 X 89 miles). The right image is the high resolution narrow angle camera view. The area covered is very small--3.9 X 10.2 km (2.4 X 6.33 mi)--but is seen at 63 times higher resolution than the Viking image. The subdued relief and bright surface are attributed to blanketing by dust; many small craters have been completely filled in, and only the most recent (and very small) craters appear sharp and bowl-shaped. Some of the small craters are only 10-12 m (30-35 feet) across. Occasional dark streaks on steeper slopes are small debris slides that have probably occurred in the past few decades. The two prominent, narrow ridges in the center of the image may be related to the adjustment of the crater floor to age or the weight of the material filling the basin.

    Malin Space Science Systems (MSSS) and the California Institute of Technology built the MOC using spare hardware from the Mars Observer mission. MSSS operates the camera from its facilities in San Diego, CA. The Jet Propulsion Laboratory's Mars Surveyor Operations Project operates the Mars Global Surveyor spacecraft with its industrial partner, Lockheed Martin Astronautics, from facilities in Pasadena, CA and Denver, CO.

  14. Characterization of snowfall properties at high-latitude sites through use of a combined Multi-Angle Snow Camera (MASC) and radar approach

    NASA Astrophysics Data System (ADS)

    Cooper, S.; Wood, N.; Garrett, T. J.; L'Ecuyer, T. S.; Pettersen, C.

    2016-12-01

    Estimates of snowfall rate derived from radar reflectivities alone are non-unique, as different combinations of snowfall rates and snowflake microphysical properties can conspire to produce nearly identical radar reflectivity signatures. Such ambiguities can result in retrieval uncertainties on the order of 100-200% for individual events. Here, we use observations of snowflake particle size distribution, fallspeed, and habit from the Multi-Angle Snow Camera (MASC) to constrain estimates of snowfall derived from radar reflectivities. MASC measurements of microphysical properties and uncertainties are introduced into a modified form of the optimal-estimation CloudSat snowfall algorithm (2C-SNOW-PROFILE) via the a priori guess and variance terms. Initial results focus on the MASC and Ka-band Zenith Radar (KaZR) measurements at the ARM NSA Barrow Climate Facility site. Use of MASC fallspeed, MASC PSD, and a CloudSat particle model as base assumptions resulted in retrieved total accumulations with a -17% difference relative to nearby National Weather Service observations averaged over five snow events. Use of different but reasonable combinations of retrieval assumptions resulted in estimated snowfall accumulations with differences ranging from -63% to + 86% for the same storm events. Retrieved snowfall rates were particularly sensitive to assumed fallspeed and habit, suggesting that MASC measurements may provide a path forward in reducing the non-uniqueness of the snowfall retrieval problem. Preliminary results also will be presented for the deployment of the MASC, MicroRain Radar (MRR), and Precipitation Imaging Package (PIP) to Haukeliseter, Norway during winter season 2016-17. These instruments will then be deployed to northern Sweden for winter 2017-18. It is hoped more accurate knowledge of snowfall properties dependent upon location and meteorological conditions will be useful for both weather and climate applications.

  15. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  16. PET with the HIDAC camera?

    NASA Astrophysics Data System (ADS)

    Townsend, D. W.

    1988-06-01

    In 1982 the first prototype high density avalanche chamber (HIDAC) positron camera became operational in the Division of Nuclear Medicine of Geneva University Hospital. The camera consisted of dual 20 cm × 20 cm HIDAC detectors mounted on a rotating gantry. In 1984, these detectors were replaced by 30 cm × 30 cm detectors with improved performance and reliability. Since then, the larger detectors have undergone clinical evaluation. This article discusses certain aspects of the evaluation program and the conclusions that can be drawn from the results. The potential of the HIDAC camera for quantitative positron emission tomography (PET) is critically examined, and its performance compared with a state-of-the-art, commercial ring camera. Guidelines for the design of a future HIDAC camera are suggested.

  17. Multi-angle Spectra Evolution of Ionospheric Turbulence Excited by RF Interactions at HAARP

    NASA Astrophysics Data System (ADS)

    Sheerin, J. P.; Rayyan, N.; Watkins, B. J.; Watanabe, N.; Golkowski, M.; Bristow, W. A.; Bernhardt, P. A.; Briczinski, S. J., Jr.

    2014-12-01

    The high-power HAARP HF transmitter is employed to generate and study strong Langmuir turbulence (SLT) in the interaction region of overdense ionospheric plasma. Diagnostics included the Modular UHF Ionospheric Radar (MUIR) sited at HAARP, the SuperDARN-Kodiak HF radar, and HF receivers to record stimulated electromagnetic emissions (SEE). The dependence of diagnostic signals on HAARP HF parameters, including pulselength, duty cycle, aspect angle, and frequency, was recorded. Short-pulse, low-duty-cycle experiments demonstrate control of artificial field-aligned irregularities (AFAI) and isolation of ponderomotive effects. For the first time, simultaneous multi-angle radar measurements of plasma line spectra were recorded, demonstrating a marked dependence on aspect angle, with the strongest interaction region observed displaced southward of the HF zenith pointing angle. For a narrow range of HF pointing between the Spitze and magnetic zenith, a reduced threshold for AFAI is observed. High-time-resolution studies of the temporal evolution of the plasma line reveal the appearance of an overshoot effect on ponderomotive timescales. Numerous measurements of the outshifted plasma line were made. Experimental results are compared to previous high-latitude experiments and predictions from recent modeling efforts.

  18. Location accuracy evaluation of lightning location systems using natural lightning flashes recorded by a network of high-speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. C. V.; Campos, L. Z. D. S.; Pinto, O., Jr.; Antunes, L.

    2014-12-01

    This work presents a method for the evaluation of the location accuracy of all Lightning Location Systems (LLS) in operation in southeastern Brazil, using natural cloud-to-ground (CG) lightning flashes. This can be done through a network of multiple high-speed cameras (the RAMMER network) installed in the Paraiba Valley region, SP, Brazil. The RAMMER network (Automated Multi-camera Network for Monitoring and Study of Lightning) is composed of four high-speed cameras operating at 2,500 frames per second. Three stationary black-and-white (B&W) cameras were situated in the cities of São José dos Campos and Caçapava. A fourth color camera was mobile (installed in a car), but operated in a fixed location during the observation period, within the city of São José dos Campos. The average distance among cameras was 13 kilometers. Each RAMMER sensor position was determined so that the network could observe the same lightning flash from different angles, and all recorded videos were GPS (Global Positioning System) time-stamped, allowing comparisons of events between the cameras and the LLS. The RAMMER sensor is basically composed of a computer, a Phantom high-speed camera (version 9.1), and a GPS unit. The lightning cases analyzed in the present work were observed by at least two cameras; their positions were visually triangulated and the results compared with the BrasilDAT network during the summer seasons of 2011/2012 and 2012/2013. The visual triangulation method is presented in detail. The calibration procedure showed an accuracy of 9 meters between the accurate GPS position of the triangulated object and the result from the visual triangulation method. Lightning return stroke positions, estimated with the visual triangulation method, were compared with LLS locations. Differences between solutions were not greater than 1.8 km.

  19. Uncooled radiometric camera performance

    NASA Astrophysics Data System (ADS)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have realized highly reliable, low-cost, hand-held instruments. Initially these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.

  20. Polarization sensitive camera for the in vitro diagnostic and monitoring of dental erosion

    NASA Astrophysics Data System (ADS)

    Bossen, Anke; Rakhmatullina, Ekaterina; Lussi, Adrian; Meier, Christoph

    Due to the frequent consumption of acidic food and beverages, the prevalence of dental erosion is increasing worldwide. In the initial erosion stage, the hard dental tissue is softened by acidic demineralization. As erosion progresses, gradual tissue wear occurs, resulting in thinning of the enamel. Complete loss of the enamel tissue can be observed in severe clinical cases. Therefore, it is essential to provide a diagnostic tool for accurate detection and monitoring of dental erosion already at early stages. In this manuscript, we present the development of a polarization-sensitive imaging camera for the visualization and quantification of dental erosion. The system consists of two CMOS cameras mounted on two sides of a polarizing beamsplitter. A horizontally linearly polarized light source is positioned orthogonal to the camera to ensure incidence illumination and detection angles of 45°. The light specularly reflected from the enamel surface is collected with an objective lens mounted on the beamsplitter and divided into horizontal (H) and vertical (V) components on the associated cameras. Images of non-eroded and eroded enamel surfaces at different erosion degrees were recorded and assessed with diagnostic software. The software was designed to generate and display two types of images: the distribution of the reflection intensity (V) and the polarization ratio (H-V)/(H+V) throughout the analyzed tissue area. The measurement and visualization of these two optical parameters, i.e., the specular reflection intensity and the polarization ratio, allowed detection and quantification of enamel erosion at early stages in vitro.
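The polarization ratio (H-V)/(H+V) named above is a simple per-pixel computation. This is a toy sketch of that formula on small example arrays; the pixel values and the epsilon guard are illustrative assumptions, not the authors' software.

```python
# Pixel-wise polarization ratio (H - V) / (H + V) from the horizontal
# and vertical reflection components; eps guards dark pixels where
# H + V would be zero. Values below are toy data.
def polarization_ratio(h, v, eps=1e-12):
    return [[(hp - vp) / (hp + vp + eps) for hp, vp in zip(hr, vr)]
            for hr, vr in zip(h, v)]

H = [[200.0, 120.0]]   # horizontal component, one image row
V = [[100.0, 120.0]]   # vertical component
print([[round(x, 3) for x in row] for row in polarization_ratio(H, V)])
# [[0.333, 0.0]]
```

The ratio is bounded in [-1, 1] and independent of overall intensity, which is what makes it useful as a tissue-state indicator alongside the raw V image.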

  1. Portable retinal imaging for eye disease screening using a consumer-grade digital camera

    NASA Astrophysics Data System (ADS)

    Barriga, Simon; Larichev, Andrey; Zamora, Gilberto; Soliz, Peter

    2012-03-01

    The development of affordable means to image the retina is an important step toward the implementation of eye disease screening programs. In this paper we present the i-RxCam, a low-cost, hand-held, retinal camera for widespread applications such as tele-retinal screening for eye diseases like diabetic retinopathy (DR), glaucoma, and age-related ocular diseases. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease-of-use) to be distributed widely to low volume clinics, such as the offices of single primary care physicians serving rural communities. The i-RxCam uses a Nikon D3100 digital camera body. The camera has a CMOS sensor with 14.8 million pixels. We use a 50mm focal lens that gives a retinal field of view of 45 degrees. The internal autofocus can compensate for about 2D (diopters) of focusing error. The light source is an LED produced by Philips with a linear emitting area that is transformed using a light pipe to the optimal shape at the eye pupil, an annulus. To eliminate corneal reflex we use a polarization technique in which the light passes through a nano-wire polarizer plate. This is a novel type of polarizer featuring high polarization separation (contrast ratio of more than 1000) and very large acceptance angle (>45 degrees). The i-RxCam approach will yield a significantly more economical retinal imaging device that would allow mass screening of the at-risk population.

  2. Optically trapped atomic resonant devices for narrow linewidth spectral imaging

    NASA Astrophysics Data System (ADS)

    Qian, Lipeng

    This thesis focuses on the development of atomic resonant devices for spectroscopic applications. The primary emphasis is on the imaging properties of optically thick atomic resonant fluorescent filters and their applications. In addition, this thesis presents a new concept for producing very narrow linewidth light as from an atomic vapor lamp pumped by a nanosecond pulse system. This research was motivated by application for missile warning system, and presents an innovative approach to a wide angle, ultra narrow linewidth imaging filter using a potassium vapor cell. The approach is to image onto and collect the fluorescent photons emitted from the surface of an optically thick potassium vapor cell, generating a 2 GHz pass-band imaging filter. This linewidth is narrow enough to fall within a Fraunhefer dark zone in the solar spectrum, thus make the detection solar blind. Experiments are conducted to measure the absorption line shape of the potassium resonant filter, the quantum efficiency of the fluorescent behavior, and the resolution of the fluorescent image. Fluorescent images with different spatial frequency components are analyzed by using a discrete Fourier transform, and the imaging capability of the fluorescent filter is described by its Modulation Transfer Function. For the detection of radiation that is spectrally broader than the linewidth of the potassium imaging filter, the fluorescent image is seen to be blurred by diffuse fluorescence from the slightly off resonant photons. To correct this, an ultra-thin potassium imaging filter is developed and characterized. The imaging property of the ultra-thin potassium imaging cell is tested with a potassium seeded flame, yielding a resolution image of ˜ 20 lines per mm. The physics behind the atomic resonant fluorescent filter is radiation trapping. The diffusion process of the resonant photons trapped in the atomic vapor is theoretically described in this thesis. 
A Monte Carlo method is used to simulate the

  3. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators also can be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras to inspect the surfaces of planetary bodies use both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude with the boost in transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators also can be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking at a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm3. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator. 
A low-volume array of such penetrator cameras could be deployed from an

  4. The Europa Imaging System (EIS): Investigating Europa's geology, ice shell, and current activity

    NASA Astrophysics Data System (ADS)

    Turtle, Elizabeth; Thomas, Nicolas; Fletcher, Leigh; Hayes, Alexander; Ernst, Carolyn; Collins, Geoffrey; Hansen, Candice; Kirk, Randolph L.; Nimmo, Francis; McEwen, Alfred; Hurford, Terry; Barr Mlinar, Amy; Quick, Lynnae; Patterson, Wes; Soderblom, Jason

    2016-07-01

NASA's Europa Mission, planned for launch in 2022, will perform more than 40 flybys of Europa with altitudes at closest approach as low as 25 km. The instrument payload includes the Europa Imaging System (EIS), a camera suite designed to transform our understanding of Europa through global decameter-scale coverage, topographic and color mapping, and unprecedented sub-meter-scale imaging. EIS combines narrow-angle and wide-angle cameras to address these science goals: • Constrain the formation processes of surface features by characterizing endogenic geologic structures, surface units, global cross-cutting relationships, and relationships to Europa's subsurface structure and potential near-surface water. • Search for evidence of recent or current activity, including potential plumes. • Characterize the ice shell by constraining its thickness and correlating surface features with subsurface structures detected by ice penetrating radar. • Characterize scientifically compelling landing sites and hazards by determining the nature of the surface at scales relevant to a potential lander. EIS Narrow-angle Camera (NAC): The NAC, with a 2.3° x 1.2° field of view (FOV) and a 10-μrad instantaneous FOV (IFOV), achieves 0.5-m pixel scale over a 2-km-wide swath from 50-km altitude. A 2-axis gimbal enables independent targeting, allowing very high-resolution stereo imaging to generate digital topographic models (DTMs) with 4-m spatial scale and 0.5-m vertical precision over the 2-km swath from 50-km altitude. The gimbal also makes near-global (>95%) mapping of Europa possible at ≤50-m pixel scale, as well as regional stereo imaging. The NAC will also perform high-phase-angle observations to search for potential plumes. EIS Wide-angle Camera (WAC): The WAC has a 48° x 24° FOV, with a 218-μrad IFOV, and is designed to acquire pushbroom stereo swaths along flyby ground-tracks. From an altitude of 50 km, the WAC achieves 11-m pixel scale over a 44-km
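The quoted pixel scales follow directly from the small-angle relation: ground sample distance ≈ IFOV × altitude. A quick numerical check (the function and variable names below are ours, not from the EIS design documents):

```python
def pixel_scale_m(ifov_urad, altitude_km):
    """Ground sample distance via the small-angle approximation:
    GSD [m] = IFOV [rad] * altitude [m]."""
    return ifov_urad * 1e-6 * altitude_km * 1e3

# NAC: 10-urad IFOV from 50 km altitude -> 0.5 m/pixel over its 2-km swath
nac_gsd = pixel_scale_m(10, 50)
# WAC: 218-urad IFOV from 50 km altitude -> ~10.9 m/pixel (the quoted 11 m)
wac_gsd = pixel_scale_m(218, 50)
```

The same relation reproduces the swath widths: 0.5 m/pixel across a ~4000-pixel line gives the 2-km NAC swath.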

  5. A Fractured Pole

    NASA Image and Video Library

    2015-10-15

    NASA's Cassini spacecraft zoomed by Saturn's icy moon Enceladus on Oct. 14, 2015, capturing this stunning image of the moon's north pole. A companion view from the wide-angle camera (PIA20010) shows a zoomed out view of the same region for context. Scientists expected the north polar region of Enceladus to be heavily cratered, based on low-resolution images from the Voyager mission, but high-resolution Cassini images show a landscape of stark contrasts. Thin cracks cross over the pole -- the northernmost extent of a global system of such fractures. Before this Cassini flyby, scientists did not know if the fractures extended so far north on Enceladus. North on Enceladus is up. The image was taken in visible green light with the Cassini spacecraft narrow-angle camera. The view was acquired at a distance of approximately 4,000 miles (6,000 kilometers) from Enceladus and at a Sun-Enceladus-spacecraft, or phase, angle of 9 degrees. Image scale is 115 feet (35 meters) per pixel. http://photojournal.jpl.nasa.gov/catalog/PIA19660

  6. The Activity of Comet 67P/Churyumov-Gerasimenko as Seen by Rosetta/OSIRIS

    NASA Astrophysics Data System (ADS)

    Sierks, H.; Barbieri, C.; Lamy, P. L.; Rodrigo, R.; Rickman, H.; Koschny, D.

    2015-12-01

    The Rosetta mission of the European Space Agency arrived on August 6, 2014, at the target comet 67P/Churyumov-Gerasimenko. OSIRIS (Optical, Spectroscopic, and Infrared Remote Imaging System) is the scientific imaging system onboard Rosetta. OSIRIS consists of a Narrow Angle Camera (NAC) for the nucleus surface and dust studies and a Wide Angle Camera (WAC) for the wide field gas and dust coma investigations. OSIRIS observed the coma and the nucleus of comet 67P/C-G during approach, arrival, and landing of PHILAE. OSIRIS continued comet monitoring and mapping of surface and activity in 2015 with close fly-bys with high resolution and remote, wide angle observations. The scientific results reveal a nucleus with two lobes and varied morphology. Active regions are located at steep cliffs and collapsed pits which form collimated gas jets. Dust is accelerated by the gas, forming bright jet filaments and the large scale, diffuse coma of the comet. We will present activity and surface changes observed in the Northern and Southern hemisphere and around perihelion passage.

  7. Ladder beam and camera video recording system for evaluating forelimb and hindlimb deficits after sensorimotor cortex injury in rats.

    PubMed

    Soblosky, J S; Colgin, L L; Chorney-Lane, D; Davidson, J F; Carey, M E

    1997-12-30

Hindlimb and forelimb deficits in rats caused by sensorimotor cortex lesions are frequently tested by using the narrow flat beam (hindlimb), the narrow pegged beam (hindlimb and forelimb) or the grid-walking (forelimb) tests. Although these are excellent tests, the narrow flat beam generates non-parametric data, precluding the use of more powerful parametric statistical analyses. All these tests can be difficult to score if the rat is moving rapidly. Foot misplacements, especially on the grid-walking test, are indicative of an ongoing deficit, but have not previously been reliably and accurately described and quantified. In this paper we present an easy-to-construct and easy-to-use horizontal ladder beam with a camera system on rails which can be used to evaluate both hindlimb and forelimb deficits in a single test. By slow-motion videotape playback we were able to quantify and demonstrate foot misplacements which persist beyond the recovery period usually seen using more conventional measures (i.e. footslips and footfaults). This convenient system provides a rapid and reliable method for recording and evaluating rat performance on any type of beam and may be useful for measuring sensorimotor recovery following brain injury.

  8. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response from five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves which were generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions which estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. This COBRA Camera Response Model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and computational tools for research analysis modeling toolbox in order to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
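The abstract does not give the fitted functional form. A common shape for an intensified camera is a gain-scaled power law in the irradiance-exposure product, clipped at the digitizer's full scale; the sketch below is a hypothetical stand-in (the names, coefficients a and gamma, and the power-law form are our assumptions, not the actual COBRA fit), but it illustrates the properties claimed: nonlinear, stable, and analytically invertible.

```python
import numpy as np

def camera_dn(irradiance, exposure_s, gain, a=1.0, gamma=0.8, full_scale=255):
    """Hypothetical response model: DN = a * gain * (E * t)^gamma,
    clipped to the digitizer range (NOT the actual COBRA expression)."""
    dn = a * gain * (irradiance * exposure_s) ** gamma
    return np.clip(dn, 0, full_scale)

def irradiance_from_dn(dn, exposure_s, gain, a=1.0, gamma=0.8):
    """Closed-form inverse, valid below full scale: the analytic
    invertibility the COBRA model is credited with."""
    return (dn / (a * gain)) ** (1.0 / gamma) / exposure_s

dn = camera_dn(irradiance=2.0, exposure_s=0.01, gain=100.0)
e_back = irradiance_from_dn(dn, exposure_s=0.01, gain=100.0)  # recovers 2.0
```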

  9. Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST

    NASA Image and Video Library

    1993-12-06

    S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  10. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
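The per-pixel heterodyne detection can be illustrated with a toy simulation: a square-law detector responds to the intensity of the summed signal and local-oscillator fields, and the cross term beats at their difference (intermediate) frequency, which the IF electronics then select. The frequencies and passband below are arbitrary stand-ins scaled down from the optical domain:

```python
import numpy as np

rate = 1_000_000                       # samples per second
t = np.arange(10_000) / rate           # 10 ms record
f_sig, f_lo = 120_000.0, 100_000.0     # stand-in signal and LO frequencies
# Square-law photodetector: responds to |E_sig + E_lo|^2; the cross
# term oscillates at the intermediate (difference) frequency.
detected = (np.cos(2 * np.pi * f_sig * t) + np.cos(2 * np.pi * f_lo * t)) ** 2
spectrum = np.abs(np.fft.rfft(detected - detected.mean()))
freqs = np.fft.rfftfreq(detected.size, 1 / rate)
passband = freqs < 50_000              # crude stand-in for the IF amplifier
f_if = freqs[passband][spectrum[passband].argmax()]  # ~20 kHz beat tone
```

The recovered beat frequency tracks |f_sig − f_lo|, which is how the coherent camera extracts spectral and range information that a thermal imager cannot.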

  11. Neptune - full ring system

    NASA Technical Reports Server (NTRS)

    1989-01-01

    This pair of Voyager 2 images (FDS 11446.21 and 11448.10), two 591-s exposures obtained through the clear filter of the wide angle camera, show the full ring system with the highest sensitivity. Visible in this figure are the bright, narrow N53 and N63 rings, the diffuse N42 ring, and (faintly) the plateau outside of the N53 ring (with its slight brightening near 57,500 km).

  12. Range Finding with a Plenoptic Camera

    DTIC Science & Technology

    2014-03-27

Table-of-contents extract (no abstract available); listed sections include: Experimental Results; Simulated Camera Analysis: Varying Lens Diameter; Simulated Camera Analysis: Varying Detector Size; Matching Framework; and Simulated Camera Performance with SIFT.

  13. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been conducted at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been carried out, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same as that developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is under way, and it will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by others are also briefly reviewed.

  14. Preliminary Mapping of Permanently Shadowed and Sunlit Regions Using the Lunar Reconnaissance Orbiter Camera (LROC)

    NASA Astrophysics Data System (ADS)

    Speyerer, E.; Koeber, S.; Robinson, M. S.

    2010-12-01

The spin axis of the Moon is tilted by only 1.5° (compared with the Earth's 23.5°), leaving some areas near the poles in permanent shadow while other nearby regions remain sunlit for a majority of the year. Theory, radar data, neutron measurements, and Lunar CRater Observation and Sensing Satellite (LCROSS) observations suggest that volatiles may be present in the cold traps created inside these permanently shadowed regions. Meanwhile, areas of near-permanent illumination are prime locations for future lunar outposts due to benign thermal conditions and near-constant solar power. The Lunar Reconnaissance Orbiter (LRO) has two imaging systems that provide medium- and high-resolution views of the poles. During almost every orbit the LROC Wide Angle Camera (WAC) acquires images at 100 m/pixel of the polar region (80° to 90° north and south latitude). In addition, the LROC Narrow Angle Camera (NAC) targets selected regions of interest at 0.7 to 1.5 m/pixel [Robinson et al., 2010]. During the first 11 months of the nominal mission, LROC acquired almost 6,000 WAC images and over 7,300 NAC images of the polar regions (i.e., within 2° of the poles). By analyzing this time series of WAC and NAC images, regions of permanent shadow and permanent, or near-permanent, illumination can be quantified. The LROC Team is producing several reduced data products that graphically illustrate the illumination conditions of the polar regions. Illumination movie sequences are being produced that show how the lighting conditions change over a calendar year. Each frame of the movie sequence is a polar stereographic projected WAC image showing the lighting conditions at that moment. With the WAC's wide field of view (~100 km at an altitude of 50 km), each frame has repeat coverage between 88° and 90° at each pole. The same WAC images are also being used to develop multi-temporal illumination maps that show the percent of time each 100 m × 100 m area is illuminated over a period of time. These maps are
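The multi-temporal illumination maps described here reduce to a per-pixel statistic over a co-registered image time series: threshold each frame into sunlit/shadowed and average the binary masks. A toy sketch (the array values, threshold, and function name are invented for illustration):

```python
import numpy as np

def illumination_fraction(frames, threshold):
    """Fraction of time each map pixel is sunlit, given a co-registered
    time series of (toy) polar-stereographic frames."""
    stack = np.asarray(frames, dtype=float)
    lit = stack > threshold           # binary sunlit mask per frame
    return lit.mean(axis=0)           # per-pixel illuminated fraction

# Four 2x2 "frames": pixel (0,0) always lit, (0,1) permanently shadowed.
frames = [
    [[0.9, 0.0], [0.8, 0.7]],
    [[0.9, 0.0], [0.0, 0.7]],
    [[0.9, 0.0], [0.8, 0.0]],
    [[0.9, 0.0], [0.0, 0.7]],
]
frac = illumination_fraction(frames, threshold=0.5)
```

Pixels with frac equal to 0 over a full year would mark permanent shadow; values near 1 mark near-permanent illumination.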

  15. Power estimation of martial arts movement using 3D motion capture camera

    NASA Astrophysics Data System (ADS)

    Azraai, Nur Zaidi; Awang Soh, Ahmad Afiq Sabqi; Mat Jafri, Mohd Zubir

    2017-06-01

Motion capture (MOCAP) cameras have been widely used in many areas such as biomechanics, physiology, animation, arts, etc. This project approaches the problem through physics (mechanics) and extends MOCAP application to sports. Most researchers use a force plate, which can only measure the force of impact, but here we are keen to observe the kinematics of the movement. Martial arts is one of the sports that uses more than one part of the human body. For this project, the martial art `Silat' was chosen because of its wide practice in Malaysia. Two performers were selected, one experienced in `Silat' practice and one with no experience at all, so that we could compare the energy and force generated by each performer. Every performer executed punches with the same posture; two types of punching moves were selected for this project. Before the measurements started, a calibration was done using a T-stick fitted with markers, so that the software knew the area covered by the cameras and analysis errors were reduced. A punching bag of mass 60 kg was hung on an iron bar as a target and used to determine the impact force of a performer's punch. Optical markers were also attached to the punching bag so its movement after impact could be observed. Eight cameras were used, placed two on each side wall at different angles in a rectangular room of 270 ft2, covering approximately 50 ft2. We covered only a small area so that less noise would be detected, making the measurement more accurate. Markers were attached along the entire arm that we wanted to observe and measure. The passive markers used in this project reflect the infrared light generated by the cameras back to the camera sensors, so the marker positions can be detected and shown in the software. The use of many cameras is to increase the

  16. Expansion of the visual angle of a car rear-view image via an image mosaic algorithm

    NASA Astrophysics Data System (ADS)

    Wu, Zhuangwen; Zhu, Liangrong; Sun, Xincheng

    2015-05-01

    The rear-view image system is one of the active safety devices in cars and is widely applied in all types of vehicles and traffic safety areas. However, studies made by both domestic and foreign researchers were based on a single image capture device while reversing, so a blind area still remained to drivers. Even if multiple cameras were used to expand the visual angle of the car's rear-view image in some studies, the blind area remained because different source images were not mosaicked together. To acquire an expanded visual angle of a car rear-view image, two charge-coupled device cameras with optical axes angled at 30 deg were mounted below the left and right fenders of a car in three light conditions-sunny outdoors, cloudy outdoors, and an underground garage-to capture rear-view heterologous images of the car. Then these rear-view heterologous images were rapidly registered through the scale invariant feature transform algorithm. Combined with the random sample consensus algorithm, the two heterologous images were finally mosaicked using the linear weighted gradated in-and-out fusion algorithm, and a seamless and visual-angle-expanded rear-view image was acquired. The four-index test results showed that the algorithms can mosaic rear-view images well in the underground garage condition, where the average rate of correct matching was the lowest among the three conditions. The rear-view image mosaic algorithm presented had the best information preservation, the shortest computation time and the most complete preservation of the image detail features compared to the mean value method (MVM) and segmental fusion method (SFM), and it was also able to perform better in real time and provided more comprehensive image details than MVM and SFM. In addition, it had the most complete image preservation from source images among the three algorithms. The method introduced by this paper provided the basis for researching the expansion of the visual angle of a car rear
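The final fusion step, linear weighted "gradated in-and-out" blending, can be sketched as a simple cross-fade across the registered overlap. This is a minimal stand-in (function name and test tiles are ours) for the paper's pipeline, which first registers the heterologous images with SIFT plus RANSAC before blending:

```python
import numpy as np

def gradated_blend(left, right, overlap):
    """Linear weighted 'gradated in-and-out' fusion: across the shared
    `overlap` columns the left image's weight ramps 1 -> 0 while the
    right image's ramps 0 -> 1; elsewhere each image is copied as-is."""
    h, wl = left.shape
    _, wr = right.shape
    out = np.zeros((h, wl + wr - overlap))
    out[:, :wl - overlap] = left[:, :wl - overlap]
    out[:, wl:] = right[:, overlap:]
    w = np.linspace(1.0, 0.0, overlap)            # left-image weight ramp
    out[:, wl - overlap:wl] = (w * left[:, wl - overlap:]
                               + (1 - w) * right[:, :overlap])
    return out

# Two flat test tiles: the seam cross-fades from value 10 to value 20.
pano = gradated_blend(np.full((2, 4), 10.0), np.full((2, 4), 20.0), overlap=2)
```

The gradual weight ramp is what removes the visible seam that a hard cut (or the mean value method) would leave between the two source images.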

  17. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

Infrared camera and CCD camera dual-band imaging systems are widely used in much equipment and many applications. If such a system is tested using the traditional infrared camera test system and visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, and it should find a good market.
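The multiple-frame averaging step relies on the fact that averaging N frames of independent random noise reduces the noise standard deviation by a factor of √N. A quick simulation (the scene value, noise level, and frame count are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
truth = 100.0                 # flat "scene" level
n_frames = 64
# Each simulated frame: constant scene plus unit-sigma random noise.
frames = truth + rng.normal(0.0, 1.0, size=(n_frames, 128, 128))

single_noise = frames[0].std()            # ~1.0 for one frame
averaged = frames.mean(axis=0)
avg_noise = (averaged - truth).std()      # ~1/sqrt(64) = 0.125
```

With 64 frames the residual noise drops roughly eightfold, which is why frame averaging improves test accuracy at the cost of measurement time.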

  18. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  19. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.

  20. Developing a Low-Cost System for 3d Data Acquisition

    NASA Astrophysics Data System (ADS)

    Kossieris, S.; Kourounioti, O.; Agrafiotis, P.; Georgopoulos, A.

    2017-11-01

In this paper, a developed low-cost system is described which aims to facilitate fast and reliable 3D documentation by acquiring the necessary data in outdoor environments, especially for the 3D documentation of façades in very narrow streets. In particular, it provides a viable solution for buildings up to 8-10 m high and streets as narrow as 2 m or even less. In cases like these, it is practically impossible or highly time-consuming to acquire images in a conventional way; that practice would lead to a huge number of images and long processing times. The developed system was tested in the narrow streets of a medieval village on the Greek island of Chios. There, in order to bypass the problem of short taking distances, high-definition action cameras were used together with a 360° camera; such cameras usually have very wide-angle lenses, acquire high-definition images, are rather cheap and, most importantly, extremely light. Results suggest that the system can perform fast 3D data acquisition adequate for deliverables of high quality.

  1. A new star tracker concept for satellite attitude determination based on a multi-purpose panoramic camera

    NASA Astrophysics Data System (ADS)

    Opromolla, Roberto; Fasano, Giancarmine; Rufino, Giancarlo; Grassi, Michele; Pernechele, Claudio; Dionisio, Cesare

    2017-11-01

    This paper presents an innovative algorithm developed for attitude determination of a space platform. The algorithm exploits images taken from a multi-purpose panoramic camera equipped with hyper-hemispheric lens and used as star tracker. The sensor architecture is also original since state-of-the-art star trackers accurately image as many stars as possible within a narrow- or medium-size field-of-view, while the considered sensor observes an extremely large portion of the celestial sphere but its observation capabilities are limited by the features of the optical system. The proposed original approach combines algorithmic concepts, like template matching and point cloud registration, inherited from the computer vision and robotic research fields, to carry out star identification. The final aim is to provide a robust and reliable initial attitude solution (lost-in-space mode), with a satisfactory accuracy level in view of the multi-purpose functionality of the sensor and considering its limitations in terms of resolution and sensitivity. Performance evaluation is carried out within a simulation environment in which the panoramic camera operation is realistically reproduced, including perturbations in the imaged star pattern. Results show that the presented algorithm is able to estimate attitude with accuracy better than 1° with a success rate around 98% evaluated by densely covering the entire space of the parameters representing the camera pointing in the inertial space.
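The point-cloud-registration ingredient can be illustrated in miniature: once star correspondences are hypothesized (the hard part, handled in the paper by template matching), the best-fit rotation follows in closed form from an SVD (the Kabsch/Procrustes solution). Below is a 2-D toy version with invented data, not the paper's actual attitude solver:

```python
import numpy as np

def estimate_rotation(observed, reference):
    """Kabsch/Procrustes: least-squares rotation mapping matched
    observed star positions onto their catalog counterparts (toy 2-D
    stand-in for the point-cloud-registration step)."""
    H = observed.T @ reference
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, d])
    return Vt.T @ D @ U.T

rng = np.random.default_rng(1)
catalog = rng.normal(size=(20, 2))           # invented catalog positions
theta = 0.4                                  # true (unknown) rotation
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
observed = catalog @ R_true.T                # what the sensor "sees"
R_est = estimate_rotation(observed, catalog)
# Applying R_est to the observed points recovers the catalog frame.
```

The real problem is 3-D and noisy, and the lost-in-space difficulty lies in finding the correspondences in the first place; this sketch covers only the closed-form alignment once they are known.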

  2. The 1997 Spring Regression of the Martian South Polar Cap: Mars Orbiter Camera Observations

    USGS Publications Warehouse

    James, P.B.; Cantor, B.A.; Malin, M.C.; Edgett, K.; Carr, M.H.; Danielson, G.E.; Ingersoll, A.P.; Davies, M.E.; Hartmann, W.K.; McEwen, A.S.; Soderblom, L.A.; Thomas, P.C.; Veverka, J.

    2000-01-01

The Mars Orbiter Camera (MOC) on Mars Global Surveyor observed the south polar cap of Mars during its spring recession in 1997. The images acquired by the wide angle cameras reveal a pattern of recession that is qualitatively similar to that observed by Viking in 1977 but that does differ in at least two respects. The 1977 recession in the 0° to 120° longitude sector was accelerated relative to the 1997 observations after Ls = 240°; the Mountains of Mitchel also detached from the main cap earlier in 1997. Comparison of the MOC images with Mars Orbiter Laser Altimeter data shows that the Mountains of Mitchel feature is controlled by local topography. Relatively dark, low-albedo regions well within the boundaries of the seasonal cap were observed to have red-to-violet ratios that characterize them as frost units rather than unfrosted or partially frosted ground; this suggests the possibility of regions covered by CO2 frost having different grain sizes.

  3. Intracranial cerebrospinal fluid spaces imaging using a pulse-triggered three-dimensional turbo spin echo MR sequence with variable flip-angle distribution.

    PubMed

    Hodel, Jérôme; Silvera, Jonathan; Bekaert, Olivier; Rahmouni, Alain; Bastuji-Garin, Sylvie; Vignaud, Alexandre; Petit, Eric; Durning, Bruno; Decq, Philippe

    2011-02-01

To assess the three-dimensional turbo spin echo with variable flip-angle distribution magnetic resonance sequence (SPACE: Sampling Perfection with Application optimised Contrast using different flip-angle Evolution) for the imaging of intracranial cerebrospinal fluid (CSF) spaces. We prospectively investigated 18 healthy volunteers and 25 patients, 20 with communicating hydrocephalus (CH) and five with non-communicating hydrocephalus (NCH), using the SPACE sequence at 1.5T. Volume rendering views of both intracranial and ventricular CSF were obtained for all patients and volunteers. The subarachnoid CSF distribution was qualitatively evaluated on volume rendering views using a four-point scale. The CSF volumes within total, ventricular and subarachnoid spaces were calculated, as well as the ratio between ventricular and subarachnoid CSF volumes. Three different patterns of subarachnoid CSF distribution were observed. In healthy volunteers we found narrowed CSF spaces within the occipital area. A diffuse narrowing of the subarachnoid CSF spaces was observed in patients with NCH, whereas patients with CH exhibited narrowed CSF spaces within the high midline convexity. The ratios between ventricular and subarachnoid CSF volumes were significantly different among the volunteers, patients with CH and patients with NCH. The assessment of CSF spaces volume and distribution may help to characterise hydrocephalus.

  4. First light observations with TIFR Near Infrared Imaging Camera (TIRCAM-II)

    NASA Astrophysics Data System (ADS)

    Ojha, D. K.; Ghosh, S. K.; D'Costa, S. L. A.; Naik, M. B.; Sandimani, P. R.; Poojary, S. S.; Bhagat, S. B.; Jadhav, R. B.; Meshram, G. S.; Bakalkar, C. B.; Ramaprakash, A. N.; Mohan, V.; Joshi, J.

TIFR near infrared imaging camera (TIRCAM-II) is based on the Aladdin III Quadrant InSb focal plane array (512×512 pixels; 27.6 μm pixel size; sensitive between 1 - 5.5 μm). TIRCAM-II had its first engineering run with the 2 m IUCAA telescope at Girawali during February - March 2011. The first light observations with TIRCAM-II were quite successful. Several infrared standard stars, the Trapezium Cluster in the Orion region, McNeil's nebula, etc., were observed in the J and K bands and in a narrow band at 3.6 μm (nbL). In the nbL band, some bright stars could be detected from the Girawali site. The performance of TIRCAM-II is discussed in the light of preliminary observations in near infrared bands.

  5. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks is getting essential for video surveillance. The tasks of tracking human over camera networks are not only inherently challenging due to changing human appearance, but also have enormous potentials for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for the human tracking over camera networks are addressed, including human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  6. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

Bechtel Nevada/NSTec recently announced deployment of their fifth generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity, and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous Javascript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  7. Dual-illumination mode, wide-field probe imaging scheme for imaging irido-corneal angle region inside eye

    NASA Astrophysics Data System (ADS)

    Shinoj, V. K.; Murukeshan, V. M.; Hong, Jesmond; Baskaran, M.; Aung, Tin

    2015-07-01

    Noninvasive medical imaging techniques have generated great interest and show high potential for research and development in ocular imaging and follow-up procedures. It is well known that angle-closure glaucoma is one of the major ocular conditions that cause blindness. The identification and treatment of this disease rely primarily on angle-assessment techniques. In this paper, we illustrate a probe-based imaging approach to obtain images of the angle region in the eye. The proposed probe consists of a micro CCD camera and LED/NIR laser light sources, configured at the distal end to enable imaging of the iridocorneal region inside the eye. With this proposed dual-modal probe, imaging is performed under light (white visible LED on) and dark (NIR laser source alone) conditions, and the angle region is discernible in both cases. Imaging with NIR sources is of major significance for anterior chamber imaging, since it avoids the pupil constriction caused by bright light and thereby the artificial alteration of the anterior chamber angle. The proposed methodology and developed scheme are expected to find potential application in glaucoma detection and diagnosis.

  8. Bio-inspired motion detection in an FPGA-based smart camera module.

    PubMed

    Köhler, T; Röchter, F; Lindemann, J P; Möller, R

    2009-03-01

    Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
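
    The correlation-type elementary motion detector (Reichardt detector) underlying such an array can be sketched in a few lines: each half-detector multiplies one photoreceptor's delayed signal with its neighbour's current signal, and subtracting the two mirror-symmetric half-detectors gives a signed, direction-selective response. This is a simplified software model with a discrete one-sample delay, standing in for the FPGA implementation's configurable time constants:

```python
import numpy as np

def reichardt_emd(left, right, delay=1):
    """Elementary motion detector over two photoreceptor signals
    sampled in time. Positive output indicates left-to-right motion,
    negative output right-to-left motion."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    response = np.zeros(len(left))
    for t in range(delay, len(left)):
        a = left[t - delay] * right[t]   # half-detector tuned to rightward motion
        b = right[t - delay] * left[t]   # half-detector tuned to leftward motion
        response[t] = a - b
    return response
```

    An edge passing over the left receptor and then, one sample later, over the right receptor yields a positive summed response; reversing the direction flips the sign. The smart camera module integrates thousands of such detectors into a few global flow estimates.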

  9. LROC Stereo Observations

    NASA Astrophysics Data System (ADS)

    Beyer, Ross A.; Archinal, B.; Li, R.; Mattson, S.; Moratto, Z.; McEwen, A.; Oberst, J.; Robinson, M.

    2009-09-01

    The Lunar Reconnaissance Orbiter Camera (LROC) will obtain two types of multiple overlapping coverage to derive terrain models of the lunar surface. LROC has two Narrow Angle Cameras (NACs), working jointly to provide a wider (in the cross-track direction) field of view, as well as a Wide Angle Camera (WAC). LRO's orbit precesses, and the same target can be viewed at different solar azimuth and incidence angles, providing the opportunity to acquire `photometric stereo' in addition to traditional `geometric stereo' data. Geometric stereo refers to image pairs of the same target acquired by LROC at two different times. They must have different emission angles to provide a stereo convergence angle such that the resultant images have enough parallax for a reasonable stereo solution. The lighting at the target must not be radically different: if shadows move substantially between observations, it is very difficult to correlate the images. The majority of NAC geometric stereo will be acquired with one nadir and one off-pointed image (20-degree roll). Alternatively, pairs can be obtained with two spacecraft rolls (one to the left and one to the right), providing a stereo convergence angle of up to 40 degrees. Overlapping WAC images from adjacent orbits can be used to generate topography of near-global coverage at kilometer-scale effective spatial resolution. Photometric stereo refers to multiple-look observations of the same target under different lighting conditions. LROC will acquire at least three (ideally five) observations of a target. These observations should have near-identical emission angles but varying solar azimuth and incidence angles. Such images can be processed via various methods to derive single-pixel-resolution topography and surface albedo. The LROC team will produce some topographic models, but stereo data collection is focused on acquiring the highest quality data so that such models can be generated later.
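
    The link between convergence angle and achievable vertical precision can be illustrated with the standard stereo relation dh = p / tan(convergence), where p is the measurable parallax on the ground. The 0.3-pixel matching accuracy below is an assumed figure for illustration, not an LROC specification:

```python
import math

def height_from_parallax(parallax_m, convergence_deg):
    """Vertical relief dh corresponding to a ground parallax p:
    dh = p / tan(convergence angle)."""
    return parallax_m / math.tan(math.radians(convergence_deg))

def expected_height_precision(gsd_m, convergence_deg, matching_accuracy_px=0.3):
    """Rough vertical precision of a stereo pair, assuming images can be
    matched to `matching_accuracy_px` of a pixel (an assumed figure)."""
    return height_from_parallax(gsd_m * matching_accuracy_px, convergence_deg)
```

    With the NAC's 0.5 m pixels, a 40-degree convergence pair would then support vertical precision on the order of 0.2 m, while a shallower convergence angle degrades it proportionally.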

  10. Narrow-headed garter snake (Thamnophis rufipunctatus)

    USGS Publications Warehouse

    Nowak, Erika M.

    2006-01-01

    The narrow-headed garter snake is a harmless, nonvenomous snake that is distinguished by its elongated, triangular-shaped head and the red or dark spots on its olive to tan body. Today, the narrow-headed garter snake is a species of special concern in the United States because of its decline over much of its historic range. Arizona's Oak Creek has historically contained the largest population of narrow-headed garter snakes in the United States. The U.S. Geological Survey (USGS) and the Arizona Game and Fish Department jointly funded research by USGS scientists in Oak Creek to shed light on the factors causing declining population numbers. The research resulted in better understanding of the snake's habitat needs, winter and summer range, and dietary habits. Based on the research findings, the U.S. Forest Service has developed recommendations that visitors and local residents can adopt to help slow the decline of the narrow-headed garter snake in Oak Creek.

  11. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow key ecological features and moments to be extracted from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of how the image series are used in research. We will show results on the stability of camera-derived color signals and, based on these, discuss the applicability of cameras for monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.
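
    A widely used camera-derived color signal in phenology networks is the green chromatic coordinate, GCC = G / (R + G + B), averaged over a region of interest. The sketch below is illustrative; the abstract does not state which index this particular network uses:

```python
import numpy as np

def green_chromatic_coordinate(image):
    """Mean green chromatic coordinate GCC = G / (R + G + B) over an
    RGB image or region of interest, ignoring all-zero (dark) pixels."""
    img = np.asarray(image, dtype=float)
    total = img.sum(axis=2)
    total[total == 0] = np.nan          # avoid division by zero
    return float(np.nanmean(img[:, :, 1] / total))
```

    Tracking GCC through a season gives a canopy greenness curve whose spring rise and autumn decline can be compared with satellite indices such as NDVI.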

  12. The effectiveness of detection of splashed particles using a system of three integrated high-speed cameras

    NASA Astrophysics Data System (ADS)

    Ryżak, Magdalena; Beczek, Michał; Mazur, Rafał; Sochan, Agata; Bieganowski, Andrzej

    2017-04-01

    The phenomenon of splash, one of the factors causing erosion of the soil surface, is the subject of research by various scientific teams. One efficient method of observing and analyzing this phenomenon is the use of high-speed cameras that record particles at 2000 frames per second or higher. Analysis of the splash phenomenon with high-speed cameras and specialized software can reveal, among other things, the number of ejected particles, their speeds, their trajectories, and the distances over which they were transferred. The paper evaluates the efficiency of detecting splashed particles with a set of 3 cameras (Vision Research MIRO 310) and the Dantec Dynamics Studio software, using a 3D module (Volumetric PTV). To assess the effectiveness of estimating the number of particles, the experiment was performed on glass beads with a diameter of 0.5 mm (corresponding to the sand fraction). Water droplets with a diameter of 4.2 mm fell on a sample from a height of 1.5 m. Two types of splashed particles were observed: particles with a short range (up to 18 mm) splashed at larger angles, and particles with a long range (up to 118 mm) splashed at smaller angles. The detection efficiency, i.e., the fraction of splashed particles identified by the software, was 45-65% for long-range particles. The effectiveness of particle detection by the software was calculated by comparison with the number of beads that fell on the adhesive surface around the sample. This work was partly financed by the National Science Centre, Poland; project no. 2014/14/E/ST10/00851.

  13. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  14. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty frame per second operation and progressive scanning minimizes motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms, to detect/localize/track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.
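
    The benefit of on-chip electron multiplication can be illustrated numerically: EM gain divides the input-referred read noise by the gain factor, at the cost of an excess noise factor of roughly √2 on the shot noise at high gain. A small sketch, where the specific numbers are illustrative rather than camera specifications:

```python
import math

def effective_read_noise(read_noise_e, em_gain):
    """Input-referred read noise after on-chip electron multiplication."""
    return read_noise_e / em_gain

def emccd_snr(signal_e, read_noise_e, em_gain, excess_noise_factor=1.41):
    """Per-pixel SNR of an EMCCD. At high gain the multiplication
    register contributes an excess noise factor of about sqrt(2),
    effectively doubling the shot-noise variance."""
    shot_var = (excess_noise_factor ** 2) * signal_e
    read_var = effective_read_noise(read_noise_e, em_gain) ** 2
    return signal_e / math.sqrt(shot_var + read_var)
```

    With a gain of 1000:1, even a 50-electron amplifier noise is referred back to 0.05 electrons, consistent with the sub-electron effective noise quoted above; the trade-off is the excess-noise penalty on bright pixels.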

  15. Robust and adaptive band-to-band image transform of UAS miniature multi-lens multispectral camera

    NASA Astrophysics Data System (ADS)

    Jhan, Jyun-Ping; Rau, Jiann-Yeou; Haala, Norbert

    2018-03-01

    Utilizing miniature multispectral (MS) or hyperspectral (HS) cameras by mounting them on an Unmanned Aerial System (UAS) has the benefits of convenience and flexibility for collecting remote sensing imagery for precision agriculture, vegetation monitoring, and environmental investigation applications. Most miniature MS cameras adopt a multi-lens structure to record discrete MS bands of visible and invisible information. The differences in lens distortion, mounting positions, and viewing angles among lenses mean that the acquired original MS images have significant band misregistration errors. We have developed a Robust and Adaptive Band-to-Band Image Transform (RABBIT) method for the band co-registration of various types of miniature multi-lens multispectral cameras (Mini-MSCs), to obtain band co-registered MS imagery for remote sensing applications. RABBIT utilizes a modified projective transformation (MPT) to transfer the multiple image geometries of a multi-lens imaging system to one sensor geometry, and combines this with a robust and adaptive correction (RAC) procedure to correct several systematic errors and obtain sub-pixel accuracy. This study applies three state-of-the-art Mini-MSCs to evaluate the RABBIT method's performance, specifically the Tetracam Miniature Multiple Camera Array (MiniMCA), the Micasense RedEdge, and the Parrot Sequoia. Six MS datasets acquired at different target distances, dates, and locations are also used to prove its reliability and applicability. Results show that RABBIT is feasible for different types of Mini-MSCs, with accurate, robust, and rapid image processing.
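
    The core of any band-to-band registration of this kind is resampling each band through a projective (homography) transform into the reference band's geometry. The following is a simplified NumPy sketch of that single step, using nearest-neighbour inverse mapping; RABBIT's actual MPT and RAC corrections are considerably more elaborate:

```python
import numpy as np

def apply_homography(H, points):
    """Map Nx2 pixel coordinates through a 3x3 projective transform."""
    pts = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

def warp_band(band, H, out_shape):
    """Resample a single MS band into the reference-band geometry by
    inverse mapping each output pixel and sampling nearest-neighbour."""
    h, w = out_shape
    ys, xs = np.mgrid[0:h, 0:w]
    dest = np.stack([xs.ravel(), ys.ravel()], axis=1).astype(float)
    src = apply_homography(np.linalg.inv(H), dest)
    sx = np.clip(np.round(src[:, 0]).astype(int), 0, band.shape[1] - 1)
    sy = np.clip(np.round(src[:, 1]).astype(int), 0, band.shape[0] - 1)
    return band[sy, sx].reshape(h, w)
```

    In a production pipeline the homography per lens would be estimated from tie points or lab calibration, distortion would be removed first, and interpolation would be bilinear or better to reach the sub-pixel accuracy the paper reports.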

  16. The High Resolution Stereo Camera (HRSC): 10 Years of Imaging Mars

    NASA Astrophysics Data System (ADS)

    Jaumann, R.; Neukum, G.; Tirsch, D.; Hoffmann, H.

    2014-04-01

    The HRSC Experiment: Imagery is the major source for our current understanding of the geologic evolution of Mars in qualitative and quantitative terms. Imaging is required to enhance our knowledge of Mars with respect to geological processes occurring on local, regional and global scales and is an essential prerequisite for detailed surface exploration. The High Resolution Stereo Camera (HRSC) of ESA's Mars Express Mission (MEx) is designed to simultaneously map the morphology, topography, structure and geologic context of the surface of Mars as well as atmospheric phenomena [1]. The HRSC directly addresses two of the main scientific goals of the Mars Express mission: (1) high-resolution three-dimensional photogeologic surface exploration and (2) the investigation of surface-atmosphere interactions over time; and significantly supports: (3) the study of atmospheric phenomena by multi-angle coverage and limb sounding as well as (4) multispectral mapping by providing high-resolution three-dimensional color context information. In addition, the stereoscopic imagery will especially characterize landing sites and their geologic context [1]. The HRSC surface resolution and the digital terrain models bridge the gap in scales between the highest-ground-resolution images (e.g., HiRISE) and global coverage observations (e.g., Viking). This is also the case with respect to DTMs (e.g., MOLA and local high-resolution DTMs). HRSC is also used as a cartographic basis to correlate between panchromatic and multispectral stereo data. The unique multi-angle imaging technique of the HRSC supports its stereo capability by providing not only a stereo triplet but a stereo quintuplet, making the photogrammetric processing very robust [1, 3]. The capabilities for three-dimensional orbital reconnaissance of the Martian surface are ideally met by HRSC, making this camera unique in the international Mars exploration effort.

  17. Clinical trials of the prototype Rutherford Appleton Laboratory MWPC positron camera at the Royal Marsden Hospital

    NASA Astrophysics Data System (ADS)

    Flower, M. A.; Ott, R. J.; Webb, S.; Leach, M. O.; Marsden, P. K.; Clack, R.; Khan, O.; Batty, V.; McCready, V. R.; Bateman, J. E.

    1988-06-01

    Two clinical trials of the prototype RAL multiwire proportional chamber (MWPC) positron camera were carried out prior to the development of a clinical system with large-area detectors. During the first clinical trial, the patient studies included skeletal imaging using 18F, imaging of brain glucose metabolism using 18F FDG, bone marrow imaging using 52Fe citrate and thyroid imaging with Na 124I. Longitudinal tomograms were produced from the limited-angle data acquisition from the static detectors. During the second clinical trial, transaxial, coronal and sagittal images were produced from the multiview data acquisition. A more detailed thyroid study was performed in which the volume of the functioning thyroid tissue was obtained from the 3D PET image and this volume was used in estimating the radiation dose achieved during radioiodine therapy of patients with thyrotoxicosis. Despite the small field of view of the prototype camera, and the use of smaller than usual amounts of activity administered, the PET images were in most cases comparable with, and in a few cases visually better than, the equivalent planar view using a state-of-the-art gamma camera with a large field of view and routine radiopharmaceuticals.

  18. Optics Near the Snell Angle in a Water-to-Air Change of Medium

    DTIC Science & Technology

    2007-01-01

    the seawater wedge at the focus of a notional 57.3-mm lens modeled in ZEMAX® [5]. The boxes are plotted in units of µm, and lens focal length is...lenses had insufficient focal-plane coverage. The ZEMAX spot diagram of this layout is depicted in Fig. 4. It is corrected for the horizon angle...the Fig. 9 ZEMAX layout. It is a two-prism design, but only one prism need be built and carried within the camera, with the forward prism being the

  19. Solar System Portrait - Views of 6 Planets

    NASA Image and Video Library

    1996-09-13

    These six narrow-angle color images were made from the first-ever portrait of the solar system taken by NASA's Voyager 1, which was more than 4 billion miles from Earth and about 32 degrees above the ecliptic. The spacecraft acquired a total of 60 frames for a mosaic of the solar system which shows six of the planets. Mercury is too close to the sun to be seen. Mars was not detectable by the Voyager cameras due to scattered sunlight in the optics, and Pluto was not included in the mosaic because of its small size and distance from the sun. These blown-up images, left to right and top to bottom, are Venus, Earth, Jupiter, Saturn, Uranus, and Neptune. The background features in the images are artifacts resulting from the magnification. The images were taken through three color filters -- violet, blue and green -- and recombined to produce the color images. Jupiter and Saturn were resolved by the camera, but Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15-second) exposure times. Earth appears to be in a band of light because it coincidentally lies right in the center of the scattered light rays resulting from taking the image so close to the sun. Earth was a crescent only 0.12 pixels in size, and Venus was 0.11 pixel in diameter. The planetary images were taken with the narrow-angle camera (1500 mm focal length). http://photojournal.jpl.nasa.gov/catalog/PIA00453

  20. THE DARK ENERGY CAMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.

    2015-11-15

    The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6-9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
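
    The quoted pixel size and plate scale are tied to the effective focal length of the telescope plus corrector through the standard small-angle relation (206265 arcseconds per radian); a quick consistency check:

```python
def plate_scale_arcsec(pixel_size_m, focal_length_m):
    """Plate scale in arcsec/pixel: 206265 * pixel size / focal length."""
    return 206265.0 * pixel_size_m / focal_length_m

def implied_focal_length_m(pixel_size_m, scale_arcsec_per_px):
    """Invert the plate-scale relation to recover the effective focal length."""
    return 206265.0 * pixel_size_m / scale_arcsec_per_px
```

    With 15 μm pixels and 0.263″ per pixel, this gives an effective focal length of roughly 11.8 m, as expected for the Blanco prime focus with the corrector in place.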