Sample records for framing camera fc

  1. Reevaluating Surface Composition of Asteroid (4) Vesta by Comparing HED Spectral Data with Dawn Framing Camera (FC) Observations

    NASA Astrophysics Data System (ADS)

    Giebner, T.; Jaumann, R.; Schröder, S.

    2016-08-01

    This master's thesis project reevaluates previous findings on asteroid (4) Vesta's surface composition by using Dawn FC filter-image ratios in a new way to identify HED (howardite, eucrite, diogenite) lithologies on the surface.

  2. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.; et al.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4 and 1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and the High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.
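
    For illustration only, a minimal sketch of the kind of band parameters such algorithms might compute from FC color reflectances; the filter centers named below approximate FC bands, but the parameter definitions and all numbers are hypothetical, not the algorithms developed in this work.

        def band_parameters(r0749, r0917, r0965):
            """Simple 1-micron band proxies from three FC reflectances.

            band_depth: 0.75-micron shoulder over the near-band-center value;
                        deeper pyroxene bands give larger ratios.
            band_tilt:  ratio of the two in-band filters, crudely sensitive to
                        the position of the band minimum (the band wing).
            """
            band_depth = r0749 / r0917
            band_tilt = r0917 / r0965
            return band_depth, band_tilt

        # Made-up reflectances for a pyroxene-rich surface:
        print(band_parameters(0.42, 0.30, 0.33))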

  3. Origin of Dark Material on VESTA from DAWN FC Data: Remnant Carbonaceous Chondrite Impactors

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Mittlefehldt, David W.; Cloutis, E. A.; OBrien, D. P.; Durda, D. D.; Bottke, W. F.; Buczkowski, D.; Scully, J. E. C.; et al.

    2012-01-01

    NASA's Dawn spacecraft entered orbit around asteroid (4) Vesta in July 2011 for a yearlong mapping orbit. The surface of Vesta as imaged by the Dawn Framing Camera (FC) is unlike that of any asteroid we have visited so far with a spacecraft. Albedo and color variations on Vesta are the most diverse in the asteroid belt, with the majority of these linked to distinct compositional units on the asteroid's surface. FC discovered dark material on Vesta. These low-albedo surface features were first observed during the Rotational Characterization 3 phase at a resolution of approx. 487 m/pixel. Here we explore the composition and possible meteoritical analogs for the dark material on Vesta.

  4. Behavior of Compact Toroid Injected into C-2U Confinement Vessel

    NASA Astrophysics Data System (ADS)

    Matsumoto, Tadafumi; Roche, T.; Allrey, I.; Sekiguchi, J.; Asai, T.; Conroy, M.; Gota, H.; Granstedt, E.; Hooper, C.; Kinley, J.; Valentine, T.; Waggoner, W.; Binderbauer, M.; Tajima, T.; the TAE Team

    2016-10-01

    The compact toroid (CT) injector system has been developed for particle refueling on the C-2U device. A CT is formed by a magnetized coaxial plasma gun (MCPG); typical ejected CT/plasmoid parameters are as follows: average velocity 100 km/s, average electron density 1.9 × 10¹⁵ cm⁻³, electron temperature 30-40 eV, mass 12 μg. To refuel particles into the FRC plasma, the CT must penetrate the transverse magnetic field that surrounds the FRC. The kinetic energy density of the CT should be higher than the magnetic energy density of the axial magnetic field, i.e., ρv²/2 ≥ B²/2μ₀, where ρ, v, and B are the mass density, velocity, and surrounding magnetic field, respectively. Also, the penetrating CT's trajectory is deflected by the transverse magnetic field (Bz ∼ 1 kG). Thus, we have to estimate the CT's energy and track the CT trajectory inside the magnetic field, for which we adopted a fast-framing camera on C-2U: the framing rate is up to 1.25 MHz for 120 frames. By employing the camera we clearly captured the CT/plasmoid trajectory. Comparisons between the fast-framing camera and some other diagnostics as well as CT injection results on C-2U will be presented.
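
    A back-of-the-envelope check of the penetration condition with the quoted CT parameters, sketched in Python; the hydrogen-plasma assumption for the mass density (ρ ≈ n_e·m_p) is ours, not stated in the abstract.

        # Check rho * v^2 / 2 >= B^2 / (2 * mu0) with the quoted CT parameters.
        # Assumes a hydrogen plasma so that rho ~ n_e * m_p; the actual ion
        # species of the C-2U CT is an assumption here.
        import math

        mu0 = 4e-7 * math.pi          # vacuum permeability [H/m]
        m_p = 1.67262192e-27          # proton mass [kg]

        n_e = 1.9e15 * 1e6            # electron density [m^-3] (1.9e15 cm^-3)
        v = 100e3                     # CT velocity [m/s]
        B = 0.1                       # transverse field [T] (Bz ~ 1 kG)

        rho = n_e * m_p               # mass density under the hydrogen assumption
        ke_density = 0.5 * rho * v**2
        b_density = B**2 / (2 * mu0)

        print(f"kinetic energy density : {ke_density:9.1f} J/m^3")
        print(f"magnetic energy density: {b_density:9.1f} J/m^3")
        print("penetration condition met:", ke_density >= b_density)
        # -> roughly 1.6e4 J/m^3 vs 4.0e3 J/m^3, so the quoted CT should penetrate.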

  5. Automated Spectral System for Terrain Classification, Mineralogy of Vesta from the Dawn Framing Cameras

    NASA Astrophysics Data System (ADS)

    Reddy, V.; Le Corre, L.; Nathues, A.; Hall, I.; Gutierrez-Marques, P.; Hoffmann, M.

    2011-10-01

    The Dawn mission will rendezvous with asteroid (4) Vesta in July 2011. We have developed a set of equations for extracting mean pyroxene chemistry (ferrosilite and wollastonite) to classify terrains on Vesta using the Dawn Framing Camera (FC) multi-color bands. The Automated Spectral System (ASS) utilizes pseudo-Band I minima to estimate the mean pyroxene chemistry of diogenites and basaltic eucrites. The mean pyroxene chemistries of cumulate eucrites and howardites overlap each other on the pyroxene quadrilateral and hence are harder to distinguish. We expect our ASS to carry the bulk of the terrain classification and mineralogy workload utilizing these equations and to complement the work of DawnKey (Le Corre et al., 2011, DPS/EPSC 2011). The system will also provide surface mineral chemistry layers that can be used for mapping Vesta's surface.

  6. How to characterize terrains on 4 Vesta using Dawn Framing Camera color bands?

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Reddy, Vishnu; Nathues, Andreas; Cloutis, Edward A.

    2011-12-01

    We present methods for terrain classification on 4 Vesta using Dawn Framing Camera (FC) color information derived from laboratory spectra of HED meteorites and other Vesta-related assemblages. Color and spectral parameters have been derived using publicly available spectra of these analog materials to identify the best criteria for distinguishing various terrains. We list the relevant parameters for identifying eucrites, diogenites, mesosiderites, pallasites, clinopyroxenes and olivine + orthopyroxene mixtures using Dawn FC color cubes. Pseudo Band I minima derived by fitting a low-order polynomial to the color data are found to be useful for extracting the pyroxene chemistry. Our investigation suggests a good correlation (R² = 0.88) between laboratory-measured ferrosilite (Fs) pyroxene chemistry and that derived from pseudo Band I minima using equations from Burbine et al. (Burbine, T.H., Buchanan, P.C., Dolkar, T., Binzel, R.P. [2009]. Meteoritics & Planetary Science 44, 1331-1341). The pyroxene chemistry information is a complementary terrain classification capability beside the color ratios. We also investigated the effects of exogenous material (i.e., CM2 carbonaceous chondrites) on the spectra of HEDs using laboratory mixtures of these materials. Our results are the basis for an automated software pipeline that will allow us to classify terrains on 4 Vesta efficiently.
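
    A minimal sketch of the pseudo Band I minimum idea: fit a low-order polynomial to the in-band FC colors and read off the wavelength of the fitted minimum. The filter centers are approximate, the reflectances are made up, and the Fs calibration coefficients below are placeholders, not the Burbine et al. (2009) values.

        import numpy as np

        def pseudo_band1_minimum(wavelengths_um, reflectances, order=2):
            """Wavelength of the reflectance minimum from a polynomial fit."""
            coeffs = np.polyfit(wavelengths_um, reflectances, order)
            grid = np.linspace(np.min(wavelengths_um), np.max(wavelengths_um), 2001)
            return grid[np.argmin(np.polyval(coeffs, grid))]

        def ferrosilite_from_band1(band1_um, a=1330.0, b=-1160.0):
            """Linear Band-I-to-Fs calibration; a and b are hypothetical here."""
            return a * band1_um + b

        # In-band FC filters (approximate centers) with made-up reflectances:
        wl = np.array([0.829, 0.917, 0.965])
        refl = np.array([0.335, 0.300, 0.318])
        b1 = pseudo_band1_minimum(wl, refl)
        print(f"pseudo Band I minimum: {b1:.3f} um, Fs ~ {ferrosilite_from_band1(b1):.0f} mol%")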

  7. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pixel during the eleven cycles of the Low Altitude Mapping Orbit (LAMO) phase between December 16, 2015 and August 8, 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  8. Dawn Maps the Surface Composition of Vesta

    NASA Technical Reports Server (NTRS)

    Prettyman, T.; Palmer, E.; Reedy, R.; Sykes, M.; Yingst, R.; McSween, H.; DeSanctis, M. C.; Capaccinoni, F.; Capria, M. T.; Filacchione, G.; et al.

    2011-01-01

    By 7 October 2011, the Dawn mission will have completed Survey orbit and commenced high-altitude mapping of 4 Vesta. We present a preliminary analysis of data acquired by Dawn's Framing Camera (FC) and the Visual and InfraRed Spectrometer (VIR) to map mineralogy and surface temperature, and to detect and quantify surficial OH. The radiometric calibration of VIR and FC is described. Background counting data acquired by GRaND are used to determine elemental detection limits from measurements at low altitude, which will commence in November. Geochemical models used in the interpretation of the data are described. Thermal properties and mineralogical and geochemical data are combined to provide constraints on Vesta's formation and thermal evolution, the delivery of exogenic materials, space weathering processes, and the origin of the howardite, eucrite, and diogenite (HED) meteorites.

  9. Spectral parameters for Dawn FC color data: Carbonaceous chondrites and aqueous alteration products as potential cerean analog materials

    NASA Astrophysics Data System (ADS)

    Schäfer, Tanja; Nathues, Andreas; Mengel, Kurt; Izawa, Matthew R. M.; Cloutis, Edward A.; Schäfer, Michael; Hoffmann, Martin

    2016-02-01

    We identified a set of spectral parameters based on Dawn Framing Camera (FC) bandpasses, covering the wavelength range 0.4-1.0 μm, for mineralogical mapping of potential chondritic material and aqueous alteration products on dwarf planet Ceres. Our parameters are inferred from laboratory spectra of well-described and clearly classified carbonaceous chondrites representative of a dark component. We additionally investigated the FC signatures of candidate bright materials including carbonates, sulfates and hydroxide (brucite), which can possibly be exposed on the cerean surface by impact craters or plume activity. Several materials mineralogically related to carbonaceous chondrites, including pure ferromagnesian phyllosilicates and serpentinites, were also investigated. We tested the potential of the derived FC parameters for distinguishing between different carbonaceous chondritic materials, and between other plausible cerean surface materials. We found that the major carbonaceous chondrite groups (CM, CO, CV, CK, and CR) are distinguishable using the FC filter ratios 0.56/0.44 μm and 0.83/0.97 μm. The absorption bands of Fe-bearing phyllosilicates at 0.7 and 0.9 μm in terrestrial samples and CM carbonaceous chondrites can be detected by a combination of FC band parameters using the filters at 0.65, 0.75, 0.83, 0.92 and 0.97 μm. This set of parameters serves as a basis to identify and distinguish different lithologies on the cerean surface by FC multispectral data.
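
    A toy illustration of classifying with the two filter ratios named above, using a nearest-centroid rule; the group centroids and input reflectances are invented for the sketch and are not the laboratory values from this work.

        import numpy as np

        # Hypothetical group centroids in (0.56/0.44, 0.83/0.97) ratio space:
        CENTROIDS = {
            "CM": (1.25, 1.02),
            "CO": (1.40, 1.06),
            "CV": (1.35, 1.10),
            "CK": (1.30, 1.08),
            "CR": (1.20, 1.04),
        }

        def fc_ratios(r044, r056, r083, r097):
            """The two diagnostic FC filter ratios named in the abstract."""
            return r056 / r044, r083 / r097

        def classify(r044, r056, r083, r097):
            """Assign the chondrite group with the nearest centroid."""
            x = np.array(fc_ratios(r044, r056, r083, r097))
            return min(CENTROIDS, key=lambda g: np.linalg.norm(x - CENTROIDS[g]))

        print(classify(0.040, 0.050, 0.055, 0.054))  # made-up reflectances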

  10. Detection of serpentine in exogenic carbonaceous chondrite material on Vesta from Dawn FC data

    NASA Astrophysics Data System (ADS)

    Nathues, Andreas; Hoffmann, Martin; Cloutis, Edward A.; Schäfer, Michael; Reddy, Vishnu; Christensen, Ulrich; Sierks, Holger; Thangjam, Guneshwar Singh; Le Corre, Lucille; Mengel, Kurt; Vincent, Jean-Baptist; Russell, Christopher T.; Prettyman, Tom; Schmedemann, Nico; Kneissl, Thomas; Raymond, Carol; Gutierrez-Marques, Pablo; Hall, Ian; Büttner, Irene

    2014-09-01

    The Dawn mission's Framing Camera (FC) observed Asteroid (4) Vesta in 2011 and 2012 using seven color filters and one clear filter from different orbits. In the present paper we analyze recalibrated HAMO color cubes (spatial resolution ∼60 m/pixel) with a focus on dark material (DM). We present a definition of highly concentrated DM based on spectral parameters, subsequently map the DM across the Vestan surface, geologically classify DM, study its spectral properties on global and local scales, and finally compare the FC in-flight color data with laboratory spectra. Using FC data as well as spectral information from Dawn's imaging spectrometer VIR, we have discovered an absorption band centered at 0.72 μm in localities of DM that show the lowest albedo values. Such localities are contained within impact-exposed outcrops on inner crater walls and ejecta material. Comparisons between spectral FC in-flight data and laboratory spectra of meteorites and mineral mixtures in the wavelength range 0.4-1.0 μm revealed that the absorption band can be attributed to the mineral serpentine, which is typically present in CM chondrites. Dark material in its purest form is rare on Vesta's surface and is distributed globally in a non-uniform manner. Our findings confirm the hypothesis of an exogenic origin of the DM by the infall of carbonaceous chondritic material, likely of CM type. It further confirms the hypothesis that most of the DM was deposited by the Veneneia impact.

  11. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to structure-from-motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self-localization, and object recognition. There are essential issues for reliable camera trajectory estimation, for instance, the choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and the ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens converter, are used. The hardware which we use in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on converter with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using the state-of-the-art technique and Mičušík's two-parameter model, which links the radius of the image point r to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar/(1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used because the viewpoint can change considerably between consecutive frames.
    Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short baseline images and limits real-time operation to low frame rate sequences. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in prior work, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often there are several models which are supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Prior work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly by each sampled epipolar geometry but by the best epipolar geometries recovered by ordered sampling in RANSAC. With our technique, we can go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, the use of camera trajectory estimates is quite wide. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as is done for the detection of pedestrians. Using the camera trajectories, perspective cutouts with stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
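
    A minimal sketch of the two-parameter radial model θ = ar/(1 + br²) used above to turn an image point into a 3D optical ray; the calibration values (a, b) and the principal point are placeholder numbers, not the calibration of the FC-E9 setup.

        import numpy as np

        A, B = 0.002, -2e-8             # placeholder calibration parameters
        CX, CY = 1136.0, 852.0          # placeholder principal point [pixels]

        def pixel_to_ray(u, v):
            """Unit 3D ray (camera frame) for pixel (u, v) of the fisheye image."""
            dx, dy = u - CX, v - CY
            r = np.hypot(dx, dy)                 # radial distance in the image
            if r == 0:
                return np.array([0.0, 0.0, 1.0])
            theta = A * r / (1.0 + B * r**2)     # angle from the optical axis
            s = np.sin(theta) / r                # scale image offset onto the sphere
            return np.array([s * dx, s * dy, np.cos(theta)])

        print(pixel_to_ray(1700.0, 900.0))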

  12. Development of a Maintenance Advisor Expert System for the MK 92 MOD 2 Fire Control System: FC-1 Designation - Time, Range, Bearing FC-1 Acquisition, FC-1 Track - Range, Bearing, and FC-2 Designation - Time, Range, Bearing, FC-2 Acquisition, FC-2 Track - Range, Bearing, and FC-4 and FC-5

    DTIC Science & Technology

    1993-09-01

    …is not present at the output of the power amplifier, THEN replace train drive motor, ELSE continue troubleshooting procedures. Rules offer several… [remainder of the snippet is residue from Figure 5-2, "Knowledge Access by Frame and Slot," and a section heading on semantic networks]

  13. Visible Color and Photometry of Bright Materials on Vesta

    NASA Technical Reports Server (NTRS)

    Schroder, S. E.; Li, J. Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn Framing Camera (FC) collected images of the surface of Vesta at a pixel scale of 70 m in the High Altitude Mapping Orbit (HAMO) phase through its clear and seven color filters spanning from 430 nm to 980 nm. The surface of Vesta displays a large diversity in its brightness and colors, evidently related to the diverse geology [1] and mineralogy [2]. Here we report a detailed investigation of the visible colors and photometric properties of the apparently bright materials on Vesta in order to study their origin. The global distribution and the spectroscopy of bright materials are discussed in companion papers [3, 4], and the synthesis results about the origin of Vestan bright materials are reported in [5].

  14. Solid-state framing camera with multiple time frames

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Baker, K. L.; Stewart, R. E.; Steele, P. T.

    2013-10-07

    A high-speed solid-state framing camera has been developed which can operate over a wide range of photon energies. This camera measures the two-dimensional spatial profile of the flux incident on a cadmium selenide semiconductor at multiple times. This multi-frame camera has been tested at 3.1 eV and 4.5 keV. The framing camera currently records two frames with a temporal separation of 5 ps, but this separation can be varied from hundreds of femtoseconds up to nanoseconds, and the number of frames can be increased by angularly multiplexing the probe beam onto the cadmium selenide semiconductor.

  15. High-frame-rate infrared and visible cameras for test range instrumentation

    NASA Astrophysics Data System (ADS)

    Ambrose, Joseph G.; King, B.; Tower, John R.; Hughes, Gary W.; Levine, Peter A.; Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; O'Mara, K.; Sjursen, W.; McCaffrey, Nathaniel J.; Pantuso, Francis P.

    1995-09-01

    Field deployable, high frame rate camera systems have been developed to support the test and evaluation activities at the White Sands Missile Range. The infrared cameras employ a 640 by 480 format PtSi focal plane array (FPA). The visible cameras employ a 1024 by 1024 format backside illuminated CCD. The monolithic, MOS architecture of the PtSi FPA supports commandable frame rate, frame size, and integration time. The infrared cameras provide 3 - 5 micron thermal imaging in selectable modes from 30 Hz frame rate, 640 by 480 frame size, 33 ms integration time to 300 Hz frame rate, 133 by 142 frame size, 1 ms integration time. The infrared cameras employ a 500 mm, f/1.7 lens. Video outputs are 12-bit digital video and RS170 analog video with histogram-based contrast enhancement. The 1024 by 1024 format CCD has a 32-port, split-frame transfer architecture. The visible cameras exploit this architecture to provide selectable modes from 30 Hz frame rate, 1024 by 1024 frame size, 32 ms integration time to 300 Hz frame rate, 1024 by 1024 frame size (with 2:1 vertical binning), 0.5 ms integration time. The visible cameras employ a 500 mm, f/4 lens, with integration time controlled by an electro-optical shutter. Video outputs are RS170 analog video (512 by 480 pixels), and 12-bit digital video.
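
    The quoted modes trade frame size and integration time against frame rate; a quick, illustrative calculation of the implied per-port pixel throughput (the single-output assumption for the IR FPA is ours, not a stated spec):

        def pixel_rate(frame_hz, width, height, ports=1):
            """Pixels per second per output port."""
            return frame_hz * width * height / ports

        # PtSi IR FPA modes (single output assumed; not a quoted spec):
        print(f"IR  30 Hz full frame : {pixel_rate(30, 640, 480)/1e6:6.1f} Mpx/s")
        print(f"IR 300 Hz subframe   : {pixel_rate(300, 133, 142)/1e6:6.1f} Mpx/s")

        # Visible CCD, 32-port split-frame transfer; 2:1 vertical binning at 300 Hz:
        print(f"VIS 30 Hz full frame : {pixel_rate(30, 1024, 1024, 32)/1e6:6.1f} Mpx/s/port")
        print(f"VIS 300 Hz binned    : {pixel_rate(300, 1024, 512, 32)/1e6:6.1f} Mpx/s/port")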

  16. Reliability and discriminatory power of methods for dental plaque quantification

    PubMed Central

    RAGGIO, Daniela Prócida; BRAGA, Mariana Minatel; RODRIGUES, Jonas Almeida; FREITAS, Patrícia Moreira; IMPARATO, José Carlos Pettorossi; MENDES, Fausto Medeiros

    2010-01-01

    Objective This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and fluorescence camera (FC) to detect plaque. Material and Methods Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with FC and digital camera in both conditions. The area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare different conditions of samples and to assess the inter-examiner reproducibility. Results Some methods presented adequate reproducibility. The Turesky index and the assessment of area covered by disclosed plaque in the FC images presented the highest discriminatory powers. Conclusions The Turesky index and images with FC with disclosing present good reliability and discriminatory power in quantifying dental plaque. PMID:20485931

  17. Ceres' Global Cryosphere

    NASA Astrophysics Data System (ADS)

    Sizemore, H. G.; Prettyman, T. H.; De Sanctis, M. C.; Schmidt, B. E.; Hughson, K.; Chilton, H.; Castillo, J. C.; Platz, T.; Schorghofer, N.; Bland, M. T.; Sori, M.; Buczkowski, D.; Byrne, S.; Landis, M. E.; Fu, R.; Ermakov, A.; Raymond, C. A.; Schwartz, S. J.

    2017-12-01

    Prior to the arrival of the Dawn spacecraft at Ceres, the dwarf planet was anticipated to have a deep global cryosphere protected by a thin silicate lag. Gravity science, along with data collected by Dawn's Framing Camera (FC), Gamma Ray and Neutron Detector (GRaND), and Visible and Infrared Mapping Spectrometer (VIR-MS) during the primary mission at Ceres, has confirmed the existence of a global, silicate-rich cryosphere and suggests the existence of deeper ice, brine, or mud layers. As such, Ceres' surface morphology has characteristics in common with both Mars and the small icy bodies of the outer solar system. We will summarize the evidence for the existence and global extent of the Cerean cryosphere. We will also discuss the range of morphological features that have been linked to subsurface ice, and highlight outstanding science questions.

  18. A novel method to generate unmarked gene deletions in the intracellular pathogen Rhodococcus equi using 5-fluorocytosine conditional lethality

    PubMed Central

    van der Geize, R.; de Jong, W.; Hessels, G. I.; Grommen, A. W. F.; Jacobs, A. A. C.; Dijkhuizen, L.

    2008-01-01

    A novel method to efficiently generate unmarked in-frame gene deletions in Rhodococcus equi was developed, exploiting the cytotoxic effect of 5-fluorocytosine (5-FC) by the action of cytosine deaminase (CD) and uracil phosphoribosyltransferase (UPRT) enzymes. The opportunistic, intracellular pathogen R. equi is resistant to high concentrations of 5-FC. Introduction of Escherichia coli genes encoding CD and UPRT conferred conditional lethality to R. equi cells incubated with 5-FC. To exemplify the use of the codA::upp cassette as counter-selectable marker, an unmarked in-frame gene deletion mutant of R. equi was constructed. The supA and supB genes, part of a putative cholesterol catabolic gene cluster, were efficiently deleted from the R. equi wild-type genome. Phenotypic analysis of the generated ΔsupAB mutant confirmed that supAB are essential for growth of R. equi on cholesterol. Macrophage survival assays revealed that the ΔsupAB mutant is able to survive and proliferate in macrophages comparable to wild type. Thus, cholesterol metabolism does not appear to be essential for macrophage survival of R. equi. The CD-UPRT based 5-FC counter-selection may become a useful asset in the generation of unmarked in-frame gene deletions in other actinobacteria as well, as actinobacteria generally appear to be 5-FC resistant and 5-FU sensitive. PMID:18984616

  19. The significant impact of framing coils on long-term outcomes in endovascular coiling for intracranial aneurysms: how to select an appropriate framing coil.

    PubMed

    Ishida, Wataru; Sato, Masayuki; Amano, Tatsuo; Matsumaru, Yuji

    2016-09-01

    OBJECTIVE The importance of the framing coil (FC), the first coil inserted into an aneurysm during endovascular coiling (also called a lead coil or first coil), is recognized, but its impact on long-term outcomes, including recanalization and retreatment, is not well established. The purposes of this study were to test the hypothesis that the FC is a significant factor for aneurysmal recurrence and to provide some insights on appropriate FC selection. METHODS The authors retrospectively reviewed endovascular coiling for 280 unruptured intracranial aneurysms and gathered data on age, sex, aneurysm location, aneurysm morphology, maximal size, neck width, adjunctive techniques, recanalization, retreatment, follow-up periods, total volume packing density (VPD), volume packing density of the FC, and framing coil percentage (FCP; the percentage of FC volume in total coil volume) to clarify the factors associated with aneurysmal recurrence. RESULTS Of 236 aneurysms included in this study, 33 (14.0%) had recanalization, and 18 (7.6%) needed retreatment during a mean follow-up period of 37.7 ± 16.1 months. In multivariate analysis, aneurysm size (odds ratio [OR] 1.29, p < 0.001), FCP < 32% (OR 3.54, p = 0.009), and VPD < 25% (OR 2.96, p = 0.015) were significantly associated with recanalization, while aneurysm size (OR 1.25, p < 0.001) and FCP < 32% (OR 6.91, p = 0.017) were significant predictors of retreatment. VPD as a continuous value, or with any cutoff value, could not predict retreatment with statistical significance in multivariate analysis. CONCLUSIONS FCP, which is equal to the FC volume as a percentage of the total coil volume and is unaffected by the morphology of the aneurysm or by measurement error in aneurysm length, width, or height, is a novel predictor of recanalization and retreatment, and it is more significantly predictive of retreatment than VPD. Selecting FCs large enough to meet the condition FCP ≥ 32% is a potentially relevant factor for better long-term outcomes. These findings support our hypothesis that the FC is a significant factor for aneurysmal recurrence.
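
    For concreteness, a small sketch of how FCP can be computed; taking each coil's volume as a cylinder of its primary wind, V = π(d/2)²·L, is the usual packing-density convention, and the coil dimensions below are invented for illustration.

        import math

        def coil_volume_mm3(primary_diameter_mm, length_cm):
            """Coil volume as a cylinder of the primary wind."""
            r = primary_diameter_mm / 2.0
            return math.pi * r**2 * (length_cm * 10.0)  # length converted to mm

        # Hypothetical coil set; the first entry is the framing coil (FC).
        coils = [
            coil_volume_mm3(0.254, 30.0),   # FC: 0.010-inch wire, 30 cm
            coil_volume_mm3(0.254, 20.0),
            coil_volume_mm3(0.203, 10.0),   # filling/finishing coils, thinner wire
            coil_volume_mm3(0.203, 6.0),
        ]

        fcp = 100.0 * coils[0] / sum(coils)
        print(f"FCP = {fcp:.1f}% -> {'meets' if fcp >= 32 else 'below'} the 32% criterion")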

  20. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Besides its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The 1024 by 1024 pixel CCD detector provides the stability needed for a multiyear mission and for the high photometric accuracy required over the wavelength band from 400 to 1000 nm covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture the FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or […]

  1. High-resolution Ceres LAMO atlas derived from Dawn FC images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C.

    2016-12-01

    Introduction: NASA's Dawn spacecraft has been orbiting the dwarf planet Ceres since December 2015 in LAMO (Low Altitude Mapping Orbit), at an altitude of about 400 km, to characterize, for instance, the geology, topography, and shape of Ceres. One of the major goals of this mission phase is the global high-resolution mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). By the time of writing, the framing camera had taken about 27,500 clear filter images in LAMO with a resolution of about 30 m/pixel, under different viewing angles and different illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information on the Dawn orbit and attitude data and on the topography of the target. A high-resolution shape model was provided by stereo processing of the HAMO dataset; orbit and attitude data are available as reconstructed SPICE data. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself was done onto a reference sphere of Ceres. The final step is the controlled mosaicking of all nadir images into a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:250,000 and will consist of 62 tiles that conform to the quadrangle schema used for Venus at 1:5,000,000. A map scale of 1:250,000 is a compromise between the very high resolution in LAMO and a proper map sheet size for the single tiles. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters, and to use names of agricultural festivals of the world for other geological features. This proposal was accepted by the IAU, and the team proposed 92 names for geological features to the IAU based on the LAMO mosaic. These feature names will be applied to the map tiles.
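
    Rough numbers behind the stated scale choice, sketched in Python; the Ceres mean radius of ~470 km is our approximation, not a value from the abstract.

        import math

        R_CERES_KM = 470.0            # approximate mean radius (assumption)
        GSD_M = 30.0                  # LAMO ground sample distance
        SCALE = 250_000               # map scale 1:250,000

        circumference_m = 2 * math.pi * R_CERES_KM * 1e3
        equator_px = circumference_m / GSD_M
        print(f"equatorial mosaic width: {equator_px:,.0f} pixels")

        # One LAMO pixel on the map sheet: 30 m / 250,000 = 0.12 mm.
        mm_per_px = GSD_M * 1e3 / SCALE
        print(f"one LAMO pixel on paper: {mm_per_px:.2f} mm "
              f"(~{25.4 / mm_per_px:.0f} dpi if printed 1:1)")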

  2. High-resolution Ceres HAMO Atlas derived from Dawn FC Images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2015-12-01

    Introduction: NASA's Dawn spacecraft will orbit the dwarf planet Ceres in August and September 2015 in HAMO (High Altitude Mapping Orbit), at an altitude of about 1,500 km, to characterize, for instance, the geology, topography, and shape of Ceres before it is transferred to the lowest orbit. One of the major goals of this mission phase is the global mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). The framing camera will take about 2600 clear filter images with a resolution of about 120 m/pixel, under different viewing angles and different illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information on the Dawn orbit and attitude data and on the topography of the target. Both improved orientation data and a high-resolution shape model are provided by stereo processing of the HAMO dataset. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself will be done onto a reference sphere for Ceres. The final step is the controlled mosaicking of all nadir images into a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:750,000 and will consist of 15 tiles that conform to the quadrangle schema for small planets and medium-sized icy satellites. A map scale of 1:750,000 guarantees mapping at the highest available Dawn resolution in HAMO. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters. This proposal was accepted by the IAU, and the team proposed names for geological features to the IAU based on the HAMO mosaic. These feature names will be applied to the map tiles.

  3. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, a digital still camera has several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. It is difficult to solve this problem with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3-CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
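
    A naive sketch of the fusion principle (upsample the high-frame-rate, low-resolution stream and re-inject the spatial detail of the nearest high-resolution keyframe); the paper's actual enhancement algorithm is not reproduced here, and the nearest-neighbour resampling is purely for self-containment.

        import numpy as np

        def upsample(img, factor):
            """Nearest-neighbour upsampling (keeps the sketch dependency-free)."""
            return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

        def fuse(lowres_frame, hires_keyframe, factor):
            """Add the keyframe's high spatial frequencies to the upsampled frame.

            Shapes: hires_keyframe is (H, W) with H, W multiples of factor;
            lowres_frame is (H/factor, W/factor).
            """
            smooth_key = upsample(hires_keyframe[::factor, ::factor], factor)
            detail = hires_keyframe - smooth_key     # high spatial frequencies
            return upsample(lowres_frame, factor) + detail

        key = np.random.rand(8, 8)    # toy high-resolution keyframe
        low = np.random.rand(4, 4)    # toy high-frame-rate frame
        print(fuse(low, key, 2).shape)  # -> (8, 8)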

  4. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  5. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

    Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotating mirror camera for the combination of frame count, speed, resolution, and dynamic range. Rotating mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' or 'ISIS' by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotating mirror cameras, the ISIS architecture has the potential to approach their performance.

  6. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.
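
    A toy simulation of the coded-aperture forward model described above (random binary masks modulate T sub-frames that integrate into one camera frame); the per-pixel ridge estimate at the end is only a stand-in for the statistical CS inversion the authors use.

        import numpy as np

        rng = np.random.default_rng(0)
        T, H, W = 8, 64, 64                       # sub-frames per camera frame

        truth = rng.random((T, H, W))             # the fast dynamics we wish to see
        masks = rng.random((T, H, W)) < 0.5       # coded-aperture patterns

        coded = (masks * truth).sum(axis=0)       # what the slow camera records

        # Per-pixel ridge estimate (NOT the paper's inversion; purely illustrative):
        lam = 0.1
        m = masks.astype(float)
        estimate = m * coded / ((m**2).sum(axis=0) + lam)

        print("compression:", T, "sub-frames -> 1 frame;",
              "recon error:", float(np.abs(estimate - truth).mean()))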

  7. Applying compressive sensing to TEM video: A substantial frame rate increase on any camera

    DOE PAGES

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia; ...

    2015-08-13

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ transmission electron microscopy (TEM) experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame rates approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing (CS) methods to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical CS inversion. Here we describe the background of CS and statistical methods in depth and simulate the frame rates and efficiencies for in-situ TEM experiments. Depending on the resolution and signal/noise of the image, it should be possible to increase the speed of any camera by more than an order of magnitude using this approach.

  8. Coincidence ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander H.; Fan, Lin; Li, Wen

    2014-12-01

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.
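
    A sketch of the multi-hit bookkeeping: camera spots and TOF peaks are paired by assuming spot intensity and peak height are monotonically related, as the correlation idea above suggests; the simple match-by-rank rule and all numbers are ours.

        def pair_hits(spots, tof_peaks):
            """Pair ion spots with TOF peaks by rank of intensity/height.

            spots: list of (x, y, intensity); tof_peaks: list of (t, height).
            Returns (x, y, t) triples: brightest spot <-> tallest peak, etc.
            """
            by_intensity = sorted(spots, key=lambda s: -s[2])
            by_height = sorted(tof_peaks, key=lambda p: -p[1])
            return [(x, y, t) for (x, y, _), (t, _) in zip(by_intensity, by_height)]

        spots = [(120.3, 88.1, 410.0), (77.9, 140.2, 950.0)]   # made-up centroids
        peaks = [(1.62e-6, 0.8), (2.41e-6, 0.3)]               # made-up TOF peaks
        print(pair_hits(spots, peaks))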

  9. Mapping Vesta Equatorial Quadrangle V-8EDL: Various Craters and Giant Grooves

    NASA Astrophysics Data System (ADS)

    Le Corre, L.; Nathues, A.; Reddy, V.; Buczkowski, D.; Denevi, B. W.; Gaffey, M.; Williams, D. A.; Garry, W. B.; Yingst, R.; Jaumann, R.; Pieters, C. M.; Russell, C. T.; Raymond, C. A.

    2011-12-01

    NASA's Dawn spacecraft arrived at the asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC; spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR; spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-8EDL, located between -22 and 22 degrees latitude and 144 and 216 degrees East longitude. This quadrangle is dominated by old craters (without any ejecta visible in the clear and color bands), but one small recent crater can be seen with a bright ejecta blanket and rays. The latter has some small, dark units outside and inside the crater rim that could be indicative of impact melt. This quadrangle also contains a set of giant linear grooves running almost parallel to the equator that might have formed subsequent to a large impact. We will use FC mosaics with clear images and false color composites as well as VIR spectroscopy data in order to constrain the geology and identify the nature of each unit present in this quadrangle.

  10. The application of high-speed photography in z-pinch high-temperature plasma diagnostics

    NASA Astrophysics Data System (ADS)

    Wang, Kui-lu; Qiu, Meng-tong; Hei, Dong-wei

    2007-01-01

    This invited paper discusses the application of high-speed photography to z-pinch high-temperature plasma diagnostics at the Northwest Institute of Nuclear Technology in recent years. The developments and applications of a soft x-ray framing camera, a soft x-ray curved crystal spectrometer, an optical framing camera, an ultraviolet four-frame framing camera, and an ultraviolet-visible spectrometer are introduced.

  11. Comparison of Fundus Autofluorescence Between Fundus Camera and Confocal Scanning Laser Ophthalmoscope–based Systems

    PubMed Central

    Park, Sung Pyo; Siringo, Frank S.; Pensec, Noelle; Hong, In Hwan; Sparrow, Janet; Barile, Gaetano; Tsang, Stephen H.; Chang, Stanley

    2015-01-01

    BACKGROUND AND OBJECTIVE To compare fundus autofluorescence (FAF) imaging via fundus camera (FC) and confocal scanning laser ophthalmoscope (cSLO). PATIENTS AND METHODS FAF images were obtained with a digital FC (530 to 580 nm excitation) and a cSLO (488 nm excitation). Two authors evaluated the correlation of autofluorescence pattern, atrophic lesion size, and image quality between the two devices. RESULTS In 120 eyes, the autofluorescence pattern correlated in 86% of lesions. By lesion subtype, correlation rates were 100% in hemorrhage, 97% in geographic atrophy, 82% in flecks, 75% in drusen, 70% in exudates, 67% in pigment epithelial detachment, 50% in fibrous scars, and 33% in macular hole. The mean lesion size in geographic atrophy was 4.57 ± 2.3 mm² via cSLO and 3.81 ± 1.94 mm² via FC (P < .0001). Image quality favored cSLO in 71 eyes. CONCLUSION FAF images were highly correlated between the FC and cSLO, although systematic differences between the two devices were apparent. Multiple image capture and confocal optics yielded higher image contrast with the cSLO, although acquisition and exposure times were longer. PMID:24221461

  12. Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.

    2017-05-01

    We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at a resolution of 1.3 km per pixel (six times Ceres average). The albedo of fresh, bright material seen inside this area in the highest resolution images (35 m per pixel) is probably around unity. Cerealia Facula has an unusually steep phase function, which may be due to unresolved topography, high surface roughness, or large average particle size. It has a strongly red spectrum whereas the neighboring, less-bright, Vinalia Faculae are neutral in color. We find no evidence for a diurnal ground fog-type haze in Occator as described by Nathues et al. (2015). We can neither reproduce their findings using the same images, nor confirm them using higher resolution images. FC images have not yet offered direct evidence for present sublimation in Occator.
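
    One common photometric-correction scheme of the kind used for such maps, sketched under the explicit assumptions of a Lommel-Seeliger disk function and an exponential phase curve A(α) = A_N·exp(-α/α₀); these are illustrative choices, not the exact model selected in the paper.

        import numpy as np

        def disk_function(mu0, mu):
            """Lommel-Seeliger limb-darkening term (assumed disk function)."""
            return 2.0 * mu0 / (mu0 + mu)

        def normal_albedo(radiance_factor, inc, emi, alpha, alpha0=np.radians(30)):
            """Reduce observed I/F to normal albedo; all angles in radians.

            Divides out the disk function and an exponential phase curve;
            alpha0 is a placeholder phase-curve parameter.
            """
            mu0, mu = np.cos(inc), np.cos(emi)
            phase = np.exp(-alpha / alpha0)
            return radiance_factor / (disk_function(mu0, mu) * phase)

        # Made-up observation: I/F = 0.03 at 30 deg incidence/emission/phase.
        print(normal_albedo(0.03, np.radians(30), np.radians(30), np.radians(30)))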

  13. Coincidence ion imaging with a fast frame camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lee, Suk Kyoung; Cudry, Fadia; Lin, Yun Fei

    2014-12-15

    A new time- and position-sensitive particle detection system based on a fast frame CMOS (complementary metal-oxide semiconductors) camera is developed for coincidence ion imaging. The system is composed of four major components: a conventional microchannel plate/phosphor screen ion imager, a fast frame CMOS camera, a single anode photomultiplier tube (PMT), and a high-speed digitizer. The system collects the positional information of ions from a fast frame camera through real-time centroiding while the arrival times are obtained from the timing signal of a PMT processed by a high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum of a PMT. Efficient computer algorithms are developed to process camera frames and digitizer traces in real-time at 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched co-fragments pair (methyl and iodine cations) produced from strong field dissociative double ionization of methyl iodide.

  14. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  15. Regolith Depth, Mobility, and Variability on Vesta from Dawn's Low Altitude Mapping Orbit

    NASA Technical Reports Server (NTRS)

    Denevi, B. W.; Coman, E. I.; Blewett, D. T.; Mittlefehldt, D. W.; Buczkowski, D. L.; Combe, J.-P.; De Sanctis, M. C.; Jaumann, R.; Li, J.-Y.; Marchi, S.; et al.

    2012-01-01

    Regolith, the fragmental debris layer formed from impact events of all sizes, covers the surface of all asteroids imaged by spacecraft to date. Here we use Framing Camera (FC) images [1] acquired by the Dawn spacecraft [2] from its low-altitude mapping orbit (LAMO) of 210 km (pixel scales of 20 m) to characterize regolith depth, variability, and mobility on Vesta, and to locate areas of especially thin regolith and exposures of competent material. These results will help to evaluate how the surface of this differentiated asteroid has evolved over time, and provide key contextual information for understanding the origin and degree of mixing of the surficial materials for which compositions are estimated [3,4] and the causes of the relative spectral immaturity of the surface [5]. Vestan regolith samples, in the form of howardite meteorites, can be studied in the laboratory to provide complementary constraints on the regolith process [6].

  16. Simultaneous tracking and regulation visual servoing of wheeled mobile robots with uncalibrated extrinsic parameters

    NASA Astrophysics Data System (ADS)

    Lu, Qun; Yu, Li; Zhang, Dan; Zhang, Xuebo

    2018-01-01

    This paper presents a global adaptive controller that simultaneously solves tracking and regulation for wheeled mobile robots with unknown depth and uncalibrated camera-to-robot extrinsic parameters. The rotational angle and the scaled translation between the current camera frame and the reference camera frame, as well as those between the desired camera frame and the reference camera frame, can be calculated in real time by using pose estimation techniques. A transformed system is first obtained, for which an adaptive controller is then designed to accomplish both tracking and regulation tasks; the controller synthesis is based on Lyapunov's direct method. Finally, the effectiveness of the proposed method is illustrated by a simulation study.

  17. Hardware accelerator design for tracking in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil

    2011-10-01

    Smart cameras are important components in video analysis. For video analysis, smart cameras need to detect interesting moving objects, track such objects from frame to frame, and perform analysis of object tracks in real time. Therefore, the use of real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (like a PowerPC) achieves a low frame rate, far from real-time requirements. This paper presents a SIMD-approach-based hardware accelerator designed for real-time tracking of objects in a scene. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250×200 resolution grayscale video.

  18. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Padgett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies in an orbit above it. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie-point data is required.

  19. Coincidence electron/ion imaging with a fast frame camera

    NASA Astrophysics Data System (ADS)

    Li, Wen; Lee, Suk Kyoung; Lin, Yun Fei; Lingenfelter, Steven; Winney, Alexander; Fan, Lin

    2015-05-01

    A new time- and position-sensitive particle detection system based on a fast-frame CMOS camera has been developed for coincidence electron/ion imaging. The system is composed of three major components: a conventional microchannel plate (MCP)/phosphor screen electron/ion imager, a fast-frame CMOS camera, and a high-speed digitizer. The system collects the positional information of ions/electrons from the fast-frame camera through real-time centroiding, while the arrival times are obtained from the timing signal of the MCPs processed by the high-speed digitizer. Multi-hit capability is achieved by correlating the intensity of electron/ion spots on each camera frame with the peak heights on the corresponding time-of-flight spectrum. Efficient computer algorithms have been developed to process camera frames and digitizer traces in real time at a 1 kHz laser repetition rate. We demonstrate the capability of this system by detecting a momentum-matched pair of co-fragments (methyl and iodine cations) produced from strong-field dissociative double ionization of methyl iodide. We further show that a time resolution of 30 ps can be achieved when measuring the electron TOF spectrum, which enables the new system to achieve good energy resolution along the TOF axis.
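
    The multi-hit correlation step lends itself to a compact illustration. The sketch below (Python, with hypothetical function and variable names; not the authors' real-time code) centroids bright spots in one camera frame and pairs them with TOF peaks by rank-ordering spot intensity against peak height, on the stated assumption that brighter phosphor spots correspond to larger MCP pulses:

        import numpy as np
        from scipy import ndimage

        def centroid_spots(frame, threshold):
            """Label connected bright regions; return (y, x) centroids and summed intensities."""
            mask = frame > threshold
            labels, n = ndimage.label(mask)
            centroids = ndimage.center_of_mass(frame, labels, range(1, n + 1))
            intensities = ndimage.sum(frame, labels, range(1, n + 1))
            return np.array(centroids), np.array(intensities)

        def correlate_hits(spot_intensities, tof_peak_heights):
            """Pair camera spots with TOF peaks by matching intensity rank to peak-height rank."""
            spot_order = np.argsort(spot_intensities)[::-1]   # brightest spot first
            peak_order = np.argsort(tof_peak_heights)[::-1]   # tallest TOF peak first
            n = min(len(spot_order), len(peak_order))
            return list(zip(spot_order[:n], peak_order[:n]))

    In the real system this runs at the 1 kHz laser repetition rate, so the per-frame work must stay well under a millisecond.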

  20. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single-lens, reflex-viewing design with a horizontal pull-across of 15 perforations per frame. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
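
    The quoted transport rate follows directly from the frame geometry; a quick back-of-envelope check (the 0.187 in perforation pitch is an assumption, not stated in the text):

        fps = 24                     # standard frame rate, frames/s
        perfs_per_frame = 15         # horizontal pull-across, perforations per frame
        perf_pitch_in = 0.187        # assumed standard 65/70 mm perforation pitch, inches

        inches_per_frame = perfs_per_frame * perf_pitch_in   # ~2.81 in, consistent with the 2.77 in frame plus gap
        feet_per_minute = inches_per_frame * fps * 60 / 12
        print(round(feet_per_minute, 1))                     # ~336.6, matching the quoted 336 ft/min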

  1. A novel simultaneous streak and framing camera without principle errors

    NASA Astrophysics Data System (ADS)

    Jingzhen, L.; Fengshan, S.; Ningwen, L.; Xiangdong, G.; Bin, H.; Qingyang, W.; Hongyi, C.; Yi, C.; Xiaowei, L.

    2018-02-01

    A novel simultaneous streak and framing camera with continuous access has been developed; complete records of this kind are important for the exact interpretation and precise evaluation of many detonation events and shockwave phenomena. The camera, with a maximum imaging frequency of 2 × 10⁶ fps and a maximum scanning velocity of 16.3 mm/μs, has fine imaging properties: an eigen resolution of over 40 lp/mm in the temporal direction and over 60 lp/mm in the spatial direction, with zero framing-frequency principle error for framing records, and a maximum time-resolving power of 8 ns with a scanning-velocity nonuniformity of 0.136%~-0.277% for streak records. Test data have verified the performance of the camera quantitatively. The camera, which simultaneously obtains frames and a streak that are parallax-free and share an identical time base, is characterized by a plane optical system at oblique incidence (as distinct from a spatial system), an innovative camera obscura without principle errors, and a high-velocity motor-driven beryllium-like rotating mirror made of high-strength aluminum alloy with a cellular lateral structure. Experiments demonstrate that the camera is very useful and reliable for taking high-quality pictures of detonation events.

  2. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in "High-Frame-Rate CCD Camera Having Subwindow Capability" (NPO-30564), NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field-programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  3. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  4. Initial Demonstration of 9-MHz Framing Camera Rates on the FAST UV Drive Laser Pulse Trains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lumpkin, A. H.; Edstrom Jr., D.; Ruan, J.

    2016-10-09

    We report the configuration of a Hamamatsu C5680 streak camera as a framing camera to record transverse spatial information of green-component laser micropulses at 3- and 9-MHz rates for the first time. The latter is near the time scale of the ~7.5-MHz revolution frequency of the Integrable Optics Test Accelerator (IOTA) ring and its expected synchrotron radiation source temporal structure. The 2-D images are recorded with a Gig-E readout CCD camera. We also report a first proof of principle with an OTR source using the linac streak camera in a semi-framing mode.

  5. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging

    DOE PAGES

    Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...

    2016-02-26

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing camera head to prevent these artifacts.

  6. Advances in x-ray framing cameras at the National Ignition Facility to improve quantitative precision in x-ray imaging.

    PubMed

    Benedetti, L R; Holder, J P; Perkins, M; Brown, C G; Anderson, C S; Allen, F V; Petre, R B; Hargrove, D; Glenn, S M; Simanovskaia, N; Bradley, D K; Bell, P

    2016-02-01

    We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. We have developed a device that can be added to the framing camera head to prevent these artifacts.

  7. Mapping Vesta Mid-Latitude Quadrangle V-12EW: Mapping the Edge of the South Polar Structure

    NASA Astrophysics Data System (ADS)

    Hoogenboom, T.; Schenk, P.; Williams, D. A.; Hiesinger, H.; Garry, W. B.; Yingst, R.; Buczkowski, D.; McCord, T. B.; Jaumann, R.; Pieters, C. M.; Gaskell, R. W.; Neukum, G.; Schmedemann, N.; Marchi, S.; Nathues, A.; Le Corre, L.; Roatsch, T.; Preusker, F.; White, O. L.; DeSanctis, C.; Filacchione, G.; Raymond, C. A.; Russell, C. T.

    2011-12-01

    NASA's Dawn spacecraft arrived at the asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC; spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR; spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-12EW. This quadrangle is dominated by the arcuate edge of the large 460+ km diameter south polar topographic feature first observed by HST (Thomas et al., 1997). Sparsely cratered, the portion of this feature covered in V-12EW is characterized by arcuate ridges and troughs forming a generalized arcuate pattern. Mapping of this terrain and the transition to areas to the north will be used to test whether this feature has an impact or other (e.g., internal) origin. We are also using FC stereo and VIR images to assess whether there are any compositional differences between this terrain and areas further to the north, and image data to evaluate the distribution and age of young impact craters within the map area. The authors acknowledge the support of the Dawn Science, Instrument and Operations Teams.

  8. Image synchronization for 3D application using the NanEye sensor

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the Slave cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or micro-invasive surgery.
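
    The frequency-locking scheme amounts to a feedback loop on the measured line period. A minimal sketch of one regulator iteration (hypothetical names, gain, and voltage limits; the actual control core is FPGA logic, not Python):

        def regulate_supply(v_now, period_meas, period_target, kp=0.01,
                            v_min=1.6, v_max=2.4):
            """One proportional-control step: a longer-than-target line period
            means the self-timed sensor is running slow, so nudge the supply
            voltage up; clamp to an assumed safe operating range."""
            error = period_meas - period_target
            v_next = v_now + kp * error
            return min(max(v_next, v_min), v_max)

    In the Master-Slave arrangement, each Slave would take the Master's measured line period as its target, which locks frequency; an additional phase-error term at the frame boundary locks frame phase.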

  9. 33. DETAILS OF SAMPLE SUPPORT FRAME ASSEMBLY, LIFTING LUG, AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    33. DETAILS OF SAMPLE SUPPORT FRAME ASSEMBLY, LIFTING LUG, AND SAMPLE CARRIER ROD. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-S-5. INEL INDEX CODE NUMBER: 075 0701 60 851 151979. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  10. Development of two-framing camera with large format and ultrahigh speed

    NASA Astrophysics Data System (ADS)

    Jiang, Xiaoguo; Wang, Yuan; Wang, Yi

    2012-10-01

    High-speed imaging facilities are important and necessary for time-resolved measurement systems with multi-framing capability. A framing camera that satisfies the demands of both high speed and large format needs to be specially developed for ultrahigh-speed research. A two-framing camera system with high sensitivity and time resolution has been developed and used for the diagnosis of electron beam parameters of the Dragon-I linear induction accelerator (LIA). The camera system, which adopts the principle of light-beam splitting in the image space behind a long-focal-length lens, mainly consists of a lens-coupled gated image intensifier, a CCD camera, and a high-speed shutter trigger device based on a programmable integrated circuit. The fastest gating time is about 3 ns, and the interval time between the two frames can be adjusted discretely in steps of 0.5 ns. Both the gating time and the interval time can be tuned independently up to a maximum of about 1 s. Two images of 1024×1024 pixels each can be captured simultaneously by the camera. Besides, this camera system possesses good linearity, uniform spatial response, and an equivalent background illumination as low as 5 electrons/pix/sec, which fully meets the measurement requirements of the Dragon-I LIA.

  11. SEOS frame camera applications study

    NASA Technical Reports Server (NTRS)

    1974-01-01

    A research and development satellite is discussed which will provide opportunities for observation of transient phenomena that fall within the fixed viewing circle of the spacecraft. Possible applications of frame cameras for SEOS are evaluated. The computed lens characteristics for each camera are listed.

  12. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640x512-pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfers the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full CameraLink(TM) interface to directly stream the data to a frame grabber or dedicated image processing unit. The Cheetah camera is completely under software control.
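
    The on-board memory figure implies a bounded record length at full speed; a rough estimate under an assumed pixel depth (the abstract does not state the bit depth, so 2 bytes/pixel is a guess):

        fps = 1700
        width, height = 640, 512
        bytes_per_pixel = 2                  # assumption: >8-bit ADC output stored as 16 bits
        memory_bytes = 16 * 1024**3          # the stated 16 GB maximum

        rate = fps * width * height * bytes_per_pixel   # ~1.11 GB/s
        seconds = memory_bytes / rate                   # ~15 s of recording at full rate
        print(round(rate / 1e9, 2), round(seconds, 1))

    This also shows why sub-windowing matters: a quarter-size ROI roughly quadruples either the frame rate or the record length within the same memory and link budget.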

  13. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high-frame-rate visible CCD camera capable of operation at up to 200 frames per second is described. The camera produces a 256 x 256 pixel image by using one quadrant of a 512 x 512, 16-port, back-illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct 256 x 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.
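
    The reformatting step is essentially a gather-and-mirror operation. A schematic version with an assumed tap geometry (four abutting 256-row by 64-column stripes, alternate taps read out in reverse because adjacent output amplifiers shift charge in opposite directions; the abstract does not spell out the actual layout):

        import numpy as np

        def reformat(taps):
            """Reassemble four parallel tap streams into one 256 x 256 image."""
            stripes = []
            for i, tap in enumerate(taps):      # taps: four 1-D arrays of 16384 samples
                stripe = tap.reshape(256, 64)
                if i % 2 == 1:
                    stripe = stripe[:, ::-1]    # un-mirror reverse-scanned taps
                stripes.append(stripe)
            return np.hstack(stripes)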

  14. Hardware accelerator design for change detection in smart camera

    NASA Astrophysics Data System (ADS)

    Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil

    2011-10-01

    Smart Cameras are important components in Human-Computer Interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions to select frames with significant changes, to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves a low frame rate, far from real-time requirements, on the general-purpose processors (like the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time, using the clustering-based change detection scheme. The system is designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-IIPro FPGA board. The resulting frame rate is 30 frames per second for QVGA-resolution video in grayscale.
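
    A per-pixel clustering detector of the kind the accelerator implements can be sketched with array operations (schematic Python with assumed parameters; the actual design is VHDL on the FPGA):

        import numpy as np

        K, RADIUS, ALPHA = 3, 15, 0.05   # clusters per pixel, match radius, learning rate (assumed)

        def detect_changes(frame, centroids):
            """frame: (H, W) float image; centroids: (H, W, K) running cluster
            centers per pixel. A pixel is flagged as changed when its new value
            matches none of its clusters within RADIUS gray levels."""
            dist = np.abs(centroids - frame[..., None])
            idx = dist.argmin(axis=-1)[..., None]                     # nearest cluster
            nearest = np.take_along_axis(centroids, idx, -1)[..., 0]
            changed = np.abs(nearest - frame) > RADIUS
            # matched pixels drift their cluster toward the new value;
            # changed pixels overwrite the nearest cluster with the new value
            new_center = np.where(changed, frame, nearest + ALPHA * (frame - nearest))
            np.put_along_axis(centroids, idx, new_center[..., None], -1)
            return changed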

  15. Dawn Orbit Determination Team: Modeling and Fitting of Optical Data at Vesta

    NASA Technical Reports Server (NTRS)

    Kennedy, Brian; Abrahamson, Matt; Ardito, Alessandro; Haw, Robert; Mastrodemos, Nicholas; Nandi, Sumita; Park, Ryan; Rush, Brian; Vaughan, Andrew

    2013-01-01

    The Dawn spacecraft was launched on September 27th, 2007. Its mission is to consecutively rendezvous with and observe the two largest bodies in the main asteroid belt, Vesta and Ceres. It has already completed over a year's worth of direct observations of Vesta (spanning from early 2011 through late 2012) and is currently on a cruise trajectory to Ceres, where it will begin scientific observations in mid-2015. Achieving this data collection required careful planning and execution from all Dawn operations teams. Dawn's Orbit Determination (OD) team was tasked with reconstruction of the as-flown trajectory as well as determination of the Vesta rotational rate, pole orientation and ephemeris, among other Vesta parameters. Improved knowledge of the Vesta pole orientation, specifically, was needed to target the final maneuvers that inserted Dawn into the first science orbit at Vesta. To solve for these parameters, the OD team used radiometric data from the Deep Space Network (DSN) along with optical data reduced from Dawn's Framing Camera (FC) images. This paper will describe the initial determination of the Vesta ephemeris and pole using a combination of radiometric and optical data, and also the progress the OD team has made since then to further refine the knowledge of Vesta's body frame orientation and rate with these data.

  16. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    This paper outlines the R&D activities on high-speed video cameras that have been carried out at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been conducted (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searches of journals and websites. Both support the need to develop a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991, and a video camera with triple sensors followed in 1996, using the same sensor developed for the previous camera; its frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capture. The idea of a 1-million-fps video camera based on an ISIS (In-situ Storage Image Sensor) was first proposed in 1993 and has been continuously improved; a test sensor developed in early 2000 successfully captured images at 62,500 fps. A prototype ISIS is currently being designed and will, hopefully, be fabricated in the near future. Epoch-making cameras developed by others in the history of high-speed video cameras are also briefly reviewed.

  17. Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera

    NASA Technical Reports Server (NTRS)

    Parker, Vance C.; Crews, Jeanne Lee

    1988-01-01

    The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1 × 10⁶ frames/sec with exposure times of 20 nsec.

  18. Characterization of the ePix100 prototype: a front-end ASIC for second-generation LCLS integrating hybrid pixel detectors

    NASA Astrophysics Data System (ADS)

    Caragiulo, P.; Dragone, A.; Markovic, B.; Herbst, R.; Nishimura, K.; Reese, B.; Herrmann, S.; Hart, P.; Blaj, G.; Segal, J.; Tomada, A.; Hasi, J.; Carini, G.; Kenney, C.; Haller, G.

    2014-09-01

    ePix100 is the first variant of a novel class of integrating pixel ASIC architectures optimized for processing signals in second-generation LINAC Coherent Light Source (LCLS) X-ray cameras. ePix100 is optimized for ultra-low-noise applications requiring high spatial resolution. ePix ASICs are based on a common platform composed of a random-access analog pixel matrix with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. The ePix100 variant has 50 μm × 50 μm pixels arranged in a 352 × 384 matrix, a resolution of 50 e- r.m.s., and a signal range of 35 fC (100 photons at 8 keV). In its final version it will be able to sustain a frame rate of 1 kHz. A first prototype has been fabricated and characterized, and the measurement results are reported here.

  19. Types and Distribution of Bright Materials in 4 Vesta

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, D. W.; Li, Jian-Yang; Pieters, C. M.; De Sanctis, M. C.; Schroder, S. E.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Yingst, R. A.

    2012-01-01

    A strong case can be made that Vesta is the parent asteroid of the howardite, eucrite and diogenite (HED) meteorites [1]. As such, we have over a century of detailed sample analysis experience to call upon when formulating hypotheses regarding plausible lithologic diversity on Vesta. It thus came as a surprise when Dawn's Framing Camera (FC) first revealed distinctly localized materials of exceptionally low and high albedos, often closely associated. To understand the nature and origin of these materials, and how they inform us of the geological evolution of Vesta, task forces began their study. An initial step of the scientific endeavor is to develop a descriptive, non-genetic classification of objects to use as a basis for developing hypotheses and observational campaigns. Here we present a catalog of the types of light-toned deposits and their distribution across Vesta. A companion abstract [2] discusses possible origins of bright materials and the constraints they suggest for vestan geology.

  20. Multiple-frame IR photo-recorder KIT-3M

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E; Wilkins, P; Nebeker, N

    2006-05-15

    This paper reports the experimental results of a high-speed multi-frame infrared camera which has been developed in Sarov at VNIIEF. Earlier [1] we discussed the possibility of creating a multi-frame infrared-radiation photo-recorder with a framing frequency of about 1 MHz. The basis of the photo-recorder is a semiconductor ionization camera [2, 3], which converts IR radiation in the spectral range 1-10 micrometers into a visible image. Several sequential thermal images are registered by using the IR converter in conjunction with a multi-frame electron-optical camera. In the present report we discuss the performance characteristics of a prototype commercial 9-frame high-speed IR photo-recorder. The image converter records infrared images of thermal fields corresponding to temperatures ranging from 300 °C to 2000 °C with an exposure time of 1-20 μs at a frame frequency up to 500 kHz. The IR photo-recorder camera is useful for recording the time evolution of thermal fields in fast processes such as gas dynamics, ballistics, pulsed welding, thermal processing, the automotive industry, aircraft construction, and pulsed-power electric experiments, and for the measurement of spatial mode characteristics of IR-laser radiation.

  1. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams to detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is its ability to overcome occlusions: an object that is only partially visible, or hidden, in one camera may be fully visible in another camera. Object registration is achieved by determining the location of common features in the moving object across simultaneous frames. Perspective differences are adjusted. Combining information from images from multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by determining anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as variations in the direction of motion. Fast cameras increase accuracy but limit the speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously, applicable, for example, in wireless sensor networks for surveillance or navigation.

  2. Arruntia Crater: A Rare Window into Vesta’s Northern Hemisphere

    NASA Astrophysics Data System (ADS)

    Sunshine, Jessica; Cheek, Leah

    2015-11-01

    One of the intriguing results of the Dawn mission to Vesta was the discovery that the only deposits containing significant olivine, a key mineralogic indicator of primitive materials, occur in two shallow craters in the northern hemisphere. Numerous investigations into these exposures using either the Framing Camera (FC) or the VIR spectrometer typically find similarities between the olivine outcropping in Bellicia’s wall and in the Arruntia ejecta. Our own investigations, using a hybrid VIR-FC approach, suggest an important distinction between the exposures at the two craters. Specifically, we find that the proximal Arruntia ejecta are dominated not by an olivine-rich component (although isolated examples occur), but instead by a more evolved, eucritic component. These interspersed eucritic materials resemble olivine-rich materials in FC data because both components pull the 1 μm band to longer wavelengths. Ultimately, however, they are distinguished by the position and strength of the 2 μm band, which is not covered by the FC wavelengths. The Arruntia ejecta also appear olivine-like in some parameterizations of VIR spectra for the area, but closer examination of the full spectra at a fine spatial scale clearly suggests the presence of two different materials. Interestingly, the approach used here also reveals a separate diogenitic component in the distal Arruntia ejecta as well as in isolated locations within Arruntia’s wall. Initial evaluation of stratigraphic relationships in the Arruntia ejecta suggests a pre-impact sequence of eucrite -> olivine-rich -> diogenite, grading from depth to the shallowest subsurface (although the small size of Arruntia means that this entire assemblage was excavated from no more than a few kilometers below the surface). These results illustrate two main points: first, that pyroxene-bearing materials rich in olivine are difficult to distinguish from evolved pyroxenes in the absence of full-resolution spectroscopy at a fine spatial scale. Second, we find that Arruntia crater is likely a unique location on Vesta where pervasive mechanical gardening has not yet masked the stratigraphic relationships among endmember diogenitic and eucritic components and olivine enrichments.

  3. Three-dimensional spectral analysis of compositional heterogeneity at Arruntia crater on (4) Vesta using Dawn FC

    NASA Astrophysics Data System (ADS)

    Thangjam, Guneshwar; Nathues, Andreas; Mengel, Kurt; Schäfer, Michael; Hoffmann, Martin; Cloutis, Edward A.; Mann, Paul; Müller, Christian; Platz, Thomas; Schäfer, Tanja

    2016-03-01

    We introduce an innovative three-dimensional spectral approach (a three-band parameter space with polyhedrons) that can be used for both qualitative and quantitative analyses, improving the characterization of the surface compositional heterogeneity of (4) Vesta. It is an advanced and more robust methodology compared to the standard two-dimensional spectral approach (two-band parameter space). The Dawn Framing Camera (FC) color data obtained during the High Altitude Mapping Orbit (resolution ∼60 m/pixel) are used. The main focus is on the howardite-eucrite-diogenite (HED) lithologies containing carbonaceous chondritic material, olivine, and impact melt. Archived spectra of HEDs and their mixtures from the RELAB, HOSERLab and USGS databases, as well as our laboratory-measured spectra, are used for this study. Three-dimensional convex polyhedrons are defined using computed band-parameter values of laboratory spectra. Polyhedrons based on the parameters of Band Tilt (R0.92μm/R0.96μm), Mid Ratio ((R0.75μm/R0.83μm)/(R0.83μm/R0.92μm)) and reflectance at 0.55 μm (R0.55μm) are chosen for the present analysis. An algorithm in the IDL programming language is employed to assign FC data points to the respective polyhedrons. The Arruntia region in the northern hemisphere of Vesta is selected for a case study because of its geological and mineralogical importance. We observe that this region is eucrite-dominated howarditic in composition. The extent of olivine-rich exposures within an area of 2.5 crater radii is ∼12% larger than the previous finding (Thangjam, G. et al. [2014]. Meteorit. Planet. Sci. 49, 1831-1850). Lithologies of nearly pure CM2-chondrite, olivine, glass, and diogenite are not found in this region. Although there are no unambiguous spectral features of impact melt, the investigation of morphological features using FC clear filter data from the Low Altitude Mapping Orbit (resolution ∼18 m/pixel) suggests potential impact-melt features inside and outside of the crater. Our spectral approach can be extended to the entire vestan surface to study its heterogeneous surface composition and geology.
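
    The polyhedron-assignment step translates naturally into half-space tests: build a convex hull over the laboratory band-parameter points, then check which FC pixels satisfy every facet inequality. A sketch using scipy in place of the authors' IDL routine:

        import numpy as np
        from scipy.spatial import ConvexHull

        def build_polyhedron(lab_points):
            """lab_points: (N, 3) laboratory (Band Tilt, Mid Ratio, R_0.55um) values.
            Returns facet normals A and offsets b; a point x is inside when A @ x + b <= 0."""
            eqs = ConvexHull(lab_points).equations   # each row is [normal | offset]
            return eqs[:, :3], eqs[:, 3]

        def assign(fc_points, A, b, tol=1e-9):
            """fc_points: (M, 3) per-pixel band parameters; returns boolean membership."""
            return np.all(fc_points @ A.T + b <= tol, axis=1)

    Each lithology class would get its own polyhedron; how pixels falling inside more than one polyhedron are resolved is a design choice not specified here.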

  4. Ceres' Yellow Spots - Observations with Dawn Framing Camera

    NASA Astrophysics Data System (ADS)

    Schäfer, Michael; Schäfer, Tanja; Cloutis, Edward A.; Izawa, Matthew R. M.; Platz, Thomas; Castillo-Rogez, Julie C.; Hoffmann, Martin; Thangjam, Guneshwar S.; Kneissl, Thomas; Nathues, Andreas; Mengel, Kurt; Williams, David A.; Kallisch, Jan; Ripken, Joachim; Russell, Christopher T.

    2016-04-01

    The Framing Camera (FC) onboard the Dawn spacecraft acquired several spectral data sets of (1) Ceres with increasing spatial resolution (up to 135 m/pixel with nearly global coverage). The FC is equipped with seven color filters (0.4-1.0 μm) plus one panchromatic ('clear') filter [1]. We produced spectral mosaics using photometrically corrected FC color filter images as described in [2]. Even early FC color mosaics obtained during Dawn's approach unexpectedly exhibited quite a diversity of surface materials on Ceres. Besides the ordinary cerean surface material, potentially composed of ammoniated phyllosilicates [3] or some other alteration product of carbonaceous chondrites [4], a large number of bright spots were found on Ceres [5]. These spots are substantially brighter than the average surface (exceeding three times its standard deviation), with the spots within Occator crater being the brightest and most prominent examples (reflectance more than 10 times the average of Ceres). We also observed bright spots that are distinguished by their obvious yellow color. This yellow color appears both in a 'true color' RGB display (R=0.65, G=0.55, B=0.44 μm) and in a false color display (R=0.97, G=0.75, B=0.44 μm) using a linear 2% stretch. Their spectra show a steep red slope between 0.44 and 0.55 μm (UV drop-off). In contrast to these yellow spots, the vast majority of bright spots appear white in the aforementioned color displays and exhibit blue-sloped spectra, except for a shallow UV drop-off. Thus, yellow spots are easily distinguishable from white spots and the remaining cerean surface by their high values in the 0.55/0.44 μm ratio. We found 8 occurrences of yellow spots on Ceres. Most of them (>70 individual spots) occur both inside and outside crater Dantu, where white spots are also found in the immediate vicinity. Besides Dantu, further occurrences with only a few yellow spots were found at craters Ikapati and Gaue. Less definite occurrences are found at 97°E/24°N, 205°E/22°S, 244°E/31°S, 213°E/37.5°S, and at Azacca crater. Often, the yellow spots exhibit well-defined boundaries, but sometimes we found a fainter diffuse yellow tinge around them, enclosing several individual yellow spots. Rarely, they are associated with mass wasting on steep slopes, most notably on the SE crater wall of Dantu. Recently acquired clear filter images with 35 m/pixel resolution indicate that only a small number of yellow spots are situated near craters. These craters could also be interpreted as pits probably formed by exhalation vents. More frequently, we found yellow spots linked to small positive landforms. Only a few of the yellow spots seem to be related to crater floor fractures. As with the white bright spots, which were interpreted as evaporite deposits of magnesium-sulfate salts [5], the yellow spots appear to emerge from the sub-surface as a result of material transport, possibly driven by sublimation of ice [5], where vents or cracks penetrate the insulating lag deposits. However, in contrast to the white spots, a different mineralogy seems to have emerged at the yellow spots. First comparisons of FC spectra with laboratory spectra indicate pyrite/marcasite as a possible component. The relatively strong UV drop-off may at least indicate some kind of sulfide- or sulfur-bearing mixture. As identifications of minerals based on FC spectra are often ambiguous, further investigations with high-resolution data yet to come from Dawn's VIR spectrometer may shed light on the compositional differences between yellow and white bright spots. References: [1] Sierks, H. et al., Space Sci. Rev., 163, 263-327, 2011. [2] Schäfer, M. et al., EPSC, Vol. 10, #488, 2015. [3] De Sanctis, M. C. et al., Nature 528, 241-244, 2015. [4] Schäfer, T. et al., EGU, #12370, 2016. [5] Nathues, A. et al., Nature 528, 237-240, 2015.
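
    The two detection criteria quoted above (brightness exceeding the scene mean by three standard deviations, and a high 0.55/0.44 μm ratio for the yellow class) reduce to simple image arithmetic. A schematic version, assuming photometrically corrected reflectance mosaics as inputs and a placeholder ratio threshold:

        import numpy as np

        def classify_spots(r044, r055, r_clear, k_sigma=3.0, yellow_cut=1.0):
            """r044, r055: reflectance mosaics at 0.44 and 0.55 um; r_clear:
            broadband reflectance. yellow_cut is a placeholder -- the text reports
            high 0.55/0.44 values for yellow spots but not the cutoff used."""
            bright = r_clear > r_clear.mean() + k_sigma * r_clear.std()
            ratio = np.divide(r055, r044, out=np.zeros_like(r055), where=r044 > 0)
            yellow = bright & (ratio > yellow_cut)
            white = bright & ~yellow
            return yellow, white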

  5. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full-frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  6. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  7. A Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  8. Integration of image capture and processing: beyond single-chip digital camera

    NASA Astrophysics Data System (ADS)

    Lim, SukHwan; El Gamal, Abbas

    2001-05-01

    An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high-frame-rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard-frame-rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
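
    The multiple-capture idea can be illustrated with a last-sample-before-saturation rule: read the sensor several times at increasing integration times and, per pixel, keep the longest unsaturated sample normalized to a common scale (schematic numpy with an assumed saturation level; the paper's version runs on-chip):

        import numpy as np

        def multiple_capture_hdr(frames, t_exp, sat=4000):
            """frames: (H, W) arrays read at increasing exposure times t_exp.
            Keeping the longest unsaturated sample per pixel extends dynamic
            range: bright pixels use short reads, dark pixels long ones."""
            out = np.zeros_like(frames[0], dtype=float)
            for frame, t in zip(frames, t_exp):   # shortest exposure first
                ok = frame < sat                  # pixels still below saturation
                out[ok] = frame[ok] / t           # longer reads overwrite earlier ones
            return out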

  9. Visible camera imaging of plasmas in Proto-MPEX

    NASA Astrophysics Data System (ADS)

    Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.

    2015-11-01

    The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full-frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the "region of interest" that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for "true-color" imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.

  10. Real time heart rate variability assessment from Android smartphone camera photoplethysmography: Postural and device influences.

    PubMed

    Guede-Fernandez, F; Ferrer-Mileo, V; Ramos-Castro, J; Fernandez-Chimeno, M; Garcia-Gonzalez, M A

    2015-01-01

    The aim of this paper is to present a smartphone-based system for real-time pulse-to-pulse (PP) interval time series acquisition by frame-to-frame camera image processing. The developed smartphone application acquires image frames from the built-in rear camera at the maximum available rate (30 Hz), and the smartphone GPU is used via the Renderscript API for high-performance frame-by-frame image acquisition and computation in order to obtain the PPG signal and the PP interval time series. The relative error of the mean heart rate is negligible. In addition, the influences of measurement posture and smartphone model on the beat-to-beat error of heart rate and HRV indices have been analyzed. The standard deviation of the beat-to-beat error (SDE) was 7.81 ± 3.81 ms in the worst case. Furthermore, in the supine measurement posture, a significant device influence on the SDE was found: the SDE is lower with the Samsung S5 than with the Motorola X. This study can be applied to analyze the reliability of different smartphone models for HRV assessment from real-time Android camera frame processing.
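
    The per-frame processing reduces each camera image to one photoplethysmographic sample, and PP intervals follow from peak-to-peak spacing. A compact offline equivalent (scipy's peak finder standing in for the paper's GPU pipeline; names are illustrative):

        import numpy as np
        from scipy.signal import find_peaks

        def pp_intervals(frames, fps=30.0):
            """frames: sequence of (H, W) arrays from the rear camera.
            Each frame's mean intensity is one PPG sample; PP intervals are
            the spacings between successive pulse peaks, in milliseconds."""
            ppg = np.array([f.mean() for f in frames])
            ppg -= ppg.mean()                                    # remove the DC level
            peaks, _ = find_peaks(ppg, distance=int(0.4 * fps))  # ~0.4 s refractory gap
            return np.diff(peaks) * 1000.0 / fps

    At 30 Hz the raw sample spacing is ~33 ms, so a real implementation must interpolate around each peak to reach the few-millisecond SDE figures reported above.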

  11. Ambient-Light-Canceling Camera Using Subtraction of Frames

    NASA Technical Reports Server (NTRS)

    Morookian, John Michael

    2004-01-01

    The ambient-light-canceling camera (ALCC) is a proposed near-infrared electronic camera that would utilize a combination of (1) synchronized illumination during alternate frame periods and (2) subtraction of readouts from consecutive frames to obtain images without a background component of ambient light. The ALCC is intended especially for use in tracking the motion of an eye by the pupil center/corneal reflection (PCCR) method. Eye tracking by the PCCR method has shown potential for application in human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological deficiencies. In the PCCR method, an eye is illuminated by near-infrared light from a light-emitting diode (LED). Some of the infrared light is reflected from the surface of the cornea. Some of the infrared light enters the eye through the pupil and is reflected from the back of the eye out through the pupil, a phenomenon commonly observed as the red-eye effect in flash photography. An electronic camera is oriented to image the user's eye. The output of the camera is digitized and processed by algorithms that locate the two reflections. Then, from the locations of the centers of the two reflections, the direction of gaze is computed. As described thus far, the PCCR method is susceptible to errors caused by reflections of ambient light. Although a near-infrared band-pass optical filter can be used to discriminate against ambient light, some sources of ambient light have enough in-band power to compete with the LED signal. The mode of operation of the ALCC would complement or supplant spectral filtering by providing more nearly complete cancellation of the effect of ambient light. In the operation of the ALCC, a near-infrared LED would be pulsed on during one camera frame period and off during the next frame period. Thus, the scene would be illuminated by both the LED (signal) light and the ambient (background) light during one frame period, and by only ambient (background) light during the next frame period. The camera output would be digitized and sent to a computer, wherein the pixel values of the background-only frame would be subtracted from the pixel values of the signal-plus-background frame to obtain signal-only pixel values. To prevent artifacts of motion from entering the images, it would be necessary to acquire image data at a rate greater than the standard video rate of 30 frames per second. For this purpose, the ALCC would exploit a novel control technique developed at NASA's Jet Propulsion Laboratory for advanced charge-coupled-device (CCD) cameras. This technique provides for readout from a subwindow [region of interest (ROI)] within the image frame. Because the desired reflections from the eye would typically occupy a small fraction of the area within the image frame, the ROI capability would make it possible to acquire and subtract pixel values at rates of several hundred frames per second, considerably greater than the standard video rate and sufficient both to (1) suppress motion artifacts and (2) track the motion of the eye between consecutive subtractive frame pairs.
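
    The heart of the ALCC scheme is the pairwise subtraction itself, which is one line of arithmetic once each frame is tagged with its LED state (a schematic for 8-bit frames; the proposed camera would do this on digitized pairs at several hundred frames per second):

        import numpy as np

        def cancel_ambient(frame_led_on, frame_led_off):
            """Subtract the ambient-only frame from the LED+ambient frame.
            Signed arithmetic avoids unsigned underflow; negative residue
            (noise, scene motion) is clipped, leaving the LED-lit signal."""
            diff = frame_led_on.astype(np.int16) - frame_led_off.astype(np.int16)
            return np.clip(diff, 0, None).astype(np.uint8)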

  12. 75 FR 61975 - Airworthiness Directives; Airbus Model A300 B4-600 Series Airplanes, Model A300 B4-600R Series...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-10-07

    ... contains regulatory documents having general applicability and legal effect, most of which are keyed ... programme (42,500 FC [flight cycles]), it has been concluded that a reinforcement of the junction of frame... Goal (ESG). * * * [Failure of the frame base], if not corrected, could affect the structural integrity...

  13. Apollo 12 photography 70 mm, 16 mm, and 35 mm frame index

    NASA Technical Reports Server (NTRS)

    1970-01-01

    For each 70-mm frame, the index presents information on: (1) the focal length of the camera, (2) the photo scale at the principal point of the frame, (3) the selenographic coordinates at the principal point of the frame, (4) the percentage of forward overlap of the frame, (5) the sun angle (medium, low, high), (6) the quality of the photography, (7) the approximate tilt (minimum and maximum) of the camera, and (8) the direction of tilt. A brief description of each frame is also included. The index to the 16-mm sequence photography includes information concerning the approximate surface coverage of the photographic sequence and a brief description of the principal features shown. A column of remarks is included to indicate: (1) if the sequence is plotted on the photographic index map and (2) the quality of the photography. The pictures taken using the lunar surface closeup stereoscopic camera (35 mm) are also described in this same index format.

  14. Motion-Blur-Free High-Speed Video Shooting Using a Resonant Mirror

    PubMed Central

    Inoue, Michiaki; Gu, Qingyi; Takaki, Takeshi; Ishii, Idaku; Tajima, Kenji

    2017-01-01

    This study proposes a novel concept of actuator-driven frame-by-frame intermittent tracking for motion-blur-free video shooting of fast-moving objects. The camera frame and shutter timings are controlled for motion blur reduction in synchronization with a free-vibration-type actuator vibrating with a large amplitude at hundreds of hertz so that motion blur can be significantly reduced in free-viewpoint high-frame-rate video shooting for fast-moving objects by deriving the maximum performance of the actuator. We develop a prototype of a motion-blur-free video shooting system by implementing our frame-by-frame intermittent tracking algorithm on a high-speed video camera system with a resonant mirror vibrating at 750 Hz. It can capture 1024 × 1024 images of fast-moving objects at 750 fps with an exposure time of 0.33 ms without motion blur. Several experimental results for fast-moving objects verify that our proposed method can reduce image degradation from motion blur without decreasing the camera exposure time. PMID:29109385

  15. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera poses for each image, and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed to improve the initial guess of the 3D point estimates. The LMDB Builder aggregates feature lists across all frames, manages the process to promote selected features to landmarks, and iteratively calculates the 3D landmark positions using the current camera pose estimations (via an optimal ray projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to landmark database image patches and using the known 3D landmark positions to estimate the current pose.
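
    The iterative 3D-position step can be approximated by a standard least-squares ray intersection: each camera contributes a ray, and the landmark estimate is the point minimizing the summed squared perpendicular distance to all rays (a textbook method offered as a sketch, not necessarily LMDB Builder's exact "optimal ray projection"):

        import numpy as np

        def triangulate(centers, directions):
            """centers: (N, 3) camera positions; directions: (N, 3) unit rays
            toward one landmark. Assumes the rays are not all parallel."""
            A = np.zeros((3, 3))
            b = np.zeros(3)
            for c, d in zip(centers, directions):
                P = np.eye(3) - np.outer(d, d)   # projector onto plane normal to d
                A += P
                b += P @ c
            return np.linalg.solve(A, b)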

  16. A new C-type lectin (FcLec5) from the Chinese white shrimp Fenneropenaeus chinensis.

    PubMed

    Xu, Wen-Teng; Wang, Xian-Wei; Zhang, Xiao-Wen; Zhao, Xiao-Fan; Yu, Xiao-Qiang; Wang, Jin-Xing

    2010-11-01

    C-type lectins are one family of pattern recognition receptors (PRRs) that play important roles in innate immunity. In this work, cDNA and genomic sequences for a new C-type lectin (FcLec5) were obtained from the Chinese white shrimp Fenneropenaeus chinensis. FcLec5 cDNA contains an open reading frame of 1,008 bp, and its genomic sequence is 1,137 bp with 4 exons and 3 introns. The predicted FcLec5 protein contains a signal peptide and two carbohydrate recognition domains (CRDs). The N-terminal CRD of FcLec5 has a predicted carbohydrate recognition motif of Gln-Pro-Asp (QPD), while the C-terminal CRD contains a motif of Glu-Pro-Gln (EPQ). Northern blot analysis showed that FcLec5 mRNA was specifically expressed in hepatopancreas. FcLec5 protein was expressed in hepatopancreas and secreted into hemolymph. Real-time PCR showed that FcLec5 transcript exhibited different expression profiles after immune challenge with Vibrio anguillarum or White Spot Syndrome Virus (WSSV). Recombinant FcLec5 and its two individual CRDs could agglutinate most bacteria tested, and the agglutinating activity was Ca2+-dependent. Moreover, the agglutinating activity toward gram-negative bacteria was higher than that toward gram-positive bacteria. Direct binding assay showed that recombinant FcLec5 could bind to all microorganisms tested (five gram-positive and four gram-negative bacteria, as well as yeast) in a Ca2+-independent manner. Recombinant FcLec5 also bound directly to bacterial peptidoglycan, lipopolysaccharide and lipoteichoic acids. These results suggest that FcLec5 may act as a PRR for bacteria via binding to bacterial cell wall polysaccharides in the Chinese white shrimp.

  17. In vitro and in vivo mapping of the Prunus necrotic ringspot virus coat protein C-terminal dimerization domain by bimolecular fluorescence complementation.

    PubMed

    Aparicio, Frederic; Sánchez-Navarro, Jesús A; Pallás, Vicente

    2006-06-01

    Interactions between viral proteins are critical for virus viability. Bimolecular fluorescent complementation (BiFC) technique determines protein interactions in real-time under almost normal physiological conditions. The coat protein (CP) of Prunus necrotic ringspot virus is required for multiple functions in its replication cycle. In this study, the region involved in CP dimerization has been mapped by BiFC in both bacteria and plant tissue. Full-length and C-terminal deleted forms of the CP gene were fused in-frame to the N- and C-terminal fragments of the yellow fluorescent protein. The BiFC analysis showed that a domain located between residues 9 and 27 from the C-end plays a critical role in dimerization. The importance of this C-terminal region in dimer formation and the applicability of the BiFC technique to analyse viral protein interactions are discussed.

  18. Geologic Structures in Crater Walls on Vesta

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.; Beck, A. W.; Ammannito, E.; Carsenty, U.; DeSanctis, M. C.; LeCorre, L.; McCoy, T. J.; Reddy, V.; Schroeder, S. E.

    2012-01-01

    The Framing Camera (FC) on the Dawn spacecraft has imaged most of the illuminated surface of Vesta with a resolution of approx. 20 m/pixel through different wavelength filters that allow for identification of lithologic units. The Visible and Infrared Mapping Spectrometer (VIR) has imaged the surface at lower spatial resolution but high spectral resolution from 0.25 to 5 microns, which allows for detailed mineralogical interpretation. The FC has imaged geologic structures in the walls of fresh craters and on scarps on the margin of the Rheasilvia basin that consist of cliff-forming, competent units, either as blocks or semi-continuous layers, hundreds of meters to kilometers below the rims. Different units have different albedos, FC color ratios and VIR spectral characteristics, and different units can be juxtaposed in individual craters. We will describe different examples of these competent units and present preliminary interpretations of the structures. A common occurrence is of blocks several hundred meters in size of high albedo (bright) and low albedo (dark) materials protruding from crater walls. In many examples, dark material deposits lie below coherent bright material blocks. In FC Clementine color ratios, bright material is green, indicating a deeper 1 μm pyroxene absorption band. VIR spectra show these to have deeper and wider 1 and 2 μm pyroxene absorption bands than the average vestan surface. The associated dark material has subdued pyroxene absorption features compared to the average vestan surface. Some dark material deposits are consistent with mixtures of HED materials with carbonaceous chondrites. This would indicate that some dark material deposits in crater walls are megabreccia blocks. The same would hold for bright material blocks found above them. Thus, these are not intact crustal units. Marcia crater is atypical in that the dark material forms a semi-continuous, thin layer immediately below bright material. Bright material occurs as one or more layers. In one region, there is an apparent angular unconformity between the bright material and the dark material, where bright material layers appear to be truncated against the underlying dark layer. One crater within the Rheasilvia basin contains two distinct types of bright materials outcropping on its walls, one like that found elsewhere on Vesta and the other an anomalous block 200 m across. This material has the highest albedo, almost twice that of the vestan average. Unlike all other bright materials, this block has a subdued 1 μm pyroxene absorption band in FC color ratios. These data indicate that this block represents a distinct vestan lithology that is rarely exposed.

  19. Encrypting Digital Camera with Automatic Encryption Key Deletion

    NASA Technical Reports Server (NTRS)

    Oakley, Ernest C. (Inventor)

    2007-01-01

    A digital video camera includes an image sensor capable of producing a frame of video data representing an image viewed by the sensor, an image memory for storing video data such as previously recorded frame data in a video frame location of the image memory, a read circuit for fetching the previously recorded frame data, an encryption circuit having an encryption key input connected to receive the previously recorded frame data from the read circuit as an encryption key, an un-encrypted data input connected to receive the frame of video data from the image sensor and an encrypted data output port, and a write circuit for writing a frame of encrypted video data received from the encrypted data output port of the encryption circuit to the memory and overwriting the video frame location storing the previously recorded frame data.
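
    The patent's core idea, keying each frame's encryption on the previously recorded frame, can be sketched in a few lines. The hash-derived XOR keystream below is my stand-in for the patent's encryption circuit, chosen only because it is self-contained; a real implementation would use a vetted cipher.

    ```python
    import hashlib

    def keystream(key: bytes, n: int) -> bytes:
        """SHA-256 in counter mode: expand `key` into n pseudo-random bytes."""
        out = bytearray()
        counter = 0
        while len(out) < n:
            out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
            counter += 1
        return bytes(out[:n])

    def encrypt_frame(frame: bytes, prev_frame: bytes) -> bytes:
        """XOR the new frame with a keystream keyed by the previous frame."""
        ks = keystream(hashlib.sha256(prev_frame).digest(), len(frame))
        return bytes(a ^ b for a, b in zip(frame, ks))

    prev = bytes(256)                     # previously recorded frame data
    enc = encrypt_frame(b"new frame data", prev)
    assert encrypt_frame(enc, prev) == b"new frame data"   # XOR is symmetric
    ```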

  20. ADVANCED HEAT TRANSFER TEST FACILITY, TRA666A. ELEVATIONS. ROOF FRAMING PLAN. ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    ADVANCED HEAT TRANSFER TEST FACILITY, TRA-666A. ELEVATIONS. ROOF FRAMING PLAN. CONCRETE BLOCK SIDING. SLOPED ROOF. ROLL-UP DOOR. AIR INTAKE ENCLOSURE ON NORTH SIDE. F.C. TORKELSON 842-MTR-666-A5, 8/1966. INL INDEX NO. 531-0666-00-851-152258, REV. 2. - Idaho National Engineering Laboratory, Test Reactor Area, Materials & Engineering Test Reactors, Scoville, Butte County, ID

  1. An ultrahigh-speed color video camera operating at 1,000,000 fps with 288 frame memories

    NASA Astrophysics Data System (ADS)

    Kitamura, K.; Arai, T.; Yonai, J.; Hayashida, T.; Kurita, T.; Maruyama, H.; Namiki, J.; Yanagi, T.; Yoshida, T.; van Kuijk, H.; Bosiers, Jan T.; Saita, A.; Kanayama, S.; Hatade, K.; Kitagawa, S.; Etoh, T. Goji

    2008-11-01

    We developed an ultrahigh-speed color video camera that operates at 1,000,000 fps (frames per second) and has the capacity to store 288 frame memories. In 2005, we developed an ultrahigh-speed, high-sensitivity portable color camera with a 300,000-pixel single CCD (ISIS-V4: In-situ Storage Image Sensor, Version 4). Its ultrahigh-speed shooting capability of 1,000,000 fps was made possible by directly connecting CCD storages, which record video images, to the photodiodes of individual pixels. The number of consecutive frames was 144. However, longer capture times were demanded when the camera was used during imaging experiments and for some television programs. To increase ultrahigh-speed capture times, we used a beam splitter and two ultrahigh-speed 300,000-pixel CCDs. The beam splitter was placed behind the pickup lens, with one CCD located at each of its two outputs. A CCD driving unit was developed to drive the two CCDs separately, and the recording period of the two CCDs was switched sequentially. This increased the recording capacity to 288 images, a factor-of-two increase over that of conventional ultrahigh-speed cameras. A drawback was that the beam splitter reduced the incident light on each CCD by a factor of two. To improve the light sensitivity, we developed a microlens array for use with the ultrahigh-speed CCDs. We simulated the operation of the microlens array in order to optimize its shape and then fabricated it using stamping technology. Using this microlens array increased the light sensitivity of the CCDs by an approximate factor of two. By using the beam splitter in conjunction with the microlens array, it was possible to make an ultrahigh-speed color video camera that has 288 frame memories without decreasing the camera's light sensitivity.

  2. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F.A.; Packer, S.; Slatkin, D.N.

    1996-12-10

    A nuclear medicine camera and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera includes a flexible frame containing a window, a photographic film, and a scintillation screen, with or without a gamma-ray collimator. The frame flexes for following the contour of the examination site on the patient, with the window being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film and the radiation source inside the patient. The frame is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms. 11 figs.

  3. Computational Studies of X-ray Framing Cameras for the National Ignition Facility

    DTIC Science & Technology

    2013-06-01

    The NIF is the world's most powerful laser facility and is … a phosphor screen where the output is recorded. The x-ray framing cameras have provided excellent information. As the yields at NIF have increased … experiments on the NIF. The basic operation of these cameras is shown in Fig. 1. Incident photons generate photoelectrons both in the pores of the MCP and …

  4. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013683 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  5. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013687 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  6. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-13

    ISS041-E-013693 (13 Sept. 2014) --- Photographed with a mounted automated camera, this is one of a number of images featuring the European Space Agency's Automated Transfer Vehicle (ATV-5 or Georges Lemaitre) docked with the International Space Station. Except for color changes, the images are almost identical. The variation in color from frame to frame is due to the camera's response to the motion of the orbital outpost, relative to the illumination from the sun.

  7. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
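
    A sketch of the time-of-arrival step: once sound-source positions are known in the camera reference frame, a microphone's coordinates (plus an unknown common emission-time offset) follow from nonlinear least squares over the measured arrival times. scipy is assumed as the solver; all names and values are illustrative.

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    C = 343.0  # speed of sound, m/s

    def locate_mic(src_xyz, toa_s):
        """src_xyz: (N,3) source positions in the camera frame,
        toa_s: (N,) measured arrival times (with unknown common offset)."""
        def residuals(p):
            pos, t0 = p[:3], p[3]
            return np.linalg.norm(src_xyz - pos, axis=1) / C + t0 - toa_s
        sol = least_squares(residuals, x0=np.zeros(4))
        return sol.x[:3]

    # synthetic check: mic at (0.3, -0.2, 1.0), six sources
    rng = np.random.default_rng(0)
    sources = rng.uniform(-2, 2, size=(6, 3))
    true_mic = np.array([0.3, -0.2, 1.0])
    times = np.linalg.norm(sources - true_mic, axis=1) / C + 0.01
    print(locate_mic(sources, times))   # ~ [0.3, -0.2, 1.0]
    ```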

  8. Photometric properties of Ceres from telescopic observations using Dawn Framing Camera color filters

    NASA Astrophysics Data System (ADS)

    Reddy, Vishnu; Li, Jian-Yang; Gary, Bruce L.; Sanchez, Juan A.; Stephens, Robert D.; Megna, Ralph; Coley, Daniel; Nathues, Andreas; Le Corre, Lucille; Hoffmann, Martin

    2015-11-01

    The dwarf planet Ceres is likely differentiated similar to the terrestrial planets but with a water/ice dominated mantle and an aqueously altered crust. Detailed modeling of Ceres' phase function has never been performed to understand its surface properties. The Dawn spacecraft began orbital science operations at the dwarf planet in April 2015. We observed Ceres with flight spares of the seven Dawn Framing Camera color filters mounted on ground-based telescopes over the course of three years to model its phase function versus wavelength. Our analysis shows that the modeled geometric albedos derived from both the IAU HG model and the Hapke model are consistent with a flat and featureless spectrum of Ceres, although the values are ∼10% higher than previous measurements. Our models also suggest a wavelength dependence of Ceres' phase function. The IAU G-parameter and the Hapke single-particle phase function parameter, g, are both consistent with decreasing (shallower) phase slope with increasing wavelength. Such a wavelength dependence of phase function is consistent with reddening of spectral slope with increasing phase angle, or phase-reddening. This phase reddening is consistent with previous spectra of Ceres obtained at various phase angles archived in the literature, and consistent with the fact that the modeled geometric albedo spectrum of Ceres is the bluest of all spectra because it represents the spectrum at 0° phase angle. Ground-based FC color filter lightcurve data are consistent with HST albedo maps confirming that Ceres' lightcurve is dominated by albedo and not shape. We detected a positive correlation between 1.1-μm absorption band depth and geometric albedo suggesting brighter areas on Ceres have absorption bands that are deeper. We did not see the "extreme" slope values measured by Perna et al. (Perna, D., et al. [2015]. Astron. Astrophys. 575 (L1-6)), which they have attributed to "resurfacing episodes" on Ceres.
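
    The IAU H,G magnitude system mentioned above has a standard closed form, sketched here for reference; the example H and G values are typical literature numbers for Ceres, not results from this paper.

    ```python
    import math

    def hg_reduced_mag(H, G, alpha_deg):
        """IAU H,G model: reduced magnitude at phase angle alpha (degrees)."""
        a = math.radians(alpha_deg)
        phi1 = math.exp(-3.33 * math.tan(a / 2) ** 0.63)
        phi2 = math.exp(-1.87 * math.tan(a / 2) ** 1.22)
        return H - 2.5 * math.log10((1 - G) * phi1 + G * phi2)

    # e.g. Ceres-like values (H ~ 3.3, G ~ 0.12) at a 10 degree phase angle;
    # a larger G gives a shallower phase slope
    print(hg_reduced_mag(3.3, 0.12, 10.0))
    ```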

  9. Preliminary Iron Distribution on Vesta

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.

    2013-01-01

    The distribution of iron on the surface of the asteroid Vesta was investigated using Dawn's Gamma Ray and Neutron Detector (GRaND) [1,2]. Iron varies predictably with rock type for the howardite, eucrite, and diogenite (HED) meteorites, thought to be representative of Vesta. The abundance of Fe in howardites ranges from about 12 to 15 wt.%. Basaltic eucrites have the highest abundance, whereas, lower crustal and upper mantle materials (cumulate eucrites and diogenites) have the lowest, and howardites are intermediate [3]. We have completed a mapping study of 7.6 MeV gamma rays produced by neutron capture by Fe as measured by the bismuth germanate (BGO) detector of GRaND [1]. The procedures to determine Fe counting rates are presented in detail here, along with a preliminary distribution map, constituting the necessary initial step to quantification of Fe abundances. We find that the global distribution of Fe counting rates is generally consistent with independent mineralogical and compositional inferences obtained by other instruments on Dawn such as measurements of pyroxene absorption bands by the Visual and Infrared Spectrometer (VIR) [4] and Framing Camera (FC) [5] and neutron absorption measurements by GRaND [6].

  10. CANDU in-reactor quantitative visual-based inspection techniques

    NASA Astrophysics Data System (ADS)

    Rochefort, P. A.

    2009-02-01

    This paper describes two separate visual-based inspection procedures used at CANDU nuclear power generating stations. The techniques are quantitative in nature and are delivered and operated in highly radioactive environments with restrictive access, in one case submerged. Visual-based inspections at stations are typically qualitative in nature: for example, a video system will be used to search for a missing component, inspect for a broken fixture, or locate areas of excessive corrosion in a pipe. In contrast, the methods described here are used to measure characteristic component dimensions that in one case ensure ongoing safe operation of the reactor and in the other support reactor refurbishment. CANDU reactors are Pressurized Heavy Water Reactors (PHWR). The reactor vessel is a horizontal cylindrical low-pressure calandria tank approximately 6 m in diameter and length, containing heavy water as a neutron moderator. Inside the calandria, 380 horizontal fuel channels (FC) are supported at each end by integral end-shields. Each FC holds 12 fuel bundles. The heavy-water primary heat transport water flows through the FC pressure tube, removing the heat from the fuel bundles and delivering it to the steam generator. The general design of the reactor governs both the type of measurements that are required and the methods to perform them. The first inspection procedure is a method to remotely measure the gap between FC and other in-core horizontal components. The technique involves vertically delivering a module with a high-radiation-resistant camera and lighting into the core of a shutdown but fuelled reactor. The measurement is done using a line-of-sight technique between the components; compensation for image perspective and viewing elevation is required. The second inspection procedure measures flaws within the reactor's end-shield FC calandria tube rolled joint area. The FC calandria tube (the outer shell of the FC) is sealed by rolling its ends into the rolled joint area. During reactor refurbishment, the original FC calandria tubes are removed, potentially scratching the rolled joint area and thereby compromising the seal with the new FC calandria tube. The procedure involves delivering an inspection module having a radiation-resistant camera, standard lighting, and a structured-lighting projector. The surface is inspected by rotating the module within the rolled joint area. If a flaw is detected, its depth and width are gauged from the profile variation of the structured lighting in a captured image. As well, the diameter profile of the area is measured from the analysis of a series of captured circumferential images of the structured-lighting profiles on the surface.
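
    For the structured-lighting procedure, the depth gauging reduces to triangulation: a stripe projected at an angle shifts laterally where the surface is recessed. The sketch below assumes a simplified geometry (camera viewing along the surface normal, projector inclined from it); the parameter values are hypothetical.

    ```python
    import math

    def flaw_depth(stripe_shift_px, mm_per_px, triangulation_deg):
        """Depth of a surface recess from the lateral shift of a projected
        light stripe, assuming the camera views along the surface normal and
        the projector is inclined by `triangulation_deg` from it."""
        shift_mm = stripe_shift_px * mm_per_px
        return shift_mm / math.tan(math.radians(triangulation_deg))

    # a 12-pixel stripe shift at 0.05 mm/px with a 30 degree projector angle
    print(f"flaw depth ~ {flaw_depth(12, 0.05, 30.0):.2f} mm")
    ```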

  11. Driving techniques for high frame rate CCD camera

    NASA Astrophysics Data System (ADS)

    Guo, Weiqiang; Jin, Longxu; Xiong, Jingwu

    2008-03-01

    This paper describes a high-frame-rate CCD camera capable of operating at 100 frames/s. The camera uses the Kodak KAI-0340, an interline-transfer CCD with 640 (horizontal) × 480 (vertical) pixels. Two output ports are used to read out the CCD data at pixel rates approaching 30 MHz. Because the vertical charge-transfer registers are not fully opaque, an interline-transfer CCD can produce undesired image artifacts, such as random white spots and smear generated in the registers. To increase the frame rate, a speed-up structure has been incorporated inside the KAI-0340, which makes it vulnerable to a vertical-stripe effect. These phenomena can severely impair image quality. To solve these problems, several electronic methods of eliminating the artifacts are adopted. A special clocking mode dumps the unwanted charge quickly, after which the fast readout of the images, cleared of smear, follows immediately. An amplifier is used to sense and correct delay mismatch between the dual-phase vertical clock pulses; the transition edges become nearly coincident, so the vertical stripes disappear. Results obtained with the CCD camera are shown.

  12. Sequential detection of web defects

    DOEpatents

    Eichel, Paul H.; Sleefe, Gerard E.; Stalker, K. Terry; Yee, Amy A.

    2001-01-01

    A system for detecting defects on a moving web having a sequential series of identical frames uses an imaging device to form a real-time camera image of a frame and a comparator to compare elements of the camera image with corresponding elements of an image of an exemplar frame. The comparator provides an acceptable indication if the pair of elements are determined to be statistically identical, and a defective indication if the pair of elements are determined to be statistically not identical. If the pair of elements is neither acceptable nor defective, the comparator recursively compares the element of said exemplar frame with corresponding elements of other frames on said web until one of the acceptable or defective indications occurs.
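
    The three-way comparator logic can be sketched directly from the claim language: each element is accepted, flagged defective, or deferred to the corresponding element of further frames. The noise-normalized statistic and thresholds below are illustrative choices, not the patent's.

    ```python
    def classify(elem, exemplar, sigma, t_ok=2.0, t_bad=5.0):
        """Three-way decision on one image element vs. the exemplar frame."""
        z = abs(elem - exemplar) / sigma        # noise-normalized difference
        if z < t_ok:
            return "acceptable"
        if z > t_bad:
            return "defective"
        return "undecided"

    def sequential_detect(same_elem_in_frames, exemplar, sigma):
        """Defer 'undecided' elements to the same element in later frames."""
        for elem in same_elem_in_frames:
            verdict = classify(elem, exemplar, sigma)
            if verdict != "undecided":
                return verdict
        return "undecided"    # exhausted all frames without a firm decision

    # element reads 109 (ambiguous), then 131 (clearly off) vs exemplar 100 +/- 3
    print(sequential_detect([109.0, 131.0], exemplar=100.0, sigma=3.0))  # defective
    ```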

  13. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system from low-cost, readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
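
    A sketch of the demultiplexing step implied above: because each LED flash exposes one channel of the 3CCD sensor, a single color frame unpacks into three time-ordered grayscale frames. The flash order used here is an assumption.

    ```python
    import numpy as np

    def demux_spectral_shutter(color_frame, flash_order=("R", "G", "B")):
        """Split one 3CCD color frame into time-ordered grayscale frames.
        color_frame: (H, W, 3) array; channel i was exposed by flash i."""
        chan = {"R": 0, "G": 1, "B": 2}
        return [color_frame[:, :, chan[c]] for c in flash_order]

    frame = np.random.randint(0, 4096, size=(480, 640, 3), dtype=np.uint16)
    t0, t1, t2 = demux_spectral_shutter(frame)  # three sub-exposures in time order
    print(t0.shape, t1.shape, t2.shape)
    ```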

  14. Students' framing of laboratory exercises using infrared cameras

    NASA Astrophysics Data System (ADS)

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-12-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the predict-observe-explain approach of White and Gunstone. The activities involved central thermal concepts that focused on heat conduction and dissipative processes such as friction and collisions. Students' interactions within each activity were videotaped and the analysis focuses on how a purposefully selected group of three students engaged with the exercises. As the basis for an interpretative study, a "thick" narrative description of the students' epistemological and conceptual framing of the exercises and how they took advantage of the disciplinary affordance of IR cameras in the thermal domain is provided. Findings include that the students largely shared their conceptual framing of the four activities, but differed among themselves in their epistemological framing, for instance, in how far they found it relevant to digress from the laboratory instructions when inquiring into thermal phenomena. In conclusion, the study unveils the disciplinary affordances of infrared cameras, in the sense of their use in providing access to knowledge about macroscopic thermal science.

  15. Flexible nuclear medicine camera and method of using

    DOEpatents

    Dilmanian, F. Avraham; Packer, Samuel; Slatkin, Daniel N.

    1996-12-10

    A nuclear medicine camera 10 and method of use photographically record radioactive decay particles emitted from a source, for example a small, previously undetectable breast cancer, inside a patient. The camera 10 includes a flexible frame 20 containing a window 22, a photographic film 24, and a scintillation screen 26, with or without a gamma-ray collimator 34. The frame 20 flexes for following the contour of the examination site on the patient, with the window 22 being disposed in substantially abutting contact with the skin of the patient for reducing the distance between the film 24 and the radiation source inside the patient. The frame 20 is removably affixed to the patient at the examination site for allowing the patient mobility to wear the frame 20 for a predetermined exposure time period. The exposure time may be several days for obtaining early qualitative detection of small malignant neoplasms.

  16. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
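
    The color crosstalk correction mentioned above can be sketched as per-pixel linear unmixing: the observed blue and red channels are modeled as a 2x2 mixture of the two optical paths and inverted. The mixing coefficients below are made-up stand-ins for values a real calibration would provide.

    ```python
    import numpy as np

    # calibration: fraction of each optical path leaking into the other channel
    M = np.array([[1.00, 0.08],    # observed blue = blue view + 8% of red view
                  [0.05, 1.00]])   # observed red  = 5% of blue view + red view
    M_inv = np.linalg.inv(M)

    def unmix(blue_obs, red_obs):
        """Per-pixel 2x2 unmixing of the blue/red channel images."""
        stacked = np.stack([blue_obs.ravel(), red_obs.ravel()])
        blue, red = M_inv @ stacked
        return blue.reshape(blue_obs.shape), red.reshape(red_obs.shape)

    b = np.full((4, 4), 100.0); r = np.full((4, 4), 50.0)
    b_obs, r_obs = b + 0.08 * r, 0.05 * b + r      # simulate crosstalk
    b_rec, r_rec = unmix(b_obs, r_obs)
    print(b_rec[0, 0], r_rec[0, 0])                # ~ 100.0, 50.0
    ```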

  17. 640x480 PtSi Stirling-cooled camera system

    NASA Astrophysics Data System (ADS)

    Villani, Thomas S.; Esposito, Benjamin J.; Davis, Timothy J.; Coyle, Peter J.; Feder, Howard L.; Gilmartin, Harvey R.; Levine, Peter A.; Sauer, Donald J.; Shallcross, Frank V.; Demers, P. L.; Smalser, P. J.; Tower, John R.

    1992-09-01

    A Stirling-cooled 3-5 μm camera system has been developed. The camera employs a monolithic 640 × 480 PtSi-MOS focal plane array. The camera system achieves an NEDT = 0.10 K at a 30 Hz frame rate with f/1.5 optics (300 K background). At a spatial frequency of 0.02 cycles/mrad, the vertical and horizontal Minimum Resolvable Temperature are in the range of MRT = 0.03 K (f/1.5 optics, 300 K background). The MOS focal plane array achieves a resolution of 480 TV lines per picture height, independent of background level and position within the frame.

  18. Multi-camera synchronization core implemented on USB3 based FPGA platform

    NASA Astrophysics Data System (ADS)

    Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado

    2015-03-01

    Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is required to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each camera. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of 3D stereo vision equipment smaller than 3 mm in diameter for medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
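
    A sketch of the regulation idea: each Slave compares its measured line period with the target and nudges its supply voltage proportionally. The sign convention (higher supply voltage shortens the self-timed line period), the gain, and the voltage limits are assumptions for illustration.

    ```python
    def regulate_voltage(v_now, period_meas_us, period_target_us,
                         gain=0.002, v_min=1.6, v_max=2.1):
        """One step of a proportional controller on the sensor supply voltage.
        Assumes a higher supply voltage shortens the self-timed line period."""
        error = period_meas_us - period_target_us   # >0 means running too slow
        v_new = v_now + gain * error                # raise voltage to speed up
        return min(max(v_new, v_min), v_max)        # clamp to safe supply range

    v = 1.8
    for measured in (52.0, 51.2, 50.4, 50.1):       # line period converging (us)
        v = regulate_voltage(v, measured, period_target_us=50.0)
        print(f"supply -> {v:.4f} V")
    ```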

  19. Studies on the formation, temporal evolution and forensic applications of camera "fingerprints".

    PubMed

    Kuppuswamy, R

    2006-06-02

    A series of experiments was conducted by exposing negative film in brand-new cameras of different makes and models. The exposures were repeated at regular time intervals spread over a period of 2 years. The processed film negatives were studied under a stereomicroscope (10-40x) in transmitted illumination for the presence of characterizing features on their four frame edges. These features were then related to those present on the masking frame of the cameras by examining the latter in reflected-light stereomicroscopy (10-40x). The purpose of the study was to determine the origin and permanence of the frame-edge marks, and also the processes by which the marks may alter with time. The investigations arrived at the following conclusions: (i) the edge marks originate principally from imperfections imparted to the film mask during manufacturing, and occasionally from dirt, dust and fiber accumulated on the film mask over an extended time period. (ii) The edge profiles of the cameras remained fixed over a considerable period of time, making them a valuable identification medium. (iii) The marks vary in nature even among cameras manufactured at similar times. (iv) The f/number and object distance have a great effect on the recording of the frame-edge marks during exposure of the film. The above findings serve as a useful addition to the technique of camera edge-mark comparisons.

  20. Laser-Camera Vision Sensing for Spacecraft Mobile Robot Navigation

    NASA Technical Reports Server (NTRS)

    Maluf, David A.; Khalil, Ahmad S.; Dorais, Gregory A.; Gawdiak, Yuri

    2002-01-01

    The advent of spacecraft mobile robots (free-flying sensor platforms and communications devices intended to accompany astronauts or operate remotely on space missions, both inside and outside of a spacecraft) has demanded the development of a simple and effective navigation schema. One such system under exploration involves the use of a laser-camera arrangement to predict the relative positioning of the mobile robot. By projecting laser beams from the robot, a 3D reference frame can be introduced. Thus, as the robot shifts in position, the position reference frame produced by the laser images is correspondingly altered. Using the normalization and camera registration techniques presented in this paper, the relative translation and rotation of the robot in 3D are determined from these reference frame transformations.

  1. Vision Based SLAM in Dynamic Scenes

    DTIC Science & Technology

    2012-12-20

    … the correct relative poses between cameras at frame F. For this purpose, we detect and match SURF features between cameras in different groups, and … all cameras in such a challenging case. For a comparison, we disabled the 'inter-camera pose estimation' and applied the 'intra-camera pose estimation' …

  2. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

    Nowadays digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology and various other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo- and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be divided into signal-dependent shot noise and signal-independent dark temporal noise. For measurement of camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288). These allow precise shot and dark temporal noise measurement but are difficult to implement and time-consuming. Earlier we proposed a method for measuring the temporal noise of photo- and video cameras based on the automatic segmentation of nonuniform targets (ASNT). Only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC) and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). The experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds. The accuracy of the obtained temporal noise values was also estimated.
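
    The two-frame principle can be sketched with a photon-transfer-style fit: differencing two frames of the same scene cancels the fixed-pattern component, and binning the remaining temporal variance against mean signal yields a line whose slope reflects shot noise and whose intercept reflects dark temporal noise. This sketch does not reproduce ASNT's segmentation itself.

    ```python
    import numpy as np

    def temporal_noise_fit(f1, f2, n_bins=32, min_px=50):
        """Fit temporal variance vs. signal from two frames of one scene.
        The frame difference removes fixed-pattern noise; the factor 1/2
        accounts for noise entering from both frames."""
        mean = (f1.astype(float) + f2.astype(float)) / 2
        var = (f1.astype(float) - f2.astype(float)) ** 2 / 2
        edges = np.linspace(mean.min(), mean.max(), n_bins + 1)
        idx = np.digitize(mean.ravel(), edges)
        s, v = [], []
        for i in range(1, n_bins + 1):
            sel = idx == i
            if sel.sum() >= min_px:
                s.append(mean.ravel()[sel].mean())
                v.append(var.ravel()[sel].mean())
        slope, dark_var = np.polyfit(s, v, 1)  # Poisson: var ~ slope*signal + dark
        return slope, dark_var

    rng = np.random.default_rng(1)
    scene = rng.uniform(100, 3000, size=(256, 256))    # nonuniform target
    f1, f2 = rng.poisson(scene), rng.poisson(scene)    # two shot-noise frames
    print(temporal_noise_fit(f1, f2))                  # slope ~ 1 (unity gain)
    ```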

  3. Optical flow estimation on image sequences with differently exposed frames

    NASA Astrophysics Data System (ADS)

    Bengtsson, Tomas; McKelvey, Tomas; Lindström, Konstantin

    2015-09-01

    Optical flow (OF) methods are used to estimate dense motion information between consecutive frames in image sequences. In addition to the specific OF estimation method itself, the quality of the input image sequence is of crucial importance to the quality of the resulting flow estimates. For instance, lack of texture in image frames caused by saturation of the camera sensor during exposure can significantly deteriorate performance. An approach to avoid this negative effect is to use different camera settings when capturing the individual frames. We provide a framework for OF estimation on such sequences that contain differently exposed frames. Information from multiple frames is combined into a total cost functional such that the lack of an active data term for saturated image areas is avoided. Experimental results demonstrate that using alternate camera settings to capture the full dynamic range of an underlying scene can clearly improve the quality of flow estimates. When saturation of image data is significant, the proposed methods show superior performance in terms of lower endpoint errors of the flow vectors compared to a set of baseline methods. Furthermore, we provide some qualitative examples of how and when our method should be used.
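
    The key device, dropping the data term where a frame is saturated, can be sketched compactly. The brightness-constancy cost below uses integer flows and np.roll warping purely to stay self-contained; a real estimator would use a variational or pyramidal solver.

    ```python
    import numpy as np

    def masked_data_cost(frames, flow_xy, sat_level=0.98):
        """Sum of brightness-constancy costs over frame pairs, skipping pixels
        that are saturated (no texture information) in either frame."""
        u, v = flow_xy
        total = 0.0
        for f0, f1 in zip(frames[:-1], frames[1:]):
            warped = np.roll(np.roll(f1, -v, axis=0), -u, axis=1)  # integer flow
            valid = (f0 < sat_level) & (warped < sat_level)
            total += np.sum(valid * (warped - f0) ** 2)
        return total

    rng = np.random.default_rng(2)
    scene = rng.uniform(0, 1.2, size=(64, 64)).clip(0, 1)  # some saturated areas
    f0 = scene
    f1 = np.roll(scene, (1, 2), axis=(0, 1))               # shift by (v=1, u=2)
    costs = {uv: masked_data_cost([f0, f1], uv) for uv in [(2, 1), (0, 0), (1, 1)]}
    print(min(costs, key=costs.get))    # (2, 1) minimizes the masked cost
    ```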

  4. Genetic contributions of the serotonin transporter to social learning of fear and economic decision making.

    PubMed

    Crişan, Liviu G; Pana, Simona; Vulturar, Romana; Heilman, Renata M; Szekely, Raluca; Druğa, Bogdan; Dragoş, Nicolae; Miu, Andrei C

    2009-12-01

    Serotonin (5-HT) modulates emotional and cognitive functions such as fear conditioning (FC) and decision making. This study investigated the effects of a functional polymorphism in the regulatory region (5-HTTLPR) of the human 5-HT transporter (5-HTT) gene on observational FC, risk taking and susceptibility to framing in decision making under uncertainty, as well as multidimensional anxiety and autonomic control of the heart in healthy volunteers. The present results indicate that in comparison to the homozygotes for the long (l) version of 5-HTTLPR, the carriers of the short (s) version display enhanced observational FC, reduced financial risk taking and increased susceptibility to framing in economic decision making. We also found that s-carriers have increased trait anxiety due to threat in social evaluation, and ambiguous threat perception. In addition, s-carriers also show reduced autonomic control over the heart, and a pattern of reduced vagal tone and increased sympathetic activity in comparison to l-homozygotes. This is the first genetic study that identifies the association of a functional polymorphism in a key neurotransmitter-related gene with complex social-emotional and cognitive processes. The present set of results suggests an endophenotype of anxiety disorders, characterized by enhanced social learning of fear, impaired decision making and dysfunctional autonomic activity.

  5. Genetic contributions of the serotonin transporter to social learning of fear and economic decision making

    PubMed Central

    Crişan, Liviu G.; Pană, Simona; Vulturar, Romana; Heilman, Renata M.; Szekely, Raluca; Drugă, Bogdan; Dragoş, Nicolae

    2009-01-01

    Serotonin (5-HT) modulates emotional and cognitive functions such as fear conditioning (FC) and decision making. This study investigated the effects of a functional polymorphism in the regulatory region (5-HTTLPR) of the human 5-HT transporter (5-HTT) gene on observational FC, risk taking and susceptibility to framing in decision making under uncertainty, as well as multidimensional anxiety and autonomic control of the heart in healthy volunteers. The present results indicate that in comparison to the homozygotes for the long (l) version of 5-HTTLPR, the carriers of the short (s) version display enhanced observational FC, reduced financial risk taking and increased susceptibility to framing in economic decision making. We also found that s-carriers have increased trait anxiety due to threat in social evaluation, and ambiguous threat perception. In addition, s-carriers also show reduced autonomic control over the heart, and a pattern of reduced vagal tone and increased sympathetic activity in comparison to l-homozygotes. This is the first genetic study that identifies the association of a functional polymorphism in a key neurotransmitter-related gene with complex social–emotional and cognitive processes. The present set of results suggests an endophenotype of anxiety disorders, characterized by enhanced social learning of fear, impaired decision making and dysfunctional autonomic activity. PMID:19535614

  6. Robust Video Stabilization Using Particle Keypoint Update and l1-Optimized Camera Path

    PubMed Central

    Jeon, Semi; Yoon, Inhye; Jang, Jinbeum; Yang, Seungji; Kim, Jisung; Paik, Joonki

    2017-01-01

    Acquisition of stabilized video is an important issue for various types of digital cameras. This paper presents an adaptive camera path estimation method using robust feature detection to remove shaky artifacts in a video. The proposed algorithm consists of three steps: (i) robust feature detection using particle keypoints between adjacent frames; (ii) camera path estimation and smoothing; and (iii) rendering to reconstruct a stabilized video. As a result, the proposed algorithm can estimate the optimal homography by redefining important feature points in the flat region using particle keypoints. In addition, stabilized frames with fewer holes can be generated from the optimal, adaptive camera path that minimizes a temporal total variation (TV). The proposed video stabilization method is suitable for enhancing the visual quality of various portable cameras and can be applied to robot vision, driving assistant systems, and visual surveillance systems. PMID:28208622
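
    The path-smoothing step can be sketched as a convex program: fit a new path P to the estimated camera path C while penalizing the l1 norm of its temporal differences (total variation). cvxpy is assumed as the solver here purely for brevity; the paper's own optimizer may differ.

    ```python
    import cvxpy as cp
    import numpy as np

    def smooth_path(C, lam=5.0):
        """Minimize ||P - C||^2 + lam * ||diff(P)||_1 (temporal TV)."""
        P = cp.Variable(len(C))
        cost = cp.sum_squares(P - C) + lam * cp.norm1(cp.diff(P))
        cp.Problem(cp.Minimize(cost)).solve()
        return P.value

    rng = np.random.default_rng(3)
    true_path = np.concatenate([np.zeros(30), np.linspace(0, 20, 40),
                                20 + np.zeros(30)])          # dolly-like motion
    shaky = true_path + rng.normal(0, 1.0, size=true_path.size)  # jittery estimate
    print(np.abs(smooth_path(shaky) - true_path).mean())     # well under 1 px
    ```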

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gaffney, Kelly

    Movies have transformed our perception of the world. With slow-motion photography, we can see a hummingbird flap its wings, and a bullet pierce an apple. The remarkably small and extremely fast molecular world that determines how your body functions cannot be captured with even the most sophisticated movie camera today. To see chemistry in real time requires a camera capable of seeing molecules that are one ten-billionth of a foot with a frame rate of 10 trillion frames per second! SLAC has embarked on the construction of just such a camera. Please join me as I discuss how this molecular movie camera will work and how it will change our perception of the molecular world.

  8. Geometrical calibration television measuring systems with solid state photodetectors

    NASA Astrophysics Data System (ADS)

    Matiouchenko, V. G.; Strakhov, V. V.; Zhirkov, A. O.

    2000-11-01

    Various optical measuring methods for deriving information about the size and form of objects are now used in different branches: mechanical engineering, medicine, art, criminalistics. Measuring by means of digital television systems is one of these methods. The development of this direction is promoted by the appearance on the market of small-sized television cameras and frame grabbers of various types and costs. Many television measuring systems use expensive cameras, but the accuracy performance of low-cost cameras is also of interest to system developers. For this reason, the inexpensive mountingless camera SK1004CP (format 1/3", cost up to $40) and the Aver2000 frame grabber were used in the experiments.

  9. Earth Observation taken during the 41G mission

    NASA Image and Video Library

    2009-06-25

    41G-120-056 (October 1984) --- Parts of Israel, Lebanon, Palestine, Syria and Jordan and part of the Mediterranean Sea are seen in this nearly-vertical, large format camera's view from the Earth-orbiting Space Shuttle Challenger. The Sea of Galilee is at center frame and the Dead Sea at bottom center. The frame's center coordinates are 32.5 degrees north latitude and 35.5 degrees east longitude. A Linhof camera, using 4" x 5" film, was used to expose the frame through one of the windows on Challenger's aft flight deck.

  10. Sensor fusion of cameras and a laser for city-scale 3D reconstruction.

    PubMed

    Bok, Yunsu; Choi, Dong-Geol; Kweon, In So

    2014-11-04

    This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near-2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.

  11. Universal ICT Picosecond Camera

    NASA Astrophysics Data System (ADS)

    Lebedev, Vitaly B.; Syrtzev, V. N.; Tolmachyov, A. M.; Feldman, Gregory G.; Chernyshov, N. A.

    1989-06-01

    The paper reports on the design of an ICT camera operating in the mode of linear or three-frame image scan. The camera incorporates two tubes: the time-analyzing ICT PIM-107 [1] with cathode S-11, and the brightness amplifier PMU-2V (gain about 10^4) for the image shaped by the first tube. The camera is designed on the basis of streak camera AGAT-SF3 [2], with almost the same power sources but substantially modified pulse electronics. Schematically, the design of tube PIM-107 is depicted in the figure. The tube consists of cermet housing 1 and photocathode 2, made in a separate vacuum volume and introduced into the housing by means of a manipulator. In the direct vicinity of the photocathode is an accelerating electrode made of a fine-structure grid. An electrostatic lens formed by focusing electrode 4 and anode diaphragm 5 produces a beam of electrons with a "remote crossover". The authors have suggested this term for an electron beam whose crossover is 40 to 60 mm away from the anode diaphragm plane, which guarantees high sensitivity of scan plates 6 with respect to multiaperture framing diaphragm 7. Beyond every diaphragm aperture, a pair of deflecting plates 8 is located, shielded from compensation plates 10 by diaphragm 9. The electronic image produced by the photocathode is focused on luminescent screen 11. The tube is controlled with the help of two saw-tooth voltages applied in antiphase across plates 6 and 10. Plates 6 serve for sweeping the electron beam over the surface of diaphragm 7; the beam is either allowed toward the screen or blocked by the diaphragm walls. In this manner, three frames are obtained, their number corresponding to that of the diaphragm apertures. Plates 10 serve for compensating the image streak sweep on the screen. To avoid overlapping of frames, plates 8 receive static potentials responsible for shifting the frames on the screen. By changing the potentials applied to plates 8, one can control the spacing between frames and partially or fully overlap them. This sort of control is independent of the frame rate and frame duration, and determines only the frame positioning on the screen. Since diaphragm 7 is located in the area of the crossover, and the electron trajectories cross in the crossover, the frame is not decomposed into separate elements during its formation. The image is transferred onto the screen during practically the entire frame duration, increasing the aperture ratio of the tube compared to that in Ref. 3.

  12. A higher-speed compressive sensing camera through multi-diode design

    NASA Astrophysics Data System (ADS)

    Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore

    2013-05-01

    Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
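
    The measurement model behind each sub-aperture is the usual single-pixel one: sequential DMD patterns Phi give photodiode readings y = Phi·x. The toy below recovers a tiny scene by plain least squares just to show the pipeline; actual CS operation uses fewer measurements than pixels plus a sparsity-promoting solver, and the sizes here are made up.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)
    n = 16 * 16                          # one tiny sub-aperture, 16x16 pixels
    x = np.zeros(n)
    x[rng.choice(n, 12, replace=False)] = 1.0     # sparse synthetic scene

    m = 300                              # sequential single-pixel measurements
    Phi = rng.choice([-1.0, 1.0], size=(m, n))    # DMD modulation patterns
    y = Phi @ x                          # one photodiode reading per pattern

    # demonstration-only recovery (m >= n); CS uses m < n with sparse solvers
    x_hat = np.linalg.lstsq(Phi, y, rcond=None)[0]
    print(f"max reconstruction error: {np.abs(x_hat - x).max():.2e}")
    ```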

  13. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Dual-band imaging systems combining an infrared camera and a visible CCD camera are widely used in much equipment and many applications. If such a system is tested using a traditional infrared camera test system and a separate visible CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras uses a common large-aperture reflective collimator, target wheel, frame grabber and computer, which reduces the cost and the time of installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the change in focal-length location of the collimator when the environmental temperature changes, improving the image quality of the large-field-of-view collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower price, and it will have a good market.
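
    The multiple-frame averaging step relies on the fact that averaging N frames leaves the target unchanged while shrinking random temporal noise by sqrt(N); a minimal numeric check, with made-up target and noise levels:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    target = np.zeros((128, 128))
    target[40:90, 60:70] = 100.0        # synthetic test target (e.g. a slit)

    frames = [target + rng.normal(0, 8.0, target.shape) for _ in range(64)]
    avg = np.mean(frames, axis=0)       # multiple-frame average

    print(np.std(frames[0] - target))   # ~8.0, single-frame noise
    print(np.std(avg - target))         # ~1.0, reduced by sqrt(64)
    ```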

  14. Anomalous crater Marcia on asteroid 4 Vesta: Spectral signatures and their geological relationship

    NASA Astrophysics Data System (ADS)

    Giebner, T.; Jaumann, R.; Schroeder, S.; Krohn, K.

    2016-12-01

    DAWN Framing Camera (FC) images are used in this study to analyze the diverse spectral signatures of crater Marcia. As the FC offers high spatial resolution as well as several color filters, it is well suited to resolving geological correlations on Vesta's surface. Our approach comprises the analysis of images from four FC filters (F3, F4, F5 and F6) that cover the pyroxene absorption band at 0.9 μm, and the comparison of Vesta data with HED meteorite spectra. We use the ratios R750/915 (F3/F4) and R965/830 (F5/F6) [nm] to separate HED lithologies spectrally and depict corresponding areas on HAMO mosaics (~60 m/px). Additionally, higher-resolution LAMO images (~20 m/px) are analyzed to reveal the geologic setting. In this work, Marcia is broadly classified into three spectral regions. The first region is located in the northwestern part of the crater as well as in the central peak area and shows the most HED-like signature within the Marcia region. The other two regions, one of which also describes Marcia ejecta, are spectrally further away from HED lithologies and likely display mixing with more howardite-rich material associated with carbonaceous chondrite clasts and relatively higher OH and H concentrations (e.g., [1], [2], [3]). In general, these other two regions are also associated with thick flow features within the crater, while the HED-like area does not show such prominent flows. Hence, these darker regions seem to display post-impact inflow of the weathered howarditic surface regolith. We conclude that the Marcia impactor likely struck through the howarditic regolith and hit the eucritic crust underneath. Depicting this HED-like signature globally, it resides mostly in the Rheasilvia basin and ejecta blanket, as well as in very young crater ejecta in the equatorial region, consistent with it being a signature of fresh basaltic crust. [1] M. C. De Sanctis et al. (2012b) The Astrophysical Journal Letters, 758:L36 (5pp) [2] T. McCord et al. (2012) Nature 491, 83-86 [3] T. H. Prettyman et al. (2012) Science 338, 242-246
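
    The ratio computation itself is a pixel-wise division of co-registered, calibrated filter frames; a minimal sketch with hypothetical reflectance values (a deeper 0.9 μm pyroxene band pushes R750/915 up):

    ```python
    import numpy as np

    def band_ratios(f3_750, f4_915, f5_965, f6_830, eps=1e-6):
        """R750/915 and R965/830 from co-registered, calibrated FC filter
        images (reflectance); eps guards against division by zero in shadow."""
        r_750_915 = f3_750 / (f4_915 + eps)
        r_965_830 = f5_965 / (f6_830 + eps)
        return r_750_915, r_965_830

    # synthetic 2x2 reflectance frames for the four filters
    f3 = np.array([[0.30, 0.28], [0.33, 0.31]])
    f4 = np.array([[0.24, 0.25], [0.26, 0.27]])
    f5 = np.array([[0.26, 0.27], [0.27, 0.29]])
    f6 = np.array([[0.29, 0.28], [0.30, 0.30]])
    r1, r2 = band_ratios(f3, f4, f5, f6)
    print(r1, r2, sep="\n")
    ```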

  15. An Acoustic Charge Transport Imager for High Definition Television

    NASA Technical Reports Server (NTRS)

    Hunt, William D.; Brennan, Kevin; May, Gary; Glenn, William E.; Richardson, Mike; Solomon, Richard

    1999-01-01

    This project, over its term, included funding to a variety of companies and organizations. In addition to Georgia Tech these included Florida Atlantic University with Dr. William E. Glenn as the P.I., Kodak with Mr. Mike Richardson as the P.I. and M.I.T./Polaroid with Dr. Richard Solomon as the P.I. The focus of the work conducted by these organizations was the development of camera hardware for High Definition Television (HDTV). The focus of the research at Georgia Tech was the development of new semiconductor technology to achieve a next-generation solid state imager chip that would operate at a high frame rate (170 frames per second), operate at low light levels (via the use of avalanche photodiodes as the detector element) and contain 2 million pixels. The actual cost required to create this new semiconductor technology was probably at least 5 or 6 times the investment made under this program, and hence we fell short of achieving this rather grand goal. We did, however, produce a number of spin-off technologies as a result of our efforts. These include, among others, improved avalanche photodiode structures, significant advancement of the state of understanding of ZnO/GaAs structures and significant contributions to the analysis of general GaAs semiconductor devices and the design of Surface Acoustic Wave resonator filters for wireless communication. More of these will be described in the report. The work conducted at the partner sites resulted in the development of 4 prototype HDTV cameras. The HDTV camera developed by Kodak uses the Kodak KAI-2091M high-definition monochrome image sensor. This progressively-scanned charge-coupled device (CCD) can operate at video frame rates and has 9 μm square pixels. The photosensitive area has a 16:9 aspect ratio and is consistent with the "Common Image Format" (CIF). It features an active image area of 1928 horizontal by 1084 vertical pixels and has a 55% fill factor. The camera is designed to operate in continuous mode with an output data rate of 5 MHz, which gives a maximum frame rate of 4 frames per second. The MIT/Polaroid group developed two cameras under this program. The cameras have effectively four times the current video spatial resolution and at 60 frames per second are double the normal video frame rate.

  16. Enhancement Strategies for Frame-to-Frame UAS Stereo Visual Odometry

    NASA Astrophysics Data System (ADS)

    Kersten, J.; Rodehorst, V.

    2016-06-01

    Autonomous navigation of indoor unmanned aircraft systems (UAS) requires accurate pose estimations usually obtained from indirect measurements. Navigation based on inertial measurement units (IMU) is known to be affected by high drift rates. The incorporation of cameras provides complementary information due to the different underlying measurement principle. The scale ambiguity problem for monocular cameras is avoided when a light-weight stereo camera setup is used. However, also frame-to-frame stereo visual odometry (VO) approaches are known to accumulate pose estimation errors over time. Several valuable real-time capable techniques for outlier detection and drift reduction in frame-to-frame VO, for example robust relative orientation estimation using random sample consensus (RANSAC) and bundle adjustment, are available. This study addresses the problem of choosing appropriate VO components. We propose a frame-to-frame stereo VO method based on carefully selected components and parameters. This method is evaluated regarding the impact and value of different outlier detection and drift-reduction strategies, for example keyframe selection and sparse bundle adjustment (SBA), using reference benchmark data as well as own real stereo data. The experimental results demonstrate that our VO method is able to estimate quite accurate trajectories. Feature bucketing and keyframe selection are simple but effective strategies which further improve the VO results. Furthermore, introducing the stereo baseline constraint in pose graph optimization (PGO) leads to significant improvements.
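
    Feature bucketing, one of the simple strategies credited above, can be sketched in a few lines: divide the image into a grid and keep only the strongest detections per cell so matches spread evenly over the frame. The grid size and per-cell budget below are arbitrary.

    ```python
    import numpy as np

    def bucket_features(pts, scores, img_w, img_h, grid=(8, 6), per_cell=5):
        """Keep at most `per_cell` strongest features in each grid cell.
        pts: (N,2) pixel coordinates; scores: (N,) detector responses."""
        gx, gy = grid
        cell = (pts[:, 0] * gx / img_w).astype(int) * gy \
             + (pts[:, 1] * gy / img_h).astype(int)
        keep = []
        for c in np.unique(cell):
            idx = np.flatnonzero(cell == c)
            keep.extend(idx[np.argsort(scores[idx])[::-1][:per_cell]])
        return np.array(sorted(keep))

    rng = np.random.default_rng(6)
    pts = rng.uniform([0, 0], [640, 480], size=(500, 2))
    scores = rng.random(500)
    print(len(bucket_features(pts, scores, 640, 480)))   # <= 8*6*5 = 240
    ```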

  17. Geiger-mode APD camera system for single-photon 3D LADAR imaging

    NASA Astrophysics Data System (ADS)

    Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir

    2012-06-01

    The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
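
    The timing figures above imply a simple time-stamp-to-range conversion; a back-of-the-envelope sketch, assuming a 2 μs gate quantized by the 13-bit time stamps described in the abstract (the code itself is illustrative):

    ```python
    C = 299_792_458.0                      # speed of light, m/s

    def bin_to_range(bin_index, gate_s=2e-6, bits=13):
        """Convert a GmAPD time-stamp bin to one-way range in meters."""
        bin_width = gate_s / 2**bits       # ~0.24 ns per bin here
        return C * bin_index * bin_width / 2.0   # /2 for the round trip

    print(bin_to_range(2**13 - 1))         # full 2 us gate spans ~300 m
    ```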

  18. Selection of the optical model of a stereophotography experiment for determining the cloud base height as a problem of testing statistical hypotheses

    NASA Astrophysics Data System (ADS)

    Chulichkov, Alexey I.; Nikitin, Stanislav V.; Emilenko, Alexander S.; Medvedev, Andrey P.; Postylyakov, Oleg V.

    2017-10-01

    Earlier, we developed a method for estimating the height and speed of clouds from cloud images obtained by a pair of digital cameras. The shift of a fragment of the cloud in the right frame relative to its position in the left frame is used to estimate the height of the cloud and its velocity. This shift is estimated by the method of morphological image analysis. However, this method requires that the axes of the cameras be parallel. Instead of real adjustment of the axes, we use virtual camera adjustment, namely, a transformation of a real frame whose result is what would have been obtained if all the axes were perfectly adjusted. For such adjustment, images of stars as infinitely distant objects were used: with perfectly aligned cameras, the star images in the right and left frames should be identical. In this paper, we investigate in more detail possible mathematical models of cloud image deformations caused by the misalignment of the axes of the two cameras, as well as by their lens aberrations. The simplest model follows the paraxial approximation of the lens (without lens aberrations) and reduces to an affine transformation of the coordinates of one of the frames. The other two models take into account lens distortion of the 3rd order, and of the 3rd and 5th orders, respectively. It is shown that the models differ significantly when converting coordinates near the edges of the frame. Strict statistical criteria allow choosing the most reliable model, the one most consistent with the measurement data. Further, each of these three models was used to determine the parameters of the image deformations. These parameters are used to transform the cloud images into what they would have been if measured with an ideal setup, and then the distance to the cloud is calculated. The results were compared with data from a laser range finder.
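
    Once the frames are virtually adjusted to the parallel-axis geometry described above, the height estimate reduces to a standard parallax relation; a hedged sketch with illustrative numbers (the focal length, baseline, and pixel pitch are placeholders, not the instrument's values):

    ```python
    import numpy as np

    def apply_affine(xy, A, b):
        """Virtual adjustment under the simplest (paraxial) model: x' = A x + b."""
        return xy @ A.T + b

    def cloud_height_m(disparity_px, f_mm=25.0, baseline_m=60.0, pitch_mm=0.006):
        """Parallel-axis stereo: distance = f * B / disparity."""
        return f_mm * baseline_m / (disparity_px * pitch_mm)

    print(cloud_height_m(100.0))   # 100 px shift -> 2500 m cloud base
    ```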

  19. 34. DETAILS AND SECTIONS OF SHIELDING TANK FUEL ELEMENT SUPPORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    34. DETAILS AND SECTIONS OF SHIELDING TANK FUEL ELEMENT SUPPORT FRAME. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-S-4. INEL INDEX CODE NUMBER: 075 0701 60 851 151978. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  20. Frames of Reference in the Classroom

    ERIC Educational Resources Information Center

    Grossman, Joshua

    2012-01-01

    The classic film "Frames of Reference" effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating--all with…

  1. Development and use of an L3CCD high-cadence imaging system for Optical Astronomy

    NASA Astrophysics Data System (ADS)

    Sheehan, Brendan J.; Butler, Raymond F.

    2008-02-01

    A high-cadence imaging system, based on a Low Light Level CCD (L3CCD) camera, has been developed for photometric and polarimetric applications. The camera system is an iXon DV-887 from Andor Technology, which uses a CCD97 L3CCD detector from E2V Technologies. This is a back-illuminated device, giving it an extended blue response, and has an active area of 512×512 pixels. The camera system allows frame rates ranging from 30 fps (full frame) to 425 fps (windowed & binned frame). We outline the system design, concentrating on the calibration and control of the L3CCD camera. The L3CCD detector can be either triggered directly by a GPS timeserver/frequency generator or internally triggered. A central PC remotely controls the camera computer system and timeserver. The data are saved as standard 'FITS' files. The large data loads associated with high frame rates lead to issues with gathering and storing the data effectively. To overcome such problems, a specific data management approach is used, and a Python/PYRAF data reduction pipeline was written for the Linux environment. This uses calibration data collected either on-site or from lab-based measurements, and enables a fast and reliable method for reducing images. To date, the system has been used twice on the 1.5 m Cassini Telescope in Loiano (Italy); we present the reduction methods and observations made.
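
    The core reduction step such a pipeline performs can be sketched in a few lines with astropy; the file names and master-calibration scheme here are illustrative assumptions, not the paper's actual pipeline:

    ```python
    import numpy as np
    from astropy.io import fits

    def calibrate(frame_file, master_bias_file, master_flat_file, out_file):
        """Standard CCD reduction: subtract bias, divide by normalized flat."""
        frame = fits.getdata(frame_file).astype(np.float64)
        bias = fits.getdata(master_bias_file).astype(np.float64)
        flat = fits.getdata(master_flat_file).astype(np.float64)
        flat_norm = flat / np.median(flat)     # unit-median flat field
        reduced = (frame - bias) / flat_norm
        fits.writeto(out_file, reduced, overwrite=True)
    ```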

  2. High-speed plasma imaging: A lightning bolt

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G.A.; Whiteson, D.O.

    Using a gated, intensified digital Kodak Ektapro camera system, the authors captured a lightning bolt at 1,000 frames per second, with 100-μs exposure time on each consecutive frame. As a thunderstorm approached and darkness descended (7:50 pm) on July 21, 1994, they photographed lightning bolts with an f/22 105-mm lens and 100% gain on the intensified camera. This 15-frame sequence shows a cloud-to-ground stroke at a distance of about 1.5 km, with a series of stepped leaders propagating downwards, followed by the upward-propagating main return stroke.

  3. Deep-UV-sensitive high-frame-rate backside-illuminated CCD camera developments

    NASA Astrophysics Data System (ADS)

    Dawson, Robin M.; Andreas, Robert; Andrews, James T.; Bhaskaran, Mahalingham; Farkas, Robert; Furst, David; Gershstein, Sergey; Grygon, Mark S.; Levine, Peter A.; Meray, Grazyna M.; O'Neal, Michael; Perna, Steve N.; Proefrock, Donald; Reale, Michael; Soydan, Ramazan; Sudol, Thomas M.; Swain, Pradyumna K.; Tower, John R.; Zanzucchi, Pete

    2002-04-01

    New applications for ultraviolet imaging are emerging in the fields of drug discovery and industrial inspection. High throughput is critical for these applications, where millions of drug combinations are analyzed in secondary screenings or high-rate inspection of small feature sizes over large areas is required. Sarnoff demonstrated in 1990 a back-illuminated, 1024 X 1024, 18 μm pixel, split-frame-transfer device running at > 150 frames per second with high sensitivity in the visible spectrum. Sarnoff designed, fabricated and delivered cameras based on these CCDs and is now extending this technology to devices with higher pixel counts and higher frame rates through CCD architectural enhancements. The high sensitivities obtained in the visible spectrum are being pushed into the deep UV to support these new medical and industrial inspection applications. Sarnoff has achieved measured quantum efficiencies > 55% at 193 nm, rising to 65% at 300 nm, and remaining almost constant out to 750 nm. Optimization of the sensitivity is being pursued to tailor the quantum efficiency for particular wavelengths. Characteristics of these high-frame-rate CCDs and cameras will be described and results will be presented demonstrating high UV sensitivity down to 150 nm.

  4. Multiport backside-illuminated CCD imagers for high-frame-rate camera applications

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Sauer, Donald J.; Hseuh, Fu-Lung; Shallcross, Frank V.; Taylor, Gordon C.; Meray, Grazyna M.; Tower, John R.; Harrison, Lorna J.; Lawler, William B.

    1994-05-01

    Two multiport, second-generation CCD imager designs have been fabricated and successfully tested. They are a 16-port 512 X 512 array and a 32-port 1024 X 1024 array. Both designs are back illuminated, have on-chip CDS, lateral blooming control, and use a split vertical frame transfer architecture with full frame storage. The 512 X 512 device has been operated at rates over 800 frames per second. The 1024 X 1024 device has been operated at rates over 300 frames per second. The major changes incorporated in the second-generation design are: reduced gate length in the output area to give improved high-clock-rate performance, modified on-chip CDS circuitry for reduced noise, and optimized implants to improve the performance of blooming control at lower clock amplitude. This paper discusses the imager design improvements and presents measured performance results at high and moderate frame rates. The design and performance of three moderate-frame-rate cameras are discussed.

  5. Evaluation of Eye Metrics as a Detector of Fatigue

    DTIC Science & Technology

    2010-03-01

    eyeglass frames. The cameras are angled upward toward the eyes and extract real-time pupil diameter, eye-lid movement, and eye-ball movement. The...because the cameras were mounted on eyeglass-like frames, the system was able to continuously monitor the eye throughout all sessions. Overall, the...of "fitness for duty" testing and "real-time monitoring" of operator performance has been slow (Institute of Medicine, 2004). Oculometric-based

  6. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and comparison with ISS-LIS and GLM

    NASA Astrophysics Data System (ADS)

    Schultz, C. J.; Lang, T. J.; Leake, S.; Runco, M.; Blakeslee, R. J.

    2017-12-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how georeferenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration. Camera images from the crew cameras and high-definition video from the Chiba University Meteor Camera were combined with lightning data from the National Lightning Detection Network (NLDN), the ISS Lightning Imaging Sensor (ISS-LIS), the Geostationary Lightning Mapper (GLM), and lightning mapping arrays. These cameras provide significant spatial resolution advantages (10 times or better) over ISS-LIS and GLM, but with lower temporal resolution. Therefore, they can serve as a complementary analysis tool for studying lightning and thunderstorm processes from space. Lightning sensor data, Visible Infrared Imaging Radiometer Suite (VIIRS) derived city light maps, and other geographic databases were combined with the ISS attitude and position data to reverse-geolocate each image or frame. An open-source Python toolkit has been developed to assist with this effort. Next, the locations and sizes of all flashes in each frame or image were computed and compared with flash characteristics from all available lightning datasets. This allowed for characterization of cloud features that are below the 4-km and 8-km resolution of ISS-LIS and GLM, which may reduce the light that reaches the ISS-LIS or GLM sensor. In the case of video, consecutive frames were overlaid to determine the rate of change of the light escaping cloud top. The characterization of the rate of change in the geometry, more generally the radius, of light escaping cloud top was integrated with the NLDN, ISS-LIS and GLM to understand how the peak rate of change and the peak area of each flash aligned with each lightning system in time. Flash features like leaders could be inferred from the video frames as well. Testing is being done to see if leader speeds may be accurately calculated under certain circumstances.
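
    The frame-differencing step described above (tracking how the area of light escaping cloud top changes between consecutive frames) can be sketched simply; the threshold and frame rate below are illustrative assumptions, not values from the study:

    ```python
    import numpy as np

    def flash_area_rate(frames, fps=30.0, threshold=200):
        """frames: (n, h, w) grayscale stack; returns d(area)/dt in px^2/s."""
        areas = np.array([(f > threshold).sum() for f in frames], dtype=float)
        return np.diff(areas) * fps    # finite-difference rate between frames
    ```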

  7. Sliding-window analysis tracks fluctuations in amygdala functional connectivity associated with physiological arousal and vigilance during fear conditioning.

    PubMed

    Baczkowski, Blazej M; Johnstone, Tom; Walter, Henrik; Erk, Susanne; Veer, Ilya M

    2017-06-01

    We evaluated whether sliding-window analysis can reveal functionally relevant brain network dynamics during a well-established fear conditioning paradigm. To this end, we tested if fMRI fluctuations in amygdala functional connectivity (FC) can be related to task-induced changes in physiological arousal and vigilance, as reflected in the skin conductance level (SCL). Thirty-two healthy individuals participated in the study. For the sliding-window analysis we used windows that were shifted by one volume at a time. Amygdala FC was calculated for each of these windows. Simultaneously acquired SCL time series were averaged over time frames that corresponded to the sliding-window FC analysis, which were subsequently regressed against the whole-brain seed-based amygdala sliding-window FC using the GLM. Surrogate time series were generated to test whether connectivity dynamics could have occurred by chance. In addition, results were contrasted against static amygdala FC and sliding-window FC of the primary visual cortex, which was chosen as a control seed, while a physio-physiological interaction (PPI) was performed as cross-validation. During periods of increased SCL, the left amygdala became more strongly coupled with the bilateral insula and anterior cingulate cortex, core areas of the salience network. The sliding-window analysis yielded a connectivity pattern that was unlikely to have occurred by chance, was spatially distinct from static amygdala FC and from sliding-window FC of the primary visual cortex, but was highly comparable to that of the PPI analysis. We conclude that sliding-window analysis can reveal functionally relevant fluctuations in connectivity in the context of an externally cued task.
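
    For concreteness, the core sliding-window FC computation (seed-to-voxel correlation in windows shifted one volume at a time, as described above) can be sketched as follows; the array shapes and window length are illustrative:

    ```python
    import numpy as np

    def sliding_window_fc(seed, voxels, win=30):
        """seed: (n_vols,); voxels: (n_vols, n_voxels) -> (n_windows, n_voxels)."""
        n_win = len(seed) - win + 1
        fc = np.empty((n_win, voxels.shape[1]))
        for i in range(n_win):                     # shift by one volume at a time
            s = seed[i:i + win]
            v = voxels[i:i + win]
            s = (s - s.mean()) / s.std()
            v = (v - v.mean(axis=0)) / v.std(axis=0)
            fc[i] = (s[:, None] * v).mean(axis=0)  # Pearson r per voxel
        return fc
    ```

    The window-averaged SCL time series can then be regressed, voxel-wise, against these FC time courses in a GLM.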

  8. 3-D Velocimetry of Strombolian Explosions

    NASA Astrophysics Data System (ADS)

    Taddeucci, J.; Gaudin, D.; Orr, T. R.; Scarlato, P.; Houghton, B. F.; Del Bello, E.

    2014-12-01

    Using two synchronized high-speed cameras we were able to reconstruct the three-dimensional displacement and velocity field of bomb-sized pyroclasts in Strombolian explosions at Stromboli Volcano. Relatively low-intensity Strombolian-style activity offers a rare opportunity to observe volcanic processes that remain hidden from view during more violent explosive activity. Such processes include the ejection and emplacement of bomb-sized clasts along pure or drag-modified ballistic trajectories, in-flight bomb collision, and gas liberation dynamics. High-speed imaging of Strombolian activity has already opened new windows for the study of the abovementioned processes, but to date has only utilized two-dimensional analysis with limited motion detection and ability to record motion towards or away from the observer. To overcome this limitation, we deployed two synchronized high-speed video cameras at Stromboli. The two cameras, located sixty meters apart, filmed Strombolian explosions at 500 and 1000 frames per second and with different resolutions. Frames from the two cameras were pre-processed and combined into a single video showing frames alternating from one to the other camera. Bomb-sized pyroclasts were then manually identified and tracked in the combined video, together with fixed reference points located as close as possible to the vent. The results from manual tracking were fed to a custom software routine that, knowing the relative position of the vent and cameras, and the field of view of the latter, provided the position of each bomb relative to the reference points. By tracking tens of bombs over five to ten frames at different intervals during one explosion, we were able to reconstruct the three-dimensional evolution of the displacement and velocity fields of bomb-sized pyroclasts during individual Strombolian explosions. Shifting jet directivity and dispersal angle clearly appear from the three-dimensional analysis.
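
    The two-camera reconstruction described above amounts to triangulating each bomb from two known rays; a minimal midpoint-method sketch (the camera geometry is assumed known from the fixed reference points; this is illustrative, not the authors' routine):

    ```python
    import numpy as np

    def midpoint_triangulate(o1, d1, o2, d2):
        """o*: camera centers; d*: unit ray directions toward the tracked bomb.

        Returns the 3-D point minimizing the distance to both rays.
        """
        a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
        w = o1 - o2
        denom = a * c - b * b                  # zero only for parallel rays
        s = (b * (d2 @ w) - c * (d1 @ w)) / denom
        t = (a * (d2 @ w) - b * (d1 @ w)) / denom
        return 0.5 * ((o1 + s * d1) + (o2 + t * d2))
    ```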

  9. Nature of the "Orange" Material on Vesta From Dawn

    NASA Technical Reports Server (NTRS)

    LeCorre, L.; Reddy, V.; Schmedemann, N.; Becker, K. J.; OBrien, D. P.; Yamashita, N.; Peplowski, P. N.; Prettyman, T. H.; Li, J.-Y.; Cloutis, E. A.

    2014-01-01

    From ground-based observations of Vesta, it is well known that the vestan surface has a large variation in albedo. Analysis of images acquired by the Hubble Space Telescope allowed production of the first color maps of Vesta and showed a diverse surface in terms of reflectance. Thanks to images collected by the Dawn spacecraft at Vesta, it became obvious that these specific units observed previously can be linked to geological features. The presence of the darkest material mostly around impact craters and scattered in the Western hemisphere has been associated with carbonaceous chondrite contamination [4], whereas the brightest materials are believed to result from exposure of unaltered material from the subsurface of Vesta (in fresh-looking impact crater rims and in Rheasilvia's ejecta and rim remnants). Here we focus on a distinct material characterized by a steep slope in the near-IR relative to all other kinds of materials found on Vesta. It was first detected when combining Dawn Framing Camera (FC) color images in Clementine false-color composites [5] during the Approach phase of the mission (100,000 to 5,200 km from Vesta). We investigate the mineralogical and elemental composition of this material and its relationship with the HEDs (Howardite-Eucrite-Diogenite group of meteorites).

  10. Standard design for National Ignition Facility x-ray streak and framing cameras.

    PubMed

    Kimbrough, J R; Bell, P M; Bradley, D K; Holder, J P; Kalantar, D K; MacPhee, A G; Telford, S

    2010-10-01

    The x-ray streak camera and x-ray framing camera for the National Ignition Facility were redesigned to improve electromagnetic pulse hardening, protect high voltage circuits from pressure transients, and maximize the use of common parts and operational software. Both instruments use the same PC104 based controller, interface, power supply, charge coupled device camera, protective hermetically sealed housing, and mechanical interfaces. Communication is over fiber optics with identical facility hardware for both instruments. Each has three triggers that can be either fiber optic or coax. High voltage protection consists of a vacuum sensor to enable the high voltage and pulsed microchannel plate phosphor voltage. In the streak camera, the high voltage is removed after the sweep. Both rely on the hardened aluminum box and a custom power supply to reduce electromagnetic pulse/electromagnetic interference (EMP/EMI) getting into the electronics. In addition, the streak camera has an EMP/EMI shield enclosing the front of the streak tube.

  11. Per-Pixel Coded Exposure for High-Speed and High-Resolution Imaging Using a Digital Micromirror Device Camera

    PubMed Central

    Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei

    2016-01-01

    High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge coupled device) or CMOS (complementary metal oxide semiconductor) cameras cannot effectively capture rapid phenomena at both high speed and high resolution. In this paper, we incorporate the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution by a factor of several, or even hundreds, without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
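
    The sampling idea behind per-pixel coded exposure can be illustrated with a toy numpy simulation: each pixel of a slow sensor integrates only during one DMD-selected sub-frame, so several temporal samples are interleaved spatially within a single readout. This is a conceptual sketch only; it does not reproduce the paper's three-element median reconstruction.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    K, H, W = 4, 64, 64
    scene = rng.random((K, H, W))             # K fast sub-frames per readout
    codes = rng.integers(0, K, size=(H, W))   # per-pixel sub-frame assignment

    # One slow-camera readout: each pixel samples exactly its coded sub-frame.
    coded = np.take_along_axis(scene, codes[None], axis=0)[0]

    # Naive unpacking: per sub-frame, keep that code's pixels and fill the
    # rest with the mean (a real system would interpolate spatially).
    recovered = np.where(codes[None] == np.arange(K)[:, None, None],
                         coded[None], coded.mean())
    ```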

  13. A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens

    NASA Astrophysics Data System (ADS)

    Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon

    2014-11-01

    The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements, where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras. The PEM modulates at tens of kHz. To capture the specific polarization information that is carried on the modulation frequency of the PEM, the camera needs to be at least ten times faster. However, the typical frame rates of common cameras are only in the tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility of the high frequency of the PEM with the relatively slow frame rate of a camera. We trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as a part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens, a human skin tissue that contains collagen and a slice of mouse brain that contains bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope to both research communities and industrial applications are being tested.
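
    The stroboscopic timing itself is simple to state: to sample the PEM retardance at a fixed modulation phase, the LED fires once per PEM cycle at t_n = n/f + φ/(2πf). A minimal sketch with an illustrative PEM frequency (a placeholder, not the instrument's value):

    ```python
    import numpy as np

    def strobe_times(f_pem=50e3, phase_rad=0.0, n_pulses=1000):
        """LED trigger times locked to a fixed phase of the PEM cycle."""
        n = np.arange(n_pulses)
        return n / f_pem + phase_rad / (2 * np.pi * f_pem)
    ```

    Firing at several phases across the cycle builds up the polarization-dependent signal even though the camera integrates over many PEM periods.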

  14. Development of a driving method suitable for ultrahigh-speed shooting in a 2M-fps 300k-pixel single-chip color camera

    NASA Astrophysics Data System (ADS)

    Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji

    2012-03-01

    We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames. For every one-frame shot, the electric charges generated by the photodiodes are transferred in one step to the memory of all the pixels in parallel, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rates) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that would enable much faster shooting speeds than currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
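
    The in-pixel storage quoted above bounds the record length directly; a quick check of the arithmetic:

    ```python
    frames_stored = 144            # per-pixel CCD memory depth
    frame_rate = 2_000_000         # 2M fps
    print(frames_stored / frame_rate)   # 7.2e-05 s: a 72-microsecond burst
    ```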

  15. A novel C-type lectin with two CRD domains from Chinese shrimp Fenneropenaeus chinensis functions as a pattern recognition protein.

    PubMed

    Zhang, Xiao-Wen; Xu, Wen-Teng; Wang, Xian-Wei; Mu, Yi; Zhao, Xiao-Fan; Yu, Xiao-Qiang; Wang, Jin-Xing

    2009-05-01

    Lectins are regarded as potential immune recognition proteins. In this study, a novel C-type lectin (Fc-Lec2) was cloned from the hepatopancreas of the Chinese shrimp, Fenneropenaeus chinensis. The cDNA of Fc-Lec2 is 1219 bp with an open reading frame (ORF) of 1002 bp that encodes a protein of 333 amino acids. Fc-Lec2 contains a signal peptide and two different carbohydrate recognition domains (CRDs) arranged in tandem. The first CRD contains a QPD (Gln-Pro-Asp) motif with a predicted binding specificity for galactose, and the second CRD contains an EPN (Glu-Pro-Asn) motif for mannose. Fc-Lec2 was constitutively expressed in the hepatopancreas of normal shrimp, and its expression was up-regulated in the hepatopancreas of shrimp challenged with bacteria or viruses. Recombinant mature Fc-Lec2 and its two individual CRDs (CRD1 and 2) did not have hemagglutinating activity against animal red blood cells, but agglutinated some gram-positive and gram-negative bacteria in a calcium-dependent manner. The three recombinant proteins also bound to bacteria in the absence of calcium. Fc-Lec2 seems to have broader specificity and higher affinity for bacteria and polysaccharides (peptidoglycan, lipoteichoic acid and lipopolysaccharide) than each of the two individual CRDs. These data suggest that the two CRDs have a synergistic effect and that the intact lectin may be more effective in response to bacterial infection; Fc-Lec2 performs its pattern recognition function by binding to polysaccharides of pathogen cells.

  16. Navigation accuracy comparing non-covered frame and use of plastic sterile drapes to cover the reference frame in 3D acquisition.

    PubMed

    Corenman, Donald S; Strauch, Eric L; Dornan, Grant J; Otterstrom, Eric; Zalepa King, Lisa

    2017-09-01

    Advancements in surgical navigation technology coupled with 3-dimensional (3D) radiographic data have significantly enhanced the accuracy and efficiency of spinal fusion implant placement. Increased usage of such technology has led to rising concerns regarding maintenance of the sterile field, as makeshift drape systems are fraught with breaches, thus presenting increased risk of surgical site infections (SSIs). A clinical need exists for a sterile draping solution with these techniques. Our objective was to quantify the expected accuracy error associated with the 2MM and 4MM thickness Sterile-Z Patient Drape® using the Medtronic O-Arm® Surgical Imaging with StealthStation® S7® Navigation System. Camera distance to the reference frame was investigated for its contribution to accuracy error. A testing jig was placed on the radiolucent table and the Medtronic passive reference frame was attached to the jig. The StealthStation® S7® navigation camera was placed at various distances from the testing jig and the geometry error of the reference frame was captured for three different drape configurations: no drape, 2MM drape and 4MM drape. The O-Arm® gantry location and StealthStation® S7® camera position were maintained and seven 3D acquisitions for each drape configuration were measured. Data were analyzed by a two-factor analysis of variance (ANOVA), and Bonferroni comparisons were used to assess the independent effects of camera distance and drape on accuracy error. Median (and maximum) measurement accuracy error was higher for the 2MM than for the 4MM drape at each camera distance. The most extreme error observed (4.6 mm) occurred when using the 2MM drape at the 'far' camera distance. The 4MM drape was found to induce an accuracy error of 0.11 mm (95% confidence interval, 0.06-0.15; P<0.001) relative to the no-drape testing, regardless of camera distance. The medium camera distance produced lower accuracy error than either the close (additional 0.08 mm error; 95% CI, 0-0.15; P=0.035) or far (additional 0.21 mm error; 95% CI, 0.13-0.28; P<0.001) camera distances, regardless of whether a drape was used. In comparison to the 'no drape' condition, the accuracy error of 0.11 mm when using a 4MM film drape is minimal and clinically insignificant.
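
    The two-factor ANOVA with interaction described above can be reproduced in outline with statsmodels; the DataFrame columns here (error, drape, distance) are illustrative stand-ins for the measured accuracy errors:

    ```python
    import pandas as pd
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm

    def two_factor_anova(df: pd.DataFrame):
        """df columns: 'error' (mm), 'drape' (none/2MM/4MM), 'distance' (close/medium/far)."""
        model = ols("error ~ C(drape) * C(distance)", data=df).fit()
        return anova_lm(model, typ=2)   # type-II sums of squares
    ```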

  17. The Television Framing Methods of the National Basketball Association: An Agenda-Setting Application.

    ERIC Educational Resources Information Center

    Fortunato, John A.

    2001-01-01

    Identifies and analyzes the exposure and portrayal framing methods that are utilized by the National Basketball Association (NBA). Notes that key informant interviews provide insight into the exposure framing method and reveal two portrayal instruments: cameras and announcers; and three framing strategies: depicting the NBA as a team game,…

  18. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  19. Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment

    NASA Astrophysics Data System (ADS)

    Weed, Jonathan Robert

    The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.

  20. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  1. An Innovative Procedure for Calibration of Strapdown Electro-Optical Sensors Onboard Unmanned Air Vehicles

    PubMed Central

    Fasano, Giancarmine; Accardo, Domenico; Moccia, Antonio; Rispoli, Attilio

    2010-01-01

    This paper presents an innovative method for estimating the attitude of airborne electro-optical cameras with respect to the onboard autonomous navigation unit. The procedure is based on the use of attitude measurements under static conditions taken by an inertial unit and carrier-phase differential Global Positioning System to obtain accurate camera position estimates in the aircraft body reference frame, while image analysis allows line-of-sight unit vectors in the camera-based reference frame to be computed. The method has been applied to the alignment of the visible and infrared cameras installed onboard the experimental aircraft of the Italian Aerospace Research Center and adopted for in-flight obstacle detection and collision avoidance. Results show an angular uncertainty on the order of 0.1° (rms). PMID:22315559
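
    The underlying alignment problem, finding the rotation that maps matched line-of-sight unit vectors from the camera frame into the body frame, has a classical SVD (Kabsch/Wahba) solution; the sketch below is one standard way to solve it, not necessarily the paper's exact estimator:

    ```python
    import numpy as np

    def align_frames(v_cam, v_body):
        """v_cam, v_body: (n, 3) matched unit vectors; returns R with v_body ≈ R @ v_cam."""
        H = v_cam.T @ v_body
        U, _, Vt = np.linalg.svd(H)
        d = np.sign(np.linalg.det(Vt.T @ U.T))          # guard against reflections
        return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    ```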

  2. Multi-frame image processing with panning cameras and moving subjects

    NASA Astrophysics Data System (ADS)

    Paolini, Aaron; Humphrey, John; Curt, Petersen; Kelmelis, Eric

    2014-06-01

    Imaging scenarios commonly involve erratic, unpredictable camera behavior or subjects that are prone to movement, complicating multi-frame image processing techniques. To address these issues, we developed three techniques that can be applied to multi-frame image processing algorithms in order to mitigate the adverse effects observed when cameras are panning or subjects within the scene are moving. We provide a detailed overview of the techniques and discuss the applicability of each to various movement types. In addition to this, we evaluated algorithm efficacy with demonstrated benefits using field test video, which has been processed using our commercially available surveillance product. Our results show that algorithm efficacy is significantly improved in common scenarios, expanding our software's operational scope. Our methods introduce little computational burden, enabling their use in real-time and low-power solutions, and are appropriate for long observation periods. Our test cases focus on imaging through turbulence, a common use case for multi-frame techniques. We present results of a field study designed to test the efficacy of these techniques under expanded use cases.
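
    One standard building block for handling panning cameras in multi-frame processing is to estimate the global inter-frame translation and accumulate frames in a stabilized coordinate frame; a phase-correlation sketch of that idea (illustrative, not the authors' proprietary method):

    ```python
    import numpy as np

    def phase_correlation_shift(a, b):
        """Integer (dy, dx) shift of frame b relative to frame a."""
        cross = np.conj(np.fft.fft2(a)) * np.fft.fft2(b)
        cross /= np.abs(cross) + 1e-12        # normalized cross-power spectrum
        corr = np.fft.ifft2(cross).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = a.shape                        # wrap shifts past half-frame to negative
        return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)
    ```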

  3. 29. PLAN OF THE ARVFS FIELD TEST FACILITY SHOWING BUNKER, ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    29. PLAN OF THE ARVFS FIELD TEST FACILITY SHOWING BUNKER, CABLE CHASE, SHIELDING TANK AND FRAME ASSEMBLY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-1. INEL INDEX CODE NUMBER: 075 0701 851 151970. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  4. Large format Geiger-mode avalanche photodiode LADAR camera

    NASA Astrophysics Data System (ADS)

    Yuan, Ping; Sudharsanan, Rengarajan; Bai, Xiaogang; Labios, Eduardo; Morris, Bryan; Nicholson, John P.; Stuart, Gary M.; Harrison, Danny

    2013-05-01

    Recently, Spectrolab has successfully demonstrated a compact 32x32 Laser Detection and Ranging (LADAR) camera with single-photon-level sensitivity and a small size, weight, and power (SWAP) budget for three-dimensional (3D) topographic imaging at 1064 nm on various platforms. With a 20-kHz frame rate and 500-ps timing uncertainty, this LADAR system provides coverage down to inch-level fidelity and allows for effective wide-area terrain mapping. At a 10 mph forward speed and 1000 feet above ground level (AGL), it covers 0.5 square mile per hour with a resolution of 25 in²/pixel after data averaging. In order to increase the forward speed to suit more platforms and survey a large area more effectively, Spectrolab is developing a 32x128 Geiger-mode LADAR camera with a 43-kHz frame rate. With the increase in both frame rate and array size, the data collection rate is improved by 10 times. With a programmable bin size from 0.3 ps to 0.5 ns and 14-bit timing dynamic range, LADAR developers will have more freedom in system integration for various applications. Most of the special features of Spectrolab's 32x32 LADAR camera, such as non-uniform bias correction, variable range gate width, windowing for smaller arrays, and short pixel protection, are implemented in this camera.

  5. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for the acquisition of data for particle image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter-timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, where a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter-timer board triggers the camera and synchronizes the pulsing of the laser with the acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface, through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
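
    The downstream processing (done in PIVPROC, not PIVACQ) is classically a cross-correlation of interrogation windows from the two straddled frames; a minimal FFT-based sketch of that step, with the window pair assumed already extracted:

    ```python
    import numpy as np

    def piv_displacement(win1, win2):
        """Peak of the circular cross-correlation between two interrogation windows."""
        a = win1 - win1.mean()
        b = win2 - win2.mean()
        corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
        dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
        h, w = a.shape
        return (dy - h if dy > h // 2 else dy, dx - w if dx > w // 2 else dx)

    # velocity = displacement * pixel size / laser pulse separation dt
    ```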

  6. Ground volume assessment using 'Structure from Motion' photogrammetry with a smartphone and a compact camera

    NASA Astrophysics Data System (ADS)

    Wróżyński, Rafał; Pyszny, Krzysztof; Sojka, Mariusz; Przybyła, Czesław; Murat-Błażejewska, Sadżide

    2017-06-01

    The article describes how the Structure-from-Motion (SfM) method can be used to calculate the volume of anthropogenic microtopography. In the proposed workflow, data are obtained using mass-market devices such as a compact camera (Canon G9) and a smartphone (iPhone 5). The volume is computed using free open-source software (VisualSFM v0.5.23, CMPMVS v0.6.0, MeshLab) on a PC-class computer. The input data are acquired from video frames. To verify the method, laboratory tests on an embankment of known volume were carried out. Models of the test embankment were built using two independent measurements made with those two devices. No significant differences were found between the models in a comparative analysis. The volumes of the models differed from the actual volume by just 0.7‰ and 2‰. After a successful laboratory verification, field measurements were carried out in the same way. While building the model from the data acquired with the smartphone, it was observed that a series of frames, approximately 14% of all the frames, was rejected. The missing frames caused the point cloud to be less dense in the places where they had been rejected, and as a result the model's volume differed from the volume acquired with the camera by 7%. In order to improve the homogeneity, the frame extraction frequency was increased where frames had previously been missing. A uniform model was thereby obtained, with the point cloud density evenly distributed. There was a 1.5% difference between the embankment's volume and the volume calculated from the camera-recorded video. The presented method permits the number of input frames to be increased and the model's accuracy to be enhanced without making an additional measurement, which may not be possible in the case of temporary features.
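
    Once the SfM point cloud is scaled and gridded into a height map above the surrounding ground level, the volume computation itself is a single sum; a minimal sketch (the gridding step and cell size are assumed):

    ```python
    import numpy as np

    def volume_from_heightmap(heights_m, cell_size_m):
        """heights_m: 2-D array of heights above the datum; negatives are clipped."""
        return np.clip(heights_m, 0.0, None).sum() * cell_size_m**2
    ```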

  7. 10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    10. 22'X34' original blueprint, Variable-Angle Launcher, 'SIDE VIEW CAMERA CAR-STEEL FRAME AND AXLES' drawn at 1/2'=1'-0'. (BOURD Sketch # 209124). - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. Variable-Interval Sequenced-Action Camera (VINSAC). Dissemination Document No. 1.

    ERIC Educational Resources Information Center

    Ward, Ted

    The 16 millimeter (mm) Variable-Interval Sequenced-Action Camera (VINSAC) is designed for inexpensive photographic recording of effective teacher instruction and use of instructional materials for teacher education and research purposes. The camera photographs single frames at preselected time intervals (.5 second to 20 seconds) which are…

  9. Students' Framing of Laboratory Exercises Using Infrared Cameras

    ERIC Educational Resources Information Center

    Haglund, Jesper; Jeppsson, Fredrik; Hedberg, David; Schönborn, Konrad J.

    2015-01-01

    Thermal science is challenging for students due to its largely imperceptible nature. Handheld infrared cameras offer a pedagogical opportunity for students to see otherwise invisible thermal phenomena. In the present study, a class of upper secondary technology students (N = 30) partook in four IR-camera laboratory activities, designed around the…

  10. Performance characterization of UV science cameras developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-07-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-α and to detect the Hanle effect in the line core. Due to the nature of Lyman-α polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. The CLASP cameras were designed to operate with ≤ 10 e-/pixel/second dark current, ≤ 25 e- read noise, a gain of 2.0 ± 0.5, and ≤ 1.0% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.
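
    Gain and read noise of this kind are commonly extracted from a mean-variance (photon transfer) analysis: for a shot-noise-limited CCD, variance = mean/gain + read_noise², so a straight-line fit yields both numbers. A hedged sketch of that standard analysis (not necessarily the exact CLASP procedure):

    ```python
    import numpy as np

    def photon_transfer(means_dn, variances_dn2):
        """Fit variance vs. mean from flat-field pairs at several light levels."""
        slope, intercept = np.polyfit(means_dn, variances_dn2, 1)
        gain_e_per_dn = 1.0 / slope               # var = mean/gain + rn^2
        read_noise_dn = np.sqrt(max(intercept, 0.0))
        return gain_e_per_dn, read_noise_dn
    ```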

  11. Effects of frame rate and image resolution on pulse rate measured using multiple camera imaging photoplethysmography

    NASA Astrophysics Data System (ADS)

    Blackford, Ethan B.; Estepp, Justin R.

    2015-03-01

    Non-contact imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and a fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute, controlled head-motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492 pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering the requirements for systems measuring pulse rate over time windows of sufficient length.
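
    The robustness to lower frame rates is intuitive in the frequency domain: the cardiac peak lies well below the Nyquist limit even at 30 fps. An illustrative estimate of pulse rate from a mean skin-pixel trace (the trace here is simulated, not study data):

    ```python
    import numpy as np

    def pulse_rate_bpm(trace, fps):
        """Dominant frequency in a plausible 0.7-4 Hz (42-240 bpm) band."""
        spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
        freqs = np.fft.rfftfreq(len(trace), d=1.0 / fps)
        band = (freqs >= 0.7) & (freqs <= 4.0)
        return 60.0 * freqs[band][np.argmax(spectrum[band])]

    fps = 30.0
    t = np.arange(0, 60, 1.0 / fps)
    trace = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.random.randn(t.size)
    print(pulse_rate_bpm(trace, fps))   # ~72 bpm (1.2 Hz)
    ```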

  12. Optical fringe-reflection deflectometry with bundle adjustment

    NASA Astrophysics Data System (ADS)

    Xiao, Yong-Liang; Li, Sikun; Zhang, Qican; Zhong, Jianxin; Su, Xianyu; You, Zhisheng

    2018-06-01

    Liquid crystal display (LCD) screens are located outside of a camera's field of view in fringe-reflection deflectometry. Therefore, fringes that are displayed on LCD screens are observed through specular reflection by a fixed camera. Thus, the pose calibration between the camera and the LCD screen is one of the main challenges in fringe-reflection deflectometry. A markerless planar mirror is used to reflect the LCD screen more than three times, and the fringes are mapped into the fixed camera. The geometrical calibration can be accomplished by estimating the pose between the camera and the virtual image of the fringes. Given the relation between these poses, the incidence and reflection rays can be unified in the camera frame, and a forward triangulation intersection can be performed in the camera frame to measure the three-dimensional (3D) coordinates of the specular surface. In the final optimization, a constraint bundle adjustment is performed to simultaneously refine the camera intrinsic parameters, including distortion coefficients, the estimated geometrical pose between the LCD screen and the camera, and the 3D coordinates of the specular surface, with the help of the absolute-phase collinear constraint. Simulation and experiment results demonstrate that the pose calibration with planar mirror reflection is simple and feasible, and that the constraint bundle adjustment can enhance the 3D coordinate measurement accuracy in fringe-reflection deflectometry.
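
    The geometric core that lets incidence and reflection rays be expressed in one frame is the specular reflection law: a ray with direction d meeting a mirror of unit normal n leaves with direction r = d - 2(d·n)n. A small sketch of this relation (illustrative, independent of the paper's calibration specifics):

    ```python
    import numpy as np

    def reflect(d, n):
        """Reflect ray direction d about a mirror with unit normal n."""
        d = d / np.linalg.norm(d)
        n = n / np.linalg.norm(n)
        return d - 2.0 * (d @ n) * n
    ```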

  13. Synchronization of video recording and laser pulses including background light suppression

    NASA Technical Reports Server (NTRS)

    Kalshoven, Jr., James E. (Inventor); Tierney, Jr., Michael (Inventor); Dabney, Philip W. (Inventor)

    2004-01-01

    An apparatus for and a method of triggering a pulsed light source, in particular a laser light source, for predictable capture of the source by video equipment. A frame synchronization signal is derived from the video signal of a camera to trigger the laser and position the resulting laser light pulse in the appropriate field of the video frame and during the opening of the electronic shutter, if such a shutter is included in the camera. Positioning of the laser pulse in the proper video field allows, after recording, for the viewing of the laser light image with a video monitor using the pause mode on a standard cassette-type VCR. This invention also allows for fine positioning of the laser pulse to fall within the electronic shutter opening. For cameras with externally controllable electronic shutters, the invention provides for background light suppression by increasing the shutter speed during the frame in which the laser light image is captured. This results in the laser light appearing in one frame in which the background scene is suppressed while the laser light is unaffected; in all other frames, the shutter speed is slower, allowing for the normal recording of the background scene. This invention also allows for arbitrary (manual or external) triggering of the laser with full video synchronization and background light suppression.

  14. A New Hyperspectral Camera Designed for Small UAS Tested in Real World Applications

    NASA Astrophysics Data System (ADS)

    Marcucci, E.; Saiet, E., II; Hatfield, M. C.

    2014-12-01

    The ability to investigate landscape and vegetation from airborne instruments offers many advantages, including high-resolution data, the ability to deploy instruments over a specific area, and repeat measurements. The Alaska Center for Unmanned Aircraft Systems Integration (ACUASI) has recently integrated a hyperspectral imaging camera onto their Ptarmigan hexacopter. The Rikola Hyperspectral Camera manufactured by VTT and Rikola, Ltd. is capable of obtaining data within the 400-950 nm range with an accuracy of ~1 nm. Using the compact flash on the UAV limits the maximum number of channels to 24 this summer. The camera uses a single frame to sequentially record the spectral bands of interest in a 37° field of view. Because the camera collects data as single frames, it takes a finite amount of time to compile the complete spectrum. Although each frame takes only 5 nanoseconds, co-registration of frames is still required. The hovering ability of the hexacopter helps eliminate frame shift. GPS records data for incorporation into a larger dataset. Conservatively, the Ptarmigan can fly at an altitude of 400 feet, for 15 minutes, and 7000 feet away from the operator. The airborne hyperspectral instrument will be extremely useful to scientists as a platform that can provide data on request. Since the spectral range of the camera is ideal for the study of vegetation, this study 1) examines seasonal changes of vegetation in the Fairbanks area, 2) ground-truths satellite measurements, and 3) ties vegetation conditions around a weather tower to the tower readings. Through this proof of concept, ACUASI provides a means for scientists to request the most up-to-date and location-specific data for their field sites. Additionally, the resolution of the airborne instruments is much higher than that of satellite data, they may be readily tasked, and they have the advantage over manned flights in terms of manpower and cost.

  15. A simple demonstration when studying the equivalence principle

    NASA Astrophysics Data System (ADS)

    Mayer, Valery; Varaksina, Ekaterina

    2016-06-01

    The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
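
    A short numerical check of what the falling camera records: subtracting the camera's own free-fall displacement from the ball's ground-frame parabola cancels the gravitational term, leaving uniform rectilinear motion. The initial conditions below are arbitrary:

    ```python
    import numpy as np

    g, v0x, v0y = 9.81, 2.0, 3.0
    t = np.linspace(0.0, 1.0, 6)
    x = v0x * t                          # ground frame, horizontal
    y = v0y * t - 0.5 * g * t**2         # ground frame: parabola
    y_cam = -0.5 * g * t**2              # camera released at rest, falls freely
    print(np.round(y - y_cam, 3))        # equals v0y * t: a straight line
    ```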

  16. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  17. Image system for three dimensional, 360°, time sequence surface mapping of moving objects

    DOEpatents

    Lu, Shin-Yee

    1998-01-01

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another.

  18. Image system for three dimensional, 360°, time sequence surface mapping of moving objects

    DOEpatents

    Lu, S.Y.

    1998-12-22

    A three-dimensional motion camera system comprises a light projector placed between two synchronous video cameras, all focused on an object-of-interest. The light projector shines a sharp pattern of vertical lines (Ronchi ruling) on the object-of-interest that appear to be bent differently to each camera by virtue of the surface shape of the object-of-interest and the relative geometry of the cameras, light projector and object-of-interest. Each video frame is captured in a computer memory and analyzed. Since the relative geometry is known and the system pre-calibrated, the unknown three-dimensional shape of the object-of-interest can be solved for by matching the intersections of the projected light lines with orthogonal epipolar lines corresponding to horizontal rows in the video camera frames. A surface reconstruction is made and displayed on a monitor screen. For 360° all-around coverage of the object-of-interest, two additional sets of light projectors and corresponding cameras are distributed about 120° apart from one another. 20 figs.
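
    The patent text does not include an implementation; as a rough modern analogue, the final depth-recovery step can be sketched with standard two-view triangulation, assuming pre-calibrated 3x4 projection matrices and already-matched intersections of the projected lines with the epipolar rows (all names below are illustrative):

        import numpy as np
        import cv2

        def reconstruct_surface_points(P1, P2, pts1, pts2):
            """Triangulate matched image points from two calibrated cameras.

            P1, P2     : 3x4 projection matrices from pre-calibration.
            pts1, pts2 : 2xN arrays of corresponding points, e.g. intersections
                         of the projected Ronchi-ruling lines with horizontal
                         epipolar rows in each camera frame.
            Returns an Nx3 array of 3D surface points.
            """
            X_h = cv2.triangulatePoints(P1, P2, pts1.astype(np.float32),
                                        pts2.astype(np.float32))  # 4xN homogeneous
            return (X_h[:3] / X_h[3]).T                           # dehomogenize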

  19. 3D kinematic measurement of human movement using low cost fish-eye cameras

    NASA Astrophysics Data System (ADS)

    Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.

    2017-02-01

    3D motion capture is difficult when the capture is performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach using two ordinary cameras arranged in a special stereoscopic configuration and passive markers on a subject's body to reconstruct the motion of the subject. First, for each frame of the video, an adaptive thresholding algorithm is applied to extract the markers on the subject's body. Once the markers are extracted, an algorithm for matching corresponding markers in each frame is applied. Zhang's planar calibration method is used to calibrate the two cameras. As the cameras use fisheye lenses, they cannot be modeled well by a pinhole camera model, which makes it difficult to estimate depth information. In this work, to restore the 3D coordinates we use a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
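
    Zhang's planar calibration step is available off the shelf in OpenCV; a minimal single-camera sketch follows, assuming a 9x6 chessboard with 25 mm squares (board size and file pattern are illustrative). Note that the paper itself uses a dedicated fisheye calibration rather than the pinhole-plus-distortion model fitted here:

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)  # inner corners of the chessboard
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2) * 25.0  # mm

        obj_pts, img_pts = [], []
        for fname in glob.glob("calib_*.png"):
            gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                corners = cv2.cornerSubPix(
                    gray, corners, (11, 11), (-1, -1),
                    (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
                obj_pts.append(objp)
                img_pts.append(corners)

        # one homography per board view, then a closed-form intrinsic estimate
        # plus nonlinear refinement, as in Zhang's method
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_pts, img_pts, gray.shape[::-1], None, None)
        print("RMS reprojection error:", rms)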

  20. Ultrahigh- and high-speed photography, videography, and photonics '91; Proceedings of the Meeting, San Diego, CA, July 24-26, 1991

    NASA Astrophysics Data System (ADS)

    Jaanimagi, Paul A.

    1992-01-01

    This volume presents papers grouped under the topics of advances in streak and framing camera technology, applications of ultrahigh-speed photography, characterization of high-speed instrumentation, high-speed electronic imaging technology and applications, new technology for high-speed photography, high-speed imaging and photonics in detonics, and high-speed velocimetry. The papers presented include those on a subpicosecond X-ray streak camera, photocathodes for the ultrasoft X-ray region, streak tube dynamic range, high-speed TV cameras for streak tube readout, femtosecond light-in-flight holography, and electrooptical systems characterization techniques. Attention is also given to high-speed electronic memory video recording techniques, high-speed IR imaging of repetitive events using a standard RS-170 imager, use of a CCD array as a medium-speed streak camera, the photography of shock waves in explosive crystals, a single-frame camera based on the type LD-S-10 intensifier tube, and jitter diagnosis for pico- and femtosecond sources.

  1. 36. DETAILS AND SECTIONS OF SHIELDING TANK, FUEL ELEMENT SUPPORT ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    36. DETAILS AND SECTIONS OF SHIELDING TANK, FUEL ELEMENT SUPPORT FRAME AND SUPPORT PLATFORM, AND SAFETY MECHANISM ASSEMBLY (SPRING-LOADED HINGE). F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-S-1. INEL INDEX CODE NUMBER: 075 0701 60 851 151975. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  2. 30. ELEVATION OF ARVFS FIELD TEST FACILITY SHOWING VIEW OF ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    30. ELEVATION OF ARVFS FIELD TEST FACILITY SHOWING VIEW OF SOUTH SIDE OF FACILITY, INCLUDING BUNKER, CABLE CHASE, SHIELDING TANK, AND FRAME ASSEMBLY. F.C. TORKELSON DRAWING NUMBER 842-ARVFS-701-2. INEL INDEX CODE NUMBER: 075 0701 851 151971. - Idaho National Engineering Laboratory, Advanced Reentry Vehicle Fusing System, Scoville, Butte County, ID

  3. The appearance of Carbonaceous Chondrites on (1) Ceres from observations by the Dawn Framing Camera

    NASA Astrophysics Data System (ADS)

    Schäfer, Tanja; Schäfer, Michael; Mengel, Kurt; Cloutis, Edward A.; Izawa, Matthew R. M.; Thangjam, Guneshwar; Hoffmann, Martin; Platz, Thomas; Nathues, Andreas; Kallisch, Jan; Ripken, Joachim; Russell, Christopher T.

    2016-04-01

    NASA's Dawn spacecraft reached the dwarf planet Ceres in March 2015 and started data acquisition using three different instruments: the Framing Camera (FC; [1]), the Visible & Infrared Spectrometer (VIR; [2]), and the Gamma Ray and Neutron Detector (GRaND; [3]). In our work we focus on the potential appearance of carbonaceous chondritic (CC) material on the cerean surface using Dawn FC color mosaics covering the VIS/NIR wavelength region. In preparation for Dawn's arrival at Ceres, a discrimination scheme for CC groups using FC color ratios was developed by [4], based on 121 CC laboratory spectra compiled from RELAB. As the cerean surface material mainly differs by its spectral slope over the whole FC wavelength range (0.44-0.97 μm), we classified the color mosaics by this parameter. We applied the CC discrimination scheme only to those regions of the cerean surface (more than 90%) which exhibit spectral slopes ≥ -1% reflectance per μm, excluding the strongly negatively sloped regions of large young craters such as Occator, Haulani, and Oxo. These are not likely to be similar to pure CC material, as can be seen from their brightness and their bluish spectral slope [5]. We found that the surface material of Ceres is, among the suite of CCs, most similar to Ivuna samples artificially heated to 200 and 300°C [6] and to unusual CCs that naturally experienced heating. The latter comprise Dhofar 225, Y-86789, and Y-82162, which have been determined to have undergone aqueous alteration and subsequent thermal metamorphism (e.g. [7,8]). Our comparison with VIR data shows that the spectra of Ivuna heated to 200°C and 300°C match the OH absorption at 2.7 μm well but do not show the smaller 3.05-3.1 μm absorption observed on Ceres [9,10,11]. Nevertheless, the remarkably flat UV drop-off detected on the cerean surface may, at least spectrally, correspond to highly aqueously altered and subsequently thermally metamorphosed CC material. Further alteration of this material on a parent body like Ceres may produce spectral changes affecting the 3 μm region, while showing no additional modification in the VIS/NIR region. Scenarios of thermal and geophysical evolution models allow Ceres' differentiation into a core of dehydrated silicates and a shell of hydrated silicates overlain by an icy shell [12,13]. The widespread occurrence of material on the cerean surface spectrally similar to thermally altered CC material suggests that we possibly see the mineralogy of the hydrated-dehydrated boundary of Ceres, exposed by impact gardening and simultaneous loss of the icy shell. Recent models of a convecting mud ocean on Ceres, introduced by [14] and refined by [15], also allow a lag deposit of aqueously altered fine material on the surface, spectrally corresponding to mildly heated Ivuna samples. References: [1] Sierks, H. et al. 2011. Space Sci. Rev., 163, 1-4, 263-327. [2] De Sanctis, C.M. et al. 2011. Space Sci. Rev., 163, 1-4, 329-369. [3] Prettyman, T.H. et al. 2011. Space Sci. Rev., 163, 1-4, 371-459. [4] Schäfer, T. et al., 2015. Icarus 265, 149-160. [5] Nathues, A. et al., 2015. Nature 528 (7581), 237-240. [6] Hiroi, T. et al., 1996. Lunar Planet. Sci. 27, 551. [7] Brearley, A.J., Jones, R.H., 1998. Chondritic meteorites. In: Planetary Materials, Papike, J.J. (Ed.). Rev. in Mineralogy and Geochem. 36 (1), ch. 3, 1-398. [8] Ivanova, M.A. et al., 2010. Meteoritics & Planet. Sci. 45 (7), 1108-1123. [9] King, T.V.V., et al., 1992. Science 255, 1551-1553. [10] De Sanctis, M.C. et al., 2015. Nature 528 (7581), 241-244. [11] Milliken, R.E., Rivkin, A.S., 2009. Nature Geosci. 2 (4), 258-261. [12] Castillo-Rogez, J.C., McCord, T.B., 2010. Icarus 205 (2), 443-459. [13] Neveu, M., Desch, S.J., Castillo-Rogez, J.C., 2015. J. Geophys. Res. Planets 120 (2), 123-154. [14] Travis, B.J. et al., 2015. Lunar Planet. Sci., #2360. [15] Neveu, M., Desch, S.J., 2015. Geophys. Res. Lett. 42 (23), 10197-10206.

  4. System selects framing rate for spectrograph camera

    NASA Technical Reports Server (NTRS)

    1965-01-01

    A circuit in a spectrograph monitor reflects zero-order light from the incoming radiation to a photomultiplier, providing an error signal which controls the advancing and driving rate of the film through the camera.

  5. Comet Wild 2 Up Close and Personal

    NASA Technical Reports Server (NTRS)

    2004-01-01

    On January 2, 2004, NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced 'Vilt-2'). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but 'stretched' so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter.

  6. Data rate enhancement of optical camera communications by compensating inter-frame gaps

    NASA Astrophysics Data System (ADS)

    Nguyen, Duy Thong; Park, Youngil

    2017-07-01

    Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of OCC systems, it is still much lower than that of photodiode-based LiFi systems. One major reason for this low data rate is the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently by means of an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance is measured.
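
    The interleaving idea can be sketched in a few lines. The sketch below is not the paper's scheme (which is not reproduced here); it assumes systematic Hamming(7,4) codewords written row-wise into a block interleaver and transmitted column-wise, so that the run of bits lost to one inter-frame gap hits each codeword at most once and remains correctable:

        import numpy as np

        G = np.array([[1,0,0,0,1,1,0],   # systematic Hamming(7,4) generator
                      [0,1,0,0,1,0,1],
                      [0,0,1,0,0,1,1],
                      [0,0,0,1,1,1,1]])
        H = np.array([[1,1,0,1,1,0,0],   # parity-check matrix
                      [1,0,1,1,0,1,0],
                      [0,1,1,1,0,0,1]])

        def encode(nibbles):             # nibbles: Nx4 bit array
            return nibbles.dot(G) % 2    # -> Nx7 codewords

        def decode(codewords):
            syn = codewords.dot(H.T) % 2             # Nx3 syndromes
            for i, s in enumerate(syn):
                if s.any():                          # flip the bit whose H column matches s
                    col = np.where((H.T == s).all(axis=1))[0][0]
                    codewords[i, col] ^= 1
            return codewords[:, :4]                  # systematic data bits

        depth = 16                                   # codewords per interleaver block
        data = np.random.randint(0, 2, (depth, 4))
        tx = encode(data).T.flatten()                # interleave: transmit column-wise

        rx = tx.copy()
        rx[40:40 + depth] ^= 1                       # one IFG corrupts `depth` consecutive bits

        deint = rx.reshape(7, depth).T               # de-interleave back to codewords
        assert (decode(deint) == data).all()         # each codeword sees at most one error

    Deeper interleaving tolerates proportionally longer gaps, at the cost of added latency.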

  7. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  8. Development of biostereometric experiments. [stereometric camera system

    NASA Technical Reports Server (NTRS)

    Herron, R. E.

    1978-01-01

    The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.

  9. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System

    PubMed Central

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control camera modes such as gain and pixel binning, as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873) PMID:18836570

  10. LabVIEW Graphical User Interface for a New High Sensitivity, High Resolution Micro-Angio-Fluoroscopic and ROI-CBCT System.

    PubMed

    Keleshis, C; Ionita, CN; Yadava, G; Patel, V; Bednarek, DR; Hoffmann, KR; Verevkin, A; Rudin, S

    2008-01-01

    A graphical user interface based on LabVIEW software was developed to enable clinical evaluation of a new High-Sensitivity Micro-Angio-Fluoroscopic (HSMAF) system for real-time acquisition, display and rapid frame transfer of high-resolution region-of-interest images. The HSMAF detector consists of a CsI(Tl) phosphor, a light image intensifier (LII), and a fiber-optic taper coupled to a progressive scan, frame-transfer, charge-coupled device (CCD) camera which provides real-time 12 bit, 1k × 1k images capable of greater than 10 lp/mm resolution. Images can be captured in continuous or triggered mode, and the camera can be programmed by a computer using Camera Link serial communication. A graphical user interface was developed to control camera modes such as gain and pixel binning, as well as to acquire, store, display, and process the images. The program, written in LabVIEW, has the following capabilities: camera initialization, synchronized image acquisition with the x-ray pulses, roadmap and digital subtraction angiography (DSA) acquisition, flat field correction, brightness and contrast control, last frame hold in fluoroscopy, looped playback of the acquired images in angiography, recursive temporal filtering and LII gain control. Frame rates can be up to 30 fps in full-resolution mode. The user-friendly implementation of the interface, along with the high frame-rate acquisition and display for this unique high-resolution detector, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents and hence enable more accurate diagnoses and image-guided interventions. (Support: NIH Grants R01NS43924, R01EB002873).

  11. Proposed patient motion monitoring system using feature point tracking with a web camera.

    PubMed

    Miura, Hideharu; Ozawa, Shuichi; Matsuura, Takaaki; Yamada, Kiyoshi; Nagata, Yasushi

    2017-12-01

    Patient motion monitoring systems play an important role in providing accurate treatment dose delivery. We propose a system that utilizes a web camera (frame rate up to 30 fps, maximum resolution of 640 × 480 pixels) and in-house image processing software (developed using Microsoft Visual C++ and OpenCV). This system is simple to use and convenient to set up. The pyramidal Lucas-Kanade method was applied to calculate the motion of each feature point by analysing two consecutive frames. The image processing software employs a color scheme in which the defined feature points are blue under stable (no movement) conditions and turn red, along with a warning message and an audio signal (beeping alarm), for large patient movements. The initial position of each marker is used by the program to determine the marker positions in all subsequent frames. The software generates a text file that contains the calculated motion for each frame and saves the video as a compressed audio video interleave (AVI) file. The proposed patient motion monitoring system, which uses a web camera, is simple and convenient to set up and increases the safety of treatment delivery.
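
    The tracking core described above maps directly onto OpenCV's pyramidal Lucas-Kanade implementation; the sketch below is a minimal illustration with assumed feature counts, window sizes, and motion threshold, not the authors' software:

        import cv2
        import numpy as np

        cap = cv2.VideoCapture(0)                  # web camera
        ok, prev = cap.read()
        prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
        p0 = cv2.goodFeaturesToTrack(prev_gray, maxCorners=50,
                                     qualityLevel=0.3, minDistance=7)
        init = p0.copy()                           # reference (initial) positions
        THRESH = 5.0                               # allowed motion, pixels

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
            p1, st, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, p0, None,
                                                   winSize=(15, 15), maxLevel=2)
            moved = np.linalg.norm(p1 - init, axis=2).max() > THRESH
            for (x, y) in p1.reshape(-1, 2):
                color = (0, 0, 255) if moved else (255, 0, 0)  # red = warning, blue = stable
                cv2.circle(frame, (int(x), int(y)), 4, color, -1)
            if moved:
                cv2.putText(frame, "MOTION WARNING", (10, 30),
                            cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 0, 255), 2)
            cv2.imshow("monitor", frame)
            if cv2.waitKey(1) == 27:               # Esc to quit
                break
            prev_gray, p0 = gray, p1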

  12. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP)

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1% in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1% polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30%) quantum efficiency at the Lyman-α line. The CLASP cameras were designed to operate with ≤10 e-/pixel/second dark current, ≤25 e- read noise, a gain of 2.0 and ≤0.1% residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: dark current, read noise, camera gain and residual non-linearity.

  13. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, D.; Beabout, B.; Stewart, M.

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with 10 e-/pixel/second dark current, 25 e- read noise, a gain of 2.0 +/- 0.5 and 1.0 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera; dark current, read noise, camera gain and residual non-linearity.

  14. Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as digital binning (similar in concept to on-sensor binning, but done in post-processing), row-wise digital binning of the signal in adjacent pixels and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 μs, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio primarily because it had the largest pixels.
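
    The precision metric and the row-wise binning lend themselves to a compact sketch. Everything below is illustrative rather than taken from the paper: it assumes each shot yields two exposures of the tagged line, with the flow displacing the line along image columns, a known pixel scale, and the 65 μs inter-frame delay quoted above:

        import numpy as np

        def velocity_from_shot(img1, img2, dt, scale, bin_rows=8):
            """Estimate flow velocity from one FLEET shot (two exposures).

            img1, img2 : 2D arrays, first and delayed exposure of the tagged line
            dt         : inter-frame delay [s]; scale : meters per pixel
            Rows are digitally binned in groups of `bin_rows` (about the thickness
            of the tagged region) to raise signal-to-noise before locating the line.
            """
            def centroid_cols(img):
                h = (img.shape[0] // bin_rows) * bin_rows
                binned = img[:h].reshape(-1, bin_rows, img.shape[1]).sum(axis=1)
                cols = np.arange(img.shape[1])
                return (binned * cols).sum(axis=1) / binned.sum(axis=1)

            shift = centroid_cols(img2) - centroid_cols(img1)  # pixels, per binned row
            return shift.mean() * scale / dt                   # m/s

        # precision = standard deviation over several hundred single-shot estimates:
        # v = np.array([velocity_from_shot(a, b, 65e-6, 20e-6) for a, b in shots])
        # print("precision:", v.std(), "m/s")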

  15. Dense Region of Impact Craters

    NASA Image and Video Library

    2011-09-23

    NASA's Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14, 2011. This image was taken through the camera's clear filter. The image has a resolution of about 260 meters per pixel.

  16. Inexpensive Neutron Imaging Cameras Using CCDs for Astronomy

    NASA Astrophysics Data System (ADS)

    Hewat, A. W.

    We have developed inexpensive neutron imaging cameras using CCDs originally designed for amateur astronomical observation. The low-light, high-resolution requirements of such CCDs are similar to those for neutron imaging, except that noise as well as cost is reduced by using slower read-out electronics. For example, we use the same 2048x2048 pixel "Kodak" KAI-4022 CCD as used in the high performance PCO-2000 CCD camera, but our electronics requires ∼5 sec for full-frame read-out, ten times slower than the PCO-2000. Since neutron exposures also require several seconds, this is not seen as a serious disadvantage for many applications. If higher frame rates are needed, the CCD unit on our camera can be easily swapped for a faster readout detector with similar chip size and resolution, such as the PCO-2000 or the sCMOS PCO.edge 4.2.

  17. A passive terahertz video camera based on lumped element kinetic inductance detectors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rowe, Sam, E-mail: sam.rowe@astro.cf.ac.uk; Pascale, Enzo; Doyle, Simon

    We have developed a passive 350 GHz (850 μm) video-camera to demonstrate lumped element kinetic inductance detectors (LEKIDs)—designed originally for far-infrared astronomy—as an option for general purpose terrestrial terahertz imaging applications. The camera currently operates at a quasi-video frame rate of 2 Hz with a noise equivalent temperature difference per frame of ∼0.1 K, which is close to the background limit. The 152 element superconducting LEKID array is fabricated from a simple 40 nm aluminum film on a silicon dielectric substrate and is read out through a single microwave feedline with a cryogenic low noise amplifier and room temperature frequency domain multiplexing electronics.

  18. False-Color Image of an Impact Crater on Vesta

    NASA Image and Video Library

    2011-08-24

    NASA's Dawn spacecraft obtained this false-color image (right) of an impact crater in asteroid Vesta's equatorial region with its framing camera on July 25, 2011. The view on the left is from the camera's clear filter.

  19. Rapid orthophoto development system.

    DOT National Transportation Integrated Search

    2013-06-01

    The DMC system procured in the project represented state-of-the-art, large-format digital aerial camera systems at the start of the project. DMC is based on the frame camera model, and to achieve large ground coverage with high spatial resolution, the ...

  20. Compressive Video Recovery Using Block Match Multi-Frame Motion Estimation Based on Single Pixel Cameras

    PubMed Central

    Bi, Sheng; Zeng, Xiao; Tang, Xin; Qin, Shujia; Lai, King Wai Chiu

    2016-01-01

    Compressive sensing (CS) theory has opened up new paths for the development of signal processing applications. Based on this theory, a novel single pixel camera architecture has been introduced to overcome the current limitations and challenges of traditional focal plane arrays. However, video quality based on this method is limited by existing acquisition and recovery methods, and the method also suffers from being time-consuming. In this paper, a multi-frame motion estimation algorithm is proposed for CS video to enhance video quality. The proposed algorithm uses multiple frames to implement motion estimation. Experimental results show that using multi-frame motion estimation can improve the quality of recovered videos. To further reduce the motion estimation time, a block match algorithm is used to perform motion estimation. Experiments demonstrate that using the block match algorithm can reduce motion estimation time by 30%. PMID:26950127
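
    Block matching itself is compact enough to sketch: for each block of the current frame, a full search over a small window in the reference frame picks the offset minimizing the sum of absolute differences (SAD). Block and search sizes below are illustrative:

        import numpy as np

        def block_match(ref, cur, block=8, search=7):
            """Full-search block matching: one motion vector per `block`-sized
            tile of `cur`, found by minimizing the SAD against `ref`."""
            H, W = cur.shape
            mvs = np.zeros((H // block, W // block, 2), dtype=int)
            for by in range(H // block):
                for bx in range(W // block):
                    y, x = by * block, bx * block
                    tgt = cur[y:y+block, x:x+block].astype(np.int32)
                    best, best_mv = None, (0, 0)
                    for dy in range(-search, search + 1):
                        for dx in range(-search, search + 1):
                            yy, xx = y + dy, x + dx
                            if 0 <= yy <= H - block and 0 <= xx <= W - block:
                                cand = ref[yy:yy+block, xx:xx+block].astype(np.int32)
                                sad = np.abs(tgt - cand).sum()
                                if best is None or sad < best:
                                    best, best_mv = sad, (dy, dx)
                    mvs[by, bx] = best_mv
            return mvs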

  1. Multithreaded hybrid feature tracking for markerless augmented reality.

    PubMed

    Lee, Taehee; Höllerer, Tobias

    2009-01-01

    We describe a novel markerless camera tracking approach and user interaction methodology for augmented reality (AR) on unprepared tabletop environments. We propose a real-time system architecture that combines two types of feature tracking. Distinctive image features of the scene are detected and tracked frame-to-frame by computing optical flow. In order to achieve real-time performance, multiple operations are processed in a synchronized multi-threaded manner: capturing a video frame, tracking features using optical flow, detecting distinctive invariant features, and rendering an output frame. We also introduce user interaction methodology for establishing a global coordinate system and for placing virtual objects in the AR environment by tracking a user's outstretched hand and estimating a camera pose relative to it. We evaluate the speed and accuracy of our hybrid feature tracking approach, and demonstrate a proof-of-concept application for enabling AR in unprepared tabletop environments, using bare hands for interaction.
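
    The synchronized multi-threaded organization translates naturally into a queue-connected pipeline; the sketch below uses placeholder stage functions (grab_frame, estimate_pose) rather than the paper's actual tracking code:

        import threading
        import queue
        import time

        frames = queue.Queue(maxsize=2)     # capture -> tracking
        tracked = queue.Queue(maxsize=2)    # tracking -> rendering

        def grab_frame():                   # placeholder for camera capture
            time.sleep(1 / 30)
            return "frame"

        def estimate_pose(frame):           # placeholder for optical-flow tracking
            return "pose"

        def capture():
            while True:
                frames.put(grab_frame())    # stage 1: grab video frames

        def track():
            while True:
                frame = frames.get()        # stage 2: frame-to-frame tracking
                tracked.put((frame, estimate_pose(frame)))

        def render():
            while True:
                frame, pose = tracked.get() # stage 3: draw augmentations at `pose`
                print("render", frame, pose)

        # a fourth, slower thread would detect distinctive invariant features and
        # periodically re-anchor the tracker, as in the paper
        for fn in (capture, track, render):
            threading.Thread(target=fn, daemon=True).start()
        time.sleep(1)                       # let the pipeline run briefly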

  2. Comet Wild 2 Up Close and Personal

    NASA Image and Video Library

    2004-01-02

    On January 2, 2004, NASA's Stardust spacecraft made a close flyby of comet Wild 2 (pronounced "Vilt-2"). Among the equipment the spacecraft carried on board was a navigation camera. This is the 34th of the 72 images taken by Stardust's navigation camera during the close encounter. The exposure time was 10 milliseconds. The two frames are actually from a single exposure. The frame on the left depicts the comet as the human eye would see it. The frame on the right depicts the same image but "stretched" so that the faint jets emanating from Wild 2 can be plainly seen. Comet Wild 2 is about five kilometers (3.1 miles) in diameter. http://photojournal.jpl.nasa.gov/catalog/PIA05571

  3. Ultra-fast high-resolution hybrid and monolithic CMOS imagers in multi-frame radiography

    NASA Astrophysics Data System (ADS)

    Kwiatkowski, Kris; Douence, Vincent; Bai, Yibin; Nedrow, Paul; Mariam, Fesseha; Merrill, Frank; Morris, Christopher L.; Saunders, Andy

    2014-09-01

    A new burst-mode, 10-frame, hybrid Si-sensor/CMOS-ROIC FPA chip has recently been fabricated at Teledyne Imaging Sensors. The intended primary use of the sensor is in multi-frame 800 MeV proton radiography at LANL. The basic part of the hybrid is a large (48×49 mm2) stitched CMOS chip of 1100×1100 pixel count, with a minimum shutter speed of 50 ns. The performance parameters of this chip are compared to the first-generation 3-frame 0.5-Mpixel custom hybrid imager. The 3-frame cameras have been in continuous use for many years in a variety of static and dynamic experiments at LANSCE. The cameras can operate with a per-frame adjustable integration time of ~120 ns to 1 s and an inter-frame time of 250 ns to 2 s. Given the 80 ms total readout time, the original and the new imagers can be externally synchronized to 0.1-to-5 Hz, 50-ns wide proton beam pulses, and record radiographic movies of up to ~1000 frames, typically of 3-to-30 minute duration. The performance of the global electronic shutter is discussed and compared to that of a high-resolution commercial front-illuminated monolithic CMOS imager.

  4. The Multi-Spectral Imaging Diagnostic on Alcator C-MOD and TCV

    NASA Astrophysics Data System (ADS)

    Linehan, B. L.; Mumgaard, R. T.; Duval, B. P.; Theiler, C. G.; TCV Team

    2017-10-01

    The Multi-Spectral Imaging (MSI) diagnostic is a new instrument that captures simultaneous spectrally filtered images from a common sight view while maintaining a large étendue and high spatial resolution. The system uses a polychromator layout where each image is sequentially filtered. This procedure yields a high transmission for each spectral channel with minimal vignetting and aberrations. A four-wavelength system was installed on Alcator C-Mod and then moved to TCV. The system uses industrial cameras to simultaneously image the divertor region at 95 frames per second at f/2.8 via a coherent fiber bundle (C-Mod) or a lens-based relay optic (TCV). The images are absolutely calibrated and spatially registered, enabling accurate measurement of atomic line ratios and absolute line intensities. The images will be used to study divertor detachment by imaging impurities and Balmer series emissions. Furthermore, the large field of view and the ability to support many types of detectors open the door to other novel approaches to optically measuring plasma with high temporal, spatial, and spectral resolution. Such measurements will allow for the study of Stark broadening and divertor turbulence. Here, we present the first measurements taken with this cavity imaging system. USDoE awards DE-FC02-99ER54512 and award DE-AC05-06OR23100, ORISE, administered by ORAU.

  5. Polygonal Craters on Dwarf-Planet Ceres

    NASA Astrophysics Data System (ADS)

    Otto, K. A.; Jaumann, R.; Krohn, K.; Buczkowski, D. L.; von der Gathen, I.; Kersten, E.; Mest, S. C.; Preusker, F.; Roatsch, T.; Schenk, P. M.; Schröder, S.; Schulzeck, F.; Scully, J. E. C.; Stephan, K.; Wagner, R.; Williams, D. A.; Raymond, C. A.; Russell, C. T.

    2015-10-01

    With a diameter of approximately 950 km and a mass of about one-third of the total mass of the asteroid belt, (1) Ceres is the largest and most massive object in the Main Asteroid Belt. As an intact proto-planet, Ceres is key to understanding the origin and evolution of the terrestrial planets [1]. In particular, the role of water during planet formation is of interest, because the differentiated dwarf planet is thought to possess a water-rich mantle overlying a rocky core [2]. The Dawn spacecraft arrived at Ceres in March this year after completing its mission at (4) Vesta. At Ceres, the on-board Framing Camera (FC) collected image data which revealed a large variety of impact crater morphologies, including polygonal craters (Figure 1). Polygonal craters show straight rim sections aligned to form an angular shape. They are commonly associated with fractures in the target material. Simple polygonal craters develop during the excavation stage, when the excavation flow propagates faster along preexisting fractures [3, 5]. Complex polygonal craters adopt their shape during the modification stage, when slumping along fractures is favoured [3]. Polygonal craters are known from a variety of planetary bodies including Earth [e.g. 4], the Moon [e.g. 5], Mars [e.g. 6], Mercury [e.g. 7], Venus [e.g. 8], and outer Solar System icy satellites [e.g. 9].

  6. Investigating the Origin of Bright Materials on Vesta: Synthesis, Conclusions, and Implications

    NASA Technical Reports Server (NTRS)

    Li, Jian-Yang; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Schroder, S. E.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn spacecraft started orbiting the second largest asteroid (4) Vesta in August 2011, revealing the details of its surface at an unprecedented pixel scale, as small as approx. 70 m in Framing Camera (FC) clear and color filter images and approx. 180 m in the Visible and Infrared Spectrometer (VIR) data, in its first two science orbits, the Survey Orbit and the High Altitude Mapping Orbit (HAMO) [1]. The surface of Vesta displays the greatest diversity in terms of geology and mineralogy of all asteroids studied in detail [2, 3]. While Vesta's albedo of approx. 0.38 in the visible wavelengths [4, 5] is one of the highest among all asteroids, the surface of Vesta shows the largest variation of albedos found on a single asteroid, with geometric albedos ranging at least from approx. 0.10 to approx. 0.67 in HAMO images [5]. There are many distinctively bright and dark areas observed on Vesta, associated with various geological features and showing remarkably different forms. Here we report our initial attempt to understand the origin of the areas that are distinctively brighter than their surroundings. The dark materials on Vesta clearly are different in origin from bright materials and are reported in a companion paper [6].

  7. A state observer for using a slow camera as a sensor for fast control applications

    NASA Astrophysics Data System (ADS)

    Gahleitner, Reinhard; Schagerl, Martin

    2013-03-01

    This contribution concerns a problem that often arises in vision-based control, when a camera is used as a sensor for fast control applications, or more precisely, when the sample rate of the control loop is higher than the frame rate of the camera. In control applications for mechanical axes, e.g. in robotics or automated production, a camera and some image processing can be used as a sensor to detect positions or angles. The sample time in these applications is typically in the range of a few milliseconds or less, and this demands the use of a camera with a high frame rate, up to 1000 fps. The presented solution is a special state observer that can work with a slower and therefore cheaper camera to estimate the state variables at the higher sample rate of the control loop. To simplify the image processing for the determination of positions or angles and make it more robust, LED markers are applied to the plant. Simulation and experimental results show that the concept can be used even if the plant is unstable, like the inverted pendulum.
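
    A minimal sketch of such an observer for one axis, modeled as a double integrator: the model is propagated at the fast control rate (1 kHz here) and corrected only when a camera measurement arrives (50 Hz here). All gains and rates are illustrative, not values from the paper:

        import numpy as np

        dt = 1e-3                     # control sample time: 1 kHz
        cam_div = 20                  # camera frame every 20 steps -> 50 Hz
        A = np.array([[1.0, dt],      # double integrator: [position, velocity]
                      [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        L = np.array([0.4, 8.0])      # observer correction gains (illustrative)

        x_hat = np.zeros(2)           # estimated [position, velocity]

        def observer_step(u, cam_measurement=None):
            """One 1 kHz step: predict with the model; correct on camera frames."""
            global x_hat
            x_hat = A @ x_hat + B * u                  # prediction at the fast rate
            if cam_measurement is not None:            # slow measurement available
                y_hat = x_hat[0]                       # camera sees position only
                x_hat = x_hat + L * (cam_measurement - y_hat)
            return x_hat

        # usage: feed the image-processing result in only on camera steps
        # x = observer_step(u, cam_measurement=pos_from_image if k % cam_div == 0 else None)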

  8. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  9. Evaluation of sequential images for photogrammetrically point determination

    NASA Astrophysics Data System (ADS)

    Kowalczyk, M.

    2011-12-01

    Close-range photogrammetry encounters many problems in reconstructing the three-dimensional shape of objects. The relative orientation parameters of the photographs usually play a key role in solving this problem. Automation of the process is difficult because of the complexity of the recorded scene and the configuration of camera positions, which usually makes it impossible to join the photos into one set automatically. The use of a camcorder is a solution widely proposed in the literature to support the creation of 3D models. The main advantages of this tool are the large number of recorded images and camera positions; the exterior orientation changes only slightly between two neighboring frames. These features of a film sequence make it possible to create models with basic algorithms that work faster and more robustly than with separately taken photos. The first part of this paper presents the results of experiments determining the interior orientation parameters of several sets of frames showing a three-dimensional test field. This section describes the calibration repeatability of film frames taken from a camcorder, which matters for the stability of the camera's interior geometric parameters. A parametric model of systematic errors was applied to correct the images. Afterwards, a short film of the same test field was taken to determine a group of check points, as a control of the camera's applicability to measurement tasks. Finally, some results of experiments comparing the determination of recorded object points in 3D space are presented. In common digital photogrammetry, where separate photos are used, the first levels of the image pyramids are used for feature-based matching. This complicated process creates many contingencies that can produce false detections of image similarities. In the case of a digital film camera, authors avoid this step, going directly to area-based matching and relying on the high degree of similarity between two corresponding film frames. A first approximation for establishing connections between photos comes from a whole-image distance measure. This image-distance method can work with more than just the two dimensions of a translation vector: scale and angles are also used to improve image matching. This operation creates more similar-looking frames in which corresponding characteristic points lie close to each other, so the procedure searching for pairs of points works faster and more accurately, because the analyzed areas can be reduced. Another proposed solution, based on an image created by adding the differences between particular frames, gives rougher results but works much faster than standard matching.

  10. Exploring Formation Models for Ceres Tholi and Montes

    NASA Astrophysics Data System (ADS)

    Ruesch, O.; Platz, T.; McFadden, L. A.; Hiesinger, H.; Schenk, P.; Sykes, M. V.; Schmidt, B. E.; Buczkowski, D.; Thangjam, G.; Raymond, C. A.; Russell, C. T.

    2015-12-01

    Dawn Framing Camera (FC) images of Ceres' surface revealed tholi and montes, i.e., positive relief features with sub-circular to irregular basal shapes and varying height-to-diameter ratios and flank slopes. These domes and mounts are tentatively interpreted as volcanic constructs [1]. Alternative formation mechanisms, e.g., uplift by diapirism or shallow intrusions [e.g., 2], could also have led to the observed features, with different geological implications. Local digital elevation models derived from FC images reveal that the largest dome on Ceres (near Rongo crater) has a ~100 km wide base, concave-downward margins with slopes of 10°-20°, a relatively flat top reaching altitudes of ~5 km relative to the surroundings, and a summit pit chain of putative endogenic origin. A relevant mons on Ceres is a cone-shaped relief (10°S/316°E) with a ~30x20 km base, reaching a height of ~5 km relative to the surroundings; its flank slopes approach a concave-upward shape. These constructs are located in a complex geological area with resurfaced units showing onlap contacts. Because of the varying morphometries of the reliefs, we explore several physical models of volcanic constructs, e.g., steep-sided domes and shield volcanoes. The physical models are based on radially spreading viscous gravity currents with a free upper surface [e.g., 3, 4]. Testing of formation scenarios will exploit recently developed methods, such as time-variable viscosity and fixed-volume models [5], and constant flow rate models [6]. We aim to provide constraints on viable emplacement mechanisms for the different reliefs. [1] Platz et al. (2015), EPSC abstract 915, vol. 10; [2] Fagents, S.A. (2003), JGR, vol. 108, E12, 5139; [3] Huppert, H. (1982), J. Fluid Mech., vol. 121, pp. 43-58; [4] Lacey et al. (1981), EPSL, vol. 54, pp. 139-143; [5] Glaze et al. (2012), LPSC abstract 1074; [6] Glaze et al. (2015), LPSC abstract 1326.

  11. Geologic Mapping of the Av-11 Pinaria Quadrangle of Asteroid 4 Vesta

    NASA Astrophysics Data System (ADS)

    Schenk, P.; Hoogenboom, T.; Williams, D.; Yingst, R. A.; Jaumann, R.; Gaskell, R.; Preusker, F.; Nathues, A.; Roatsch, T.

    2012-04-01

    As part of Dawn's orbital mapping investigation of Vesta, the Science Team is conducting geologic mapping of the surface in the form of 15 quadrangle maps, including quadrangle Av-11 (Pinaria). The base map is a monochrome Framing Camera (FC) mosaic at ~70 m/pixel, supplemented by Digital Terrain Models (DTM) and FC color ratio images, both at ~250 m/pixel, slope and contour maps, and Visible and Infrared (VIR) hyperspectral images. Av-11 straddles the 45-degree longitude in the South Polar Region and is dominated by the rim of the ~505 km south polar topographic feature, Rheasilvia. Relatively sparsely cratered, Av-11 is dominated by a 20 km high rim scarp (Matronalia Rupes) and by arcuate ridges and troughs forming a radial to spiral pattern across the basin floor. Primary geologic features of Av-11 include the following. Ridge-and-groove terrain radiates arcuately from the central mound unit and is interpreted as structural disruption of the basin floor associated with basin formation. The largest crater in Av-11 is Pinaria (37 km); mass wasting deposits are observed on its floor, and secondary crater chains and fields are also evident. Mass wasting observed along the Rheasilvia rim scarp and in the largest craters indicates that scarp failure is a significant process. Parallel fault scarps mark a deposit of slumped debris at the base of the 20 km high Matronalia Rupes, which may have formed during or shortly after basin excavation. We interpret most of these deposits as slump material emplaced as a result of basin formation and collapse. Lobate materials are characterized by lineations and lobate scarps and are interpreted as the Rheasilvia ejecta deposit outside the Rheasilvia rim (the smoothest terrain on Vesta), consistent with formation by ejecta. Partial burial of older craters near the edge of these deposits is also observed.

  12. Infrared Imaging Camera Final Report CRADA No. TC02061.0

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roos, E. V.; Nebeker, S.

    This was a collaborative effort between the University of California, Lawrence Livermore National Laboratory (LLNL) and Cordin Company (Cordin) to enhance the U.S. ability to develop a commercial infrared camera capable of capturing high-resolution images in a 100 nanosecond (ns) time frame. The Department of Energy (DOE), under an Initiative for Proliferation Prevention (IPP) project, funded the Russian Federation Nuclear Center All-Russian Scientific Institute of Experimental Physics (RFNC-VNIIEF) in Sarov. VNIIEF was funded to develop a prototype commercial infrared (IR) framing camera and to deliver a prototype IR camera to LLNL. LLNL and Cordin were partners with VNIIEF on this project. A prototype IR camera was delivered by VNIIEF to LLNL in December 2006. In June of 2007, LLNL and Cordin evaluated the camera, and the test results revealed that the camera exceeded presently available commercial IR cameras. Cordin believes that the camera can be sold on the international market. The camera is currently being used as a scientific tool within Russian nuclear centers. This project was originally designated as a two-year project. The project was not started on time due to changes in the IPP project funding conditions; the project funding was re-directed through the International Science and Technology Center (ISTC), which delayed the project start by over one year. The project was not completed on schedule due to changes within the Russian government export regulations. These changes were directed by export control regulations on the export of high-technology items that can be used to develop military weapons. The IR camera was on the list covered by these export controls. The ISTC and the Russian government, after negotiations, allowed the delivery of the camera to LLNL. There were no significant technical or business changes to the original project.

  13. Full-Frame Reference for Test Photo of Moon

    NASA Image and Video Library

    2005-09-10

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter.

  14. Estimating pixel variances in the scenes of staring sensors

    DOEpatents

    Simonson, Katherine M [Cedar Crest, NM]; Ma, Tian J [Albuquerque, NM]

    2012-01-24

    A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
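
    The core idea, that the tolerated per-pixel difference grows with the local spatial intensity gradient so that sub-pixel jitter in high-contrast regions is not flagged as change, can be sketched as follows (the error model and thresholds are illustrative simplifications, not the patent's claimed method):

        import numpy as np

        def detect_changes(ref, cur, jitter_px=0.5, noise_sigma=2.0, k=3.0):
            """Flag pixels whose change exceeds what jitter plus noise can explain.

            ref, cur  : reference and current image frames (float arrays)
            jitter_px : assumed camera jitter bound, in pixels
            """
            diff = cur - ref                             # raw difference frame
            gy, gx = np.gradient(ref.astype(float))      # spatial intensity gradients
            grad_mag = np.hypot(gx, gy)
            sigma = np.sqrt((jitter_px * grad_mag) ** 2  # jitter-induced error estimate
                            + noise_sigma ** 2)          # plus sensor noise
            return np.abs(diff) > k * sigma              # boolean change mask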

  15. Visual Odometry Based on Structural Matching of Local Invariant Features Using Stereo Camera Sensor

    PubMed Central

    Núñez, Pedro; Vázquez-Martín, Ricardo; Bandera, Antonio

    2011-01-01

    This paper describes a novel sensor system to estimate the motion of a stereo camera. Local invariant image features are matched between pairs of frames and linked into image trajectories at video rate, providing so-called visual odometry, i.e., motion estimates from visual input alone. Our proposal conducts two matching sessions: the first one between sets of features associated with the images of the stereo pairs, and the second one between sets of features associated with consecutive frames. With respect to previously proposed approaches, the main novelty of this proposal is that both matching sessions are conducted by means of a fast matching algorithm which combines absolute and relative feature constraints. Finding the largest-valued set of mutually consistent matches is equivalent to finding the maximum-weighted clique on a graph. The stereo matching allows the scene view to be represented as a graph which emerges from the features of the accepted clique. On the other hand, the frame-to-frame matching defines a graph whose vertices are features in 3D space. The efficiency of the approach is increased by minimizing the geometric and algebraic errors to estimate the final displacement of the stereo camera between consecutive acquired frames. The proposed approach has been tested for mobile robot navigation purposes in real environments and using different features. Experimental results demonstrate the performance of the proposal, which could be applied in both industrial and service robot fields. PMID:22164016
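
    The mutual-consistency test can be sketched compactly: two candidate matches are kept together only if they preserve the inter-point 3D distance across frames (a rigid-scene constraint), and a clique of the resulting consistency graph gives the accepted set. The greedy heuristic and tolerance below are illustrative; the paper solves the maximum-weighted clique problem proper:

        import numpy as np

        def consistent_matches(pts_prev, pts_cur, tol=0.02):
            """Greedy approximation of the largest set of mutually consistent matches.

            pts_prev, pts_cur : Nx3 arrays, the 3D position of candidate match i
            in the previous and current frame. Matches i, j are consistent when
            the distance between their points is preserved across frames.
            """
            d_prev = np.linalg.norm(pts_prev[:, None] - pts_prev[None], axis=2)
            d_cur = np.linalg.norm(pts_cur[:, None] - pts_cur[None], axis=2)
            adj = np.abs(d_prev - d_cur) < tol           # consistency graph

            clique = [int(adj.sum(axis=1).argmax())]     # seed: best-connected node
            for i in np.argsort(-adj.sum(axis=1)):       # visit nodes by degree
                if i not in clique and all(adj[i, j] for j in clique):
                    clique.append(int(i))
            return clique                                # indices of accepted matches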

  16. Slow Speed--Fast Motion: Time-Lapse Recordings in Physics Education

    ERIC Educational Resources Information Center

    Vollmer, Michael; Möllmann, Klaus-Peter

    2018-01-01

    Video analysis with a 30 Hz frame rate is the standard tool in physics education. The development of affordable high-speed-cameras has extended the capabilities of the tool for much smaller time scales to the 1 ms range, using frame rates of typically up to 1000 frames s[superscript -1], allowing us to study transient physics phenomena happening…

  17. Pool boiling of ethanol and FC-72 on open microchannel surfaces

    NASA Astrophysics Data System (ADS)

    Kaniowski, Robert; Pastuszko, Robert

    2018-06-01

    The paper presents experimental investigations into pool boiling heat transfer on open microchannel surfaces. The parallel microchannels, fabricated by machining, were about 0.3 mm wide, 0.2 to 0.5 mm deep, and spaced every 0.1 mm. The experiments were carried out for ethanol and FC-72 at atmospheric pressure. The image acquisition speed was 493 fps (at a resolution of 400 × 300 pixels, with a Photonfocus PHOT MV-D1024-160-CL camera). The visualization investigations aimed to identify nucleation sites and flow patterns and to determine the bubble departure diameter and frequency at various superheats. The primary factor in the increase of the heat transfer coefficient with increasing heat flux was a growing number of active pores and an increased departure frequency. Heat transfer coefficients obtained in this study were noticeably higher than those for a smooth surface.

  18. Study of atmospheric discharge characteristics using a standard video camera

    NASA Astrophysics Data System (ADS)

    Ferraz, E. C.; Saba, M. M. F.

    This study presents some preliminary statistics on lightning characteristics such as flash multiplicity, number of ground contact points, formation of new and altered channels, and presence of continuous current in the strokes that form the flash. The analysis is based on the images of a standard video camera (30 frames per second). The results obtained for some flashes will be compared to the images of a high-speed CCD camera (1000 frames per second). The camera observing site is located in São José dos Campos (23°S, 46°W) at an altitude of 630 m. This observational site has a nearly 360° field of view at a height of 25 m. It is possible to visualize distant thunderstorms occurring within a radius of 25 km from the site. The room, situated over a metal structure, has water and power supplies, a telephone line and a small crane on the roof. KEY WORDS: Video images, Lightning, Multiplicity, Stroke.

  19. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10⁻⁸ seconds during any such time interval of an occurring event. The invention utilizes particularly an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  20. Frames of Reference in the Classroom

    NASA Astrophysics Data System (ADS)

    Grossman, Joshua

    2012-12-01

    The classic film "Frames of Reference"1,2 effectively illustrates concepts involved with inertial and non-inertial reference frames. In it, Donald G. Ivey and Patterson Hume use the camera's perspective to allow the viewer to see motion in reference frames translating with a constant velocity, translating while accelerating, and rotating, all with respect to the Earth frame. The film is a classic for good reason, but today it does have a couple of drawbacks: 1) The film by nature only accommodates passive learning; it does not give students the opportunity to try any of the experiments themselves. 2) The dated style of the 50-year-old film can distract students from the physics content. I present here a simple setup that can recreate many of the movie's demonstrations in the classroom. The demonstrations can be used to supplement the movie or in its place, if desired. All of the materials except perhaps the inexpensive web camera should already be available in most teaching laboratories. Unlike previously described activities, these experiments do not require travel to another location3 or an involved setup.4,5

  1. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. The paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed with a recently developed method for camera calibration based on a disparity space parameterisation and the single-cluster PHD filter.
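
    The moving-object detection stage can be prototyped with a stock background subtractor; the sketch below (OpenCV's MOG2, with illustrative parameters and file name) produces per-frame bounding boxes of the kind such a registration and tracking pipeline consumes:

        import cv2

        cap = cv2.VideoCapture("phone_camera.mp4")   # illustrative file name
        subtractor = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=25)

        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask = subtractor.apply(frame)           # foreground mask
            mask = cv2.morphologyEx(                 # remove small speckle
                mask, cv2.MORPH_OPEN,
                cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5)))
            contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                           cv2.CHAIN_APPROX_SIMPLE)
            detections = [cv2.boundingRect(c) for c in contours
                          if cv2.contourArea(c) > 100]   # drop tiny blobs
            # `detections` (x, y, w, h boxes) feed the registration/tracking stage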

  2. High speed imaging - An important industrial tool

    NASA Technical Reports Server (NTRS)

    Moore, Alton; Pinelli, Thomas E.

    1986-01-01

    High-speed photography, a rapid sequence of photographs that allows an event to be analyzed through the stoppage of motion or the production of slow-motion effects, is examined. In high-speed photography, 16, 35, and 70 mm film and framing rates between 64 and 12,000 frames per second are utilized to measure such factors as angles, velocities, failure points, and deflections. The use of dual timing lamps in high-speed photography and the difficulties encountered with exposure and programming the camera and event are discussed. The application of video cameras to the recording of high-speed events is described.

  3. Dynamic characteristics of far-field radiation of current modulated phase-locked diode laser arrays

    NASA Technical Reports Server (NTRS)

    Elliott, R. A.; Hartnett, K.

    1987-01-01

    A versatile and powerful streak camera/frame grabber system for studying the evolution of the near and far field radiation patterns of diode lasers was assembled and tested. Software needed to analyze and display the data acquired with the streak camera/frame grabber system was written and the total package used to record and perform preliminary analyses on the behavior of two types of laser, a ten emitter gain guided array and a flared waveguide Y-coupled array. Examples of the information which can be gathered with this system are presented.

  4. One-click scanning of large-size documents using mobile phone camera

    NASA Astrophysics Data System (ADS)

    Liu, Sijiang; Jiang, Bo; Yang, Yuanjie

    2016-07-01

    Currently, mobile apps for document scanning do not provide convenient operations for tackling large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of a document, our approach automatically extracts several key frames by optical flow analysis. Then, based on the key frames, a mobile GPU based image stitching method is adopted to generate a complete document image with high detail. No extra manual intervention is required in the process, and experimental results show that our app performs well, showing convenience and practicability for daily life.
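
    The abstract does not spell out the authors' algorithm, but a minimal sketch of key-frame extraction by optical-flow analysis in the spirit described (function names and the motion threshold are our own assumptions) could look like this with OpenCV:

        import cv2
        import numpy as np

        def extract_key_frames(video_path, motion_threshold=40.0):
            """Keep a frame whenever the accumulated mean optical-flow
            magnitude since the last key frame exceeds a threshold."""
            cap = cv2.VideoCapture(video_path)
            ok, prev = cap.read()
            if not ok:
                return []
            key_frames, accumulated = [prev], 0.0
            prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
                flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                                    0.5, 3, 15, 3, 5, 1.2, 0)
                accumulated += np.linalg.norm(flow, axis=2).mean()
                if accumulated > motion_threshold:
                    key_frames.append(frame)
                    accumulated = 0.0
                prev_gray = gray
            cap.release()
            return key_frames

    The selected key frames would then be handed to an image stitcher (e.g., cv2.Stitcher_create()) to assemble the complete document image.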

  5. Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems

    NASA Technical Reports Server (NTRS)

    Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.

    2015-01-01

    Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
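
    As an illustration of the row-wise digital binning and the precision definition above (a sketch assuming plain 2D image arrays, not the authors' processing code):

        import numpy as np

        def bin_rows(image, factor=8):
            """Sum each group of `factor` adjacent rows (post-processing binning).
            For shot-noise-limited signals this raises SNR by about sqrt(factor),
            at the cost of resolution along the binned axis."""
            rows = (image.shape[0] // factor) * factor   # drop leftover rows
            return image[:rows].reshape(-1, factor, image.shape[1]).sum(axis=1)

        def fleet_precision(displacements_px, m_per_px, dt=65e-6):
            """Precision as defined in the paper: the standard deviation of
            single-shot velocities (displacement / inter-frame delay)."""
            v = np.asarray(displacements_px) * m_per_px / dt
            return v.std()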

  6. Fast-camera imaging on the W7-X stellarator

    NASA Astrophysics Data System (ADS)

    Ballinger, S. B.; Terry, J. L.; Baek, S. G.; Tang, K.; Grulke, O.

    2017-10-01

    Fast cameras recording in the visible range have been used to study filamentary (``blob'') edge turbulence in tokamak plasmas, revealing that emissive filaments aligned with the magnetic field can propagate perpendicular to it at speeds on the order of 1 km/s in the SOL or private flux region. The motion of these filaments has been studied in several tokamaks, including MAST, NSTX, and Alcator C-Mod. Filaments were also observed in the W7-X Stellarator using fast cameras during its initial run campaign. For W7-X's upcoming 2017-18 run campaign, we have installed a Phantom V710 fast camera with a view of the machine cross section and part of a divertor module in order to continue studying edge and divertor filaments. The view is coupled to the camera via a coherent fiber bundle. The Phantom camera is able to record at up to 400,000 frames per second and has a spatial resolution of roughly 2 cm in the view. A beam-splitter is used to share the view with a slower machine-protection camera. Stepping-motor actuators tilt the beam-splitter about two orthogonal axes, making it possible to frame user-defined sub-regions anywhere within the view. The diagnostic has been prepared to be remotely controlled via MDSplus. The MIT portion of this work is supported by US DOE award DE-SC0014251.

  7. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and a telemeter are used to improve a conventional BETACAM pickup camera and connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward, which differs from the common model in computer graphics and is bound to the real BETACAM pickup camera for shooting. A formula is derived to compute the foreground and background frame-buffer images of the virtual scene, whose boundary is based on the depth information of the target point of the real BETACAM pickup camera's projective ray. Real-time consistency in spatial position, perspective relationship and image-object masking is achieved between the video image sequences of the master of ceremonies or players and the CG video image sequences of the virtual scene. The experimental results show that the proposed technological scheme for constructing a virtual studio is feasible, and is more applicable and effective than the existing technique of building a virtual studio based on color keying and image synthesis with a background using non-linear video editing.

  8. Development of a table tennis robot for ball interception using visual feedback

    NASA Astrophysics Data System (ADS)

    Parnichkun, Manukid; Thalagoda, Janitha A.

    2016-07-01

    This paper presents a concept for intercepting a moving table tennis ball using a robot. The robot has four degrees of freedom (DOF), simplified in such a way that the system is able to perform the task within the bounded limit. It employs computer vision to localize the ball. For ball identification, Colour Based Threshold Segmentation (CBTS) and Background Subtraction (BS) methodologies are used. Coordinate Transformation (CT) is employed to transform the data from the camera coordinate frame to the general coordinate frame. The sensory system consists of two HD web cameras. Because the computation time of image processing from the web cameras is long, it is not possible to intercept the table tennis ball using image processing alone; therefore, a projectile motion model is employed to predict the final destination of the ball, as sketched below.
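
    A minimal sketch of the projectile-motion prediction step (drag and spin are ignored, and the function and coordinate conventions are our assumptions rather than the authors' implementation):

        import numpy as np

        def predict_interception(p0, v0, z_plane, g=9.81):
            """Predict where a ball in ideal projectile motion crosses z = z_plane.
            p0, v0: position (m) and velocity (m/s) estimated from vision."""
            # Solve p0_z + v0_z*t - 0.5*g*t**2 = z_plane for the descending root.
            disc = v0[2] ** 2 - 2.0 * g * (z_plane - p0[2])
            if disc < 0:
                return None          # the ball never reaches the plane
            t = (v0[2] + np.sqrt(disc)) / g
            return np.array([p0[0] + v0[0] * t, p0[1] + v0[1] * t, z_plane]), t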

  9. Satellite markers: a simple method for ground truth car pose on stereo video

    NASA Astrophysics Data System (ADS)

    Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco

    2018-04-01

    Artificial prediction of the future location of other cars is a must in the context of advanced safety systems. Remote estimation of car pose, and particularly of its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground-truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a stereo vision system while it is moving, because the system is subjected to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for the 3D ground truth generation. In our case study, we focus on accurate estimation of the heading angle of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, as well as the instantaneous spatial orientation of each camera at frame level.
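
    The calibration-tool pipeline is not detailed in the abstract; one plausible core step, sketched below with OpenCV (the marker layout, intrinsics, and yaw extraction are our assumptions), recovers car pose from the detected markers in a single frame:

        import cv2
        import numpy as np

        def car_pose_from_markers(marker_xyz, marker_uv, K, dist):
            """Pose of the car (marker) frame in the camera frame.
            marker_xyz: (N,3) marker positions in the car frame (assumed known)
            marker_uv:  (N,2) detected pixel positions of the same markers
            K, dist:    intrinsics and distortion from the calibration tool"""
            ok, rvec, tvec = cv2.solvePnP(np.float32(marker_xyz),
                                          np.float32(marker_uv), K, dist)
            if not ok:
                return None
            R, _ = cv2.Rodrigues(rvec)
            heading_deg = np.degrees(np.arctan2(R[1, 0], R[0, 0]))  # yaw
            return R, tvec, heading_deg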

  10. Effectiveness of fluorescence-based methods to detect in situ demineralization and remineralization on smooth surfaces.

    PubMed

    Moriyama, C M; Rodrigues, J A; Lussi, A; Diniz, M B

    2014-01-01

    This study aimed to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent, LF; DIAGNOdent pen, LFpen, and VistaProof fluorescence camera, FC) in detecting demineralization and remineralization on smooth surfaces in situ. Ten volunteers wore acrylic palatal appliances, each containing 6 enamel blocks that were demineralized for 14 days by exposure to a 20% sucrose solution and 3 of them were remineralized for 7 days with fluoride dentifrice. Sixty enamel blocks were evaluated at baseline, after demineralization and 30 blocks after remineralization by two examiners using LF, LFpen and FC. They were submitted to surface microhardness (SMH) and cross-sectional microhardness analysis. The integrated loss of surface hardness (ΔKHN) was calculated. The intraclass correlation coefficient for interexaminer reproducibility ranged from 0.21 (FC) to 0.86 (LFpen). SMH, LF and LFpen values presented significant differences among the three phases. However, FC fluorescence values showed no significant differences between the demineralization and remineralization phases. Fluorescence values for baseline, demineralized and remineralized enamel were, respectively, 5.4 ± 1.0, 9.2 ± 2.2 and 7.0 ± 1.5 for LF; 10.5 ± 2.0, 15.0 ± 3.2 and 12.5 ± 2.9 for LFpen, and 1.0 ± 0.0, 1.0 ± 0.1 and 1.0 ± 0.1 for FC. SMH and ΔKHN showed significant differences between demineralization and remineralization phases. There was a negative and significant correlation between SMH and LF and LFpen in the remineralization phase. In conclusion, LF and LFpen devices were effective in detecting demineralization and remineralization on smooth surfaces provoked in situ.

  11. Strategic options towards an affordable high-performance infrared camera

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Dutta, Achyut K.; Lewis, Jay; Dhar, Nibir K.

    2016-05-01

    The promise of infrared (IR) imaging reaching the low cost achieved by CMOS sensors has been hampered by the inability to achieve the cost advantages necessary for crossover from military and industrial applications into the consumer and mass-scale commercial realm, despite well-documented advantages. Banpil Photonics is developing affordable IR cameras by adopting new strategies to speed up the decline of the IR camera cost curve. We present a new short-wave IR (SWIR) camera: a 640x512-pixel uncooled InGaAs system with high sensitivity and low noise (<50 e-), high dynamic range (100 dB), high frame rates (>500 frames per second (FPS)) at full resolution, and low power consumption (<1 W) in a compact system. This camera paves the way towards mass-market adoption by not only demonstrating the high-performance IR imaging capability demanded by military and industrial applications, but also illuminating a path towards the justifiable price points essential for consumer-facing industries such as automotive, medical, and security imaging. Among the strategic options presented are new sensor manufacturing technologies that scale favorably towards automation, multi-focal-plane-array compatible readout electronics, and dense or ultra-small pixel pitch devices.

  12. The Atlases of Vesta derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2013-12-01

    During its two HAMO (High Altitude Mapping Orbit) phases in 2011 and 2012, the Dawn Framing Camera acquired about 6,000 clear filter images with a resolution of about 60 m/pixel. We combined these images into a global ortho-rectified mosaic of Vesta (60 m/pixel resolution). Only very small areas near the northern pole were still in darkness and are missing in the mosaic. The Dawn Framing Camera also acquired about 10,000 high-resolution clear filter images (about 20 m/pixel) of Vesta during its Low Altitude Mapping Orbit (LAMO). Unfortunately, the northern part of Vesta was still in darkness during this phase; good illumination (incidence angle < 70°) was available for only 66.8% of the surface [1]. We used the LAMO images to calculate another global mosaic of Vesta, this time with 20 m/pixel resolution. Both global mosaics were used to produce atlases of Vesta: a HAMO atlas with 15 tiles at a scale of 1:500,000 and a LAMO atlas with 30 tiles at a scale between 1:200,000 and 1:225,180. The nomenclature used in these atlases is based on names and places historically associated with the Roman goddess Vesta, and is compliant with the rules of the IAU. 65 names for geological features were already approved by the IAU; 39 additional names are currently under review. Selected examples of both atlases will be shown in this presentation. Reference: [1] Roatsch, Th., et al., High-resolution Vesta Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images. Planetary and Space Science (2013), http://dx.doi.org/10.1016/j.pss.2013.06.024i

  13. Low cost thermal camera for use in preclinical detection of diabetic peripheral neuropathy in primary care setting

    NASA Astrophysics Data System (ADS)

    Joshi, V.; Manivannan, N.; Jarry, Z.; Carmichael, J.; Vahtel, M.; Zamora, G.; Calder, C.; Simon, J.; Burge, M.; Soliz, P.

    2018-02-01

    Diabetic peripheral neuropathy (DPN) accounts for around 73,000 lower-limb amputations annually in the US in patients with diabetes. Early detection of DPN is critical. Current clinical methods for diagnosing DPN are subjective and effective only at later stages. Until recently, thermal cameras used for medical imaging have been expensive and hence prohibitive for installation in a primary care setting. The objective of this study is to compare results from a low-cost thermal camera with a high-end thermal camera used in screening for DPN. Thermal imaging has demonstrated changes in microvascular function that correlate with the nerve function affected by DPN. The limitations of low-cost cameras for DPN imaging are lower resolution (active pixels), frame rate, thermal sensitivity, etc. We integrated two FLIR Lepton sensors (80x60 active pixels, 50° HFOV, thermal sensitivity < 50 mK) as one unit. The right and left cameras record video of the right and left foot, respectively. A compact embedded system (Raspberry Pi 3 Model B v1.2) is used to configure the sensors and to capture and stream the video via Ethernet. The resulting video has 160x120 active pixels (8 frames/second). We compared the temperature measurements of the feet obtained using the low-cost camera against the gold-standard high-end FLIR SC305. Twelve subjects (aged 35-76) were recruited. The difference in temperature measurements between the cameras was calculated for each subject, and the results show that the difference between the temperature measurements of the two cameras (mean difference = 0.4, p-value = 0.2) is not statistically significant. We conclude that the low-cost thermal camera system shows potential for use in detecting early signs of DPN in under-served and rural clinics.
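
    The reported comparison (mean difference 0.4, p-value 0.2) is consistent with a paired test across the twelve subjects; a sketch of that analysis (function and argument names are placeholders, not the study's code or data):

        import numpy as np
        from scipy import stats

        def compare_cameras(temp_low_cost, temp_reference):
            """Paired comparison of per-subject foot temperatures (°C) from the
            two cameras; a large p-value means no significant disagreement."""
            diff = np.asarray(temp_low_cost, float) - np.asarray(temp_reference, float)
            t_stat, p_value = stats.ttest_rel(temp_low_cost, temp_reference)
            return diff.mean(), p_value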

  14. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further from the observer, increasing the resolution of the recording device does little to improve the quality of the image. Many sophisticated methods to correct distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic cameras and image processing algorithms are introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and that a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.
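
    A minimal sketch of the "lucky image" selection idea mentioned above (the sharpness metric and keep fraction are our assumptions; the authors' plenoptic reconstruction is considerably more involved):

        import cv2
        import numpy as np

        def lucky_select(frames, keep_fraction=0.1):
            """Rank grayscale frames by Laplacian-variance sharpness and keep
            the best; momentarily sharp ('lucky') frames score high."""
            scores = np.array([cv2.Laplacian(f, cv2.CV_64F).var() for f in frames])
            keep = max(1, int(len(frames) * keep_fraction))
            best = np.argsort(scores)[::-1][:keep]
            return [frames[i] for i in best]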

  15. Accuracy and precision of a custom camera-based system for 2D and 3D motion tracking during speech and nonspeech motor tasks

    PubMed Central

    Feng, Yongqiang; Max, Ludo

    2014-01-01

    Purpose Studying normal or disordered motor control requires accurate motion tracking of the effectors (e.g., orofacial structures). The cost of electromagnetic, optoelectronic, and ultrasound systems is prohibitive for many laboratories, and limits clinical applications. For external movements (lips, jaw), video-based systems may be a viable alternative, provided that they offer high temporal resolution and sub-millimeter accuracy. Method We examined the accuracy and precision of 2D and 3D data recorded with a system that combines consumer-grade digital cameras capturing 60, 120, or 240 frames per second (fps), retro-reflective markers, commercially-available computer software (APAS, Ariel Dynamics), and a custom calibration device. Results Overall mean error (RMSE) across tests was 0.15 mm for static tracking and 0.26 mm for dynamic tracking, with corresponding precision (SD) values of 0.11 and 0.19 mm, respectively. The effect of frame rate varied across conditions, but, generally, accuracy was reduced at 240 fps. The effect of marker size (3 vs. 6 mm diameter) was negligible at all frame rates for both 2D and 3D data. Conclusion Motion tracking with consumer-grade digital cameras and the APAS software can achieve sub-millimeter accuracy at frame rates that are appropriate for kinematic analyses of lip/jaw movements for both research and clinical purposes. PMID:24686484
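
    The accuracy and precision figures above follow directly from their definitions; a small sketch (names are assumed):

        import numpy as np

        def accuracy_and_precision(measured_mm, reference_mm):
            """Accuracy = RMSE of the tracking error; precision = SD of the
            error (both in mm, matching the paper's definitions)."""
            err = np.asarray(measured_mm, float) - np.asarray(reference_mm, float)
            return np.sqrt(np.mean(err ** 2)), err.std()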

  16. Marker-less multi-frame motion tracking and compensation in PET-brain imaging

    NASA Astrophysics Data System (ADS)

    Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.

    2015-03-01

    In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to have markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can also be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.

  17. Relativistic Astronomy

    NASA Astrophysics Data System (ADS)

    Zhang, Bing; Li, Kunyang

    2018-02-01

    The “Breakthrough Starshot” aims at sending near-speed-of-light cameras to nearby stellar systems in the future. Due to the relativistic effects, a transrelativistic camera naturally serves as a spectrograph, a lens, and a wide-field camera. We demonstrate this through a simulation of the optical-band image of the nearby galaxy M51 in the rest frame of the transrelativistic camera. We suggest that observing celestial objects using a transrelativistic camera may allow one to study the astronomical objects in a special way, and to perform unique tests on the principles of special relativity. We outline several examples that suggest transrelativistic cameras may make important contributions to astrophysics and suggest that the Breakthrough Starshot cameras may be launched in any direction to serve as a unique astronomical observatory.

  18. Utilizing ISS Camera Systems for Scientific Analysis of Lightning Characteristics and Comparison with ISS-LIS and GLM

    NASA Technical Reports Server (NTRS)

    Schultz, Christopher J.; Lang, Timothy J.; Leake, Skye; Runco, Mario, Jr.; Blakeslee, Richard J.

    2017-01-01

    Video and still-frame images from cameras aboard the International Space Station (ISS) are used to inspire, educate, and provide a unique vantage point from low-Earth orbit that is second to none; however, these cameras have overlooked capabilities for contributing to scientific analysis of the Earth and near-space environment. The goal of this project is to study how geo-referenced video/images from available ISS camera systems can be useful for scientific analysis, using lightning properties as a demonstration.

  19. Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views

    DTIC Science & Technology

    2014-11-10

    collected these datasets using different aircraft. Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per...292.58 Imaging Systems and Accessories Blackmagic Production Camera 4 Crowd Counting using 4K Cameras High resolution cinema grade digital video

  20. Ground-based remote sensing with long lens video camera for upper-stem diameter and other tree crown measurements

    Treesearch

    Neil A. Clark; Sang-Mook Lee

    2004-01-01

    This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...

  1. A framed, 16-image Kirkpatrick–Baez x-ray microscope

    DOE PAGES

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.; ...

    2017-09-08

    A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. In conclusion, these core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.
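
    The strip interleaving described above can be checked in a few lines (frame and strip spacings taken from the abstract; the actual trigger offsets are instrument settings we do not know):

        import numpy as np

        INTRA_STRIP = 60e-12    # frame spacing on one strip, ~60 ps
        STRIP_OFFSET = 15e-12   # stagger between the four gated strips, ~15 ps

        times = np.sort([s * STRIP_OFFSET + f * INTRA_STRIP
                         for s in range(4) for f in range(4)])
        print(np.diff(times) * 1e12)   # 16 frames with uniform ~15 ps spacing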

  2. A framed, 16-image Kirkpatrick–Baez x-ray microscope

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, F. J.; Bahr, R. E.; Goncharov, V. N.

    A 16-image Kirkpatrick–Baez (KB)–type x-ray microscope consisting of compact KB mirrors has been assembled for the first time with mirrors aligned to allow it to be coupled to a high-speed framing camera. The high-speed framing camera has four independently gated strips whose emission sampling interval is ~30 ps. Images are arranged four to a strip with ~60-ps temporal spacing between frames on a strip. By spacing the timing of the strips, a frame spacing of ~15 ps is achieved. A framed resolution of ~6 μm is achieved with this combination in a 400-μm region of laser–plasma x-ray emission in the 2- to 8-keV energy range. A principal use of the microscope is to measure the evolution of the implosion stagnation region of cryogenic DT target implosions on the University of Rochester's OMEGA Laser System. The unprecedented time and spatial resolution achieved with this framed, multi-image KB microscope have made it possible to accurately determine the cryogenic implosion core emission size and shape at the peak of stagnation. In conclusion, these core size measurements, taken in combination with those of ion temperature, neutron-production temporal width, and neutron yield, allow for inference of core pressures, currently exceeding 50 Gbar in OMEGA cryogenic target implosions.

  3. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  4. Center of parcel with picture tube wall along walkway. Leaning ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Center of parcel with picture tube wall along walkway. Leaning Tower of Bottle Village at frame right; oblique view of Rumpus Room, remnants of Little Hut destroyed by Northridge earthquake at frame left. Camera facing northeast. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  5. KENNEDY SPACE CENTER, FLA. - The camera installed on the aft skirt of a solid rocket booster is seen here, framed by the railing. The installation is in preparation for a vibration test of the Mobile Launcher Platform with SRBs and external tank mounted. The MLP will roll from one bay to another in the Vehicle Assembly Building.

    NASA Image and Video Library

    2003-11-06

    KENNEDY SPACE CENTER, FLA. - The camera installed on the aft skirt of a solid rocket booster is seen here, framed by the railing. The installation is in preparation for a vibration test of the Mobile Launcher Platform with SRBs and external tank mounted. The MLP will roll from one bay to another in the Vehicle Assembly Building.

  6. Pulsed x-ray sources for characterization of gated framing cameras

    NASA Astrophysics Data System (ADS)

    Filip, Catalin V.; Koch, Jeffrey A.; Freeman, Richard R.; King, James A.

    2017-08-01

    Gated X-ray framing cameras are used to measure important characteristics of inertial confinement fusion (ICF) implosions such as size and symmetry, with 50 ps time resolution in two dimensions. A pulsed source of hard (>8 keV) X-rays would be a valuable calibration device, for example for gain-droop measurements of the variation in sensitivity of the gated strips. We have explored the requirements for such a source and a variety of options that could meet these requirements. We find that a small-size dense plasma focus machine could be a practical single-shot X-ray source for this application if timing uncertainties can be overcome.

  7. Characterization of x-ray framing cameras for the National Ignition Facility using single photon pulse height analysis.

    PubMed

    Holder, J P; Benedetti, L R; Bradley, D K

    2016-11-01

    Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard-DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
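
    To illustrate the distinction drawn above, a sketch that fits the gated single-hit spectrum with a small weighted sum of exponentials (two terms here; purely illustrative, not the NIF calibration code):

        import numpy as np
        from scipy.optimize import curve_fit

        def two_exponentials(x, w, g1, g2):
            """Weighted sum of two normalized exponential gain distributions."""
            return w / g1 * np.exp(-x / g1) + (1.0 - w) / g2 * np.exp(-x / g2)

        def fit_gated_pulse_heights(heights, bins=200):
            """A DC-calibrated MCP would follow a single exponential with mean
            equal to the gain; the gain-gated spectrum needs a short mixture."""
            density, edges = np.histogram(heights, bins=bins, density=True)
            centers = 0.5 * (edges[:-1] + edges[1:])
            m = float(np.mean(heights))
            popt, _ = curve_fit(two_exponentials, centers, density,
                                p0=[0.5, 0.5 * m, 2.0 * m],
                                bounds=([0, 1e-9, 1e-9], [1, np.inf, np.inf]))
            return popt   # weight, gain 1, gain 2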

  8. A Digital Video System for Observing and Recording Occultations

    NASA Astrophysics Data System (ADS)

    Barry, M. A. Tony; Gault, Dave; Pavlov, Hristo; Hanna, William; McEwan, Alistair; Filipović, Miroslav D.

    2015-09-01

    Stellar occultations by asteroids and outer solar system bodies can offer ground-based observers with modest telescopes and camera equipment the opportunity to probe the shape, size, atmosphere, and attendant moons or rings of these distant objects. The essential requirements of the camera and recording equipment are: good quantum efficiency and low noise; minimal dead time between images; good horological faithfulness of the image timestamps; robustness of the recording to unexpected failure; and low cost. We describe an occultation observing and recording system which attempts to fulfil these requirements and compare the system with other reported camera and recorder systems. Five systems have been built, deployed, and tested over the past three years, and we report on three representative occultation observations: one being a 9 ± 1.5 s occultation of the trans-Neptunian object 28978 Ixion (m_v = 15.2) at 3 seconds per frame; one being a 1.51 ± 0.017 s occultation of Deimos, the 12 km diameter satellite of Mars, at 30 frames per second; and one being a 11.04 ± 0.4 s occultation, recorded at 7.5 frames per second, of the main belt asteroid 361 Havnia, representing a low magnitude drop (Δm_v = ~0.4) occultation.

  9. Particle displacement tracking applied to air flows

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.

    1991-01-01

    Electronic Particle Image Velocimetry (PIV) techniques offer many advantages over conventional photographic PIV methods, such as fast turnaround times and simplified data reduction. A new all-electronic PIV technique was developed which can measure high-speed gas velocities. The Particle Displacement Tracking (PDT) technique employs a single cw laser, small seed particles (1 micron), and a single intensified, gated CCD array frame camera to provide a simple and fast method of obtaining two-dimensional velocity vector maps with unambiguous direction determination. Use of a single CCD camera eliminates registration difficulties encountered when multiple cameras are used to obtain velocity magnitude and direction information. An 80386 PC equipped with a large-memory-buffer frame-grabber board provides all of the data acquisition and data reduction operations. No array processors or other numerical processing hardware are required. Full video resolution (640x480 pixels) is maintained in the acquired images, providing high-resolution video frames of the recorded particle images. The time from data acquisition to display of the velocity vector map is less than 40 sec. The new electronic PDT technique is demonstrated on an air nozzle flow with velocities less than 150 m/s.
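
    A stripped-down sketch of the displacement-tracking step (nearest-neighbour pairing is our simplification; the actual PDT system encodes direction information in the exposure timing):

        import numpy as np
        from scipy.spatial import cKDTree

        def pdt_velocities(p0_px, p1_px, dt, m_per_px):
            """Velocity vectors from particle centroids in two gated exposures.
            p0_px, p1_px: (N,2), (M,2) positions in pixels at t and t+dt.
            Assumes displacements are small versus inter-particle spacing."""
            tree = cKDTree(p1_px)
            _, idx = tree.query(p0_px)
            return (p1_px[idx] - p0_px) * m_per_px / dt   # (N,2) in m/s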

  10. Single frequency thermal wave radar: A next-generation dynamic thermography for quantitative non-destructive imaging over wide modulation frequency ranges.

    PubMed

    Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas

    2018-04-01

    Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency-modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
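
    The single-frequency demodulation underlying SF-TWRI can be sketched as per-pixel software lock-in detection (a simplified analogue assuming a stack of camera frames; not the authors' radar correlation processing):

        import numpy as np

        def lockin_image(frames, f_mod, frame_rate):
            """Per-pixel amplitude and phase at one modulation frequency.
            frames: (T, H, W) stack of thermal images at `frame_rate` fps.
            With coincident sweep start/end frequencies, the whole record
            can be demodulated at the single frequency f_mod."""
            t = np.arange(frames.shape[0]) / frame_rate
            ref_i = np.cos(2 * np.pi * f_mod * t)
            ref_q = np.sin(2 * np.pi * f_mod * t)
            i = np.tensordot(ref_i, frames, axes=1) * 2.0 / t.size
            q = np.tensordot(ref_q, frames, axes=1) * 2.0 / t.size
            return np.hypot(i, q), np.arctan2(q, i)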

  11. Single frequency thermal wave radar: A next-generation dynamic thermography for quantitative non-destructive imaging over wide modulation frequency ranges

    NASA Astrophysics Data System (ADS)

    Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas

    2018-04-01

    Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWR, the starting and ending frequencies of a linear frequency-modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWR images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.

  12. Mars Science Laboratory Frame Manager for Centralized Frame Tree Database and Target Pointing

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Leger, Chris; Peters, Stephen; Carsten, Joseph; Diaz-Calderon, Antonio

    2013-01-01

    The FM (Frame Manager) flight software module is responsible for maintaining the frame tree database containing coordinate transforms between frames. The frame tree is a proper tree structure of directed links, consisting of surface and rover subtrees. Actual frame transforms are updated by their owners. FM updates site and saved frames for the surface tree. As the rover drives to a new area, a new site frame with an incremented site index can be created. Several clients, including ARM and RSM (Remote Sensing Mast), update the related rover frames that they own. Through the onboard centralized FM frame tree database, client modules can query transforms between any two frames. Important applications include target image pointing for RSM-mounted cameras and frame-referenced arm moves. The use of the frame tree eliminates cumbersome, error-prone calculations of coordinate entries for commands and thus simplifies flight operations significantly.
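
    The abstract describes the frame tree as a database of parent-link transforms that any client can query between two frames; a toy sketch of that data structure (the class, names, and 4x4 homogeneous-transform convention are our assumptions, not MSL flight code):

        import numpy as np

        class FrameTree:
            """Toy frame tree: each frame stores its transform to its parent."""

            def __init__(self):
                self.parent = {}      # frame name -> parent name
                self.to_parent = {}   # frame name -> 4x4 transform to parent

            def add(self, name, parent, transform):
                self.parent[name] = parent
                self.to_parent[name] = np.asarray(transform, dtype=float)

            def _to_root(self, name):
                """Accumulate the transform from `name` up to the tree root."""
                T = np.eye(4)
                while name in self.parent:
                    T = self.to_parent[name] @ T
                    name = self.parent[name]
                return T

            def query(self, src, dst):
                """Transform mapping coordinates in `src` into `dst`."""
                return np.linalg.inv(self._to_root(dst)) @ self._to_root(src)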

  13. In-vessel visible inspection system on KSTAR

    NASA Astrophysics Data System (ADS)

    Chung, Jinil; Seo, D. C.

    2008-08-01

    To monitor the global formation of the initial plasma and damage to the internal structures of the vacuum vessel, an in-vessel visible inspection system has been installed and operated on the Korean superconducting tokamak advanced research (KSTAR) device. It consists of four inspection illuminators and two visible/H-alpha TV cameras. Each illuminator uses four 150 W metal-halide lamps with separate lamp controllers, and programmable progressive-scan charge-coupled device cameras with 1004×1004 resolution at 48 frames/s and a resolution of 640×480 at 210 frames/s are used to capture images. In order to provide vessel inspection capability under any operation condition, the lamps and cameras are fully controlled from the main control room and protected by shutters from deposits during plasma operation. In this paper, we describe the design and operation results of the visible inspection system with the images of the KSTAR Ohmic discharges during the first plasma campaign.

  14. Body worn camera

    NASA Astrophysics Data System (ADS)

    Aishwariya, A.; Pallavi Sudhir, Gulavani; Garg, Nemesa; Karthikeyan, B.

    2017-11-01

    A body worn camera is a small video camera worn on the body, typically used by police officers to record arrests and evidence at crime scenes. It helps prevent and resolve complaints brought by members of the public, and strengthens police transparency, performance, and accountability. The main constraints on this type of system are video format, resolution, frame rate, and audio quality. This system records video in .mp4 format at 1080p resolution and 30 frames per second. One more important aspect to consider while designing this system is the amount of power it requires, as battery management becomes very critical. The main design challenges are the size of the video, audio for the video, combining audio and video and saving them in .mp4 format, the battery size required for 8 hours of continuous recording, and security. For prototyping, this system is implemented using a Raspberry Pi Model B.
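
    A minimal recording sketch for such a prototype, using the legacy picamera library (the file name, duration, and separate audio/mux step are our assumptions; the paper gives no code):

        import picamera

        # Record 1080p video at 30 frames per second, as specified above.
        camera = picamera.PiCamera(resolution=(1920, 1080), framerate=30)
        camera.start_recording('segment.h264')   # H.264 elementary stream
        camera.wait_recording(8 * 3600)          # an 8-hour shift, battery permitting
        camera.stop_recording()
        camera.close()
        # Audio would be captured separately (e.g., a USB microphone) and muxed
        # with the video into an .mp4 container afterwards, e.g., with ffmpeg.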

  15. Inspecting rapidly moving surfaces for small defects using CNN cameras

    NASA Astrophysics Data System (ADS)

    Blug, Andreas; Carl, Daniel; Höfler, Heinrich

    2013-04-01

    A continuous increase in production speed and manufacturing precision raises a demand for the automated detection of small image features on rapidly moving surfaces. An example is wire drawing processes, where kilometers of cylindrical metal surfaces moving at 10 m/s have to be inspected for defects such as scratches, dents, grooves, or chatter marks with a lateral size of 100 μm in real time. Up to now, complex eddy current systems have been used for quality control instead of line cameras, because the ratio between lateral feature size and surface speed is limited by the data transport between camera and computer. This bottleneck is avoided by "cellular neural network" (CNN) cameras, which enable image processing directly on the camera chip. This article reports results achieved with a demonstrator based on this novel analogue camera-computer system. The results show that the computational speed and accuracy of the analogue computer system are sufficient to detect and discriminate the different types of defects. Area images with 176 x 144 pixels are acquired and evaluated in real time with frame rates of 4 to 10 kHz, depending on the number of defects to be detected. These frame rates correspond to equivalent line rates on line cameras between 360 and 880 kHz, a number far beyond what available line cameras offer. Using the relation between lateral feature size and surface speed as a figure of merit, the CNN-based system outperforms conventional image processing systems by an order of magnitude.

  16. High-Speed Videography Instrumentation And Procedures

    NASA Astrophysics Data System (ADS)

    Miller, C. E.

    1982-02-01

    High-speed videography has been an electronic analog of low-speed film cameras, but having the advantages of instant-replay and simplicity of operation. Recent advances have pushed frame-rates into the realm of the rotating prism camera. Some characteristics of videography systems are discussed in conjunction with applications in sports analysis, and with sports equipment testing.

  17. Automatic treatment of flight test images using modern tools: SAAB and Aeritalia joint approach

    NASA Astrophysics Data System (ADS)

    Kaelldahl, A.; Duranti, P.

    The use of onboard cine cameras, as well as that of on-ground cinetheodolites, is very popular in flight tests. The high resolution of film and the high frame rate of cine cameras are still not exceeded by video technology. Video technology can successfully enter the flight test scenario once the availability of solid-state optical sensors dramatically reduces the dimensions and weight of TV cameras, thus allowing them to be located in positions compatible with space or operational limitations (e.g., HUD cameras). A proper combination of cine and video cameras is the typical solution for a complex flight test program. The output of such devices is very helpful in many flight areas. Several successful applications of this technology are summarized. Analysis of the large amount of data produced (frames of images) requires a very long time and is normally carried out manually. In order to improve the situation, in the last few years several flight test centers have devoted their attention to techniques which allow for quicker and more effective image treatment.

  18. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.

  19. Explosives Instrumentation Group Trial 6/77-Propellant Fire Trials (Series Two).

    DTIC Science & Technology

    1981-10-01

    frames/s. A 19 mm Sony U-Matic video cassette recorder (VCR) and camera were used to view the hearth from a tower 100 m from ground-zero (GZ). Normal...camera started. This procedure permitted increased recording time of the event. A 19 mm Sony U-Matic VCR and camera was used to view the container...Lumpur, Malaysia Exchange Section, British Library, U.K. Periodicals Recording Section, Science Reference Library, British Library, U.K. Library, Chemical

  20. Adaptive-Repetitive Visual-Servo Control of Low-Flying Aerial Robots via Uncalibrated High-Flying Cameras

    NASA Astrophysics Data System (ADS)

    Guo, Dejun; Bourne, Joseph R.; Wang, Hesheng; Yim, Woosoon; Leang, Kam K.

    2017-08-01

    This paper presents the design and implementation of an adaptive-repetitive visual-servo control system for a moving high-flying vehicle (HFV) with an uncalibrated camera to monitor, track, and precisely control the movements of a low-flying vehicle (LFV) or mobile ground robot. Applications of this control strategy include the use of high-flying unmanned aerial vehicles (UAVs) with computer vision for monitoring, controlling, and coordinating the movements of lower altitude agents in areas, for example, where GPS signals may be unreliable or nonexistent. When deployed, a remote operator of the HFV defines the desired trajectory for the LFV in the HFV's camera frame. Due to the circular motion of the HFV, the resulting motion trajectory of the LFV in the image frame can be periodic in time, thus an adaptive-repetitive control system is exploited for regulation and/or trajectory tracking. The adaptive control law is able to handle uncertainties in the camera's intrinsic and extrinsic parameters. The design and stability analysis of the closed-loop control system is presented, where Lyapunov stability is shown. Simulation and experimental results are presented to demonstrate the effectiveness of the method for controlling the movement of a low-flying quadcopter, demonstrating the capabilities of the visual-servo control system for localization (i.e., motion capture) and trajectory tracking control. In fact, results show that the LFV can be commanded to hover in place as well as track a user-defined flower-shaped closed trajectory, while the HFV and camera system circulates above with constant angular velocity. On average, the proposed adaptive-repetitive visual-servo control system reduces the average RMS tracking error by over 77% in the image plane and over 71% in the world frame compared to using just the adaptive visual-servo control law.

  1. Reliability of sagittal plane hip, knee, and ankle joint angles from a single frame of video data using the GAITRite camera system.

    PubMed

    Ross, Sandy A; Rice, Clinton; Von Behren, Kristyn; Meyer, April; Alexander, Rachel; Murfin, Scott

    2015-01-01

    The purpose of this study was to establish the intra-rater, intra-session, and inter-rater reliability of sagittal plane hip, knee, and ankle angles with and without reflective markers using the GAITRite walkway and a single video camera between student physical therapists and an experienced physical therapist. This study included thirty-two healthy participants aged 20-59, stratified by age and gender. Participants performed three successful walks with and without markers applied to anatomical landmarks. GAITRite software was used to digitize sagittal hip, knee, and ankle angles at two phases of gait: (1) initial contact; and (2) mid-stance. Intra-rater reliability was more consistent for the experienced physical therapist, regardless of joint or phase of gait. Intra-session reliability was variable: the experienced physical therapist showed moderate to high reliability (intra-class correlation coefficient (ICC) = 0.50-0.89) and the student physical therapist showed very poor to high reliability (ICC = 0.07-0.85). Inter-rater reliability was highest during mid-stance at the knee with markers (ICC = 0.86) and lowest during mid-stance at the hip without markers (ICC = 0.25). Reliability of a single camera system, especially at the knee joint, shows promise. Depending on the specific type of reliability, error can be attributed to the testers (e.g. lack of digitization practice and marker placement), participants (e.g. loose-fitting clothing) and camera systems (e.g. frame rate and resolution). However, until the camera technology can be upgraded to a higher frame rate and resolution, and the software can be linked to the GAITRite walkway, the clinical utility for pre/post measures is limited.

  2. CMOS Imaging Sensor Technology for Aerial Mapping Cameras

    NASA Astrophysics Data System (ADS)

    Neumann, Klaus; Welzenbach, Martin; Timm, Martin

    2016-06-01

    In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD sensor technology to CMOS for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, which was the first digital aerial mapping camera using a single ultra-large CCD sensor to avoid stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design, with one large monolithic panchromatic sensor and four multi-spectral camera heads for R, G, B and NIR. For the first time, a 391-megapixel CMOS sensor has been used as the panchromatic sensor, which is an industry record. Along with CMOS technology comes a range of technical benefits. The dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights will be presented and compared with other CCD-based aerial sensors.

  3. High-Speed Video Analysis in a Conceptual Physics Class

    NASA Astrophysics Data System (ADS)

    Desbien, Dwain M.

    2011-09-01

    The use of probeware and computers has become quite common in introductory physics classrooms. Video analysis is also becoming more popular and is available to a wide range of students through commercially available and/or free software.2,3 Video analysis allows for the study of motions that cannot be easily measured in the traditional lab setting and also allows real-world situations to be analyzed. Many motions are too fast to easily be captured at the standard video frame rate of 30 frames per second (fps) employed by most video cameras. This paper will discuss using a consumer camera that can record high-frame-rate video in a college-level conceptual physics class. In particular, this will involve the use of model rockets to determine the acceleration during the boost period right at launch and compare it to a simple model of the expected acceleration.
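
    For instance, the boost-phase acceleration can be recovered by fitting a parabola to the tracked altitude across the high-frame-rate frames (a sketch; the pixel scale and tracking step are placeholders):

        import numpy as np

        def boost_acceleration(y_px, fps, m_per_px):
            """Fit y(t) = y0 + v0*t + 0.5*a*t**2 to tracked rocket altitudes;
            twice the quadratic coefficient is the mean boost acceleration."""
            t = np.arange(len(y_px)) / fps
            y = np.asarray(y_px, dtype=float) * m_per_px
            a2 = np.polyfit(t, y, 2)[0]
            return 2.0 * a2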

  4. Vesta's Elemental Composition

    NASA Technical Reports Server (NTRS)

    Prettyman, T. H.; Beck, A. W.; Feldman, W. C.; Lawrence, D. J.; McCoy, T. J.; McSween, H. Y.; Mittlefehldt, D. W.; Peplowski, P. N.; Raymond, C. A.; Reedy, R. C.; hide

    2014-01-01

    Many lines of evidence (e.g. common geochemistry, chronology, O-isotope trends, and the presence of different HED rock types in polymict breccias) indicate that the howardite, eucrite, and diogenite (HED) meteorites originated from a single parent body. Meteorite studies show that this protoplanet underwent igneous differentiation to form a metallic core, an ultramafic mantle, and a basaltic crust. A spectroscopic match between the HEDs and 4 Vesta along with a plausible mechanism for their transfer to Earth, perhaps as chips off V-type asteroids ejected from Vesta's southern impact basin, supports the consensus view that many of these achondritic meteorites are samples of Vesta's crust and upper mantle. The HED-Vesta connection was put to the test by the NASA Dawn mission, which spent a year in close proximity to Vesta. Measurements by Dawn's three instruments, redundant Framing Cameras (FC), a Visible-InfraRed (VIR) spectrometer, and a Gamma Ray and Neutron Detector (GRaND), along with radio science have strengthened the link. Gravity measurements by Dawn are consistent with a differentiated, silicate body, with a dense Fe-rich core. The range of pyroxene compositions determined by VIR overlaps that of the howardites. Elemental abundances determined by nuclear spectroscopy are also consistent with HED-compositions. Observations by GRaND provided a new view of Vesta inaccessible by telescopic observations. Here, we summarize the results of Dawn's geochemical investigation of Vesta and their implications.

  5. Abrogation of Microsatellite-instable Tumors Using a Highly Selective Suicide Gene/Prodrug Combination

    PubMed Central

    Ferrás, Cristina; Oude Vrielink, Joachim AF; Verspuy, Johan WA; te Riele, Hein; Tsaalbi-Shtylik, Anastasia; de Wind, Niels

    2009-01-01

    A substantial fraction of sporadic and inherited colorectal and endometrial cancers in humans is deficient in DNA mismatch repair (MMR). These cancers are characterized by length alterations in ubiquitous simple sequence repeats, a phenotype called microsatellite instability. Here we have exploited this phenotype by developing a novel approach for the highly selective gene therapy of MMR-deficient tumors. To achieve this selectivity, we mutated the VP22FCU1 suicide gene by inserting an out-of-frame microsatellite within its coding region. We show that in a significant fraction of microsatellite-instable (MSI) cells carrying the mutated suicide gene, full-length protein becomes expressed within a few cell doublings, presumably resulting from a reverting frameshift within the inserted microsatellite. Treatment of these cells with the innocuous prodrug 5-fluorocytosine (5-FC) induces strong cytotoxicity and we demonstrate that this owes to multiple bystander effects conferred by the suicide gene/prodrug combination. In a mouse model, MMR-deficient tumors that contained the out-of-frame VP22FCU1 gene displayed strong remission after treatment with 5-FC, without any obvious adverse systemic effects to the mouse. By virtue of its high selectivity and potency, this conditional enzyme/prodrug combination may hold promise for the treatment or prevention of MMR-deficient cancer in humans. PMID:19471249

  6. Performance Characterization of UV Science Cameras Developed for the Chromospheric Lyman-Alpha Spectro-Polarimeter

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2014-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras will be built and tested for flight with the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The goal of the CLASP mission is to observe the scattering polarization in Lyman-alpha and to detect the Hanle effect in the line core. Due to the nature of Lyman-alpha polarization in the chromosphere, strict measurement sensitivity requirements are imposed on the CLASP polarimeter and spectrograph systems; science requirements for polarization measurements of Q/I and U/I are 0.1 percent in the line core. CLASP is a dual-beam spectro-polarimeter, which uses a continuously rotating waveplate as a polarization modulator, while the waveplate motor driver outputs trigger pulses to synchronize the exposures. The CCDs are operated in frame-transfer mode; the trigger pulse initiates the frame transfer, effectively ending the ongoing exposure and starting the next. The strict requirement of 0.1 percent polarization accuracy is met by using frame-transfer cameras to maximize the duty cycle in order to minimize photon noise. Coating the e2v CCD57-10 512x512 detectors with Lumogen-E coating allows for a relatively high (30 percent) quantum efficiency at the Lyman-alpha line. The CLASP cameras were designed to operate with a gain of 2.0 +/- 0.5, a readout noise of 25 e- or less, a dark current of 10 e-/second/pixel or less, and less than 0.1 percent residual non-linearity. We present the results of the performance characterization study performed on the CLASP prototype camera: system gain, dark current, read noise, and residual non-linearity.

  7. High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project

    NASA Astrophysics Data System (ADS)

    Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique

    2015-04-01

    Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 x 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
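
    A minimal software sketch of the multiple-exposure fusion idea (HDR-ARtiSt implements a hardware variant of Debevec's method; this simplified version assumes a linear sensor response, and all names are illustrative):

        import numpy as np

        def combine_exposures(frames, exposure_times):
            """Fuse LDR frames of one scene into an HDR radiance map.

            frames : list of float arrays scaled to [0, 1], same scene
                     captured with different exposure times.
            exposure_times : exposure time in seconds for each frame.
            Each pixel is a weighted average of frame/exposure, with a hat
            weight that discounts under- and over-exposed pixels.
            """
            num = np.zeros_like(frames[0])
            den = np.zeros_like(frames[0])
            for img, t in zip(frames, exposure_times):
                w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weighting
                num += w * img / t
                den += w
            radiance = num / np.maximum(den, 1e-6)
            # Simple global tone mapping for display on an LDR monitor.
            return radiance / (1.0 + radiance)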

  8. Vision System Measures Motions of Robot and External Objects

    NASA Technical Reports Server (NTRS)

    Talukder, Ashit; Matthies, Larry

    2008-01-01

    A prototype of an advanced robotic vision system both (1) measures its own motion with respect to a stationary background and (2) detects other moving objects and estimates their motions, all by use of visual cues. Like some prior robotic and other optoelectronic vision systems, this system is based partly on concepts of optical flow and visual odometry. Whereas prior optoelectronic visual-odometry systems have been limited to frame rates of no more than 1 Hz, a visual-odometry subsystem that is part of this system operates at a frame rate of 60 to 200 Hz, given optical-flow estimates. The overall system operates at an effective frame rate of 12 Hz. Moreover, unlike prior machine-vision systems for detecting motions of external objects, this system need not remain stationary: it can detect such motions while it is moving (even vibrating). The system includes a stereoscopic pair of cameras mounted on a moving robot. The outputs of the cameras are digitized, then processed to extract positions and velocities. The initial image-data-processing functions of this system are the same as those of some prior systems: Stereoscopy is used to compute three-dimensional (3D) positions for all pixels in the camera images. For each pixel of each image, optical flow between successive image frames is used to compute the two-dimensional (2D) apparent relative translational motion of the point transverse to the line of sight of the camera. The challenge in designing this system was to provide for utilization of the 3D information from stereoscopy in conjunction with the 2D information from optical flow to distinguish between motion of the camera pair and motions of external objects, compute the motion of the camera pair in all six degrees of translational and rotational freedom, and robustly estimate the motions of external objects, all in real time. To meet this challenge, the system is designed to perform the following image-data-processing functions: The visual-odometry subsystem (the subsystem that estimates the motion of the camera pair relative to the stationary background) utilizes the 3D information from stereoscopy and the 2D information from optical flow. It computes the relationship between the 3D and 2D motions and uses a least-mean-squares technique to estimate motion parameters. The least-mean-squares technique is suitable for real-time implementation when the number of external-moving-object pixels is smaller than the number of stationary-background pixels.
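
    The least-mean-squares motion step can be illustrated with a standard least-squares rigid-motion fit between matched 3-D point sets from consecutive frames (a Kabsch-style sketch; the system's exact formulation is not given in the abstract, so this is an assumption):

        import numpy as np

        def rigid_motion_lsq(P, Q):
            """Least-squares rotation R and translation t with Q ~ R P + t.

            P, Q : (N, 3) arrays of 3-D points from stereoscopy in two
            successive frames, matched pixel-to-pixel via optical flow.
            """
            cP, cQ = P.mean(axis=0), Q.mean(axis=0)
            H = (P - cP).T @ (Q - cQ)                # 3x3 cross-covariance
            U, _, Vt = np.linalg.svd(H)
            d = np.sign(np.linalg.det(Vt.T @ U.T))
            R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # proper rotation
            t = cQ - R @ cP
            return R, t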

  9. Optical space weathering on Vesta: Radiative-transfer models and Dawn observations

    NASA Astrophysics Data System (ADS)

    Blewett, David T.; Denevi, Brett W.; Le Corre, Lucille; Reddy, Vishnu; Schröder, Stefan E.; Pieters, Carle M.; Tosi, Federico; Zambon, Francesca; De Sanctis, Maria Cristina; Ammannito, Eleonora; Roatsch, Thomas; Raymond, Carol A.; Russell, Christopher T.

    2016-02-01

    Exposure to ion and micrometeoroid bombardment in the space environment causes physical and chemical changes in the surface of an airless planetary body. These changes, called space weathering, can strongly influence a surface's optical characteristics, and hence complicate interpretation of composition from reflectance spectroscopy. Prior work using data from the Dawn spacecraft (Pieters, C.M. et al. [2012]. Nature 491, 79-82) found that accumulation of nanophase metallic iron (npFe0), which is a key space-weathering product on the Moon, does not appear to be important on Vesta, and instead regolith evolution is dominated by mixing with carbonaceous chondrite (CC) material delivered by impacts. In order to gain further insight into the nature of space weathering on Vesta, we constructed model reflectance spectra using Hapke's radiative-transfer theory and used them as an aid to understanding multispectral observations obtained by Dawn's Framing Cameras (FC). The model spectra, for a howardite mineral assemblage, include both the effects of npFe0 and that of a mixed CC component. We found that a plot of the 438-nm/555-nm ratio vs. the 555-nm reflectance for the model spectra helps to separate the effects of lunar-style space weathering (LSSW) from those of CC-mixing. We then constructed ratio-reflectance pixel scatterplots using FC images for four areas of contrasting composition: a eucritic area at Vibidia crater, a diogenitic area near Antonia crater, olivine-bearing material within Bellicia crater, and a light mantle unit (referred to as an "orange patch" in some previous studies, based on steep spectral slope in the visible) northeast of Oppia crater. In these four cases the observed spectral trends are those expected from CC-mixing, with no evidence for weathering dominated by production of npFe0. In order to survey a wider range of surfaces, we also defined a spectral parameter that is a function of the change in 438-nm/555-nm ratio and the 555-nm reflectance between fresh and mature surfaces, permitting the spectral change to be classified as LSSW-like or CC-mixing-like. When applied to 21 fresh and mature FC spectral pairs, it was found that none have changes consistent with LSSW. We discuss Vesta's lack of LSSW in relation to the possible agents of space weathering, the effects of physical and compositional differences among asteroid surfaces, and the possible role of magnetic shielding from the solar wind.
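
    A hedged sketch of the two spectral-change quantities described above (the paper's exact parameter definition is not reproduced here, so this pairing is illustrative only):

        import numpy as np

        def spectral_change(r438_fresh, r555_fresh, r438_mature, r555_mature):
            """Change-space coordinates for a fresh/mature surface pair.

            Inputs are mean FC reflectances at 438 nm and 555 nm. Lunar-style
            weathering darkens and reddens (the 438/555 ratio drops), while
            CC-mixing darkens with comparatively little ratio change, so the
            relative size of the two changes separates the trends.
            """
            d_ratio = r438_mature / r555_mature - r438_fresh / r555_fresh
            d_refl = r555_mature - r555_fresh
            return d_ratio, d_refl   # classify by position in this 2-D space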

  10. Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones

    PubMed Central

    Wang, Zhen; Jin, Bingwen; Geng, Weidong

    2017-01-01

    The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
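
    A minimal sketch of the final frame transformation, assuming the IMU supplies a camera-to-ENU rotation and that the boresight is the antenna frame's +z axis (both conventions are assumptions, not the paper's definitions):

        import numpy as np

        def downtilt_azimuth(R_cam_ant, R_earth_cam):
            """Antenna downtilt and azimuth in the Earth (ENU) frame.

            R_cam_ant   : 3x3 rotation, antenna frame -> camera frame,
                          estimated from the multi-view images.
            R_earth_cam : 3x3 rotation, camera frame -> Earth ENU frame,
                          obtained from the calibrated IMU data.
            """
            b = R_earth_cam @ R_cam_ant @ np.array([0.0, 0.0, 1.0])
            east, north, up = b
            azimuth = np.degrees(np.arctan2(east, north)) % 360.0
            downtilt = np.degrees(np.arcsin(-up / np.linalg.norm(b)))
            return downtilt, azimuth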

  11. Whirlwind Drama During Spirit's 496th Sol

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This movie clip shows a dust devil growing in size and blowing across the plain inside Mars' Gusev Crater. The clip consists of frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the morning of the rover's 496th martian day, or sol (May 26, 2005). Contrast has been enhanced for anything in the images that changes from frame to frame, that is, for the dust moved by wind.

  12. A Summary of the Evaluation of PPG Herculite XP Glass in Punched Window and Storefront Assemblies

    DTIC Science & Technology

    2013-01-01

    frames for all IGU windows extruded from existing dies. The glazing was secured to the frame on all four sides with a 1/2-in bead width of DOW 995...lite and non-laminated IGU debris tests. A wood frame with a 4-in wide slit was placed behind the window to transform the debris cloud into a narrow...speed camera DIC set-up: laser deflection gauge, shock tube, window, wood frame with slit, high speed camera, well-lit backdrop. Debris tracking set-up: laser

  13. Cloning of the cDNA for a hematopoietic cell-specific protein related to CD20 and the {beta} subunit of the high-affinity IgE receptor: Evidence for a family of proteins with four membrane-spanning regions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Adra, C.N.; Morrison, P.; Lim, B.

    1994-10-11

    The authors report the cloning of the cDNA for a human gene whose mRNA is expressed specifically in hematopoietic cells. A long open reading frame in the 1.7-kb mRNA encodes a 214-aa protein of 25 kDa with four hydrophobic regions consistent with a protein that traverses the membrane four times. To reflect the structure and expression of this gene in diverse hematopoietic lineages of lymphoid and myeloid origin, the authors named the gene HTm4. The protein is about 20% homologous to two other "four-transmembrane" proteins: the B-cell-specific antigen CD20 and the β subunit of the high-affinity receptor for IgE, FcεRIβ. The highest homologies among the three proteins are found in the transmembrane domains, but conserved residues are also recognized in the inter-transmembrane domains and in the N and C termini. Using fluorescence in situ hybridization, they localized HTm4 to human chromosome 11q12-13.1, where the CD20 and FcεRIβ genes are also located. Both the murine homologue for CD20, Ly-44, and the murine FcεRIβ gene map to the same region in murine chromosome 19. The authors propose that the HTm4, CD20, and FcεRIβ genes evolved from the same ancestral gene to form a family of four-transmembrane proteins. It is possible that other related members exist. Similar to CD20 and FcεRIβ, it is likely that HTm4 has a role in signal transduction and, like FcεRIβ, might be a subunit associated with receptor complexes.

  14. Polarizing aperture stereoscopic cinema camera

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    2012-03-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.

  15. Polarizing aperture stereoscopic cinema camera

    NASA Astrophysics Data System (ADS)

    Lipton, Lenny

    2012-07-01

    The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor, the size of the standard 35 mm frame, with the means to select left and right image information. Even with the added stereoscopic capability, the appearance of existing camera bodies will be unaltered.

  16. Underwater image mosaicking and visual odometry

    NASA Astrophysics Data System (ADS)

    Sadjadi, Firooz; Tangirala, Sekhar; Sorber, Scott

    2017-05-01

    This paper summarizes the results of studies in underwater odometry using a video camera for estimating the velocity of an unmanned underwater vehicle (UUV). Underwater vehicles are usually equipped with sonar and an Inertial Measurement Unit (IMU) - an integrated sensor package that combines multiple accelerometers and gyros to produce a three-dimensional measurement of both specific force and angular rate with respect to an inertial reference frame for navigation. In this study, we investigate the use of odometry information obtainable from a video camera mounted on a UUV to extract vehicle velocity relative to the ocean floor. A key challenge with this process is the seemingly bland (i.e. featureless) nature of video data obtained underwater, which can make conventional approaches to image-based motion estimation difficult. To address this problem, we perform image enhancement, followed by frame-to-frame image transformation, registration, and mosaicking/stitching. With this approach the velocity components associated with the moving sensor (vehicle) are readily obtained from (i) the components of the transform matrix at each frame; (ii) information about the height of the vehicle above the seabed; and (iii) the sensor resolution. Preliminary results are presented.
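
    A minimal sketch of recovering velocity from the frame-to-frame transform, vehicle height, and sensor resolution, assuming a downward-looking pinhole camera (all names are illustrative):

        import numpy as np

        def velocity_from_transform(A, altitude_m, focal_px, fps):
            """Ground-relative velocity from a frame-to-frame affine.

            A          : 2x3 affine matrix mapping frame k to frame k+1
                         (pixel units), from registration/mosaicking.
            altitude_m : height of the camera above the seabed.
            focal_px   : focal length expressed in pixels.
            fps        : video frame rate.
            Looking straight down, one pixel spans altitude_m / focal_px
            metres on the seabed (the ground sample distance).
            """
            dx_px, dy_px = A[0, 2], A[1, 2]        # translation component
            metres_per_px = altitude_m / focal_px
            return dx_px * metres_per_px * fps, dy_px * metres_per_px * fps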

  17. Heterogeneous CPU-GPU moving targets detection for UAV video

    NASA Astrophysics Data System (ADS)

    Li, Maowen; Tang, Linbo; Han, Yuqi; Yu, Chunlei; Zhang, Chao; Fu, Huiquan

    2017-07-01

    Moving-target detection is gaining popularity in civilian and military applications. On some motion-detection monitoring platforms, low-resolution stationary cameras are being replaced by moving HD cameras mounted on UAVs. Moving targets occupy only a small minority of the pixels in HD video taken by a UAV, and the background of the frame is usually moving because of the motion of the UAV itself. The high computational cost of detection algorithms prevents running them at full frame resolution. Hence, to solve the problem of moving-target detection in UAV video, we propose a heterogeneous CPU-GPU moving-target detection algorithm. More specifically, we use background registration to eliminate the impact of the moving background and frame differencing to detect small moving targets. To achieve real-time processing, we design a heterogeneous CPU-GPU framework for our method. The experimental results show that our method can detect the main moving targets in HD video taken by a UAV, with an average processing time of 52.16 ms per frame, which is fast enough for real-time use.
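
    A CPU-only OpenCV sketch of the registration-plus-differencing idea (this is not the authors' heterogeneous implementation; the feature choice and threshold are illustrative):

        import cv2
        import numpy as np

        def moving_target_mask(prev_gray, curr_gray, diff_thresh=30):
            """Register the previous frame to the current one, then difference.

            A feature-based global affine compensates for the UAV's ego-motion
            (the background-registration step), so the residual frame
            difference highlights genuinely moving targets.
            """
            orb = cv2.ORB_create(1000)
            k1, d1 = orb.detectAndCompute(prev_gray, None)
            k2, d2 = orb.detectAndCompute(curr_gray, None)
            matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
            p1 = np.float32([k1[m.queryIdx].pt for m in matches])
            p2 = np.float32([k2[m.trainIdx].pt for m in matches])
            M, _ = cv2.estimateAffinePartial2D(p1, p2, method=cv2.RANSAC)
            h, w = curr_gray.shape
            warped = cv2.warpAffine(prev_gray, M, (w, h))
            diff = cv2.absdiff(curr_gray, warped)
            _, mask = cv2.threshold(diff, diff_thresh, 255, cv2.THRESH_BINARY)
            return mask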

  18. 3D Position and Velocity Vector Computations of Objects Jettisoned from the International Space Station Using Close-Range Photogrammetry Approach

    NASA Technical Reports Server (NTRS)

    Papanyan, Valeri; Oshle, Edward; Adamo, Daniel

    2008-01-01

    Measurement of the jettisoned object departure trajectory and velocity vector in the International Space Station (ISS) reference frame is vitally important for prompt evaluation of the object's imminent orbit. We report on the first successful application of photogrammetric analysis of ISS imagery for the prompt computation of a jettisoned object's position and velocity vectors. As post-EVA analysis examples, we present the Floating Potential Probe (FPP) and the Russian "Orlan" Space Suit jettisons, as well as the near-real-time (provided within several hours after separation) computations of the Video Stanchion Support Assembly Flight Support Assembly (VSSA-FSA) and Early Ammonia Servicer (EAS) jettisons during the US astronauts' space-walk. Standard close-range photogrammetry analysis was used during this EVA to analyze two on-board camera image sequences down-linked from the ISS. In this approach the ISS camera orientations were computed from known coordinates of several reference points on the ISS hardware. Then the position of the jettisoned object for each time-frame was computed from its image in each frame of the video-clips. In another, "quick-look" approach used in near-real time, orientation of the cameras was computed from their position (from the ISS CAD model) and operational data (pan and tilt); the location of the jettisoned object was then calculated for only several frames of the two synchronized movies. Keywords: Photogrammetry, International Space Station, jettisons, image analysis.
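
    The per-frame position computation can be illustrated with standard linear (DLT) triangulation from two calibrated views; this generic sketch stands in for, and is not identical to, the authors' close-range photogrammetry pipeline:

        import numpy as np

        def triangulate(P1, P2, uv1, uv2):
            """Linear (DLT) triangulation of one object point from two views.

            P1, P2   : 3x4 projection matrices of the two ISS cameras, derived
                       from known reference points on the station hardware.
            uv1, uv2 : (u, v) image coordinates of the jettisoned object.
            Returns the 3-D point in the common (station) reference frame.
            """
            A = np.vstack([
                uv1[0] * P1[2] - P1[0],
                uv1[1] * P1[2] - P1[1],
                uv2[0] * P2[2] - P2[0],
                uv2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

    Differencing the triangulated positions over the known frame interval then yields the departure velocity vector.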

  19. The Example of Using the Xiaomi Cameras in Inventory of Monumental Objects - First Results

    NASA Astrophysics Data System (ADS)

    Markiewicz, J. S.; Łapiński, S.; Bienkowski, R.; Kaliszewska, A.

    2017-11-01

    At present, digital documentation recorded in the form of raster or vector files is the obligatory way of inventorying historical objects. Today, photogrammetry is becoming more and more popular and is becoming the standard of documentation in many projects involving the recording of all possible spatial data on landscape, architecture, or even single objects. Low-cost sensors allow for the creation of reliable and accurate three-dimensional models of investigated objects. This paper presents the results of a comparison between the outcomes obtained when using three sources of image: low-cost Xiaomi cameras, a full-frame camera (Canon 5D Mark II) and middle-frame camera (Hasselblad-Hd4). In order to check how the results obtained from the two sensors differ the following parameters were analysed: the accuracy of the orientation of the ground level photos on the control and check points, the distribution of appointed distortion in the self-calibration process, the flatness of the walls, the discrepancies between point clouds from the low-cost cameras and references data. The results presented below are a result of co-operation of researchers from three institutions: the Systems Research Institute PAS, The Department of Geodesy and Cartography at the Warsaw University of Technology and the National Museum in Warsaw.

  20. Full-Frame Reference for Test Photo of Moon

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images.

    Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

    The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.

  1. Lunar Reconnaissance Orbiter Camera (LROC) instrument overview

    USGS Publications Warehouse

    Robinson, M.S.; Brylow, S.M.; Tschimmel, M.; Humm, D.; Lawrence, S.J.; Thomas, P.C.; Denevi, B.W.; Bowman-Cisneros, E.; Zerr, J.; Ravine, M.A.; Caplinger, M.A.; Ghaemi, F.T.; Schaffner, J.A.; Malin, M.C.; Mahanti, P.; Bartels, A.; Anderson, J.; Tran, T.N.; Eliason, E.M.; McEwen, A.S.; Turtle, E.; Jolliff, B.L.; Hiesinger, H.

    2010-01-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) and Narrow Angle Cameras (NACs) are on the NASA Lunar Reconnaissance Orbiter (LRO). The WAC is a 7-color push-frame camera (100 and 400 m/pixel visible and UV, respectively), while the two NACs are monochrome narrow-angle linescan imagers (0.5 m/pixel). The primary mission of LRO is to obtain measurements of the Moon that will enable future lunar human exploration. The overarching goals of the LROC investigation include landing site identification and certification, mapping of permanently polar shadowed and sunlit regions, meter-scale mapping of polar regions, global multispectral imaging, a global morphology base map, characterization of regolith properties, and determination of current impact hazards.

  2. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40670 (18 Sept. 1985) --- The two teachers, Sharon Christa McAuliffe and Barbara R. Morgan (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. McAuliffe zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  3. "Teacher in Space" Trainees - Arriflex Motion Picture Camera

    NASA Image and Video Library

    1985-09-20

    S85-40671 (18 Sept. 1985) --- The two teachers, Barbara R. Morgan and Sharon Christa McAuliffe (out of frame) have hands-on experience with an Arriflex motion picture camera following a briefing on space photography. The two began training Sept. 10, 1985 with the STS-51L crew and learning basic procedures for space travelers. The second week of training included camera training, aircraft familiarization and other activities. Morgan zeroes in on a test subject during a practice session with the Arriflex. Photo credit: NASA

  4. Deployment of the RCA Satcom K-2 communications satellite

    NASA Image and Video Library

    1985-11-28

    61B-38-36W (28 Nov 1985) --- The 4,144-pound RCA Satcom K-2 communications satellite is photographed as it spins from the cargo bay of the Earth-orbiting Atlantis. A TV camera at right records the deployment for a later playback to Earth. This frame was photographed with a handheld Hasselblad camera inside the spacecraft.

  5. Validation of Viewing Reports: Exploration of a Photographic Method.

    ERIC Educational Resources Information Center

    Fletcher, James E.; Chen, Charles Chao-Ping

    A time lapse camera loaded with Super 8 film was employed to photographically record the area in front of a conventional television receiver in selected homes. The camera took one picture each minute for three days, including in the same frame the face of the television receiver. Family members kept a conventional viewing diary of their viewing…

  6. A device for synchronizing biomechanical data with cine film.

    PubMed

    Rome, L C

    1995-03-01

    Biomechanists are faced with two problems in synchronizing continuous physiological data to discrete, frame-based kinematic data from films. First, the accuracy of most synchronization techniques is good only to one frame and hence depends on framing rate. Second, even if perfectly correlated at the beginning of a 'take', the film and physiological data may become progressively desynchronized as the 'take' proceeds. A system is described, which provides synchronization between cine film and continuous physiological data with an accuracy of +/- 0.2 ms, independent of framing rate and the duration of the film 'take'. Shutter pulses from the camera were output to a computer recording system where they were recorded and counted, and to a digital device which counted the pulses and illuminated the count on the bank of LEDs which was filmed with the subject. Synchronization was performed by using the rising edge of the shutter pulse and by comparing the frame number imprinted on the film to the frame number recorded by the computer system. In addition to providing highly accurate synchronization over long film 'takes', this system provides several other advantages. First, having frame numbers imprinted both on the film and computer record greatly facilitates analysis. Second, the LEDs were designed to show the 'take number' while the camera is coming up to speed, thereby avoiding the use of cue cards which disturb the animal. Finally, use of this device results in considerable savings in film.
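
    A minimal sketch of the bookkeeping this device enables, assuming the computer stores one timestamp per shutter pulse (names are illustrative):

        def frame_time(frame_number, pulse_times, first_filmed_count):
            """Map an LED frame count imprinted on the film to the
            physiological record's time base.

            pulse_times        : computer timestamps (s), one per shutter
                                 pulse, in arrival order.
            first_filmed_count : LED count visible on the first analyzed
                                 film frame, aligning the two counters.
            """
            return pulse_times[frame_number - first_filmed_count]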

  7. Context View from 11' on ladder from southeast corner of ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    Context View from 11' on ladder from southeast corner of Bottle Village parcel, just inside fence. Doll Head Shrine at far left frame, Living Trailer (c.1960 "Spartanette") in center frame. Little Wishing Well at far right frame. Some shrines and small buildings were destroyed in the January 1994 Northridge earthquake, and only their perimeter walls and foundations exist. Camera facing north northwest. - Grandma Prisbrey's Bottle Village, 4595 Cochran Street, Simi Valley, Ventura County, CA

  8. Obstacle Detection in Indoor Environment for Visually Impaired Using Mobile Camera

    NASA Astrophysics Data System (ADS)

    Rahman, Samiur; Ullah, Sana; Ullah, Sehat

    2018-01-01

    Obstacle detection can improve the mobility as well as the safety of visually impaired people. In this paper, we present a system using a mobile camera for visually impaired people. The proposed algorithm works in indoor environments and uses a very simple technique based on a few pre-stored floor images. All unique floor types in the indoor environment are considered, and a single image is stored for each unique floor type. These floor images are treated as reference images. The algorithm acquires an input image frame, selects a region of interest, and scans it for obstacles using the pre-stored floor images. The algorithm compares the present frame with the next frame and computes the mean square error between the two. If the mean square error is less than a threshold value α, there is no obstacle in the next frame. If it is greater than α, there are two possibilities: either there is an obstacle or the floor type has changed. To check whether the floor has changed, the algorithm computes the mean square error between the next frame and all stored floor types. If the minimum of these errors is less than α, the floor has changed; otherwise an obstacle is present. The proposed algorithm works in real time, and 96% accuracy has been achieved.
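
    A minimal sketch of the decision rule, assuming grayscale frames of equal size (the threshold and names are illustrative):

        import numpy as np

        def classify(frame, next_frame, floor_refs, alpha):
            """Return 'clear', 'floor changed', or 'obstacle'."""
            def mse(a, b):
                return np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)

            if mse(frame, next_frame) < alpha:
                return "clear"                       # frames agree
            if min(mse(next_frame, ref) for ref in floor_refs) < alpha:
                return "floor changed"               # matches a known floor
            return "obstacle"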

  9. CIFAR10-DVS: An Event-Stream Dataset for Object Classification

    PubMed Central

    Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping

    2017-01-01

    Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task that requires recording with neuromorphic cameras, and currently only limited event-stream datasets are available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named “CIFAR10-DVS.” The conversion to an event-stream dataset was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike converting frame-based images by moving the camera, moving the images themselves is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification. PMID:28611582

  10. Optimizing low-light microscopy with back-illuminated electron multiplying charge-coupled device: enhanced sensitivity, speed, and resolution.

    PubMed

    Coates, Colin G; Denvir, Donal J; McHale, Noel G; Thornbury, Keith D; Hollywood, Mark A

    2004-01-01

    The back-illuminated electron multiplying charge-coupled device (EMCCD) camera is having a profound influence on the field of low-light dynamic cellular microscopy, combining the highest possible photon collection efficiency with the ability to virtually eliminate the readout noise detection limit. We report here the use of this camera, in 512 x 512 frame-transfer chip format at 10-MHz pixel readout speed, in optimizing a demanding ultra-low-light intracellular calcium flux microscopy setup. The arrangement employed includes a spinning confocal Nipkow disk, which, while facilitating the need to both generate images at very rapid frame rates and minimize background photons, yields very weak signals. The challenge for the camera lies not just in detecting as many of these scarce photons as possible, but also in operating at a frame rate that meets the temporal resolution requirements of many low-light microscopy approaches, a particular demand of smooth muscle calcium flux microscopy. Results presented illustrate both the significant sensitivity improvement offered by this technology over the previous standard in ultra-low-light CCD detection, the GenIII+ intensified charge-coupled device (ICCD), and also portray the advanced temporal and spatial resolution capabilities of the EMCCD. Copyright 2004 Society of Photo-Optical Instrumentation Engineers.

  11. CIFAR10-DVS: An Event-Stream Dataset for Object Classification.

    PubMed

    Li, Hongmin; Liu, Hanchao; Ji, Xiangyang; Li, Guoqi; Shi, Luping

    2017-01-01

    Neuromorphic vision research requires high-quality and appropriately challenging event-stream datasets to support continuous improvement of algorithms and methods. However, creating event-stream datasets is a time-consuming task that requires recording with neuromorphic cameras, and currently only limited event-stream datasets are available. In this work, by utilizing the popular computer vision dataset CIFAR-10, we converted 10,000 frame-based images into 10,000 event streams using a dynamic vision sensor (DVS), providing an event-stream dataset of intermediate difficulty in 10 different classes, named "CIFAR10-DVS." The conversion to an event-stream dataset was implemented by a repeated closed-loop smooth (RCLS) movement of the frame-based images. Unlike converting frame-based images by moving the camera, moving the images themselves is more realistic with respect to practical applications. The repeated closed-loop image movement generates rich local intensity changes in continuous time which are quantized by each pixel of the DVS camera to generate events. Furthermore, a performance benchmark in event-driven object classification is provided based on state-of-the-art classification algorithms. This work provides a large event-stream dataset and an initial benchmark for comparison, which may boost algorithm development in event-driven pattern recognition and object classification.

  12. Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.

    PubMed

    Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart

    2017-01-01

    Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye tracker camera and a built-in laptop-based web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera. Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high frame rate eye tracker camera. While this method is not suitable for eye tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smart phones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.

  13. Mission Specialist (MS) Bluford exercises on middeck treadmill

    NASA Image and Video Library

    1983-09-05

    STS008-13-0361 (30 Aug.-5 Sept. 1983) --- Astronaut Guion S. Bluford, STS-8 mission specialist, assists Dr. William E. Thornton (out of frame) with a medical test that requires use of the treadmill exercising device designed for spaceflight by the STS-8 medical doctor. This frame was shot with a 35mm camera. Photo credit: NASA

  14. Mission Specialist Hawley works with the SWUIS experiment

    NASA Image and Video Library

    2013-11-18

    STS093-350-022 (22-27 July 1999) --- Astronaut Steven A. Hawley, mission specialist, works with the Southwest Ultraviolet Imaging System (SWUIS) experiment onboard the Earth-orbiting Space Shuttle Columbia. The SWUIS is based around a Maksutov-design Ultraviolet (UV) telescope and a UV-sensitive, image-intensified Charge-Coupled Device (CCD) camera that frames at video frame rates.

  15. High frame rate imaging systems developed in Northwest Institute of Nuclear Technology

    NASA Astrophysics Data System (ADS)

    Li, Binkang; Wang, Kuilu; Guo, Mingan; Ruan, Linbo; Zhang, Haibing; Yang, Shaohua; Feng, Bing; Sun, Fengrong; Chen, Yanli

    2007-01-01

    This paper presents high frame rate imaging systems developed at the Northwest Institute of Nuclear Technology in recent years. Three types of imaging systems are included. The first type utilizes the EG&G RETICON photodiode array (PDA) RA100A as the image sensor, which can work at up to 1000 frames per second (fps). Besides working continuously, the PDA system is also designed to switch to a mode for capturing flash-light events; a specific timing sequence was designed to satisfy this requirement. The camera image data can be transmitted to a remote area by coaxial or optical fiber cable and then stored. The second type utilizes the PHOTOBIT complementary metal oxide semiconductor (CMOS) sensor PB-MV13, which has a high resolution of 1280 (H) x 1024 (V) pixels per frame. The CMOS system can operate at up to 500 fps in full frame and 4000 fps partially. The prototype scheme of the system is presented. The third type adopts charge-coupled devices (CCD) as the imagers; MINTRON MTV-1881EX, DALSA CA-D1, and CA-D6 camera heads are used in the systems' development. A comparison of the features of the RA100A-, PB-MV13-, and CA-D6-based systems is given at the end.

  16. Low-Latency Line Tracking Using Event-Based Dynamic Vision Sensors

    PubMed Central

    Everding, Lukas; Conradt, Jörg

    2018-01-01

    In order to safely navigate and orient in their local surroundings, autonomous systems need to rapidly extract and persistently track visual features from the environment. While there are many algorithms tackling those tasks for traditional frame-based cameras, these have to deal with the fact that conventional cameras sample their environment with a fixed frequency. Most prominently, the same features have to be found in consecutive frames and corresponding features then need to be matched using elaborate techniques, as any information between the two frames is lost. We introduce a novel method to detect and track line structures in data streams of event-based silicon retinae [also known as dynamic vision sensors (DVS)]. In contrast to conventional cameras, these biologically inspired sensors generate a quasicontinuous stream of vision information analogous to the information stream created by the ganglion cells in mammalian retinae. All pixels of a DVS operate asynchronously without a periodic sampling rate and emit a so-called DVS address event as soon as they perceive a luminance change exceeding an adjustable threshold. We use the high temporal resolution achieved by the DVS to track features continuously through time instead of only at fixed points in time. The focus of this work lies on tracking lines in a mostly static environment which is observed by a moving camera, a typical setting in mobile robotics. Since DVS events are mostly generated at object boundaries and edges, which in man-made environments often form lines, lines were chosen as the feature to track. Our method is based on detecting planes of DVS address events in x-y-t-space and tracing these planes through time. It is robust against noise and runs in real time on a standard computer, hence it is suitable for low-latency robotics. The efficacy and performance are evaluated on real-world data sets which show artificial structures in an office building, using event data for tracking and frame data for ground-truth estimation from a DAVIS240C sensor. PMID:29515386
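
    The plane-of-events idea can be sketched as a least-squares plane fit in x-y-t space (illustrative only; the paper's detection and tracing machinery is considerably more elaborate):

        import numpy as np

        def fit_event_plane(events):
            """Least-squares plane t = a*x + b*y + c through DVS events.

            events : (N, 3) array of (x, y, t) address events. A line observed
            by a moving camera sweeps out such a plane in x-y-t space; a and b
            encode the line's apparent image-plane motion.
            """
            x, y, t = events[:, 0], events[:, 1], events[:, 2]
            A = np.column_stack([x, y, np.ones_like(x)])
            (a, b, c), *_ = np.linalg.lstsq(A, t, rcond=None)
            return a, b, c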

  17. Camera-Based Microswitch Technology to Monitor Mouth, Eyebrow, and Eyelid Responses of Children with Profound Multiple Disabilities

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Lang, Russell; Didden, Robert

    2011-01-01

    A camera-based microswitch technology was recently used to successfully monitor small eyelid and mouth responses of two adults with profound multiple disabilities (Lancioni et al., Res Dev Disab 31:1509-1514, 2010a). This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on…

  18. Camera-Based Microswitch Technology for Eyelid and Mouth Responses of Persons with Profound Multiple Disabilities: Two Case Studies

    ERIC Educational Resources Information Center

    Lancioni, Giulio E.; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N.; O'Reilly, Mark F.; Sigafoos, Jeff

    2010-01-01

    These two studies assessed camera-based microswitch technology for eyelid and mouth responses of two persons with profound multiple disabilities and minimal motor behavior. This technology, in contrast with the traditional optic microswitches used for those responses, did not require support frames on the participants' face but only small color…

  19. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements and employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, a circuit description with the changes made to the circuits as a result of integration and testing, a maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  20. Passive stand-off terahertz imaging with 1 hertz frame rate

    NASA Astrophysics Data System (ADS)

    May, T.; Zieger, G.; Anders, S.; Zakosarenko, V.; Starkloff, M.; Meyer, H.-G.; Thorwirth, G.; Kreysa, E.

    2008-04-01

    Terahertz (THz) cameras are expected to be a powerful tool for future security applications. If such a technology is to be useful for typical security scenarios (e.g. airport check-in), it has to meet some minimum standards. A THz camera should record images at video rate from a safe (stand-off) distance. Although active cameras are conceivable, a passive system has the benefit of concealed operation. Additionally, from an ethical perspective, the lack of exposure to a radiation source is a considerable advantage for public acceptance. Taking all these requirements into account, only cooled detectors are able to achieve the needed sensitivity. A big leap forward in detector performance and scalability was driven by the astrophysics community: superconducting bolometers and midsized arrays of them have been developed and are in routine use. Although devices with many pixels are foreseeable, today a device with an additional scanning optic is the most direct route to an imaging system with useful resolution. We demonstrate the capabilities of a concept for a passive terahertz video camera based on superconducting technology. The actual prototype utilizes a small Cassegrain telescope with a gyrating secondary mirror to record 2-kilopixel THz images at a 1-second frame rate.

  1. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking

    PubMed Central

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-01-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways, from depth image and by using the initial poses to triangulate. The model improves the camera poses and decreases drift error during long-distance RGB-D tracking iteratively. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features. PMID:29723974

  2. Geometric Integration of Hybrid Correspondences for RGB-D Unidirectional Tracking.

    PubMed

    Tang, Shengjun; Chen, Wu; Wang, Weixi; Li, Xiaoming; Darwish, Walid; Li, Wenbin; Huang, Zhengdong; Hu, Han; Guo, Renzhong

    2018-05-01

    Traditionally, visual-based RGB-D SLAM systems only use correspondences with valid depth values for camera tracking, thus ignoring the regions without 3D information. Due to the strict limitation on measurement distance and view angle, such systems adopt only short-range constraints which may introduce larger drift errors during long-distance unidirectional tracking. In this paper, we propose a novel geometric integration method that makes use of both 2D and 3D correspondences for RGB-D tracking. Our method handles the problem by exploring visual features both when depth information is available and when it is unknown. The system comprises two parts: coarse pose tracking with 3D correspondences, and geometric integration with hybrid correspondences. First, the coarse pose tracking generates the initial camera pose using 3D correspondences with frame-by-frame registration. The initial camera poses are then used as inputs for the geometric integration model, along with 3D correspondences, 2D-3D correspondences and 2D correspondences identified from frame pairs. The initial 3D location of the correspondence is determined in two ways, from depth image and by using the initial poses to triangulate. The model improves the camera poses and decreases drift error during long-distance RGB-D tracking iteratively. Experiments were conducted using data sequences collected by commercial Structure Sensors. The results verify that the geometric integration of hybrid correspondences effectively decreases the drift error and improves mapping accuracy. Furthermore, the model enables a comparative and synergistic use of datasets, including both 2D and 3D features.

  3. Single-camera visual odometry to track a surgical X-ray C-arm base.

    PubMed

    Esfandiari, Hooman; Lichti, Derek; Anglin, Carolyn

    2017-12-01

    This study provides a framework for a single-camera odometry system for localizing a surgical C-arm base. An application-specific monocular visual odometry system (a downward-looking consumer-grade camera rigidly attached to the C-arm base) is proposed in this research. The cumulative dead-reckoning estimation of the base is extracted based on frame-to-frame homography estimation. Optical-flow results are utilized to feed the odometry. Online positional and orientation parameters are then reported. Positional accuracy of better than 2% (of the total traveled distance) for most of the cases and 4% for all the cases studied and angular accuracy of better than 2% (of absolute cumulative changes in orientation) were achieved with this method. This study provides a robust and accurate tracking framework that not only can be integrated with the current C-arm joint-tracking system (i.e. TC-arm) but also is capable of being employed for similar applications in other fields (e.g. robotics).
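
    A minimal sketch of the cumulative dead-reckoning step, assuming one planar homography per consecutive frame pair (names are illustrative):

        import numpy as np

        def dead_reckon(homographies):
            """Accumulate frame-to-frame homographies into a trajectory.

            homographies : sequence of 3x3 matrices H_k mapping frame k to
            frame k+1 on the (planar) floor. Composing them dead-reckons the
            downward-looking camera, and hence the C-arm base, from its
            starting pose; drift accumulates with distance travelled.
            """
            pose = np.eye(3)
            trajectory = [pose.copy()]
            for H in homographies:
                pose = H @ pose
                trajectory.append(pose.copy())
            return trajectory   # entry k maps the first frame into frame k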

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ranson, W.F.; Schaeffel, J.A.; Murphree, E.A.

    The response of prestressed and preheated plates subject to an exponentially decaying blast load was experimentally determined. A grid was reflected from the front surface of the plate and the response was recorded with a high speed camera. The camera used in this analysis was a rotating drum camera operating at 20,000 frames per second with a maximum of 224 frames at 39 microseconds separation. Inplane tension loads were applied to the plate by means of air cylinders. Maximum biaxial load applied to the plate was 500 pounds. Plate preheating was obtained with resistance heaters located in the specimen plate holder with a maximum capability of 500°F. Data analysis was restricted to the maximum conditions at the center of the plate. Strains were determined from the photographic data and the stresses were calculated from the strain data. Results were obtained from zero preload conditions to a maximum of 480 pounds inplane tension loads and a plate temperature of 490°F. The blast load ranged from 6 to 23 psi.

  5. Development of a Portable 3CCD Camera System for Multispectral Imaging of Biological Samples

    PubMed Central

    Lee, Hoyoung; Park, Soo Hyun; Noh, Sang Ha; Lim, Jongguk; Kim, Moon S.

    2014-01-01

    Recent studies have suggested the need for imaging devices capable of multispectral imaging beyond the visible region, to allow for quality and safety evaluations of agricultural commodities. Conventional multispectral imaging devices lack flexibility in spectral waveband selectivity for such applications. In this paper, a recently developed portable 3CCD camera with significant improvements over existing imaging devices is presented. A beam-splitter prism assembly for 3CCD was designed to accommodate three interference filters that can be easily changed for application-specific multispectral waveband selection in the 400 to 1000 nm region. We also designed and integrated electronic components on printed circuit boards with firmware programming, enabling parallel processing, synchronization, and independent control of the three CCD sensors, to ensure the transfer of data without significant delay or data loss due to buffering. The system can stream 30 frames (3-waveband images in each frame) per second. The potential utility of the 3CCD camera system was demonstrated in the laboratory for detecting defect spots on apples. PMID:25350510

  6. Fast visible imaging of turbulent plasma in TORPEX

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Iraji, D.; Diallo, A.; Fasoli, A.

    2008-10-15

    Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement Langmuir probe measurements, which allows comparing statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple-head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of perpendicular view and in the plasma region where interchange modes dominate.

  7. Preliminary results on the various U.V. straylight sources for the VWFC onboard SL 1. [Very Wide Field Camera]

    NASA Technical Reports Server (NTRS)

    Viton, M.; Courtes, G.; Sivan, J. P.; Decher, R.; Gary, A.

    1985-01-01

    Technical difficulties encountered using the Very Wide Field Camera (VWFC) during the Spacelab 1 Shuttle mission are reported. The VWFC is a wide-field, low-resolution (5 arcmin half-width) photographic camera, capable of operating in both spectrometric and photometric modes. The bandpasses of the photometric mode of the VWFC are defined by three Al + MgF2 interference filters. A piggy-back spectrograph attached to the VWFC was used for observations in the spectrometric mode. A total of 48 astronomical frames were obtained using the VWFC, of which only 20 were considered to be of adequate quality for astronomical data processing. Preliminary analysis of the 28 poor-quality images revealed the following possible defects in the VWFC: darkness in the spacing frames, twilight/dawn UV straylight, and internal UV straylight. Improvements in the VWFC astronomical data processing scheme are expected to help identify and eliminate UV straylight sources in the future.

  8. Iodine filter imaging system for subtraction angiography using synchrotron radiation

    NASA Astrophysics Data System (ADS)

    Umetani, K.; Ueda, K.; Takeda, T.; Itai, Y.; Akisada, M.; Nakajima, T.

    1993-11-01

    A new type of real-time imaging system was developed for transvenous coronary angiography. A combination of an iodine filter and a single energy broad-bandwidth X-ray produces two-energy images for the iodine K-edge subtraction technique. X-ray images are sequentially converted to visible images by an X-ray image intensifier. By synchronizing the timing of the movement of the iodine filter into and out of the X-ray beam, two output images of the image intensifier are focused side by side on the photoconductive layer of a camera tube by an oscillating mirror. Both images are read out by electron beam scanning of a 1050-scanning-line video camera within a camera frame time of 66.7 ms. One hundred ninety two pairs of iodine-filtered and non-iodine-filtered images are stored in the frame memory at a rate of 15 pairs/s. In vivo subtracted images of coronary arteries in dogs were obtained in the form of motion pictures.
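
    The K-edge subtraction step itself can be sketched generically: each intensity image is converted to attenuation line integrals with a logarithm, and the iodine-filtered and non-filtered members of a pair are differenced, cancelling non-iodine attenuation. This is a textbook formulation, not the authors' exact processing chain.

        import numpy as np

        def kedge_subtract(img_filtered, img_unfiltered, eps=1e-6):
            """Logarithmic K-edge subtraction of an image pair; the difference
            of -log intensities leaves mainly the iodine signal."""
            a = -np.log(np.clip(img_filtered, eps, None))
            b = -np.log(np.clip(img_unfiltered, eps, None))
            return a - b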

  9. Preliminary Geological Map of the Ac-H-12 Toharu Quadrangle of Ceres: An Integrated Mapping Study Using Dawn Spacecraft Data

    NASA Astrophysics Data System (ADS)

    Mest, S. C.; Williams, D. A.; Crown, D. A.; Yingst, R. A.; Buczkowski, D.; Schenk, P.; Scully, J. E. C.; Jaumann, R.; Roatsch, T.; Preusker, F.; Platz, T.; Nathues, A.; Hoffmann, M.; Schäfer, M.; Marchi, S.; De Sanctis, M. C.; Russell, C. T.; Raymond, C. A.

    2015-12-01

    We are using recent data from the Dawn spacecraft to map the geology of the Ac-H-12 Toharu Quadrangle (21-66°S, 90-180°E) of the dwarf planet Ceres in order to examine its surface geology and understand its geologic history. At the time of this writing, mapping was performed on Framing Camera (FC) mosaics from late Approach (1.3 km/px) and Survey (415 m/px) orbits, including clear filter and color images and digital terrain models derived from stereo images. Images from the High Altitude Mapping Orbit (140 m/px) will be used to refine the map in Fall 2015, followed by the Low Altitude Mapping Orbit (35 m/px) starting in December 2015. The quad is named after crater Toharu (87 km diameter; 49°S, 155°E). The southern rim of Kerwan basin (284 km diameter), preserved as a low-relief scarp, is visible along the northern edge of the quad. The quad exhibits smooth terrain in the north, and more heavily cratered terrain in the south. The smooth terrain forms nearly flat-lying plains in some areas, such as on the floor and to the southeast of Kerwan, and overlies hummocky materials in other areas. These smooth materials extend over a much broader area outside of the quad, and appear to contain some of the lowest crater densities on Ceres. Impact craters exhibit a range of sizes and preservation styles. Smaller craters (<40 km) generally appear morphologically "fresh", and their rims are nearly circular and raised above the surrounding terrain. Larger craters, such as Toharu, appear more degraded, exhibiting irregularly shaped, sometimes scalloped, rim structures, and debris lobes on their floors. Numerous craters (> 20 km) contain central mounds; at current FC resolution, it is difficult to discern if these are primary structures (i.e., central peaks) or secondary features. Support of the Dawn Instrument, Operations, & Science Teams is acknowledged. This work is supported by grants from NASA, DLR and MPG.

  10. Measuring full-field displacement spectral components using photographs taken with a DSLR camera via an analogue Fourier integral

    NASA Astrophysics Data System (ADS)

    Javh, Jaka; Slavič, Janko; Boltežar, Miha

    2018-02-01

    Instantaneous full-field displacement fields can be measured using cameras. In fact, using high-speed cameras, full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of measuring high-resolution fields-of-view at high frame rates prove to be very expensive (from tens to hundreds of thousands of euro per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLR, mirrorless and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene, and the camera-image acquisition performs the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
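
    A toy simulation of this analogue Fourier integral: if the lights blink sinusoidally and the exposure spans an integer number of blink periods, two phase-shifted exposures plus a steady-light reference recover the amplitude and phase of one spectral component of the pixel intensity. All numbers below are illustrative, not from the paper.

        import numpy as np

        f = 300.0                                   # blink frequency, Hz (assumed)
        T = 1.0                                     # exposure time: 300 blink periods
        t, dt = np.linspace(0.0, T, 200_000, endpoint=False, retstep=True)
        s = 1.0 + 0.2 * np.sin(2*np.pi*f*t + 0.7)   # stand-in pixel intensity signal

        blink_cos = 0.5 * (1 + np.cos(2*np.pi*f*t)) # lights blinking in cosine phase
        blink_sin = 0.5 * (1 + np.sin(2*np.pi*f*t)) # second exposure, 90 deg shifted

        exp_cos = np.sum(s * blink_cos) * dt        # what one long exposure collects
        exp_sin = np.sum(s * blink_sin) * dt
        exp_dc = np.sum(s * 0.5) * dt               # reference with steady light

        a = 4.0 * (exp_cos - exp_dc) / T            # sine-phase Fourier coefficient
        b = 4.0 * (exp_sin - exp_dc) / T            # cosine-phase Fourier coefficient
        print(np.hypot(a, b), np.arctan2(a, b))     # ~0.2 (amplitude), ~0.7 (phase)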

  11. Masked-backlighter technique used to simultaneously image x-ray absorption and x-ray emission from an inertial confinement fusion plasma.

    PubMed

    Marshall, F J; Radha, P B

    2014-11-01

    A method to simultaneously image both the absorption and the self-emission of an imploding inertial confinement fusion plasma has been demonstrated on the OMEGA Laser System. The technique involves the use of a high-Z backlighter, half of which is covered with a low-Z material, and a high-speed x-ray framing camera aligned to capture images backlit by this masked backlighter. Two strips of the four-strip framing camera record images backlit by the high-Z portion of the backlighter, while the other two strips record images aligned with the low-Z portion of the backlighter. The emission from the low-Z material is effectively eliminated by a high-Z filter positioned in front of the framing camera, limiting the detected backlighter emission to that of the principal emission line of the high-Z material. As a result, half of the images are of self-emission from the plasma and the other half are of self-emission plus the backlighter. The advantage of this technique is that the self-emission simultaneous with backlighter absorption is independently measured from a nearby direction. Absorption occurs only in the high-Z backlit frames; there, the absorption is either spatially separated from the emission, or the self-emission is suppressed by filtering, by using a backlighter much brighter than the self-emission, or by subtraction. The masked-backlighter technique has been used on the OMEGA Laser System to simultaneously measure the emission profiles and the absorption profiles of polar-driven implosions.

  12. Triton Mosaic

    NASA Image and Video Library

    1999-08-25

    Mosaic of Triton constructed from 16 individual images. After globally minimizing the camera pointing errors, the frames were reprocessed by map projection, photometric function removal, and placement in the mosaic.

  13. Texton-based super-resolution for achieving high spatiotemporal resolution in hybrid camera system

    NASA Astrophysics Data System (ADS)

    Kamimura, Kenji; Tsumura, Norimichi; Nakaguchi, Toshiya; Miyake, Yoichi

    2010-05-01

    Many super-resolution methods have been proposed to enhance the spatial resolution of images by using iteration and multiple input images. In a previous paper, we proposed an example-based super-resolution method to enhance an image through pixel-based texton substitution to reduce the computational cost. In that method, however, we only considered the enhancement of a texture image. In this study, we modified this texton substitution method for a hybrid camera to reduce the required bandwidth of a high-resolution video camera. We applied our algorithm to pairs of high- and low-spatiotemporal-resolution videos, which were synthesized to simulate a hybrid camera. The results showed that the fine detail of the low-resolution video could be reproduced, in contrast with bicubic interpolation, and that the required bandwidth could be reduced to about 1/5 in a video camera. It was also shown that the peak signal-to-noise ratios (PSNRs) of the images improved by about 6 dB in a trained frame and by 1.0-1.5 dB in a test frame, as determined by comparison with the image processed using bicubic interpolation, and the average PSNRs were higher than those obtained by Freeman's well-known patch-based super-resolution method. Compared with Freeman's patch-based super-resolution method, the computational time of our method was reduced to almost 1/10.
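
    For reference, the PSNR figure of merit quoted above follows the standard definition; a minimal sketch, with an assumed 8-bit peak value.

        import numpy as np

        def psnr(ref, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between reference and test images."""
            mse = np.mean((ref.astype(np.float64) - test.astype(np.float64))**2)
            return 10.0 * np.log10(peak**2 / mse)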

  14. Toward real-time quantum imaging with a single pixel camera

    DOE PAGES

    Lawrie, B. J.; Pooser, R. C.

    2013-03-19

    In this paper, we present a workbench for the study of real-time quantum imaging by measuring the frame-by-frame quantum noise reduction of multi-spatial-mode twin beams generated by four wave mixing in Rb vapor. Exploiting the multiple spatial modes of this squeezed light source, we utilize spatial light modulators to selectively pass macropixels of quantum correlated modes from each of the twin beams to a high quantum efficiency balanced detector. Finally, in low-light-level imaging applications, the ability to measure the quantum correlations between individual spatial modes and macropixels of spatial modes with a single pixel camera will facilitate compressive quantum imaging with sensitivity below the photon shot noise limit.

  15. An Application for Driver Drowsiness Identification based on Pupil Detection using IR Camera

    NASA Astrophysics Data System (ADS)

    Kumar, K. S. Chidanand; Bhowmick, Brojeshwar

    A driver drowsiness identification system has been proposed that generates alarms when the driver falls asleep while driving. A number of different physical phenomena can be monitored and measured in order to detect driver drowsiness in a vehicle. This paper presents a methodology for driver drowsiness identification using an IR camera by detecting and tracking the pupils. The face region is first determined using the Euler number and template matching. Pupils are then located in the face region. In subsequent frames of the video, the pupils are tracked in order to find whether the eyes are open or closed. If the eyes are closed for several consecutive frames, it is concluded that the driver is fatigued and an alarm is generated.
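
    The decision logic in the last sentence reduces to counting consecutive frames in which the eyes are judged closed; a minimal sketch, with the frame threshold as an assumption (the record does not state one).

        def drowsiness_alarms(eye_open_flags, closed_threshold=15):
            """Yield frame indices at which an alarm fires. eye_open_flags is one
            boolean per video frame (True = pupils found, eyes open);
            closed_threshold ~ 0.5 s at 30 fps is an assumed value."""
            closed_run = 0
            for frame_idx, is_open in enumerate(eye_open_flags):
                closed_run = 0 if is_open else closed_run + 1
                if closed_run >= closed_threshold:
                    yield frame_idx

        # e.g. list(drowsiness_alarms([True]*10 + [False]*20)) -> alarms at 24..29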

  16. Compact Kirkpatrick–Baez microscope mirrors for imaging laser-plasma x-ray emission

    DOE PAGES

    Marshall, F. J.

    2012-07-18

    Compact Kirkpatrick–Baez microscope mirror components for use in imaging laser-plasma x-ray emission have been manufactured, coated, and tested. A single mirror pair has dimensions of 14 × 7 × 9 mm and a best resolution of ~5 μm. The mirrors are coated with Ir, providing a useful energy range of 2-8 keV when operated at a grazing angle of 0.7°. The mirrors can be circularly arranged to provide 16 images of the target emission, a configuration best suited for use in combination with a custom framing camera. As a result, an alternative arrangement of the mirrors would allow alignment of the images with a four-strip framing camera.

  17. A real-time ultrasonic field mapping system using a Fabry Pérot single pixel camera for 3D photoacoustic imaging

    NASA Astrophysics Data System (ADS)

    Huynh, Nam; Zhang, Edward; Betcke, Marta; Arridge, Simon R.; Beard, Paul; Cox, Ben

    2015-03-01

    A system for dynamic mapping of broadband ultrasound fields has been designed, with high frame rate photoacoustic imaging in mind. A Fabry-Pérot interferometric ultrasound sensor was interrogated using a coherent light single-pixel camera. Scrambled Hadamard measurement patterns were used to sample the acoustic field at the sensor, and either a fast Hadamard transform or a compressed sensing reconstruction algorithm was used to recover the acoustic pressure data. Frame rates of 80 Hz were achieved for 32x32 images even though no specialist hardware was used for the on-the-fly reconstructions. The ability of the system to obtain photoacoustic images with data compressions as low as 10% was also demonstrated.
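
    In idealized form, sampling with scrambled Hadamard patterns and inverting with a Hadamard transform looks like the following; the 32x32 field matches the abstract, while the noiseless +/-1 patterns and the dense matrix product (in place of a fast transform) are simplifications.

        import numpy as np
        from scipy.linalg import hadamard

        n = 32 * 32                     # 32x32 field, as in the abstract
        H = hadamard(n)                 # +/-1 Hadamard patterns, one per row
        rng = np.random.default_rng(1)
        perm = rng.permutation(n)       # pixel scrambling ("scrambled Hadamard")

        x = rng.random(n)               # ground-truth field, flattened
        y = H @ x[perm]                 # one detector reading per pattern
        x_rec = np.empty(n)
        x_rec[perm] = (H @ y) / n       # H is self-inverse up to a factor 1/n
        print(np.allclose(x_rec, x))    # True in this noiseless model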

  18. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    Proposed system stabilizes jittering television image and/or measures jitter to extract information on motions of objects in image. Alternative version, system controls lateral motion on camera to generate stereoscopic views to measure distances to objects. In another version, motion of camera controlled to keep object in view. Heart of system is digital image-data processor called "jitter-miser", which includes frame buffer and logic circuits to correct for jitter in image. Signals from motion sensors on camera sent to logic circuits and processed into corrections for motion along and across line of sight.

  19. Rapid and highly integrated FPGA-based Shack-Hartmann wavefront sensor for adaptive optics system

    NASA Astrophysics Data System (ADS)

    Chen, Yi-Pin; Chang, Chia-Yuan; Chen, Shean-Jen

    2018-02-01

    In this study, a field programmable gate array (FPGA)-based Shack-Hartmann wavefront sensor (SHWS) programmed in LabVIEW can be highly integrated into customized applications, such as an adaptive optics system (AOS), for performing real-time wavefront measurement. Further, a Camera Link frame grabber embedded with the FPGA is adopted to enhance the sensor's speed in reacting to variations, taking advantage of its high data-transmission bandwidth. Instead of waiting for a full frame image to be captured by the FPGA, the Shack-Hartmann algorithm is implemented in parallel processing blocks, letting the image-data transmission synchronize with the wavefront reconstruction. In addition, we designed a mechanism to control the deformable mirror in the same FPGA and verified the Shack-Hartmann sensor speed by controlling the frequency of the deformable mirror's dynamic surface deformation. Currently, this FPGA-based SHWS design can achieve a 266 Hz cyclic speed, limited by the camera frame rate, while leaving 40% of the logic slices for additional flexible design.
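
    The core SHWS computation that the FPGA parallelizes is per-subaperture centroiding; a software sketch for reference, with the lenslet grid size as an assumption.

        import numpy as np

        def subaperture_centroids(frame, grid=(8, 8)):
            """Centroid of each lenslet sub-image of a Shack-Hartmann frame;
            subtracting reference centroids gives local wavefront slopes."""
            ny, nx = grid
            sy, sx = frame.shape[0] // ny, frame.shape[1] // nx
            cents = np.zeros((ny, nx, 2))
            ys, xs = np.mgrid[0:sy, 0:sx]
            for i in range(ny):
                for j in range(nx):
                    sub = frame[i*sy:(i+1)*sy, j*sx:(j+1)*sx].astype(float)
                    total = sub.sum() or 1.0          # guard against empty cells
                    cents[i, j] = ((ys*sub).sum()/total, (xs*sub).sum()/total)
            return cents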

  20. Online tracking of outdoor lighting variations for augmented reality with moving cameras.

    PubMed

    Liu, Yanli; Granier, Xavier

    2012-04-01

    In augmented reality, one of the key tasks in achieving a convincing visual-appearance consistency between virtual objects and video scenes is to maintain coherent illumination along the whole sequence. As outdoor illumination largely depends on the weather, the lighting condition may change from frame to frame. In this paper, we propose a fully image-based approach for online tracking of outdoor illumination variations from videos captured with moving cameras. Our key idea is to estimate the relative intensities of sunlight and skylight via a sparse set of planar feature-points extracted from each frame. To address the inevitable feature misalignments, a set of constraints is introduced to select the most reliable ones. Exploiting the spatial and temporal coherence of illumination, the relative intensities of sunlight and skylight are finally estimated by an optimization process. We validate our technique on a set of real-life videos and show that the results with our estimations are visually coherent along the video sequences.

  1. Eye pupil detection system using an ensemble of regression forest and fast radial symmetry transform with a near infrared camera

    NASA Astrophysics Data System (ADS)

    Jeong, Mira; Nam, Jae-Yeal; Ko, Byoung Chul

    2017-09-01

    In this paper, we focus on pupil center detection in video sequences that include varying head poses and changes in illumination. To detect the pupil center, we first find four eye landmarks in each eye by using cascade local regression based on a regression forest. Based on the rough location of the pupil, a fast radial symmetry transform is applied using the previously found pupil location to refine the pupil center. As the final step, the pupil displacement between the previous frame and the current frame is estimated to maintain accuracy against a false localization occurring in a particular frame. We generated a new face dataset, called Keimyung University pupil detection (KMUPD), with an infrared camera. The proposed method was successfully applied to the KMUPD dataset, and the results indicate that its pupil center detection capability is better than that of other methods, with a shorter processing time.

  2. Alignment of cryo-EM movies of individual particles by optimization of image translations.

    PubMed

    Rubinstein, John L; Brubaker, Marcus A

    2015-11-01

    Direct detector device (DDD) cameras have revolutionized single particle electron cryomicroscopy (cryo-EM). In addition to an improved camera detective quantum efficiency, acquisition of DDD movies allows for correction of movement of the specimen, due to both instabilities in the microscope specimen stage and electron beam-induced movement. Unlike specimen stage drift, beam-induced movement is not always homogeneous within an image. Local correlation in the trajectories of nearby particles suggests that beam-induced motion is due to deformation of the ice layer. Algorithms have already been described that can correct movement for large regions of frames and for >1 MDa protein particles. Another algorithm allows individual <1 MDa protein particle trajectories to be estimated, but requires rolling averages to be calculated from frames and fits linear trajectories for particles. Here we describe an algorithm that allows for individual <1 MDa particle images to be aligned without frame averaging or linear trajectories. The algorithm maximizes the overall correlation of the shifted frames with the sum of the shifted frames. The optimum in this single objective function is found efficiently by making use of analytically calculated derivatives of the function. To smooth estimates of particle trajectories, rapid changes in particle positions between frames are penalized in the objective function and weighted averaging of nearby trajectories ensures local correlation in trajectories. This individual particle motion correction, in combination with weighting of Fourier components to account for increasing radiation damage in later frames, can be used to improve 3-D maps from single particle cryo-EM.
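
    The single objective function described above can be sketched as follows, sign-flipped so a generic minimizer can be used; the smoothness weight and the numeric optimizer (in place of the authors' analytically derived gradients) are assumptions.

        import numpy as np
        from scipy.ndimage import shift as nd_shift

        def objective(shifts, frames, smooth_weight=0.1):
            """Negative of: correlation of each shifted frame with the sum of
            all shifted frames, minus a penalty on rapid frame-to-frame
            position changes. shifts is an (n_frames, 2) array of (dy, dx)."""
            shifted = [nd_shift(f, s, order=1) for f, s in zip(frames, shifts)]
            total = np.sum(shifted, axis=0)
            corr = sum(float((f * total).sum()) for f in shifted)
            penalty = smooth_weight * np.sum(np.diff(shifts, axis=0)**2)
            return -(corr - penalty)

        # e.g. scipy.optimize.minimize(
        #          lambda v: objective(v.reshape(-1, 2), frames),
        #          np.zeros(2 * len(frames)))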

  3. The Input-Interface of Webcam Applied in 3D Virtual Reality Systems

    ERIC Educational Resources Information Center

    Sun, Huey-Min; Cheng, Wen-Lin

    2009-01-01

    Our research explores a virtual reality application based on a Web camera (Webcam) input-interface. The interface can replace the mouse for controlling a user's direction intention by the method of frame difference. We divide each Webcam frame into nine grids and make use of background registration to compute the moving object. In order to…
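
    A sketch of the frame-difference-over-nine-grids idea, with the difference threshold as an assumption; the grid cell with the most motion stands in for a mouse-like direction command.

        import numpy as np

        def direction_cell(prev, curr, thresh=25):
            """Threshold the frame difference, split it into a 3x3 grid, and
            return the (row, col) cell containing the most moving pixels."""
            diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
            h, w = diff.shape
            counts = np.array([[diff[i*h//3:(i+1)*h//3, j*w//3:(j+1)*w//3].sum()
                                for j in range(3)] for i in range(3)])
            return np.unravel_index(np.argmax(counts), counts.shape)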

  4. View of Saudi Arabia and north eastern Africa from the Apollo 17 spacecraft

    NASA Image and Video Library

    1972-12-09

    AS17-148-22718 (7-19 Dec. 1972) --- This excellent view of Saudi Arabia and the northeastern portion of the African continent was photographed by the Apollo 17 astronauts with a hand-held camera on their trans-lunar coast toward man's last lunar visit. Egypt, Sudan, and Ethiopia are some of the African nations visible. Iran, Iraq, and Jordan are not so clearly visible because of cloud cover and their particular locations in the picture. India is dimly visible at the right of the frame. The Red Sea is seen entirely in this single frame, a rare occurrence in Apollo photography or any photography taken from manned spacecraft. The Gulf of Suez, the Dead Sea, Gulf of Aden, Persian Gulf and Gulf of Oman are also visible. This frame is one of 169 frames on film magazine NN carried aboard Apollo 17, all of which are SO368 (color) film. A 250mm lens on a 70mm Hasselblad camera recorded the image, one of 92 taken during the trans-lunar coast. Note AS17-148-22727 (also magazine NN) for an excellent full Earth picture showing the entire African continent.

  5. Comparison of Sheath Power Transmission Factor for Neutral Beam Injection and Electron Cyclotron Heated Discharges in DIII-D

    NASA Astrophysics Data System (ADS)

    Donovan, D. C.; Buchenauer, D. A.; Watkins, J. G.; Leonard, A. W.; Lasnier, C. J.; Stangeby, P. C.

    2011-10-01

    The sheath power transmission factor (SPTF) is examined in DIII-D with a new IR camera, a more thermally robust Langmuir probe array, fast thermocouples, and a unique probe configuration on the Divertor Materials Evaluation System (DiMES). Past data collected from the fixed Langmuir Probes and Infrared Camera on DIII-D have indicated a SPTF near 1 at the strike point. Theory indicates that the SPTF should be approximately 7 and cannot be less than 5. SPTF values are calculated using independent measurements from the IR camera and fast thermocouples. Experiments have been performed with varying levels of electron cyclotron heating and neutral beam power. The ECH power does not involve fast ions, so the SPTF can be calculated and compared to previous experiments to determine the extent to which fast ions may be influencing the SPTF measurements, and potentially offer insight into the disagreement with the theory. Work supported in part by US DOE under DE-AC04-94AL85000, DE-FC02-04ER54698, and DE-AC52-07NA27344.

  6. C-RED One and C-RED2: SWIR high-performance cameras using Saphira e-APD and Snake InGaAs detectors

    NASA Astrophysics Data System (ADS)

    Gach, Jean-Luc; Feautrier, Philippe; Stadler, Eric; Clop, Fabien; Lemarchand, Stephane; Carmignani, Thomas; Wanwanscappel, Yann; Boutolleau, David

    2018-02-01

    After the development of the OCAM2 EMCCD fast visible camera dedicated to advanced adaptive optics wavefront sensing, First Light Imaging moved to fast SWIR cameras with the development of the C-RED One and C-RED 2 cameras. First Light Imaging's C-RED One infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise and very low background. C-RED One is based on the latest version of the SAPHIRA detector developed by Leonardo UK. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. C-RED One is an autonomous system with an integrated cooling system and a vacuum regeneration system. It operates its sensor with a wide variety of readout techniques and processes video on-board thanks to an FPGA. We will show its performance and describe its main features. In addition to this project, First Light Imaging developed an InGaAs 640x512 fast camera with unprecedented performance in terms of noise, dark current, and readout speed, based on the SNAKE SWIR detector from Sofradir. This camera is called C-RED 2, and its characteristics and performance will also be described. The C-RED One project has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944. The C-RED 2 development is supported by the "Investments for the future" program and the Provence Alpes Côte d'Azur Region, within the framework of the CPER.

  7. Choreographing the Frame: A Critical Investigation into How Dance for the Camera Extends the Conceptual and Artistic Boundaries of Dance

    ERIC Educational Resources Information Center

    Preston, Hilary

    2006-01-01

    This essay investigates the collaboration between dance and choreographic practice and film/video medium in a contemporary context. By looking specifically at dance made for the camera and the proliferation of dance-film/video, critical issues will be explored that have surfaced in response to this burgeoning form. Presenting a view of avant-garde…

  8. Double and Multiple Star Measurements in the Northern Sky with a 10" Newtonian and a Fast CCD Camera in 2006 through 2009

    NASA Astrophysics Data System (ADS)

    Anton, Rainer

    2010-07-01

    Using a 10" Newtonian and a fast CCD camera, recordings of double and multiple stars were made at high frame rates with a notebook computer. From superpositions of "lucky images", measurements of 139 systems were obtained and compared with literature data. B/w and color images of some noteworthy systems are also presented.

  9. Camera selection for real-time in vivo radiation treatment verification systems using Cherenkov imaging.

    PubMed

    Andreozzi, Jacqueline M; Zhang, Rongxiao; Glaser, Adam K; Jarvis, Lesley A; Pogue, Brian W; Gladstone, David J

    2015-02-01

    To identify achievable camera performance and hardware needs in a clinical Cherenkov imaging system for real-time, in vivo monitoring of the surface beam profile on patients, as novel visual information, documentation, and possible treatment verification for clinicians. Complementary metal-oxide-semiconductor (CMOS), charge-coupled device (CCD), intensified charge-coupled device (ICCD), and electron multiplying-intensified charge coupled device (EM-ICCD) cameras were investigated to determine Cherenkov imaging performance in a clinical radiotherapy setting, with one emphasis on the maximum supportable frame rate. Where possible, the image intensifier was synchronized using a pulse signal from the Linac in order to image with room lighting conditions comparable to patient treatment scenarios. A solid water phantom irradiated with a 6 MV photon beam was imaged by the cameras to evaluate the maximum frame rate for adequate Cherenkov detection. Adequate detection was defined as an average electron count in the background-subtracted Cherenkov image region of interest in excess of 0.5% (327 counts) of the 16-bit maximum electron count value. Additionally, an ICCD and an EM-ICCD were each used clinically to image two patients undergoing whole-breast radiotherapy to compare clinical advantages and limitations of each system. Intensifier-coupled cameras were required for imaging Cherenkov emission on the phantom surface with ambient room lighting; standalone CMOS and CCD cameras were not viable. The EM-ICCD was able to collect images from a single Linac pulse delivering less than 0.05 cGy of dose at 30 frames/s (fps) and pixel resolution of 512 × 512, compared to an ICCD which was limited to 4.7 fps at 1024 × 1024 resolution. An intensifier with higher quantum efficiency at the entrance photocathode in the red wavelengths [30% quantum efficiency (QE) vs previous 19%] promises at least 8.6 fps at a resolution of 1024 × 1024 and lower monetary cost than the EM-ICCD. The ICCD with an intensifier better optimized for red wavelengths was found to provide the best potential for real-time display (at least 8.6 fps) of radiation dose on the skin during treatment at a resolution of 1024 × 1024.

  10. Stratigraphy and Surface Ages of Dwarf Planet (1) Ceres: Results from Geologic and Topographic Mapping in Survey, HAMO and LAMO Data of the Dawn Framing Camera Images

    NASA Astrophysics Data System (ADS)

    Wagner, R. J.; Schmedemann, N.; Stephan, K.; Jaumann, R.; Neesemann, A.; Preusker, F.; Kersten, E.; Roatsch, T.; Hiesinger, H.; Williams, D. A.; Yingst, R. A.; Crown, D. A.; Mest, S. C.; Raymond, C. A.; Russell, C. T.

    2017-12-01

    Since March 6, 2015, the surface of dwarf planet (1) Ceres has been imaged by the FC framing camera aboard the Dawn spacecraft from orbit at various altitudes [1]. For this study we focus on images from the Survey orbit phase (4424 km altitude) with spatial resolutions of 400 m/pxl and use images and topographic data from DTMs (digital terrain models) for global geologic mapping. On Ceres' surface cratered plains are ubiquitous, with variations in superimposed crater frequency indicating different ages and processes. Here, we take the topography into account for geologic mapping and discriminate cratered plains units according to their topographic level - high-standing, medium, or low-lying - in order to examine a possible correlation between topography and surface age. Absolute model ages (AMAs) are derived from two impact cratering chronology models discussed in detail by [2] (henceforth termed LDM: lunar-derived model, and ADM: asteroid-derived model). We also apply an improved method to obtain relative ages and AMAs from crater frequency measurements termed Poisson timing analysis [3]. Our ongoing analysis shows no trend that the topographic level has an influence on the age of the geologic units. Both high-standing and low-lying cratered plains have AMAs ranging from 3.5 to 1.5 Ga (LDM), versus 4.2 to 0.5 Ga (ADM). Some areas of measurement within these units, however, show effects of resurfacing processes in their crater distributions and feature an older and a younger age. We use LAMO data (altitude: 375 km; resolution 30 m/pxl) and/or HAMO data (altitude: 1475 km; resolution 140 m/pxl) to study local geologic units and their ages, e.g., smaller impact craters, especially those not dated so far with crater measurements and/or those with specific spectral properties [4], deposits of mass wasting (e.g., landslides), and mountains, such as Ahuna Mons. Crater frequencies are used to set these geologic units into the context of Ceres' time-stratigraphic system and chronologic periods [5]. References: [1] Russell C. T., et al. (2016), Science 353, doi:10.1126/science.aaf4219. [2] Hiesinger H. H. et al. (2016), Science 353, doi:10.1126/science.aaf4759. [3] Michael G. G. et al. (2016), Icarus 277, 279-285. [4] Stephan K. et al. (2017), submitted to Icarus. [5] Mest S. C. et al. (2017), LPSC XLVIII, abstr. No. 2512.

  11. Using the OOI Cabled Array HD Camera to Explore Geophysical and Oceanographic Problems at Axial Seamount

    NASA Astrophysics Data System (ADS)

    Crone, T. J.; Knuth, F.; Marburg, A.

    2016-12-01

    A broad array of Earth science problems can be investigated using high-definition video imagery from the seafloor, ranging from those that are geological and geophysical in nature, to those that are biological and water-column related. A high-definition video camera was installed as part of the Ocean Observatory Initiative's core instrument suite on the Cabled Array, a real-time fiber optic data and power system that stretches from the Oregon Coast to Axial Seamount on the Juan de Fuca Ridge. This camera runs a 14-minute pan-tilt-zoom routine 8 times per day, focusing on locations of scientific interest on and near the Mushroom vent in the ASHES hydrothermal field inside the Axial caldera. The system produces 13 GB of lossless HD video every 3 hours, and at the time of this writing it has generated 2100 recordings totaling 28.5 TB since it began streaming data into the OOI archive in August of 2015. Because of the large size of this dataset, downloading the entirety of the video for long timescale investigations is not practical. We are developing a set of user-side tools for downloading single frames and frame ranges from the OOI HD camera raw data archive to aid users interested in using these data for their research. We use these tools to download about one year's worth of partial frame sets to investigate several questions regarding the hydrothermal system at ASHES, including the variability of bacterial "floc" in the water-column, and changes in high temperature fluid fluxes using optical flow techniques. We show that while these user-side tools can facilitate rudimentary scientific investigations using the HD camera data, a server-side computing environment that allows users to explore this dataset without downloading any raw video will be required for more advanced investigations to flourish.

  12. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by some systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and reported in this paper. In particular, two main aspects are treated: first, the calibration of the distance measurements of the SR-4000 camera, which deals with the evaluation of the camera warm-up time period, the distance measurement error, and the influence on distance measurements of the camera orientation with respect to the observed object; second, the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high-contrast targets.

  13. Solar System Portrait - 60 Frame Mosaic

    NASA Image and Video Library

    1996-09-13

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever portrait of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. http://photojournal.jpl.nasa.gov/catalog/PIA00451

  14. Solar System Portrait - 60 Frame Mosaic

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The cameras of Voyager 1 on Feb. 14, 1990, pointed back toward the sun and took a series of pictures of the sun and the planets, making the first ever 'portrait' of our solar system as seen from the outside. In the course of taking this mosaic consisting of a total of 60 frames, Voyager 1 made several images of the inner solar system from a distance of approximately 4 billion miles and about 32 degrees above the ecliptic plane. Thirty-nine wide angle frames link together six of the planets of our solar system in this mosaic. Outermost Neptune is 30 times further from the sun than Earth. Our sun is seen as the bright object in the center of the circle of frames. The wide-angle image of the sun was taken with the camera's darkest filter (a methane absorption band) and the shortest possible exposure (5 thousandths of a second) to avoid saturating the camera's vidicon tube with scattered sunlight. The sun is not large as seen from Voyager, only about one-fortieth of the diameter as seen from Earth, but is still almost 8 million times brighter than the brightest star in Earth's sky, Sirius. The result of this great brightness is an image with multiple reflections from the optics in the camera. Wide-angle images surrounding the sun also show many artifacts attributable to scattered light in the optics. These were taken through the clear filter with one second exposures. The insets show the planets magnified many times. Narrow-angle images of Earth, Venus, Jupiter, Saturn, Uranus and Neptune were acquired as the spacecraft built the wide-angle mosaic. Jupiter is larger than a narrow-angle pixel and is clearly resolved, as is Saturn with its rings. Uranus and Neptune appear larger than they really are because of image smear due to spacecraft motion during the long (15 second) exposures. From Voyager's great distance Earth and Venus are mere points of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun.

  15. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer-market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, like image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  16. HDR video synthesis for vision systems in dynamic scenes

    NASA Astrophysics Data System (ADS)

    Shopovska, Ivana; Jovanov, Ljubomir; Goossens, Bart; Philips, Wilfried

    2016-09-01

    High dynamic range (HDR) image generation from a number of differently exposed low dynamic range (LDR) images has been extensively explored in the past few decades, and as a result of these efforts a large number of HDR synthesis methods have been proposed. Since HDR images are synthesized by combining well-exposed regions of the input images, one of the main challenges is dealing with camera or object motion. In this paper we propose a method for the synthesis of HDR video from a single camera using multiple, differently exposed video frames, with circularly alternating exposure times. One of the potential applications of the system is in driver assistance systems and autonomous vehicles, involving significant camera and object movement, non-uniform and temporally varying illumination, and the requirement of real-time performance. To achieve these goals simultaneously, we propose an HDR synthesis approach based on weighted averaging of aligned radiance maps. The computational complexity of high-quality optical flow methods for motion compensation is still prohibitively high for real-time applications. Instead, we rely on more efficient global projective transformations to compensate for camera movement, while moving objects are detected by thresholding the differences between the transformed and brightness-adapted images in the set. To attain temporal consistency of the camera motion in the consecutive HDR frames, the parameters of the perspective transformation are stabilized over time by means of computationally efficient temporal filtering. We evaluated our results on several reference HDR videos, on synthetic scenes, and using 14-bit raw images taken with a standard camera.
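
    The weighted averaging of aligned radiance maps can be sketched as below; the hat-shaped weighting is a common choice and an assumption here, since the abstract does not specify the authors' weighting function.

        import numpy as np

        def hdr_merge(frames, exposures):
            """Merge aligned LDR frames (floats in [0, 1]) with exposure times
            into one radiance map, weighting well-exposed pixels most."""
            num = np.zeros_like(frames[0], dtype=np.float64)
            den = np.zeros_like(num)
            for img, t in zip(frames, exposures):
                w = 1.0 - np.abs(2.0*img - 1.0)   # hat weight, peak at mid-gray
                num += w * (img / t)              # per-frame radiance estimate
                den += w
            return num / np.maximum(den, 1e-6)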

  17. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.

  18. SarcOptiM for ImageJ: high-frequency online sarcomere length computing on stimulated cardiomyocytes.

    PubMed

    Pasqualin, Côme; Gannier, François; Yu, Angèle; Malécot, Claire O; Bredeloux, Pierre; Maupoil, Véronique

    2016-08-01

    Accurate measurement of cardiomyocyte contraction is a critical issue for scientists working on cardiac physiology and the physiopathology of diseases implying contraction impairment. Cardiomyocyte contraction can be quantified by measuring sarcomere length, but few tools are available for this, and none is freely distributed. We developed a plug-in (SarcOptiM) for the ImageJ/Fiji image analysis platform developed by the National Institutes of Health. SarcOptiM computes sarcomere length via fast Fourier transform analysis of video frames captured or displayed in ImageJ and thus is not tied to a dedicated video camera. It can work in real time or offline, the latter overcoming rotating motion or displacement-related artifacts. SarcOptiM includes a simulator and video generator of cardiomyocyte contraction. Acquisition parameters, such as pixel size and camera frame rate, were tested with both experimental recordings of rat ventricular cardiomyocytes and synthetic videos. It is freely distributed, and its source code is available. It works under Windows, Mac, or Linux operating systems. The camera speed is the limiting factor, since the algorithm can compute online sarcomere shortening at frame rates >10 kHz. In conclusion, SarcOptiM is a free and validated user-friendly tool for studying cardiomyocyte contraction in all species, including human.
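
    The FFT-based computation reduces to locating the dominant spatial frequency of an intensity profile taken along the cell; a minimal sketch under that reading of the abstract.

        import numpy as np

        def sarcomere_length(profile, pixel_size_um):
            """Dominant spatial period of a 1-D intensity profile, via the FFT
            peak; the profile should span many sarcomeres."""
            x = profile - np.mean(profile)        # remove DC before the FFT
            spectrum = np.abs(np.fft.rfft(x))
            freqs = np.fft.rfftfreq(x.size, d=pixel_size_um)  # cycles per um
            k = 1 + np.argmax(spectrum[1:])       # skip the zero-frequency bin
            return 1.0 / freqs[k]                 # period = sarcomere length, um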

  19. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of live targets for training. No calibration is required. Testing shows that the algorithm performs very well in real-world sequences. The consistent labelling problem is solved, even for targets that appear via in-scene entrances. Full occlusions are handled. Although implemented in Matlab, the multiple-camera tracking system runs at eight frames per second. A faster implementation would be suitable for real-world use at typical video frame rates.
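
    Once a plane-induced homography H is known, transferring a feet-feature point between overlapping views (the step that resolves consistent labelling) is a single projective mapping; a minimal sketch.

        import numpy as np

        def transfer_points(H, pts):
            """Map (N, 2) pixel points through a 3x3 plane-induced homography."""
            homog = np.hstack([pts, np.ones((len(pts), 1))])  # homogeneous coords
            mapped = (H @ homog.T).T
            return mapped[:, :2] / mapped[:, 2:3]             # dehomogenize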

  20. An adaptive enhancement algorithm for infrared video based on modified k-means clustering

    NASA Astrophysics Data System (ADS)

    Zhang, Linze; Wang, Jingqi; Wu, Wen

    2016-09-01

    In this paper, we have proposed a video enhancement algorithm to improve the output video of an infrared camera. Sometimes the video obtained by an infrared camera is very dark since there is no clear target; in this case, the infrared video is divided into frame images by frame extraction so that image enhancement can be carried out. The first frame image is divided into k sub-images by K-means clustering according to the gray intervals they occupy, and each sub-image is histogram-equalized according to the amount of information it contains; we also used a method to solve the problem of final cluster centers lying too close to each other in some cases. For the other frame images, the initial cluster centers are determined from the final cluster centers of the previous frame, and histogram equalization of each sub-image is carried out after image segmentation based on K-means clustering. The histogram equalization stretches the gray values of the image over the whole gray-level range, with the gray-level span of each sub-image determined by its ratio of pixels to the frame image. Experimental results show that this algorithm can improve the contrast of infrared video in which a night target is not obvious and the scene is dim, and can adaptively reduce, within a certain range, the negative effect of overexposed pixels.
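
    A compact sketch of the per-frame procedure as described: 1-D K-means on gray levels, then histogram equalization of each sub-image within the gray interval it occupies; the returned centers can seed the next frame. The values of k and the iteration count are assumptions.

        import numpy as np

        def enhance_frame(gray, k=3, iters=20, centers=None):
            """Cluster pixel intensities with 1-D k-means, then equalize each
            cluster within its own gray interval. Pass the previous frame's
            centers to initialize the next frame, as the abstract suggests."""
            if centers is None:
                centers = np.linspace(float(gray.min()), float(gray.max()), k)
            k = len(centers)
            for _ in range(iters):
                labels = np.argmin(np.abs(gray[..., None] - centers), axis=-1)
                for c in range(k):
                    if np.any(labels == c):
                        centers[c] = gray[labels == c].mean()
            out = np.zeros(gray.shape, dtype=np.float64)
            for c in range(k):
                mask = labels == c
                vals = gray[mask].astype(np.float64)
                if vals.size and vals.max() > vals.min():
                    ranks = np.argsort(np.argsort(vals))      # empirical CDF
                    out[mask] = vals.min() + (vals.max() - vals.min()) \
                        * ranks / (vals.size - 1)
                else:
                    out[mask] = vals
            return out, centers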

  1. Reference set design for relational modeling of fuzzy systems

    NASA Astrophysics Data System (ADS)

    Lapohos, Tibor; Buchal, Ralph O.

    1994-10-01

    One of the keys to successful relational modeling of fuzzy systems is the proper design of fuzzy reference sets, as has been discussed throughout the literature. In the context of modeling a stochastic system, we analyze the problem numerically. First, we briefly describe the relational model and present the performance of the modeling in the most trivial case, in which the reference sets are triangle shaped. Next, we present a known fuzzy reference set generator algorithm (FRSGA) which is based on the fuzzy c-means (Fc-M) clustering algorithm. In the second section of this chapter we improve the previous FRSGA by adding a constraint to the Fc-M algorithm (modified Fc-M or MFc-M): two cluster centers are forced to coincide with the domain limits. This is needed to obtain properly shaped extreme linguistic reference values. We apply this algorithm to uniformly discretized domains of the variables involved. The fuzziness of the reference sets produced by both Fc-M and MFc-M is determined by a parameter, which in our experiments is modified iteratively. Each time, a new model is created and its performance analyzed. For certain algorithm parameter values, both algorithms have shortcomings. To eliminate the drawbacks of these two approaches, we developed a completely new generator algorithm for reference sets, which we call Polyline. This algorithm and its performance are described in the last section. In all three cases, the modeling is performed for a variety of operators used in the inference engine and two defuzzification methods. Therefore our results depend neither on the system model order nor on the experimental setup.
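
    A sketch of the MFc-M constraint on a one-dimensional variable domain: standard fuzzy c-means updates, with the two extreme centers pinned to the domain limits after each iteration. Parameter names and defaults are assumptions.

        import numpy as np

        def mfcm_centers(x, k=5, m=2.0, iters=100):
            """Fuzzy c-means on 1-D data with the extreme cluster centers
            forced to coincide with the domain limits (the MFc-M idea)."""
            lo, hi = float(x.min()), float(x.max())
            v = np.linspace(lo, hi, k)                      # initial centers
            for _ in range(iters):
                d = np.abs(x[None, :] - v[:, None]) + 1e-9  # (k, n) distances
                u = 1.0 / np.sum((d[:, None, :] / d[None, :, :])**(2/(m-1)),
                                 axis=1)                    # memberships
                um = u**m
                v = (um @ x) / um.sum(axis=1)               # center update
                v.sort()
                v[0], v[-1] = lo, hi                        # pin the extremes
            return v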

  2. Dawn LAMO Image 83

    NASA Image and Video Library

    2016-05-06

    Ceres' densely cratered landscape is revealed in this image taken by the framing camera aboard NASA's Dawn spacecraft. The craters show various degrees of degradation. The youngest craters have sharp rims.

  3. Dawn LAMO Image 84

    NASA Image and Video Library

    2016-05-09

    Ceres' densely cratered landscape is revealed in this image taken by the framing camera aboard NASA's Dawn spacecraft. The craters show various degrees of degradation. The youngest craters have sharp rims.

  4. Experimental comparison of high-density scintillators for EMCCD-based gamma ray imaging

    NASA Astrophysics Data System (ADS)

    Heemskerk, Jan W. T.; Kreuger, Rob; Goorden, Marlies C.; Korevaar, Marc A. N.; Salvador, Samuel; Seeley, Zachary M.; Cherepy, Nerine J.; van der Kolk, Erik; Payne, Stephen A.; Dorenbos, Pieter; Beekman, Freek J.

    2012-07-01

    Detection of x-rays and gamma rays with high spatial resolution can be achieved with scintillators that are optically coupled to electron-multiplying charge-coupled devices (EMCCDs). These can be operated at typical frame rates of 50 Hz with low noise. In such a set-up, scintillation light within each frame is integrated after which the frame is analyzed for the presence of scintillation events. This method allows for the use of scintillator materials with relatively long decay times of a few milliseconds, not previously considered for use in photon-counting gamma cameras, opening up an unexplored range of dense scintillators. In this paper, we test CdWO4 and transparent polycrystalline ceramics of Lu2O3:Eu and (Gd,Lu)2O3:Eu as alternatives to currently used CsI:Tl in order to improve the performance of EMCCD-based gamma cameras. The tested scintillators were selected for their significantly larger cross-sections at 140 keV (99mTc) compared to CsI:Tl combined with moderate to good light yield. A performance comparison based on gamma camera spatial and energy resolution was done with all tested scintillators having equal (66%) interaction probability at 140 keV. CdWO4, Lu2O3:Eu and (Gd,Lu)2O3:Eu all result in a significantly improved spatial resolution over CsI:Tl, albeit at the cost of reduced energy resolution. Lu2O3:Eu transparent ceramic gives the best spatial resolution: 65 µm full-width-at-half-maximum (FWHM) compared to 147 µm FWHM for CsI:Tl. In conclusion, these ‘slow’ dense scintillators open up new possibilities for improving the spatial resolution of EMCCD-based scintillation cameras.

  5. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera

    PubMed Central

    Yaghoobi Ershadi, Nastaran

    2017-01-01

    Traffic surveillance systems interest many researchers aiming to improve traffic control and reduce the risk caused by accidents. In this area, many published works are concerned only with vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task when we have bad weather conditions in winter (snowy, rainy, windy, etc.), dusty weather in arid and semi-arid regions, at night, and so on. It is also very important to consider the speed of vehicles in complicated weather conditions. In this paper, we improved our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy mixed with extra processing to segment vehicles; here, the extra processing included the analysis of headlight size, location, and area. In our work, tracking was done between consecutive frames via a generalized particle filter to detect the vehicle and pair the headlights using connected component analysis. Vehicle counting was performed based on the pairing result; using the centroid of each blob, we calculated the distance traveled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, in different traffic conditions. PMID:29261719
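
    The speed computation described above amounts to centroid displacement divided by the inter-frame time, with a pixel-to-meters calibration; a sketch with assumed calibration values.

        import numpy as np

        def estimate_speed(c_prev, c_curr, fps, meters_per_pixel):
            """Speed (m/s) from blob centroids in consecutive frames."""
            d_pix = np.hypot(c_curr[0] - c_prev[0], c_curr[1] - c_prev[1])
            return d_pix * meters_per_pixel * fps

        # e.g. estimate_speed((120, 340), (128, 344), fps=25,
        #                     meters_per_pixel=0.05) -> ~11.2 m/s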

  6. Improving vehicle tracking rate and speed estimation in dusty and snowy weather conditions with a vibrating camera.

    PubMed

    Yaghoobi Ershadi, Nastaran

    2017-01-01

    Traffic surveillance systems interest many researchers aiming to improve traffic control and reduce the risk caused by accidents. In this area, many published works are concerned only with vehicle detection in normal conditions. The camera may vibrate due to wind or bridge movement. Detection and tracking of vehicles is a very difficult task when we have bad weather conditions in winter (snowy, rainy, windy, etc.), dusty weather in arid and semi-arid regions, at night, and so on. It is also very important to consider the speed of vehicles in complicated weather conditions. In this paper, we improved our method to track and count vehicles in dusty weather with a vibrating camera. For this purpose, we used a background-subtraction-based strategy mixed with extra processing to segment vehicles; here, the extra processing included the analysis of headlight size, location, and area. In our work, tracking was done between consecutive frames via a generalized particle filter to detect the vehicle and pair the headlights using connected component analysis. Vehicle counting was performed based on the pairing result; using the centroid of each blob, we calculated the distance traveled between two frames with a simple formula and divided it by the inter-frame time obtained from the video to estimate speed. Our proposed method was tested on several video surveillance records in different conditions such as dusty or foggy weather, a vibrating camera, and roads with medium-level traffic volumes. The results showed that the new proposed method performed better than our previously published method and other methods, including the Kalman filter or Gaussian model, in different traffic conditions.

  7. Automatic vision system for analysis of microscopic behavior of flow and transport in porous media

    NASA Astrophysics Data System (ADS)

    Rashidi, Mehdi; Dehmeshki, Jamshid; Dickenson, Eric; Daemi, M. Farhang

    1997-10-01

    This paper describes the development of a novel automated and efficient vision system to obtain velocity and concentration measurements within a porous medium. An aqueous fluid, laced with a fluorescent dye or microspheres, flows through a transparent, refractive-index-matched column packed with transparent crystals. For illumination, a planar laser sheet passes through the column while a CCD camera records the laser-illuminated planes. Detailed microscopic velocity and concentration fields have been computed within a 3D volume of the column. For velocity measurements, while the aqueous fluid laced with fluorescent microspheres flows through the transparent medium, a CCD camera records the motions of the fluorescing particles on a video cassette recorder. The recorded images are then acquired automatically, frame by frame, and transferred to the computer for processing, using a frame grabber and purpose-written algorithms through an RS-232 interface. Because the grabbed images are of poor quality at this stage, several preprocessing steps are applied to enhance the particles within them. Finally, the enhanced particles are tracked to calculate velocity vectors in the plane of the beam. For concentration measurements, while the aqueous fluid laced with a fluorescent organic dye flows through the transparent medium, a CCD camera sweeps back and forth across the column and records concentration slices on the planes illuminated by the laser beam, which travels simultaneously with the camera. These recorded images are subsequently transferred to the computer and processed in a fashion similar to the velocity measurements. To obtain a fully automatic vision system, several detailed image processing techniques were developed to match images that have different intensity values but the same topological characteristics. This yields normalized interstitial chemical concentrations as a function of time within the porous column.

  8. Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings

    NASA Technical Reports Server (NTRS)

    Reda, Daniel C.; Muratore, J. J.; Heineck, James T.

    1994-01-01

    Time and flow-direction responses of shear-stress-sensitive liquid crystal coatings were explored experimentally. For the time-response experiments, coatings were exposed to transient, compressible flows created during the startup and off-design operation of an injector-driven supersonic wind tunnel. Flow transients were visualized with a focusing schlieren system and recorded with a 100 frame/s color video camera.

  9. Earth observation taken by the Expedition 28 crew

    NASA Image and Video Library

    2011-09-07

    ISS028-E-043559 (7 Sept. 2011) --- This view, from the camera of an Expedition 28 crew member onboard the International Space Station, looks from the northwest toward southeast and covers many counties in southeast Texas that have been heavily affected by dozens of wild fires. Houston can be seen near frame center and the Gulf of Mexico takes up the upper right quadrant of the frame.

  10. Earth Observation as seen by Expedition Two crew

    NASA Image and Video Library

    2001-04-16

    ISS002-E-5656 (16 April 2001) --- Extreme southern topography of California, including inland portions of the San Diego area were captured in this digital still camera's image from the International Space Station's Expedition Two crew members. The previous frame (5655) and this one were both recorded with an 800mm lens, whereas the succeeding frame (5657) was shot with a 105mm lens.

  11. U.S. Geological Survey National Computer Technology Meeting (7th): Program and Abstracts, Held in New Orleans, Louisiana, April 10-15, 1994

    DTIC Science & Technology

    1994-01-01

    Magnolia Room 8:00 pm - 10:00 pm FrameMaker Techniques - Moderator, Terry A. Reinitz, USGS, WRD, Reston, Va. Wednesday, April 13, 1994, 7:30 am ... Maker Interchange Format (MIF) strings, to an MIF file. The MIF file is imported into a blank FrameMaker template, creating a word-processor-formatted ... draft to camera-ready stages using Data General workstations and software packages that include FrameMaker, CorelDRAW, USGS-G2, Statit, and

  12. Clouds over Tharsis

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Color composite of condensate clouds over Tharsis made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide angle frames from Orbit 48.

    Figure caption from Science Magazine

  13. SKYLAB (SL)-3 - TELEVISION

    NASA Image and Video Library

    1973-09-29

    S73-34619 (28 July 1973) --- A composite of four frames taken from 16mm movie camera footage showing an overhead view of the Skylab space station cluster in Earth orbit. The Maurer motion picture camera scenes were being filmed during the Skylab 3 Command/Service Module's (CSM) first "fly around" inspection of the space station. Close comparison of the four frames reveals movement of the improvised parasol solar shield over the Orbital Workshop (OWS). The "flapping" of the sun shade was caused by the exhaust of the reaction control subsystem (RCS) thrusters of the Skylab 3 CSM. The one remaining solar array system wing on the OWS is in the lower left background. The solar panel in the lower left foreground is on the Apollo Telescope Mount (ATM). Photo credit: NASA

  14. Various views of STS-95 Senator John Glenn during training

    NASA Image and Video Library

    1998-06-18

    S98-08732 (9 April 1998) --- Holding a 35mm camera, U.S. Sen. John H. Glenn Jr. (D.-Ohio) gets a refresher course in photography from a JSC crew trainer (out of frame, right). The STS-95 payload specialist carried a 35mm camera on his historic MA-6 flight over 36 years ago. The photo was taken by Joe McNally, National Geographic, for NASA.

  15. Digital Semaphore: Technical Feasibility of QR Code Optical Signaling for Fleet Communications

    DTIC Science & Technology

    2013-06-01

    Standards (http://www.iso.org) JIS Japanese Industrial Standard JPEG Joint Photographic Experts Group (digital image format; http://www.jpeg.org) LED...Denso Wave corporation in the 1990s for the Japanese automotive manufacturing industry. See Appendix A for full details. Reed-Solomon Error...eliminates camera blur induced by the shutter, providing clear images at extremely high frame rates. Thusly, digital cinema cameras are more suitable

  16. Synchronizing A Stroboscope With A Video Camera

    NASA Technical Reports Server (NTRS)

    Rhodes, David B.; Franke, John M.; Jones, Stephen B.; Dismond, Harriet R.

    1993-01-01

    Circuit synchronizes flash of light from stroboscope with frame and field periods of video camera. Sync stripper sends vertical-synchronization signal to delay generator, which generates trigger signal. Flashlamp power supply accepts delayed trigger signal and sends pulse of power to flash lamp. Designed for use in making short-exposure images that "freeze" flow in wind tunnel. Also used for making longer-exposure images obtained by use of continuous intense illumination.

  17. STREAM PROCESSING ALGORITHMS FOR DYNAMIC 3D SCENE ANALYSIS

    DTIC Science & Technology

    2018-02-15

    Ground truth creation based on marked building feature points in two different views 50 frames apart in... between just two views, each row in the current figure represents a similar assessment however between one camera and all other cameras within the dataset... BA4S. While Fig. 44 depicted the epipolar lines for the point correspondences between just two views, the current figure represents a similar

  18. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large-scale particle image velocimetry (LSPIV) is a technique, used mostly in rivers, for measuring two-dimensional velocities from high-resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two are installed at a sediment trap close to the alluvial fan apex, one looking upstream and the other looking downstream, more perpendicular to the flow. The third camera is in the next reach upstream of the sediment trap, closer to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge after one minute of rainfall. Before LSPIV can be used, the highly distorted images must be corrected and accurate reference points established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and then rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually thanks to the orthorectified images. During the monitoring program (running since 2011) we recorded three debris flow events at the sediment trap (each with very different surge dynamics). The camera in the gully came into operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
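
    The core LSPIV step can be sketched as follows, under the assumption of already-orthorectified frames: an interrogation window from one frame is cross-correlated against shifted windows in the next, and the displacement of the correlation peak, scaled by the ground sampling distance and divided by the frame interval, gives a velocity. All parameters here are illustrative:

        import numpy as np

        def window_displacement(frame_a, frame_b, y, x, win=32, search=10):
            """Displacement of the normalized cross-correlation peak for one window."""
            a = frame_a[y:y + win, x:x + win].astype(float)
            a = (a - a.mean()) / (a.std() + 1e-9)
            best, best_dy, best_dx = -np.inf, 0, 0
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    b = frame_b[y + dy:y + dy + win, x + dx:x + dx + win].astype(float)
                    if b.shape != a.shape:
                        continue                      # window left the image
                    b = (b - b.mean()) / (b.std() + 1e-9)
                    score = (a * b).mean()
                    if score > best:
                        best, best_dy, best_dx = score, dy, dx
            return best_dy, best_dx

        # synthetic check: a frame shifted by (3, 5) pixels
        rng = np.random.default_rng(0)
        img0 = rng.random((200, 200))
        img1 = np.roll(img0, (3, 5), axis=(0, 1))
        dy, dx = window_displacement(img0, img1, y=100, x=100)  # -> (3, 5)

        GSD = 0.02          # metres per pixel on the DEM grid (assumed)
        DT = 1.0 / 15.0     # frame interval for an assumed 15 fps camera
        velocity = np.hypot(dy, dx) * GSD / DT                  # metres per second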

  19. Measurement of the timing behaviour of off-the-shelf cameras

    NASA Astrophysics Data System (ADS)

    Schatz, Volker

    2017-04-01

    This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity at O(10^-3) to O(10^-2) during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
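
    The inference step can be illustrated with a toy model, assuming an ideal camera with constant sensitivity inside the exposure window (which the measurements show is only approximately true): the summed, normalised frame signal as a function of pulse delay is the overlap of the pulse with the exposure window, and the plateau edges of that curve give the trigger delay and exposure time. All timings below are invented:

        import numpy as np

        def overlap(pulse_start, pulse_len, exp_start, exp_len):
            """Temporal overlap of a light pulse with the exposure window."""
            lo = max(pulse_start, exp_start)
            hi = min(pulse_start + pulse_len, exp_start + exp_len)
            return max(0.0, hi - lo)

        EXP_START, EXP_LEN, PULSE_LEN = 120.0, 500.0, 20.0   # microseconds, assumed
        delays = np.arange(0.0, 800.0, 5.0)                  # programmed pulse delays
        signal = np.array([overlap(d, PULSE_LEN, EXP_START, EXP_LEN) for d in delays])
        signal /= signal.max()                               # normalised response

        full = signal >= 0.999                               # pulse fully inside exposure
        rising = delays[np.argmax(full)]                     # first fully-inside delay
        falling = delays[len(full) - 1 - np.argmax(full[::-1])]   # last such delay
        print("trigger delay ~", rising)                     # ~ EXP_START
        print("exposure time ~", falling - rising + PULSE_LEN)   # ~ EXP_LEN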

  20. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling

    PubMed Central

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-01-01

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and errors in depth measurement that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the accuracy of the RGB image poses, a refined method for rejecting false feature matches is introduced that combines the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. The proposed method is then examined in tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method. PMID:27690028
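
    The rigid-transformation recovery step can be sketched with a standard least-squares alignment (Umeyama's method), which also recovers the scale factor that an RGB-only pose estimate leaves ambiguous; the paper's robust variant would wrap a step like this in outlier rejection, so this is only the core computation:

        import numpy as np

        def umeyama(src, dst):
            """Least-squares s, R, t with dst ~ s * R @ src + t (src, dst are Nx3)."""
            mu_s, mu_d = src.mean(0), dst.mean(0)
            xs, xd = src - mu_s, dst - mu_d
            cov = xd.T @ xs / len(src)
            U, D, Vt = np.linalg.svd(cov)
            S = np.eye(3)
            if np.linalg.det(U) * np.linalg.det(Vt) < 0:
                S[2, 2] = -1.0                    # keep R a proper rotation
            R = U @ S @ Vt
            s = np.trace(np.diag(D) @ S) / xs.var(0).sum()
            t = mu_d - s * R @ mu_s
            return s, R, t

        # self-check on synthetic data with a known similarity transform
        rng = np.random.default_rng(1)
        P = rng.random((50, 3))
        R_true, _ = np.linalg.qr(rng.normal(size=(3, 3)))
        if np.linalg.det(R_true) < 0:
            R_true[:, 0] *= -1                    # force a proper rotation
        Q = 2.5 * P @ R_true.T + np.array([1.0, -2.0, 0.5])
        s, R, t = umeyama(P, Q)
        assert np.allclose(s * P @ R.T + t, Q)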

  1. A novel method to reduce time investment when processing videos from camera trap studies.

    PubMed

    Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation, and behavioral research. They non-invasively record the presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols have not evolved as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead were empty recordings or recordings of other species (together, non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch them, in order to reduce the workload. Discrimination between recordings of the target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we expected that recordings containing it would show, on average, much more movement than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values, and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step to the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, who can use it in different contexts across the globe, on both videos and photographs.
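
    A minimal sketch of the variation-based filtering idea, assuming OpenCV for video decoding: each recording is scored by the mean absolute pixel change between sampled frames, and low-scoring recordings are flagged as probable non-target recordings. The threshold is illustrative and, as the abstract notes, should depend on environmental conditions:

        import cv2
        import numpy as np

        def motion_score(path, step=5):
            """Mean absolute frame-to-frame pixel change over a subsampled video."""
            cap = cv2.VideoCapture(path)
            prev, diffs, i = None, [], 0
            while True:
                ok, frame = cap.read()
                if not ok:
                    break
                if i % step == 0:                    # subsample frames for speed
                    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(int)
                    if prev is not None:
                        diffs.append(np.abs(gray - prev).mean())
                    prev = gray
                i += 1
            cap.release()
            return float(np.mean(diffs)) if diffs else 0.0

        THRESHOLD = 2.0                              # assumed, tune per site
        if motion_score("recording_0001.avi") < THRESHOLD:   # hypothetical file
            print("probable non-target recording")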

  2. Enhanced RGB-D Mapping Method for Detailed 3D Indoor and Outdoor Modeling.

    PubMed

    Tang, Shengjun; Zhu, Qing; Chen, Wu; Darwish, Walid; Wu, Bo; Hu, Han; Chen, Min

    2016-09-27

    RGB-D sensors (sensors with an RGB camera and a depth camera) are novel sensing systems that capture RGB images along with pixel-wise depth information. Although they are widely used in various applications, RGB-D sensors have significant drawbacks with respect to 3D dense mapping, including limited measurement ranges (e.g., within 3 m) and errors in depth measurement that increase with distance from the sensor. In this paper, we present a novel approach that geometrically integrates the depth scene and the RGB scene to enlarge the measurement distance of RGB-D sensors and enrich the details of the model generated from the depth images. First, precise calibration for RGB-D sensors is introduced. In addition to the calibration of the internal and external parameters of both the IR camera and the RGB camera, the relative pose between the two cameras is also calibrated. Second, to ensure the accuracy of the RGB image poses, a refined method for rejecting false feature matches is introduced that combines the depth information with the initial camera poses between frames of the RGB-D sensor. A global optimization model is then used to improve the accuracy of the camera poses, decreasing the inconsistencies between the depth frames in advance. To eliminate the geometric inconsistencies between the RGB scene and the depth scene, the scale ambiguity encountered during pose estimation with RGB image sequences is resolved by integrating the depth and visual information, and a robust rigid-transformation recovery method is developed to register the RGB scene to the depth scene. The benefit of the proposed joint optimization method is first evaluated with the publicly available benchmark datasets collected with Kinect. The proposed method is then examined in tests with two sets of datasets collected in both outdoor and indoor environments. The experimental results demonstrate the feasibility and robustness of the proposed method.

  3. OOM - OBJECT ORIENTATION MANIPULATOR, VERSION 6.1

    NASA Technical Reports Server (NTRS)

    Goza, S. P.

    1994-01-01

    The Object Orientation Manipulator (OOM) is an application program for creating, rendering, and recording three-dimensional computer-generated still and animated images. This is done using geometrically defined 3D models, cameras, and light sources, referred to collectively as animation elements. OOM does not provide the tools necessary to construct 3D models; instead, it imports binary format model files generated by the Solid Surface Modeler (SSM). Model files stored in other formats must be converted to the SSM binary format before they can be used in OOM. SSM is available as MSC-21914 or as part of the SSM/OOM bundle, COS-10047. Among OOM's features are collision detection (with visual and audio feedback), the capability to define and manipulate hierarchical relationships between animation elements, stereographic display, and ray-traced rendering. OOM uses Euler angle transformations for calculating the results of translation and rotation operations. OOM provides an interactive environment for the manipulation and animation of models, cameras, and light sources. Models are the basic entity upon which OOM operates and are therefore considered the primary animation elements. Cameras and light sources are considered secondary animation elements. A camera, in OOM, is simply a location within the three-space environment from which the contents of the environment are observed. OOM supports the creation and full animation of cameras. Light sources can be defined, positioned and linked to models, but they cannot be animated independently. OOM can simultaneously accommodate as many animation elements as the host computer's memory permits. Once the required animation elements are present, the user may position them, orient them, and define any initial relationships between them. Once the initial relationships are defined, the user can display individual still views for rendering and output, or define motion for the animation elements by using the Interp Animation Editor. The program provides the capability to save still images, animated sequences of frames, and the information that describes the initialization process for an OOM session. OOM provides the same rendering and output options for both still and animated images. OOM is equipped with a robust model manipulation environment featuring a full screen viewing window, a menu-oriented user interface, and an interpolative Animation Editor. It provides three display modes: solid, wireframe, and simple, which allow the user to trade off visual authenticity for update speed. In the solid mode, each model is drawn based on the shading characteristics assigned to it when it was built. All of the shading characteristics supported by SSM are recognized and properly rendered in this mode. If increasing model complexity impedes the operation of OOM in this mode, then wireframe and simple modes are available. These provide substantially faster screen updates than solid mode. The creation and placement of cameras and light sources is under complete control of the user. One light source is provided in the default element set. It is modeled as a direct light source providing a type of lighting analogous to that provided by the Sun. OOM can accommodate as many light sources as the memory of the host computer permits. Animation is created in OOM using a technique called key frame interpolation. First, various program functions are used to load models, load or create light sources and cameras, and specify initial positions for each element.
When these steps are completed, the Interp function is used to create an animation sequence for each element to be animated. An animation sequence consists of a user-defined number of frames (screen images) with some subset of those being defined as key frames. The motion of the element between key frames is interpolated automatically by the software. Key frames thus act as transition points in the motion of an element. This saves the user from having to individually define element data at each frame of a sequence. Animation frames and still images can be output to videotape recorders, film recorders, color printers, and disk files. OOM is written in C-language for implementation on SGI IRIS 4D series workstations running the IRIX operating system. A minimum of 8Mb of RAM is recommended for this program. The standard distribution medium for OOM is a .25 inch streaming magnetic IRIX tape cartridge in UNIX tar format. OOM is also offered as a bundle with a related program, SSM (Solid Surface Modeler). Please see the abstract for SSM/OOM (COS-10047) for information about the bundled package. OOM was released in 1993.
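
    Key-frame interpolation itself reduces to a simple computation; the sketch below uses linear interpolation between key-frame positions, although OOM's actual interpolation scheme is not specified here, and the frame indices and positions are invented:

        import numpy as np

        keyframes = {0: np.array([0.0, 0.0, 0.0]),       # frame index -> position
                     30: np.array([10.0, 5.0, 0.0]),
                     60: np.array([10.0, 5.0, 8.0])}

        def position_at(frame):
            """Linearly interpolate an element's position between key frames."""
            ks = sorted(keyframes)
            if frame <= ks[0]:
                return keyframes[ks[0]]
            for a, b in zip(ks, ks[1:]):
                if a <= frame <= b:
                    u = (frame - a) / (b - a)            # fraction between key frames
                    return (1 - u) * keyframes[a] + u * keyframes[b]
            return keyframes[ks[-1]]

        sequence = [position_at(f) for f in range(61)]   # one position per frame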

  4. 4K Video of Colorful Liquid in Space

    NASA Image and Video Library

    2015-10-09

    Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.

  5. System of technical vision for autonomous unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Bondarchuk, A. S.

    2018-05-01

    This paper is devoted to the implementation of an image recognition algorithm in the LabVIEW software. The created virtual instrument is designed to detect objects in frames from a camera mounted on a UAV. The trained classifier is invariant to rotation as well as to small changes in the camera's viewing angle. Finding objects in the image using particle analysis allows regions of different sizes to be classified. This method allows the system of technical vision to determine more accurately the location of the objects of interest and their movement relative to the camera.

  6. Modeling of digital information optical encryption system with spatially incoherent illumination

    NASA Astrophysics Data System (ADS)

    Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.

    2015-10-01

    State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should allow a high-speed optical encryption system to be built. Results of modeling a digital-information optical encryption system with spatially incoherent illumination are presented. The input information is displayed on the first SLM and the encryption element on the second SLM. The factors taken into account are the resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), a low bit error rate, and high cryptographic strength.
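
    A common mathematical model for encryption with spatially incoherent illumination (not necessarily the authors' exact scheme) treats the registered intensity as the convolution of the input image with the point spread function of the encrypting element, with decryption as deconvolution by the known key:

        import numpy as np

        rng = np.random.default_rng(6)
        plain = np.zeros((64, 64))
        plain[20:44, 28:36] = 1.0                   # toy input "data page"
        key_psf = rng.random((64, 64))              # random encrypting PSF (the key)

        K = np.fft.fft2(key_psf)
        cipher = np.real(np.fft.ifft2(np.fft.fft2(plain) * K))      # incoherent convolution
        decrypted = np.real(np.fft.ifft2(np.fft.fft2(cipher) / K))  # inverse filtering
        assert np.allclose(decrypted, plain, atol=1e-6)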

  7. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

    EMCCDs have been used in astronomical observations in many ways. Recently we developed a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board, and a temperature control board. The power supplies and driving clocks of the CCD are provided by the driving board, while the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. The ADC and the data transfer circuit are also on the control board and are controlled by the FPGA. Data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The image acquisition software is built using VC++ and Sapera LT. This paper describes the camera structure, the main components, and the circuit design of the video signal processing channel, clock driver, FPGA and Camera Link interfaces, and the temperature metering and control system. Some testing results are presented.

  8. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    NASA Astrophysics Data System (ADS)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper aims to offer an alternative to electron-multiplied CCD (EMCCD) technology and to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to meet this challenge, thanks to the linear gain of the accelerating high voltage of the photocathode, the potentially ultra-fast frame rate of CMOS sensors, and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS together with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  9. Eight-channel Kirkpatrick-Baez microscope for multiframe x-ray imaging diagnostics in laser plasma experiments.

    PubMed

    Yi, Shengzhen; Zhang, Zhe; Huang, Qiushi; Zhang, Zhong; Mu, Baozhong; Wang, Zhanshan; Fang, Zhiheng; Wang, Wei; Fu, Sizu

    2016-10-01

    Because grazing-incidence Kirkpatrick-Baez (KB) microscopes have better resolution and collection efficiency than pinhole cameras, they have been widely used for x-ray imaging diagnostics of laser inertial confinement fusion. The assembly and adjustment of a multichannel KB microscope must meet stringent requirements for image resolution and reproducible alignment. In the present study, an eight-channel KB microscope was developed for diagnostics by imaging self-emission x-rays with a framing camera at the Shenguang-II Update (SGII-Update) laser facility. A consistent object field of view is ensured in the eight channels using an assembly method based on conical reference cones, which also allow the intervals between the eight images to be tuned to couple with the microstrips of the x-ray framing camera. The eight-channel KB microscope was adjusted via real-time x-ray imaging experiments in the laboratory. This paper describes the details of the eight-channel KB microscope, its optical and multilayer design, the assembly and alignment methods, and results of imaging in the laboratory and at the SGII-Update.

  10. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    NASA Astrophysics Data System (ADS)

    Goldstein, N.; Dressler, R. A.; Richtsmeier, S. S.; McLean, J.; Dao, P. D.; Murray-Krezan, J.; Fulcoly, D. O.

    2013-09-01

    Recent ground testing of a wide area camera system and automated star removal algorithms has demonstrated the potential to detect, quantify, and track deep space objects using small aperture cameras and on-board processors. The camera system, which was originally developed for a space-based Wide Area Space Surveillance System (WASSS), operates in a fixed-stare mode, continuously monitoring a wide swath of space and differentiating celestial objects from satellites based on differential motion across the field of view. It would have greatest utility in LEO orbit, providing automated and continuous monitoring of deep space with high refresh rates and with particular emphasis on the GEO belt and GEO transfer space. Continuous monitoring allows a concept of change detection and custody maintenance not possible with existing sensors. The detection approach is equally applicable to Earth-based sensor systems. A distributed system of such sensors, either Earth-based or space-based, could provide automated, persistent night-time monitoring of all of deep space. The continuous monitoring provides a daily record of the light curves of all GEO objects above a certain brightness within the field of view. The daily updates of satellite light curves offer a means to identify specific satellites, to note changes in orientation and operational mode, and to cue other SSA assets for higher resolution queries. The data processing approach may also be applied to larger-aperture, higher resolution camera systems to extend the sensitivity towards dimmer objects. In order to demonstrate the utility of the WASSS system and data processing, a ground-based field test was conducted in October 2012. We report here the results of the observations made at Magdalena Ridge Observatory using the prototype WASSS camera, which has a 4×60° field of view, <0.05° resolution, a 2.8 cm2 aperture, and the ability to view within 4° of the sun. A single camera pointed at the GEO belt provided a continuous night-long record of the intensity and location of more than 50 GEO objects detected within the camera's 60-degree field of view, with a detection sensitivity similar to the camera's shot noise limit of Mv=13.7. Performance is anticipated to scale with aperture area, allowing the detection of dimmer objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and an image processing algorithm that exploits the different angular velocities of celestial objects and SOs. Principal Components Analysis (PCA) is used to filter out all objects moving with the velocity of the celestial frame of reference. The resulting filtered images are projected back into an Earth-centered frame of reference, or into any other relevant frame of reference, and co-added to form a series of images of the GEO objects as a function of time. The PCA approach not only removes the celestial background, but it also removes systematic variations in system calibration, sensor pointing, and atmospheric conditions. The resulting images are shot-noise limited and can be exploited to automatically identify deep space objects, produce approximate state vectors, and track their locations and intensities as a function of time.
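
    The PCA filtering step can be sketched as follows, assuming a co-registered image cube in the celestial frame of reference: the leading principal components of the pixel time series capture the static star field and slow systematic variations, and subtracting them leaves the objects that move relative to the stars. The component count is illustrative:

        import numpy as np

        def remove_celestial_background(stack, n_components=3):
            """stack: (n_frames, height, width) co-registered image cube."""
            n, h, w = stack.shape
            X = stack.reshape(n, h * w).astype(float)
            X = X - X.mean(axis=0)                   # remove per-pixel temporal mean
            U, S, Vt = np.linalg.svd(X, full_matrices=False)
            background = U[:, :n_components] @ np.diag(S[:n_components]) @ Vt[:n_components]
            return (X - background).reshape(n, h, w) # residual: moving objects + noise

        # synthetic cube: a static star field plus frame-to-frame noise
        rng = np.random.default_rng(2)
        stars = rng.random((64, 64)) * 100
        cube = np.stack([stars + rng.normal(0, 1, (64, 64)) for _ in range(20)])
        residual = remove_celestial_background(cube)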

  11. Clouds over Tharsis

    NASA Image and Video Library

    1998-03-13

    Color composite of condensate clouds over Tharsis made from red and blue images with a synthesized green channel. Mars Orbiter Camera wide angle frames from Orbit 48. http://photojournal.jpl.nasa.gov/catalog/PIA00812

  12. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second-generation, solid-state image sensor technology has resulted in the Complementary Metal-Oxide Semiconductor (CMOS) Active Pixel Sensor, establishing an alternative to the Charge-Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of its own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS active-pixel digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, use the same manufacturing platform as most microprocessors and memory chips, and allow on-chip programming of frame size, exposure, and other parameters.

  13. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography, and RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities, respectively, for such imaging. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to the spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used, a super-resolution technique was applied, and the spatial resolution was found to be further improved.
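
    The center-of-gravity step is just an intensity-weighted centroid over each detected spot, which is what pushes the localisation below the pixel pitch. Below is a minimal sketch on a synthetic spot; the real processing chain, including spot segmentation, is more involved:

        import numpy as np

        def center_of_gravity(patch):
            """Intensity-weighted centroid of a small image patch around one spot."""
            ys, xs = np.mgrid[0:patch.shape[0], 0:patch.shape[1]]
            total = patch.sum()
            return (ys * patch).sum() / total, (xs * patch).sum() / total

        # synthetic Gaussian spot centred at (4.3, 6.7) on a 12x12 patch
        yy, xx = np.mgrid[0:12, 0:12]
        spot = np.exp(-((yy - 4.3) ** 2 + (xx - 6.7) ** 2) / 2.0)
        cy, cx = center_of_gravity(spot)
        print(round(cy, 2), round(cx, 2))   # close to 4.3, 6.7: sub-pixel localisation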

  14. Comparative Analysis of THOR-NT ATD vs. Hybrid III ATD in Laboratory Vertical Shock Testing

    DTIC Science & Technology

    2013-09-01

    were taken both pretest and post-test for each test event (figure 5). Figure 5. Rigid fixture placed on the drop table with ATD seated: Hybrid III...6 3. Experimental Procedure 6 3.1 Test Setup...frames per second and with a Vision Research Phantom V9.1 (Wayne, NJ) high-speed video camera, sampling 1000 frames per second. 3. Experimental

  15. A Probability-Based Algorithm Using Image Sensors to Track the LED in a Vehicle Visible Light Communication System.

    PubMed

    Huynh, Phat; Do, Trong-Hop; Yoo, Myungsik

    2017-02-10

    This paper proposes a probability-based algorithm to track the LEDs in vehicle visible light communication systems using a camera. In this system, the transmitters are the vehicles' front and rear LED lights. The receivers are high-speed cameras that take a series of images of the LEDs. The data embedded in the light is extracted by first detecting the position of the LEDs in these images. Traditionally, LEDs are detected according to pixel intensity. However, when the vehicle is moving, motion blur occurs in the LED images, making it difficult to detect the LEDs. Particularly at high speeds, some frames are blurred to such a degree that it is impossible to detect the LEDs or extract the information embedded in them. The proposed algorithm relies not only on the pixel intensity, but also on the optical flow of the LEDs and on statistical information obtained from previous frames. Based on this information, the conditional probability that a pixel belongs to an LED is calculated, and the position of the LED is then determined from this probability. To verify the suitability of the proposed algorithm, simulations are conducted that consider incidents that can happen in a real-world situation, including a change in the position of the LEDs at each frame, as well as motion blur due to the vehicle speed.
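
    A toy illustration of the fusion idea, not the paper's actual model: combine an intensity likelihood with a spatial prior predicted from previous frames (here a Gaussian around the motion-predicted LED position) and take the most probable pixel:

        import numpy as np

        def led_position(frame, predicted, sigma=8.0):
            """frame: 2-D intensity array; predicted: (y, x) from previous frames."""
            ys, xs = np.mgrid[0:frame.shape[0], 0:frame.shape[1]]
            likelihood = frame / (frame.max() + 1e-9)           # intensity term
            prior = np.exp(-((ys - predicted[0]) ** 2 + (xs - predicted[1]) ** 2)
                           / (2 * sigma ** 2))                  # motion-based prior
            posterior = likelihood * prior
            return np.unravel_index(np.argmax(posterior), frame.shape)

        rng = np.random.default_rng(3)
        img = rng.random((120, 160)) * 0.3                      # noisy background
        img[40:44, 90:94] = 1.0                                 # bright LED blob
        print(led_position(img, predicted=(45, 88)))            # a pixel inside the blob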

  16. Fast Fiber-Coupled Imaging Devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas

    HyperV Technologies Corp. has successfully designed, built and experimentally demonstrated a full scale 1024 pixel 100 MegaFrames/s fiber coupled camera with 12 or 14 bits, and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber optically-coupled, imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100 pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority over increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit-depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was 53.31 dollars, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast moving plasma jets as a demonstration of the camera performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first generation prototype system. We experimentally observed backlit high speed fan blades in initial camera testing and then followed that with full movies and streak images of free flowing high speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques are inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024 channel camera at its own facility, and a second plasma community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.

  17. TEM Video Compressive Sensing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stevens, Andrew; Kovarik, Libor; Abellan, Patricia

    One of the main limitations of imaging at high spatial and temporal resolution during in-situ TEM experiments is the frame rate of the camera being used to image the dynamic process. While the recent development of direct detectors has provided the hardware to achieve frame times approaching 0.1 ms, the cameras are expensive and must replace existing detectors. In this paper, we examine the use of coded aperture compressive sensing methods [1, 2, 3, 4] to increase the frame rate of any camera with simple, low-cost hardware modifications. The coded aperture approach allows multiple sub-frames to be coded and integrated into a single camera frame during the acquisition process, and then extracted upon readout using statistical compressive sensing inversion. Our simulations show that it should be possible to increase the speed of any camera by at least an order of magnitude. Compressive sensing (CS) combines sensing and compression in one operation, and thus provides an approach that could further improve the temporal resolution while correspondingly reducing the electron dose rate. Because the signal is measured in a compressive manner, fewer total measurements are required. When applied to TEM video capture, compressive imaging could improve acquisition speed and reduce the electron dose rate. CS is a recent concept, and has come to the forefront due to the seminal work of Candès [5]. Since the publication of Candès, there has been enormous growth in the application of CS and development of CS variants. For electron microscopy applications, the concept of CS has also been recently applied to electron tomography [6], and to the reduction of electron dose in scanning transmission electron microscopy (STEM) imaging [7]. To demonstrate the applicability of coded aperture CS video reconstruction for atomic-level imaging, we simulate compressive sensing on observations of Pd nanoparticles and Ag nanoparticles during exposure to high temperatures and other environmental conditions. Figure 1 highlights the results from the Pd nanoparticle experiment. On the left, 10 frames are reconstructed from a single coded frame; the original frames are shown for comparison. On the right, a selection of three frames is shown from reconstructions at compression levels of 10, 20, and 30. The reconstructions, which are not post-processed, are true to the originals and degrade in a straightforward manner. The final choice of compression level will obviously depend on both the temporal and spatial resolution required for a specific imaging task, but the results indicate that an increase in speed of better than an order of magnitude should be possible for all experiments. References: [1] P Llull, X Liao, X Yuan et al. Optics Express 21(9), (2013), p. 10526. [2] J Yang, X Yuan, X Liao et al. Image Processing, IEEE Trans 23(11), (2014), p. 4863. [3] X Yuan, J Yang, P Llull et al. In ICIP 2013 (IEEE), p. 14. [4] X Yuan, P Llull, X Liao et al. In CVPR 2014, p. 3318. [5] EJ Candès, J Romberg and T Tao. Information Theory, IEEE Trans 52(2), (2006), p. 489. [6] P Binev, W Dahmen, R DeVore et al. In Modeling Nanoscale Imaging in Electron Microscopy, eds. T Vogt, W Dahmen and P Binev (Springer US), Nanostructure Science and Technology (2012), p. 73. [7] A Stevens, H Yang, L Carin et al. Microscopy 63(1), (2014), p. 41.
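
    The coded-aperture measurement model described above can be written in a few lines: T sub-frames are each multiplied by a different binary mask and integrated into one readout frame. The reconstruction, which is the hard part and is not shown, inverts this model under a sparsity prior; sizes and masks below are illustrative:

        import numpy as np

        rng = np.random.default_rng(4)
        T, H, W = 10, 64, 64                     # 10 sub-frames coded into one frame
        subframes = rng.random((T, H, W))        # stand-in for the dynamic scene
        masks = rng.integers(0, 2, (T, H, W))    # per-sub-frame binary coded apertures

        coded_frame = (masks * subframes).sum(axis=0)   # what the camera reads out

        # the inverse problem: recover x_t from y = sum_t m_t * x_t assuming x is
        # sparse in some basis; iterative CS solvers are typically used for this
        print(coded_frame.shape)                 # (64, 64): one frame, 10x compression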

  18. VizieR Online Data Catalog: BV(RI)c light curves of FF Vul (Samec+, 2016)

    NASA Astrophysics Data System (ADS)

    Samec, R. G.; Nyaude, R.; Caton, D.; van Hamme, W.

    2017-02-01

    The present BVRcIc light curves were taken by DC with the Dark Sky Observatory 0.81m reflector at Phillips Gap, North Carolina. These were taken on 2015 September 12, 13, 14, and 15, and October 15, with a thermoelectrically cooled (-40°C) 2K×2K Apogee Alta camera. Additional observations were obtained remotely with the SARA North 0.91m reflector at KPNO on 2015 September 20 and October 11, with the ARC 2K×2K camera cooled to -110°C. Individual observations were taken at both sites with standard Johnson-Cousins filters, and included 444 field images in B, 451 in V, 443 in Rc, and 445 in Ic. The standard error was ~7 mmag in each of B, V, Rc, and Ic. Nightly images were calibrated with 25 bias frames, five flat frames in each filter, and ten 300 s dark frames. The exposure times were 40-50 s in B, 25-30 s in V, and 15-25 s in Rc and Ic. Our observations are listed in Table 1. (1 data file).
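
    The calibration with bias, dark, and flat frames described above follows the standard CCD reduction recipe, sketched below under the usual assumptions (median-combined calibration frames, dark current scaled by exposure time); the variable names are hypothetical:

        import numpy as np

        def calibrate(raw, biases, darks, flats, exp_sci, exp_dark):
            """Standard CCD reduction: (raw - bias - scaled dark) / normalised flat."""
            bias = np.median(biases, axis=0)
            dark = (np.median(darks, axis=0) - bias) * (exp_sci / exp_dark)
            flat = np.median(flats, axis=0) - bias
            flat = flat / flat.mean()               # normalise the flat field
            return (raw - bias - dark) / flat

        # e.g. for a 40 s B-band image with the 300 s dark frames mentioned above:
        # science = calibrate(raw_B, bias_stack, dark_stack, flat_B_stack, 40.0, 300.0)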

  19. MPCM: a hardware coder for super slow motion video sequences

    NASA Astrophysics Data System (ADS)

    Alcocer, Estefanía; López-Granado, Otoniel; Gutierrez, Roberto; Malumbres, Manuel P.

    2013-12-01

    In the last decade, improvements in VLSI and image sensor technologies have led to a frenetic rush to provide image sensors with higher resolutions and faster frame rates. As a result, video devices have been designed to capture real-time video in high-resolution formats with frame rates reaching 1,000 fps and beyond. These ultrahigh-speed video cameras are widely used in scientific and industrial applications, such as car crash tests, combustion research, materials research and testing, fluid dynamics, and flow visualization, that demand real-time video capture at extremely high frame rates in high-definition formats. Data storage capability, communication bandwidth, processing time, and power consumption are therefore critical parameters that should be carefully considered in their design. In this paper, we propose a fast FPGA implementation of a simple codec called modulo-pulse code modulation (MPCM), which is able to reduce the bandwidth requirements by up to 1.7 times at the same image quality when compared with PCM coding. This allows current high-speed cameras to capture in a continuous manner through a 40-Gbit Ethernet point-to-point link.
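
    The idea behind modulo-PCM can be sketched in a few lines, assuming a simple previous-sample predictor (the paper's FPGA codec is more elaborate): only the k least-significant bits of each sample are transmitted, and the decoder picks the candidate value congruent to that residue which lies closest to its prediction. Reconstruction is exact whenever successive samples differ by less than half the modulus:

        K = 5                                    # bits actually transmitted per sample
        M = 1 << K                               # modulus, here 32

        def mpcm_encode(samples):
            return [int(s) % M for s in samples]

        def mpcm_decode(residues, first):
            out, pred = [first], first           # first sample sent uncoded
            for r in residues[1:]:
                base = (pred // M) * M
                # candidates congruent to r near the prediction; pick the closest
                cands = [base + r - M, base + r, base + r + M]
                pred = min(cands, key=lambda c: abs(c - pred))
                out.append(pred)
            return out

        row = [100, 104, 109, 118, 122, 119, 115]        # slowly varying pixel row
        rec = mpcm_decode(mpcm_encode(row), first=row[0])
        assert rec == row                                # exact when |diff| < M/2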

  20. Visual tracking using neuromorphic asynchronous event-based cameras.

    PubMed

    Ni, Zhenjiang; Ieng, Sio-Hoi; Posch, Christoph; Régnier, Stéphane; Benosman, Ryad

    2015-04-01

    This letter presents a novel, computationally efficient, and robust pattern tracking method based on time-encoded, frame-free visual data. Recent interdisciplinary developments, combining inputs from engineering and biology, have yielded a novel type of camera that encodes visual information into a continuous stream of asynchronous, temporal events. These events encode temporal contrast and intensity locally in space and time. We show that this sparse yet accurately timed information is well suited as a computational input for object tracking. In this letter, visual data processing is performed for each incoming event at the time it arrives. The method provides a continuous and iterative estimation of the geometric transformation between the model and the events representing the tracked object. It can handle isometries, similarities, and affine distortions, and allows for unprecedented real-time performance at equivalent frame rates in the kilohertz range on a standard PC. Furthermore, by using the dimension of time, which is currently underexploited by most artificial vision systems, the presented method is able to solve ambiguous cases of object occlusion that classical frame-based techniques handle poorly.
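
    The frame-free, per-event processing style can be illustrated with a much simpler tracker than the letter's transformation estimator: each incoming event immediately nudges the position estimate, so the update rate is set by the event stream rather than by a frame clock. This is only a sketch of the event-driven idea, with invented data:

        def make_tracker(x0, y0, alpha=0.02):
            """Return an update function that processes one (x, y, t, polarity) event."""
            state = [x0, y0]
            def update(event):
                x, y, t, polarity = event
                state[0] += alpha * (x - state[0])   # move estimate toward the event
                state[1] += alpha * (y - state[1])
                return tuple(state)
            return update

        track = make_tracker(64.0, 64.0)
        for ev in [(70, 66, 0.001, 1), (72, 65, 0.002, 1), (71, 67, 0.003, 0)]:
            pos = track(ev)                          # estimate updated per event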

  1. Practical low-cost visual communication using binary images for deaf sign language.

    PubMed

    Manoranjan, M D; Robinson, J A

    2000-03-01

    Deaf sign language transmitted by video requires a temporal resolution of 8 to 10 frames/s for effective communication. Conventional videoconferencing applications, when operated over low-bandwidth telephone lines, provide very low temporal resolution, on the order of less than one frame per second, resulting in jerky movement of objects. This paper presents a practical solution for sign language communication that offers adequate temporal resolution using moving binary sketches, or cartoons, implemented on standard personal computer hardware with low-cost cameras and communicating over telephone lines. To extract cartoon points, an efficient feature extraction algorithm adaptive to the global statistics of the image is proposed. To improve the subjective quality of the binary images, irreversible preprocessing techniques such as isolated point removal and predictive filtering are used. A simple, efficient, and fast recursive temporal prefiltering scheme, using histograms of successive frames, reduces the additive and multiplicative noise from low-cost cameras. An efficient three-dimensional (3-D) compression scheme codes the binary sketches. Subjective tests performed on the system confirm that it can be used for sign language communication over telephone lines.

  2. Vehicle counting system using real-time video processing

    NASA Astrophysics Data System (ADS)

    Crisóstomo-Romero, Pedro M.

    2006-02-01

    Transit studies are important for planning a road network with optimal vehicular flow, and a vehicle count is essential. This article presents a vehicle counting system based on video processing. An advantage of such a system is the greater detail that can be obtained, such as the shape, size, and speed of vehicles. The system uses a video camera placed above the street to image traffic in real time. The camera must be placed at least 6 meters above street level to achieve adequate acquisition quality. Fast image processing algorithms and small image dimensions are used to allow real-time processing. Digital filters, mathematical morphology, segmentation, and other techniques allow all vehicles in the image sequences to be identified and counted. The system was implemented under Linux on a 1.8 GHz Pentium 4 computer. A successful count was obtained at frame rates of 15 frames per second for images of size 240x180 pixels and 24 frames per second for images of size 180x120 pixels, thus being able to count vehicles whose speeds do not exceed 150 km/h.

  3. MS Musgrave conducts CFES experiment on middeck

    NASA Image and Video Library

    1983-04-09

    STS006-03-381 (4-9 April 1983) --- Astronaut F. Story Musgrave, STS-6 mission specialist, monitors the activity of a sample in the continuous flow electrophoresis system (CFES) aboard the Earth-orbiting space shuttle Challenger. Dr. Musgrave is in the middeck area of the spacecraft. He has mounted a 35mm camera to record the activity through the window of the experiment. This frame was also photographed with a 35mm camera. Photo credit: NASA

  4. Various views of STS-95 Senator John Glenn during training

    NASA Image and Video Library

    1998-06-18

    S98-08733 (9 April 1998) --- Looking through the view finder on a camera, U.S. Sen. John H. Glenn Jr. (D.-Ohio) gets a refresher course in photography from a JSC crew trainer (out of frame, right). The STS-95 payload specialist carried a 35mm camera on his historic MA-6 flight over 36 years ago. The photo was taken by Joe McNally, National Geographic, for NASA.

  5. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the distinctions between camera types have become somewhat blurred, with a strong presence of 'digital cameras' aimed more at the home market. This latter category is not considered here. The term 'computer camera' herein is intended to mean one that has low-level computer (and software) control of the CCD clocking. Such cameras can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several specific applications are described here, including some that use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in the choice of camera type for any given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects may only be relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  6. Photometric Calibration of Consumer Video Cameras

    NASA Technical Reports Server (NTRS)

    Suggs, Robert; Swift, Wesley, Jr.

    2007-01-01

    Equipment and techniques have been developed to implement a method of photometric calibration of consumer video cameras for imaging of objects that are sufficiently narrow or sufficiently distant to be optically equivalent to point or line sources. Heretofore, it has been difficult to calibrate consumer video cameras, especially in cases of image saturation, because they exhibit nonlinear responses with dynamic ranges much smaller than those of scientific-grade video cameras. The present method not only takes this difficulty in stride but also makes it possible to extend effective dynamic ranges to several powers of ten beyond saturation levels. The method will likely be primarily useful in astronomical photometry. There are also potential commercial applications in medical and industrial imaging of point or line sources in the presence of saturation. This development was prompted by the need to measure brightnesses of debris in amateur video images of the breakup of the Space Shuttle Columbia; the purpose of these measurements is to use the brightness values to estimate relative masses of debris objects. In most of the images, the brightness of the main body of Columbia was found to exceed the dynamic ranges of the cameras. A similar problem arose a few years ago in the analysis of video images of Leonid meteors, and the present method is a refined version of the calibration method developed to solve that problem. In this method, one performs an end-to-end calibration of the entire imaging system, including not only the imaging optics and imaging photodetector array but also analog tape recording and playback equipment (if used) and any frame grabber or other analog-to-digital converter (if used). To automatically incorporate the effects of nonlinearity and any other distortions into the calibration, the calibration images are processed in precisely the same manner as are the images of meteors, space-shuttle debris, or other objects that one seeks to analyze. The light source used to generate the calibration images is an artificial variable star comprising a Newtonian collimator illuminated by a light source modulated by a rotating variable neutral-density filter. This source acts as a point source, the brightness of which varies at a known rate. A video camera to be calibrated is aimed at this source. Fixed neutral-density filters are inserted in or removed from the light path as needed to make the video image of the source fluctuate between dark and saturated bright. The resulting video-image data are analyzed using custom software that determines the integrated signal in each video frame and determines the system response curve (measured output signal versus input brightness). These determinations constitute the calibration, which is thereafter used in automatic, frame-by-frame processing of the data from the video images to be analyzed.
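
    The heart of the method is the measured response curve (integrated signal versus known input brightness) and its inversion. A toy version with a synthetic soft-saturating response, not a measured camera, looks like this:

        import numpy as np

        # synthetic nonlinear response with soft saturation (assumed, for illustration)
        true_brightness = np.linspace(0.0, 10.0, 200)
        measured = 1000.0 * np.tanh(true_brightness / 4.0)      # integrated signal

        # the calibration is the (measured -> true) mapping; invert by interpolation,
        # which requires the response curve to be monotonically increasing
        def brightness_from_signal(signal):
            return np.interp(signal, measured, true_brightness)

        print(brightness_from_signal(1000.0 * np.tanh(2.0 / 4.0)))   # ~2.0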

  7. Densely Cratered Terrain Near the Terminator

    NASA Image and Video Library

    2011-08-16

    This image from NASA's Dawn spacecraft shows densely cratered terrain near Vesta's terminator on August 6, 2011. The image was taken through the framing camera's clear filter aboard the spacecraft. North points toward the two o'clock position.

  8. MESSENGER Departs Mercury

    NASA Image and Video Library

    2008-01-30

    After NASA's MESSENGER spacecraft completed its successful flyby of Mercury, the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS), took these images of the receding planet. This is a frame from an animation.

  9. Near-infrared high-resolution real-time omnidirectional imaging platform for drone detection

    NASA Astrophysics Data System (ADS)

    Popovic, Vladan; Ott, Beat; Wellig, Peter; Leblebici, Yusuf

    2016-10-01

    Recent technological advancements in hardware systems have made higher-quality cameras possible. State of the art panoramic systems use them to produce videos with a resolution of 9000 x 2400 pixels at a rate of 30 frames per second (fps) [1]. Many modern applications use object tracking to determine the speed and the path taken by each object moving through a scene. The detection requires detailed pixel analysis between two frames. In fields like surveillance systems or crowd analysis, this must be achieved in real time [2]. In this paper, we focus on the system-level design of a multi-camera sensor acquiring the near-infrared (NIR) spectrum and its ability to detect mini-UAVs in a representative rural Swiss environment. The presented results show UAV detection from a field trial that we conducted in August 2015.

  10. Fast frame rate rodent cardiac x-ray imaging using scintillator lens coupled to CMOS camera

    NASA Astrophysics Data System (ADS)

    Swathi Lakshmi, B.; Sai Varsha, M. K. N.; Kumar, N. Ashwin; Dixit, Madhulika; Krishnamurthi, Ganapathy

    2017-03-01

    Micro-Computed Tomography (MCT) systems for small animal imaging play a critical role in monitoring disease progression and therapy evaluation. In this work, an in-house built micro-CT system equipped with an X-ray scintillator lens coupled to a commercial CMOS camera was used to test the feasibility of its application to Digital Subtraction Angiography (DSA). Literature has reported such studies being done with clinical X-ray tubes that can be pulsed rapidly or with rotating gantry systems, thus increasing the cost and infrastructural requirements. The feasibility of DSA was evaluated by injecting iodinated contrast agent (ICA) through the tail vein of a mouse. Projection images of the heart were acquired pre and post contrast using the high frame rate X-ray detector, and processing was done to visualize the transit of ICA through the heart.
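
    The core DSA processing step described here, subtracting a pre-contrast mask from the post-contrast projections so that only the contrast-filled vessels remain, can be sketched briefly. This is a minimal illustration assuming frames arrive as NumPy arrays; the log-subtraction form and the names are illustrative, not the authors' code.

      import numpy as np

      def dsa_sequence(pre_frames, post_frames, eps=1.0):
          # Average the pre-contrast frames into a mask image, then
          # log-subtract the mask from each post-contrast frame so the
          # static anatomy cancels and the iodinated contrast stands out.
          mask = np.mean(np.stack(pre_frames).astype(np.float64), axis=0)
          return [np.log(f.astype(np.float64) + eps) - np.log(mask + eps)
                  for f in post_frames]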

  11. Tracking Sunspots from Mars, April 2015 Animation

    NASA Image and Video Library

    2015-07-10

    This single frame from a sequence of six images of an animation shows sunspots as viewed by NASA Curiosity Mars rover from April 4 to April 15, 2015. From Mars, the rover was in position to see the opposite side of the sun. The images were taken by the right-eye camera of Curiosity's Mast Camera (Mastcam), which has a 100-millimeter telephoto lens. The view on the left of each pair in this sequence has little processing other than calibration and putting north toward the top of each frame. The view on the right of each pair has been enhanced to make sunspots more visible. The apparent granularity throughout these enhanced images is an artifact of this processing. The sunspots seen in this sequence eventually produced two solar eruptions, one of which affected Earth. http://photojournal.jpl.nasa.gov/catalog/PIA19802

  12. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

    In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During the tracking process, camera shake or occlusion may lead to tracking failure; the proposed algorithm solves this problem effectively. The method uses the spatio-temporal context algorithm as its core. The target region in the first frame is selected with the mouse, and the spatio-temporal context algorithm is then used to track the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
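
    The perceptual-hash check used to judge the tracking result can be sketched compactly. Below is a minimal average-hash version, assuming OpenCV, NumPy, and BGR image patches; the 8x8 hash size and the similarity convention are illustrative, since the abstract does not specify them.

      import numpy as np
      import cv2

      def average_hash(patch, size=8):
          # Shrink the patch, threshold each pixel against the mean, and
          # keep the result as a flat boolean "fingerprint" of the patch.
          gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
          small = cv2.resize(gray, (size, size))
          return (small > small.mean()).flatten()

      def similarity(patch_a, patch_b):
          # 1 minus the normalized Hamming distance between the hashes; a
          # low score signals tracking drift and triggers re-initialization.
          ha, hb = average_hash(patch_a), average_hash(patch_b)
          return 1.0 - np.count_nonzero(ha != hb) / ha.size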

  13. Precise Trajectory Reconstruction of CE-3 Hovering Stage By Landing Camera Images

    NASA Astrophysics Data System (ADS)

    Yan, W.; Liu, J.; Li, C.; Ren, X.; Mu, L.; Gao, X.; Zeng, X.

    2014-12-01

    Chang'E-3 (CE-3) is part of the second phase of the Chinese Lunar Exploration Program, incorporating a lander and China's first lunar rover. It landed successfully on 14 December 2013. The hovering and obstacle avoidance stages are essential to CE-3's safe soft landing, so a precise spacecraft trajectory for these stages is of great significance for verifying the orbital control strategy, optimizing the orbital design, accurately determining the landing site of CE-3, and analyzing the geological background of the landing site. Because these stages last just 25 s, it is difficult to capture the spacecraft's subtle movements with the Measurement and Control System or with radio observations. Against this background, trajectory reconstruction based on landing camera images can be used to obtain the trajectory of CE-3 because of its technical advantages, such as independence from lunar-gravity-field spacecraft kinetic models, high resolution, and high frame rate. In this paper, the trajectory of CE-3 before and after entering the hovering stage was reconstructed from landing camera images from frame 3092 to frame 3180, spanning about 9 s, using Single Image Space Resection (SISR). The results show that CE-3's subtle movements during the hovering stage are revealed by the reconstructed trajectory. The horizontal accuracy of the spacecraft position was up to 1.4 m, while the vertical accuracy was up to 0.76 m. The results can be used for orbital control strategy analysis and other applications.
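
    Single Image Space Resection recovers the camera's pose from one frame given ground control points. A minimal sketch of the same idea using OpenCV's PnP solver follows; the control points, pixel coordinates, and intrinsic matrix below are invented for illustration, and a real SISR pipeline would add rigorous error modeling rather than this bare solve.

      import numpy as np
      import cv2

      # Hypothetical surface control points (metres, z = 0 plane) and their
      # measured pixel locations in a single landing-camera frame.
      object_pts = np.array([[0., 0., 0.], [50., 0., 0.],
                             [0., 50., 0.], [50., 50., 0.]])
      image_pts = np.array([[512., 384.], [720., 380.],
                            [516., 560.], [724., 556.]])
      K = np.array([[1000., 0., 512.],
                    [0., 1000., 512.],
                    [0., 0., 1.]])  # assumed camera intrinsics

      ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, None)
      R, _ = cv2.Rodrigues(rvec)
      camera_position = -R.T @ tvec  # spacecraft position in the surface frame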

  14. High speed fluorescence imaging with compressed ultrafast photography

    NASA Astrophysics Data System (ADS)

    Thompson, J. V.; Mason, J. D.; Beier, H. T.; Bixler, J. N.

    2017-02-01

    Fluorescent lifetime imaging is an optical technique that facilitates imaging molecular interactions and cellular functions. Because the excited lifetime of a fluorophore is sensitive to its local microenvironment [1, 2], measurement of fluorescent lifetimes can be used to accurately detect regional changes in temperature, pH, and ion concentration. However, typical state of the art fluorescent lifetime methods are severely limited when it comes to acquisition time (on the order of seconds to minutes) and video rate imaging. Here we show that compressed ultrafast photography (CUP) can be used in conjunction with fluorescent lifetime imaging to overcome these acquisition rate limitations. Frame rates up to one hundred billion frames per second have been demonstrated with compressed ultrafast photography using a streak camera [3]. These rates are achieved by encoding time in the spatial direction with a pseudo-random binary pattern. The time domain information is then reconstructed using a compressed sensing algorithm, resulting in a cube of data (x,y,t) for each readout image. Thus, application of compressed ultrafast photography will allow us to acquire an entire fluorescent lifetime image with a single laser pulse. Using a streak camera with a high-speed CMOS camera, acquisition rates of 100 frames per second can be achieved, which will significantly enhance our ability to quantitatively measure complex biological events with high spatial and temporal resolution. In particular, we will demonstrate the ability of this technique to do single-shot fluorescent lifetime imaging of cells and microspheres.
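
    The CUP forward model, encoding with a static pseudo-random binary mask and shearing each time slice onto one streak-camera readout, is easy to simulate. The sketch below shows only the forward (measurement) model under assumed dimensions; the compressed-sensing inversion the authors rely on is a separate, much larger piece.

      import numpy as np

      rng = np.random.default_rng(0)
      nx, ny, nt = 64, 64, 16
      scene = rng.random((nt, ny, nx))          # dynamic scene cube (x, y, t)
      mask = rng.integers(0, 2, size=(ny, nx))  # pseudo-random binary code

      # Streak-camera forward model: every time slice is masked by the same
      # code, sheared down by t rows, and integrated onto a single readout.
      streak = np.zeros((ny + nt, nx))
      for t in range(nt):
          streak[t:t + ny, :] += mask * scene[t]
      # Recovering `scene` from `streak` is then posed as a compressed-
      # sensing inverse problem (e.g., regularized least squares).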

  15. Barnacle Bill in Super Resolution from Super Panorama

    NASA Image and Video Library

    1998-07-03

    "Barnacle Bill" is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help to address questions about the texture of this rock and what it might tell us about its mode of origin. This view of Barnacle Bill was produced by combining the "Super Panorama" frames from the IMP camera. Super resolution was applied to help to address questions about the texture of these rocks and what it might tell us about their mode of origin. The composite color frames that make up this anaglyph were produced for both the right and left eye of the IMP. The composites consist of 7 frames in the right eye and 8 frames in the left eye, taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses. http://photojournal.jpl.nasa.gov/catalog/PIA01409

  16. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

    NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  17. C-RED one: ultra-high speed wavefront sensing in the infrared made possible

    NASA Astrophysics Data System (ADS)

    Gach, J.-L.; Feautrier, Philippe; Stadler, Eric; Greffe, Timothee; Clop, Fabien; Lemarchand, Stéphane; Carmignani, Thomas; Boutolleau, David; Baker, Ian

    2016-07-01

    First Light Imaging's CRED-ONE infrared camera is capable of capturing up to 3500 full frames per second with sub-electron readout noise. This breakthrough has been made possible by the use of an e-APD infrared focal plane array, a truly disruptive technology in imaging. We will show the performance of the camera and its main features, and compare them to other high-performance wavefront sensing cameras like OCAM2 in the visible and in the infrared. The project leading to this application has received funding from the European Union's Horizon 2020 research and innovation program under grant agreement N° 673944.

  18. An interactive web-based system using cloud for large-scale visual analytics

    NASA Astrophysics Data System (ADS)

    Kaseb, Ahmed S.; Berry, Everett; Rozolis, Erik; McNulty, Kyle; Bontrager, Seth; Koh, Youngsol; Lu, Yung-Hsiang; Delp, Edward J.

    2015-03-01

    The number of network cameras has grown rapidly in recent years. Thousands of public network cameras provide a tremendous amount of visual information about the environment. There is a need to analyze this valuable information for a better understanding of the world around us. This paper presents an interactive web-based system that enables users to execute image analysis and computer vision techniques on a large scale to analyze the data from more than 65,000 worldwide cameras. This paper focuses on how to use both the system's website and Application Programming Interface (API). Given a computer program that analyzes a single frame, the user needs to make only slight changes to the existing program and choose the cameras to analyze. The system handles the heterogeneity of the geographically distributed cameras, e.g., different brands and resolutions. The system allocates and manages Amazon EC2 and Windows Azure cloud resources to meet the analysis requirements.

  19. Linear array of photodiodes to track a human speaker for video recording

    NASA Astrophysics Data System (ADS)

    DeTone, D.; Neal, H.; Lougheed, R.

    2012-12-01

    Communication and collaboration using stored digital media has garnered more interest by many areas of business, government and education in recent years. This is due primarily to improvements in the quality of cameras and speed of computers. An advantage of digital media is that it can serve as an effective alternative when physical interaction is not possible. Video recordings that allow viewers to discern a presenter's facial features, lips and hand motions are more effective than videos that do not. To attain this, one must maintain a video capture in which the speaker occupies a significant portion of the captured pixels. However, camera operators are costly, and often do an imperfect job of tracking presenters in unrehearsed situations. This creates motivation for a robust, automated system that directs a video camera to follow a presenter as he or she walks anywhere in the front of a lecture hall or large conference room. Such a system is presented. The system consists of a commercial, off-the-shelf pan/tilt/zoom (PTZ) color video camera, a necklace of infrared LEDs and a linear photodiode array detector. Electronic output from the photodiode array is processed to generate the location of the LED necklace, which is worn by a human speaker. The computer controls the video camera movements to record video of the speaker. The speaker's vertical position and depth are assumed to remain relatively constant; the video camera is sent only panning (horizontal) movement commands. The LED necklace is flashed at 70 Hz at a 50% duty cycle to provide noise-filtering capability. The benefit of using a photodiode array versus a standard video camera is its higher frame rate (4 kHz vs. 60 Hz). The higher frame rate allows for the filtering of infrared noise such as sunlight and indoor lighting, a capability absent from other tracking technologies. The system has been tested in a large lecture hall and is shown to be effective.
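
    The point of flashing the necklace at 70 Hz with a 50% duty cycle is that the LED signal can be separated from steady infrared background by synchronous detection, which the photodiode array's 4 kHz sample rate makes straightforward. Below is a minimal sketch of that filtering idea under assumed array dimensions; the function and its details are illustrative, not the authors' implementation.

      import numpy as np

      def locate_necklace(samples, fs=4000.0, f_led=70.0):
          # `samples` holds (n_samples, n_elements) photodiode readings.
          # Correlate each element with a +1/-1 reference square wave at
          # the LED flash rate so near-DC sources (sunlight, room lights)
          # average out, then pick the element with the strongest response.
          t = np.arange(samples.shape[0]) / fs
          ref = np.where((t * f_led) % 1.0 < 0.5, 1.0, -1.0)  # 50% duty
          response = np.abs(ref @ samples)
          return int(np.argmax(response))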

  20. SU-C-18A-02: Image-Based Camera Tracking: Towards Registration of Endoscopic Video to CT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ingram, S; Rao, A; Wendt, R

    Purpose: Endoscopic examinations are routinely performed on head and neck and esophageal cancer patients. However, these images are underutilized for radiation therapy because there is currently no way to register them to a CT of the patient. The purpose of this work is to develop a method to track the motion of an endoscope within a structure using images from standard clinical equipment. This method will be incorporated into a broader endoscopy/CT registration framework. Methods: We developed a software algorithm to track the motion of an endoscope within an arbitrary structure. We computed frame-to-frame rotation and translation of the camera by tracking surface points across the video sequence and utilizing two-camera epipolar geometry. The resulting 3D camera path was used to recover the surrounding structure via triangulation methods. We tested this algorithm on a rigid cylindrical phantom with a pattern spray-painted on the inside. We did not constrain the motion of the endoscope while recording, and we did not constrain our measurements using the known structure of the phantom. Results: Our software algorithm can successfully track the general motion of the endoscope as it moves through the phantom. However, our preliminary data do not show a high degree of accuracy in the triangulation of 3D point locations. More rigorous data will be presented at the annual meeting. Conclusion: Image-based camera tracking is a promising method for endoscopy/CT image registration, and it requires only standard clinical equipment. It is one of two major components needed to achieve endoscopy/CT registration, the second of which is tying the camera path to absolute patient geometry. In addition to this second component, future work will focus on validating our camera tracking algorithm in the presence of clinical imaging features such as patient motion, erratic camera motion, and dynamic scene illumination.
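
    The frame-to-frame motion step the authors describe, tracking surface points and applying two-camera epipolar geometry, corresponds to a standard essential-matrix computation. A minimal sketch with OpenCV follows, assuming tracked point arrays and a known intrinsic matrix K; this is the generic recipe, not the authors' specific code.

      import numpy as np
      import cv2

      def frame_to_frame_motion(pts_prev, pts_next, K):
          # Estimate the essential matrix from tracked points (RANSAC
          # rejects bad tracks), then decompose it into the rotation and
          # unit-scale translation between the two endoscope positions.
          E, inliers = cv2.findEssentialMat(pts_prev, pts_next, K,
                                            method=cv2.RANSAC, threshold=1.0)
          _, R, t, _ = cv2.recoverPose(E, pts_prev, pts_next, K, mask=inliers)
          return R, t  # triangulation of the structure would follow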

  1. Projective Structure from Two Uncalibrated Images: Structure from Motion and Recognition

    DTIC Science & Technology

    1992-09-01

    correspondence between points in Maybank 1990). The question, therefore, is why look for both views more of a problem, and hence, may make the...plane is fixed with respect to the 1987, Faugeras, Luong and Maybank 1992). The prob- camera coordinate frame. A rigid camera motion, there- lem of...the second reference Rieger-Lawton 1985, Faugeras and Maybank 1990, Hil- plane (assuming the four object points Pi, j = 1, ...,4, dreth 1991, Faugeras

  2. Ground Testing of Prototype Hardware and Processing Algorithms for a Wide Area Space Surveillance System (WASSS)

    DTIC Science & Technology

    2013-09-01

    Ground testing of prototype hardware and processing algorithms for a Wide Area Space Surveillance System (WASSS) Neil Goldstein, Rainer A...at Magdalena Ridge Observatory using the prototype Wide Area Space Surveillance System (WASSS) camera, which has a 4 x 60 field-of-view, < 0.05...objects with larger-aperture cameras. The sensitivity of the system depends on multi-frame averaging and a Principal Component Analysis based image

  3. The ET as it falls away from the orbiter after separation on STS-121

    NASA Image and Video Library

    2006-07-04

    S121-E-05006 (4 July 2006) --- This picture of the STS-121 external tank was taken with a digital still camera by an astronaut only seconds after separation from the Space Shuttle Discovery on launch day. Engineers, managers and flight controllers have carefully studied this image and other frames from this series as well as a number of pictures showing the falling ET as photographed from umbilical well cameras.

  4. The ET as it falls away from the orbiter after separation on STS-121

    NASA Image and Video Library

    2006-07-04

    STS121-E-05011 (4 July 2006)-- This picture of the STS-121 external tank was taken with a digital still camera by an astronaut only seconds after separation from the Space Shuttle Discovery on launch day. Engineers, managers and flight controllers have carefully studied this image and other frames from this series as well as a number of pictures showing the falling ET as photographed from umbilical well cameras.

  5. The ET as it falls away from the orbiter after separation on STS-121

    NASA Image and Video Library

    2006-07-04

    STS121-E-05008 (4 July 2006)-- This picture of the STS-121 external tank was taken with a digital still camera by an astronaut only seconds after separation from the Space Shuttle Discovery on launch day. Engineers, managers and flight controllers have carefully studied this image and other frames from this series as well as a number of pictures showing the falling ET as photographed from umbilical well cameras.

  6. Use of Space Shuttle Photography in the Study of Meteorological Phenomena.

    DTIC Science & Technology

    1985-06-01

    Hurricane Kamisy in the Indian Ocean 7-12 April 1984, and the Mauna Loa volcano smoke plume, Hawaii 7-12 April 1984) is performed using handheld-camera...April 1984 (50-mm lens) . ....... 98 42. The island of Hawaii , and Location of its Major Volcanoes . . . . . . . . . . . . . . . . . . . . . 99 43. Frame... Hawaii 7-12 April 1984) is performed using handheld-camera photographs from the Space Transportation System (STS) 41-C mission (6-13 April 1984). High

  7. Robust camera calibration for sport videos using court models

    NASA Astrophysics Data System (ADS)

    Farin, Dirk; Krabbe, Susanne; de With, Peter H. N.; Effelsberg, Wolfgang

    2003-12-01

    We propose an automatic camera calibration algorithm for court sports. The obtained camera calibration parameters are required for applications that need to convert positions in the video frame to real-world coordinates or vice versa. Our algorithm uses a model of the arrangement of court lines for calibration. Since the court model can be specified by the user, the algorithm can be applied to a variety of different sports. The algorithm starts with a model initialization step which locates the court in the image without any user assistance or a-priori knowledge about the most probable position. Image pixels are classified as court line pixels if they pass several tests including color and local texture constraints. A Hough transform is applied to extract line elements, forming a set of court line candidates. The subsequent combinatorial search establishes correspondences between lines in the input image and lines from the court model. For the succeeding input frames, an abbreviated calibration algorithm is used, which predicts the camera parameters for the new image and optimizes the parameters using a gradient-descent algorithm. We have conducted experiments on a variety of sport videos (tennis, volleyball, and goal area sequences of soccer games). Video scenes with considerable difficulties were selected to test the robustness of the algorithm. Results show that the algorithm is very robust to occlusions, partial court views, bad lighting conditions, or shadows.
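
    The front end of such a calibration, classifying bright, low-saturation pixels as court-line candidates and extracting line parameters with a Hough transform, can be sketched briefly. The color thresholds below are illustrative placeholders, not the paper's constraints (which also include local texture tests).

      import numpy as np
      import cv2

      def court_line_candidates(frame_bgr):
          # Classify white-ish pixels (low saturation, high value) as
          # court-line candidates, then extract line parameters with a
          # standard Hough transform for matching against the court model.
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          white = cv2.inRange(hsv, (0, 0, 180), (180, 60, 255))
          return cv2.HoughLines(white, rho=1, theta=np.pi / 180, threshold=120)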

  8. A 3D camera for improved facial recognition

    NASA Astrophysics Data System (ADS)

    Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim

    2004-12-01

    We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images, and are reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows viewing of the face from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.

  9. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithm consumes additional power and increases the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
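
    The duty-cycling scheme, powering one camera at a time and fetching a fixed number of frames before moving on, can be sketched as a simple round-robin loop. The camera interface below (power_up/grab/power_down) is hypothetical, standing in for whatever the real hardware exposes.

      import itertools

      def cycle_sensors(cameras, frames_per_burst=8):
          # Round-robin duty cycling: only one camera is powered at any
          # moment; the others stay dark, which is where the up-to-N-fold
          # power saving of an N-camera system comes from.
          for cam in itertools.cycle(cameras):
              cam.power_up()
              burst = [cam.grab() for _ in range(frames_per_burst)]
              cam.power_down()
              yield cam, burst  # hand the burst to the detection algorithm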

  10. Multi-mode Observations of Cloud-to-Ground Lightning Strokes

    NASA Astrophysics Data System (ADS)

    Smith, M. W.; Smith, B. J.; Clemenson, M. D.; Zollweg, J. D.

    2015-12-01

    We present hyper-temporal and hyper-spectral data collected using a suite of three Phantom high-speed cameras configured to observe cloud-to-ground lightning strokes. The first camera functioned as a contextual imager to show the location and structure of the strokes. The other two cameras were operated as slit-less spectrometers, with resolutions of 0.2 to 1.0 nm. The imaging camera was operated at a readout rate of 48,000 frames per second and provided an image-based trigger mechanism for the spectrometers. Each spectrometer operated at a readout rate of 400,000 frames per second. The sensors were deployed on the southern edge of Albuquerque, New Mexico and collected data over a 4 week period during the thunderstorm season in the summer of 2015. Strikes observed by the sensor suite were correlated to specific strikes recorded by the National Lightning Data Network (NLDN) and thereby geo-located. Sensor calibration factors, distance to each strike, and calculated values of atmospheric transmission were used to estimate absolute radiometric intensities for the spectral-temporal data. The data that we present show the intensity and time evolution of broadband and line emission features for both leader and return strokes. We highlight several key features and overall statistics of the observations. A companion poster describes a lightning model that is being developed at Sandia National Laboratories.

  11. Compact full-motion video hyperspectral cameras: development, image processing, and applications

    NASA Astrophysics Data System (ADS)

    Kanaev, A. V.

    2015-10-01

    The emergence of spectral pixel-level color filters has enabled development of hyper-spectral Full Motion Video (FMV) sensors operating in visible (EO) and infrared (IR) wavelengths. The new class of hyper-spectral cameras opens broad possibilities for military and industrial use. Indeed, such cameras are able to classify materials as well as detect and track spectral signatures continuously in real time while simultaneously providing an operator the benefit of enhanced-discrimination color video. Supporting these extensive capabilities requires significant computational processing of the collected spectral data. In general, two processing streams are envisioned for mosaic array cameras. The first is spectral computation that provides essential spectral content analysis, e.g., detection or classification. The second is presentation of the video to an operator that can offer the best display of the content depending on the performed task, e.g., providing spatial resolution enhancement or color coding of the spectral analysis. These processing streams can be executed in parallel or they can utilize each other's results. Spectral analysis algorithms have been developed extensively; however, demosaicking of more than three equally-sampled spectral bands has scarcely been explored. We present a unique approach to demosaicking based on multi-band super-resolution and show the trade-off between spatial resolution and spectral content. Using imagery collected with the developed 9-band SWIR camera, we demonstrate several of its concepts of operation, including detection and tracking. We also compare the demosaicking results to the results of multi-frame super-resolution as well as to the combined multi-frame and multi-band processing.
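
    For a 9-band camera of this kind, the raw sensor output interleaves the bands in a repeating pixel-level mosaic, and the simplest extraction is plain subsampling, which makes the resolution/spectral-content trade-off concrete: each band comes back at a fraction of the sensor's spatial resolution. The 3x3 layout below is an assumption for illustration; the paper's demosaicking works to recover the lost resolution.

      import numpy as np

      def split_mosaic_bands(raw, pattern=3):
          # Assume a repeating `pattern` x `pattern` filter mosaic; pull
          # each band out by strided subsampling. Returns pattern**2
          # images, each at 1/pattern of the sensor resolution per axis.
          return [raw[r::pattern, c::pattern]
                  for r in range(pattern) for c in range(pattern)]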

  12. Development of low-cost high-performance multispectral camera system at Banpil

    NASA Astrophysics Data System (ADS)

    Oduor, Patrick; Mizuno, Genki; Olah, Robert; Dutta, Achyut K.

    2014-05-01

    Banpil Photonics (Banpil) has developed a low-cost high-performance multispectral camera system for Visible to Short-Wave Infrared (VIS-SWIR) imaging for the most demanding high-sensitivity and high-speed military, commercial and industrial applications. The 640x512 pixel InGaAs uncooled camera system is designed to provide a compact, small form factor within a cubic inch, high sensitivity requiring fewer than 100 electrons, high dynamic range exceeding 190 dB, high frame rates greater than 1000 frames per second (FPS) at full resolution, and low power consumption below 1W. These are practically all the features highly desirable in military imaging applications to expand deployment to every warfighter, while also maintaining a low-cost structure demanded for scaling into commercial markets. This paper describes Banpil's development of the camera system including the features of the image sensor with an innovation integrating advanced digital electronics functionality, which has made the confluence of high-performance capabilities on the same imaging platform practical at low cost. It discusses the strategies employed including innovations of the key components (e.g. focal plane array (FPA) and Read-Out Integrated Circuitry (ROIC)) within our control while maintaining a fabless model, and strategic collaboration with partners to attain additional cost reductions on optics, electronics, and packaging. We highlight the challenges and potential opportunities for further cost reductions to achieve a goal of a sub-$1000 uncooled high-performance camera system. Finally, a brief overview of emerging military, commercial and industrial applications that will benefit from this high performance imaging system and their forecast cost structure is presented.

  13. High-Resolution Global Geologic Map of Ceres from NASA Dawn Mission

    NASA Astrophysics Data System (ADS)

    Williams, D. A.; Buczkowski, D. L.; Crown, D. A.; Frigeri, A.; Hughson, K.; Kneissl, T.; Krohn, K.; Mest, S. C.; Pasckert, J. H.; Platz, T.; Ruesch, O.; Schulzeck, F.; Scully, J. E. C.; Sizemore, H. G.; Nass, A.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2018-06-01

    This presentation will discuss the completed 1:4,000,000 global geologic map of dwarf planet Ceres derived from Dawn Framing Camera Low Altitude Mapping Orbit (LAMO) images, combining 15 quadrangle maps.

  14. A Valentine from Vesta

    NASA Image and Video Library

    2012-02-14

    This image from NASA Dawn spacecraft, is based on a framing camera image that is overlain by a color-coded height representation of topography. This heart-shaped hollow is roughly 10 kilometers (6 miles) across at its widest point.

  15. Topography of Vesta Surface

    NASA Image and Video Library

    2011-08-26

    This view of the topography of asteroid Vesta's surface is composed of several images obtained with the framing camera on NASA Dawn spacecraft on August 6, 2011. The image mosaic is shown superimposed on a digital terrain model.

  16. Stepping over obstacles: gait patterns of healthy young and old adults.

    PubMed

    Chen, H C; Ashton-Miller, J A; Alexander, N B; Schultz, A B

    1991-11-01

    Falls associated with tripping over an obstacle can be devastating to elderly individuals, yet little is known about the strategies used for stepping over obstacles by either old or young adults. The gait of gender-matched groups of 24 young and 24 old healthy adults (mean ages 22 and 71 years) was studied during a 4 m approach to and while stepping over obstacles of 0, 25, 51, or 152 mm height and in level obstacle-free walking. Optoelectronic cameras and recorders were used to record approach and obstacle crossing speeds as well as bilateral lower extremity kinematic parameters that described foot placement and movement trajectories relative to the obstacle. The results showed that age had no effect on minimum swing foot clearance (FC) over an obstacle. For the 25 mm obstacle, mean FC was 64 mm, or approximately three times that used in level gait; FC increased nonlinearly with obstacle height for all subjects. Although no age differences were found in obstacle-free gait, old adults exhibited a significantly more conservative strategy when crossing obstacles, with slower crossing speed, shorter step length, and shorter obstacle-heel strike distance. In addition, the old adults crossed the obstacle so that it was 10% further forward in their obstacle-crossing step. Although all subjects successfully avoided the riskiest form of obstacle contact, tripping, 4/24 healthy old adults stepped on an obstacle, demonstrating an increased risk for obstacle contact with age.

  17. Radiometric calibration of wide-field camera system with an application in astronomy

    NASA Astrophysics Data System (ADS)

    Vítek, Stanislav; Nasyrova, Maria; Stehlíková, Veronika

    2017-09-01

    Camera response function (CRF) is widely used for the description of the relationship between scene radiance and image brightness. The most common application of CRF is High Dynamic Range (HDR) reconstruction of the radiance maps of imaged scenes from a set of frames with different exposures. The main goal of this work is to provide an overview of CRF estimation algorithms and compare their outputs with results obtained under laboratory conditions. These algorithms, typically designed for multimedia content, are unfortunately quite useless with astronomical image data, mostly due to their nature (blur, noise, and long exposures). Therefore, we propose an optimization of selected methods for use in an astronomical imaging application. Results are experimentally verified on a wide-field camera system using a Digital Single Lens Reflex (DSLR) camera.
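
    Several CRF estimation algorithms of the kind surveyed here are available off the shelf; the sketch below shows a common Debevec-style estimation with OpenCV on an exposure stack, the usual baseline such comparisons start from. The file names and exposure times are placeholders.

      import numpy as np
      import cv2

      # An exposure stack of the same scene (8-bit frames) with known
      # exposure times in seconds; the paths here are hypothetical.
      imgs = [cv2.imread(p) for p in ("e1.jpg", "e2.jpg", "e3.jpg")]
      times = np.array([1 / 30, 1 / 8, 1 / 2], dtype=np.float32)

      # Estimate the camera response function, then merge the stack into
      # an HDR radiance map using that response.
      crf = cv2.createCalibrateDebevec().process(imgs, times)
      hdr = cv2.createMergeDebevec().process(imgs, times, crf)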

  18. Solid state television camera (CCD-buried channel)

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The development of an all solid state television camera, which uses a buried channel charge coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array is utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control (i.e., ALC and AGC) techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  19. Solid state television camera (CCD-buried channel), revision 1

    NASA Technical Reports Server (NTRS)

    1977-01-01

    An all solid state television camera was designed which uses a buried channel charge coupled device (CCD) as the image sensor. A 380 x 488 element CCD array is utilized to ensure compatibility with 525-line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (1) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (2) techniques for the elimination or suppression of CCD blemish effects, and (3) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a deliverable solid state TV camera which addressed the program requirements for a prototype qualifiable to space environment conditions.

  20. Clementine Observes the Moon, Solar Corona, and Venus

    NASA Technical Reports Server (NTRS)

    1997-01-01

    In 1994, during its flight, the Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide angle images of the Moon.

    In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame.

  1. Solid state, CCD-buried channel, television camera study and design

    NASA Technical Reports Server (NTRS)

    Hoagland, K. A.; Balopole, H.

    1976-01-01

    An investigation of an all solid state television camera design, which uses a buried channel charge-coupled device (CCD) as the image sensor, was undertaken. A 380 x 488 element CCD array was utilized to ensure compatibility with 525 line transmission and display monitor equipment. Specific camera design approaches selected for study and analysis included (a) optional clocking modes for either fast (1/60 second) or normal (1/30 second) frame readout, (b) techniques for the elimination or suppression of CCD blemish effects, and (c) automatic light control and video gain control techniques to eliminate or minimize sensor overload due to bright objects in the scene. Preferred approaches were determined and integrated into a design which addresses the program requirements for a deliverable solid state TV camera.

  2. Development Of A Dynamic Radiographic Capability Using High-Speed Video

    NASA Astrophysics Data System (ADS)

    Bryant, Lawrence E.

    1985-02-01

    High-speed video equipment can be used to optically image up to 2,000 full frames per second or 12,000 partial frames per second. X-ray image intensifiers have historically been used to image radiographic images at 30 frames per second. By combining these two types of equipment, it is possible to perform dynamic x-ray imaging of up to 2,000 full frames per second. The technique has been demonstrated using conventional, industrial x-ray sources such as 150 Kv and 300 Kv constant potential x-ray generators, 2.5 MeV Van de Graaffs, and linear accelerators. A crude form of this high-speed radiographic imaging has been shown to be possible with a cobalt 60 source. Use of a maximum aperture lens makes best use of the available light output from the image intensifier. The x-ray image intensifier input and output fluors decay rapidly enough to allow the high frame rate imaging. Data are presented on the maximum possible video frame rates versus x-ray penetration of various thicknesses of aluminum and steel. Photographs illustrate typical radiographic setups using the high speed imaging method. Video recordings show several demonstrations of this technique with the played-back x-ray images slowed down up to 100 times as compared to the actual event speed. Typical applications include boiling type action of liquids in metal containers, compressor operation with visualization of crankshaft, connecting rod and piston movement and thermal battery operation. An interesting aspect of this technique combines both the optical and x-ray capabilities to observe an object or event with both external and internal details with one camera in a visual mode and the other camera in an x-ray mode. This allows both kinds of video images to appear side by side in a synchronized presentation.

  3. Real-time unmanned aircraft systems surveillance video mosaicking using GPU

    NASA Astrophysics Data System (ADS)

    Camargo, Aldo; Anderson, Kyle; Wang, Yi; Schultz, Richard R.; Fevig, Ronald A.

    2010-04-01

    Digital video mosaicking from Unmanned Aircraft Systems (UAS) is being used for many military and civilian applications, including surveillance, target recognition, border protection, forest fire monitoring, traffic control on highways, and monitoring of transmission lines, among others. Additionally, NASA is using digital video mosaicking to explore the moon and planets such as Mars. In order to compute a "good" mosaic from video captured by a UAS, the algorithm must deal with motion blur, frame-to-frame jitter associated with an imperfectly stabilized platform, perspective changes as the camera tilts in flight, as well as a number of other factors. The most suitable algorithms use SIFT (Scale-Invariant Feature Transform) to detect the features consistent between video frames. Utilizing these features, the next step is to estimate the homography between two consecutive video frames, perform warping to properly register the image data, and finally blend the video frames, resulting in a seamless video mosaic. All this processing takes a great deal of resources from the CPU, so it is almost impossible to compute a real-time video mosaic on a single processor. Modern graphics processing units (GPUs) offer computational performance that far exceeds current CPU technology, allowing for real-time operation. This paper presents the development of a GPU-accelerated digital video mosaicking implementation and compares it with CPU performance. Our tests are based on two sets of real video captured by a small UAS aircraft, one each from Infrared (IR) and Electro-Optical (EO) cameras. Our results show that we can obtain a speed-up of more than 50 times using GPU technology, so real-time operation at a video capture rate of 30 frames per second is feasible.
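
    One mosaicking step of the kind described (SIFT features, matching, homography estimation, warping) looks roughly like the following CPU sketch with OpenCV; the GPU version the paper develops accelerates these same stages. The ratio-test threshold and output canvas size are illustrative choices.

      import numpy as np
      import cv2

      def stitch_pair(prev, curr):
          # Detect SIFT features in both frames and match with a ratio test.
          sift = cv2.SIFT_create()
          kp1, des1 = sift.detectAndCompute(prev, None)
          kp2, des2 = sift.detectAndCompute(curr, None)
          matches = cv2.BFMatcher().knnMatch(des2, des1, k=2)
          good = [m for m, n in matches if m.distance < 0.75 * n.distance]
          # Estimate the homography with RANSAC and warp the new frame
          # into the mosaic plane (the blending step is omitted here).
          src = np.float32([kp2[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
          dst = np.float32([kp1[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
          H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
          h, w = prev.shape[:2]
          return cv2.warpPerspective(curr, H, (w * 2, h))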

  4. Architecture and Protocol of a Semantic System Designed for Video Tagging with Sensor Data in Mobile Devices

    PubMed Central

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the frames of video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is no other application developed like the one presented in this paper. PMID:22438753

  5. Architecture and protocol of a semantic system designed for video tagging with sensor data in mobile devices.

    PubMed

    Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel

    2012-01-01

    Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet if the weather is nice enough to swim. In this paper, we present a system that tags the frames of video recorded from mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know there is no other application developed like the one presented in this paper.
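
    The tagging idea common to both records, attaching the phone's sensor readings to individual video frames so the server can index them semantically, can be sketched in a few lines. The JSON layout and sensor fields below are invented for illustration; the paper's actual tag format is not specified here.

      import json
      import time

      def tag_frame(frame_index, sensors):
          # `sensors` is a hypothetical dict of embedded-sensor readings,
          # e.g. {"temp_c": 27.4, "lat": 28.1, "lon": -15.4}. Each frame
          # gets one tag record that the video server can later search.
          return json.dumps({"frame": frame_index,
                             "timestamp": time.time(),
                             **sensors})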

  6. Upgrading the Arecibo Potassium Lidar Receiver for Meridional Wind Measurements

    NASA Astrophysics Data System (ADS)

    Piccone, A. N.; Lautenbach, J.

    2017-12-01

    Lidar can be used to measure a plethora of variables: temperature, density of metals, and wind. This REU project is focused on the setup of a semi-steerable telescope that will allow the measurement of meridional wind in the mesosphere (80-105 km) with Arecibo Observatory's potassium resonance lidar. This includes the basic design concept of a steering system that is able to turn the telescope to a maximum of 40°, alignment of the mirror with the telescope frame to find the correct focusing, and the triggering and programming of a CCD camera. The CCD camera's purpose is twofold: looking through the telescope and matching the stars in the field of view with a star map to accurately calibrate the steering system, and determining the laser beam properties and position. Using LabVIEW, the frames from the CCD camera can be analyzed to identify the most intense pixel in the image (and therefore the brightest point in the laser beam or stars) by plotting average pixel values per row and column and locating the peaks of these plots. The location of this pixel can then be plotted, determining the jitter in the laser and its position within the field of view of the telescope.
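
    The LabVIEW analysis described, finding the brightest point by averaging pixel values per row and per column and locating the peaks, maps directly onto a couple of array operations. A NumPy equivalent for illustration:

      import numpy as np

      def beam_centroid(frame):
          # Average pixel values along each row and each column, then take
          # the peak of each profile as the (row, col) of the most intense
          # pixel (the laser beam or a bright star in the field of view).
          rows = frame.mean(axis=1)
          cols = frame.mean(axis=0)
          return int(np.argmax(rows)), int(np.argmax(cols))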

  7. Estimation of vibration frequency of loudspeaker diaphragm by parallel phase-shifting digital holography

    NASA Astrophysics Data System (ADS)

    Kakue, T.; Endo, Y.; Shimobaba, T.; Ito, T.

    2014-11-01

    We report frequency estimation of a loudspeaker diaphragm vibrating at high speed by parallel phase-shifting digital holography, a technique of single-shot phase-shifting interferometry. This technique records the multiple phase-shifted holograms required for phase-shifting interferometry by using space-division multiplexing. We constructed a parallel phase-shifting digital holography system consisting of a high-speed polarization-imaging camera. This camera has a micro-polarizer array which selects four linear polarization axes for each 2 × 2 block of pixels. We set a loudspeaker as the object and recorded the vibration of its diaphragm with the constructed system, demonstrating observation of the vibration displacement. In this paper, we aim to estimate the vibration frequency of the loudspeaker diaphragm by applying frequency analysis to the experimental results. Holograms consisting of 128 × 128 pixels were recorded at a frame rate of 262,500 frames per second by the camera. A sinusoidal wave was input to the loudspeaker via a phone connector. We observed the displacement of the vibrating loudspeaker diaphragm with the system and succeeded in estimating its vibration frequency by applying frequency analysis to the experimental results.
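
    Because the micro-polarizer array places the four phase-shifted holograms on the four pixels of every 2 × 2 cell, a single frame yields a four-step phase estimate. A minimal sketch, assuming one particular pixel-to-phase assignment (the actual assignment depends on the camera):

      import numpy as np

      def phase_from_polarization_mosaic(raw):
          # Assumed layout: the 0, pi/2, pi, and 3*pi/2 phase-shifted
          # holograms sit at fixed positions within each 2x2 cell.
          i0 = raw[0::2, 0::2].astype(np.float64)
          i90 = raw[0::2, 1::2].astype(np.float64)
          i180 = raw[1::2, 1::2].astype(np.float64)
          i270 = raw[1::2, 0::2].astype(np.float64)
          # Standard four-step phase-shifting formula.
          return np.arctan2(i270 - i90, i0 - i180)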

  8. Ridges and Cliffs on Mercury Surface

    NASA Image and Video Library

    2008-01-20

    A complex history of geological evolution is recorded in this frame from the Narrow Angle Camera (NAC), part of the Mercury Dual Imaging System (MDIS) instrument, taken during NASA MESSENGER close flyby of Mercury on January 14, 2008.

  9. 'Algonquin' Outcrop on Spirit's Sol 680

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This view combines four frames from Spirit's panoramic camera, looking in the drive direction on the rover's 680th Martian day, or sol (Dec. 1, 2005). The outcrop of apparently layered bedrock has the informal name 'Algonquin.'

  10. A New Spin on Vesta

    NASA Image and Video Library

    2010-10-08

    Hubble Wide Field Camera 3 observed the potato-shaped asteroid in preparation for the visit by NASA Dawn spacecraft in 2011. This is one frame from a movie showing the difference in brightness and color on the asteroid surface.

  11. Investigation of the Production of High Density Uniform Plasmas.

    DTIC Science & Technology

    1980-10-01

    first time with the framing camera. These are a considerable improvement upon the black and white films taken in earlier experiments. The different...was run to test the opposite limit. This cathode also arced earlier than the more conventional materials. The first run left several holes in the kap

  12. MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.

    PubMed

    Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram

    2015-11-01

    We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.

  13. Keyhole imaging method for dynamic objects behind the occlusion area

    NASA Astrophysics Data System (ADS)

    Hao, Conghui; Chen, Xi; Dong, Liquan; Zhao, Yuejin; Liu, Ming; Kong, Lingqin; Hui, Mei; Liu, Xiaohua; Wu, Hong

    2018-01-01

    A method of keyhole imaging based on a camera array is realized to obtain video imagery behind a keyhole in a shielded space at a relatively long distance. We get multi-angle video images by using a 2×2 CCD camera array to image the scene behind the keyhole from four directions. The multi-angle video images are saved in the form of frame sequences. This paper presents a method of video frame alignment. In order to remove the non-target area outside the aperture, we use the Canny operator and morphological methods to perform edge detection on the images and fill them. The stitching of the four images is accomplished on the basis of an image stitching algorithm for two images. In that algorithm, the SIFT method is adopted to accomplish the initial matching of images, and the RANSAC algorithm is then applied to eliminate wrong matching points and obtain a homography matrix. A method of optimizing the transformation matrix is proposed in this paper. Finally, a video image with a larger field of view behind the keyhole can be synthesized from the image frame sequence in which every single frame is stitched. The results show that the video is clear and natural, brightness transitions are smooth, and there are no obvious artificial stitching marks in the video; it can be applied in different engineering environments.
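
    The masking step described, Canny edge detection plus morphological processing to fill the aperture region and discard everything outside it, can be sketched as follows. The kernel size and thresholds are illustrative, not the paper's values.

      import numpy as np
      import cv2

      def aperture_mask(frame_gray):
          # Find the keyhole's rim, close gaps in it morphologically, then
          # fill the largest closed contour so non-target areas outside the
          # aperture can be masked away before stitching.
          edges = cv2.Canny(frame_gray, 50, 150)
          kernel = np.ones((7, 7), np.uint8)
          closed = cv2.morphologyEx(edges, cv2.MORPH_CLOSE, kernel)
          contours, _ = cv2.findContours(closed, cv2.RETR_EXTERNAL,
                                         cv2.CHAIN_APPROX_SIMPLE)
          mask = np.zeros_like(frame_gray)
          if contours:
              biggest = max(contours, key=cv2.contourArea)
              cv2.drawContours(mask, [biggest], -1, 255, thickness=cv2.FILLED)
          return mask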

  14. Lightning Step Leader and Return Stroke Spectra at 100,000 fps

    NASA Astrophysics Data System (ADS)

    Harley, J.; McHarg, M.; Stenbaek-Nielsen, H. C.; Haaland, R. K.; Sonnenfeld, R.; Edens, H. E.; Cummer, S.; Lapierre, J. L.; Maddocks, S.

    2017-12-01

    A fundamental understanding of lightning can be inferred from the spectral emissions resulting from the leader and return stroke channels. We examine events recorded at 00:58:07 on 19 July 2015 and 06:44:24 on 23 July 2017, both at Langmuir Laboratory. Analysis of both events is supplemented by data from the Lightning Mapping Array at Langmuir. The 00:58:07 event spectrum was recorded using a 100 line per mm grating in front of a Phantom V2010 camera with an 85mm (9o FOV) Nikon lens recording at 100,000 frames per second. Coarse resolution spectra (approximately 5 nm resolution) are produced from approximately 400 nm to 800 nm for each frame. We analyze several nitrogen and oxygen lines to understand step leader temperature behavior between cloud and ground. The 06:44:24 event spectrum was recorded using a 300 line per mm grating (approximately 1.5 nm resolution) in front of a Phantom V2010 camera with a 50mm (32o FOV) Nikon lens also recording at 100,000 frames per second. Two ionized atomic nitrogen lines at 502 nm and 569 nm appear upon attachment and disappear as the return stroke travels from ground to cloud in approximately 5 frames. We analyze these lines to understand initial return stroke temperature and species behavior.

  15. Real-time look-up table-based color correction for still image stabilization of digital cameras without using frame memory

    NASA Astrophysics Data System (ADS)

    Luo, Lin-Bo; An, Sang-Woo; Wang, Chang-Shuai; Li, Ying-Chun; Chong, Jong-Wha

    2012-09-01

    Digital cameras usually decrease exposure time to capture motion-blur-free images. However, this operation will generate an under-exposed image with a low-budget complementary metal-oxide semiconductor image sensor (CIS). Conventional color correction algorithms can efficiently correct under-exposed images; however, they are generally not performed in real time and need at least one frame memory if they are implemented in hardware. The authors propose a real-time look-up table-based color correction method that corrects under-exposed images in hardware without using frame memory. The method utilizes histogram matching of two preview images, which are exposed for a long and a short time, respectively, to construct an improved look-up table (ILUT) and then corrects the captured under-exposed image in real time. Because the ILUT is calculated in real time before processing the captured image, this method does not require frame memory to buffer image data, and therefore can greatly reduce the cost of the CIS. This method not only supports single image capture, but also bracketing to capture three images at a time. The proposed method was implemented in a hardware description language and verified on a field-programmable gate array with a 5 M CIS. Simulations show that the system can perform in real time at low cost and corrects the color of under-exposed images well.
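
    The heart of the method, building a look-up table by histogram matching between the short and long preview exposures, can be sketched with array operations. This is a software illustration of the matching idea under an assumed 8-bit pipeline, not the authors' FPGA implementation.

      import numpy as np

      def build_ilut(short_preview, long_preview, levels=256):
          # Cumulative histograms of the two preview exposures.
          cdf_s = np.cumsum(np.bincount(short_preview.ravel(), minlength=levels))
          cdf_l = np.cumsum(np.bincount(long_preview.ravel(), minlength=levels))
          cdf_s = cdf_s / cdf_s[-1]
          cdf_l = cdf_l / cdf_l[-1]
          # Map each code value of the short exposure to the code value at
          # the same cumulative rank in the long exposure.
          return np.searchsorted(cdf_l, cdf_s).clip(0, levels - 1).astype(np.uint8)

      # Per-pixel application needs no frame buffering:
      # corrected = ilut[under_exposed_image]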

  16. High-Speed Camera and High-Vision Camera Observations of TLEs from Jet Aircraft in Winter Japan and in Summer US

    NASA Astrophysics Data System (ADS)

    Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.

    2011-12-01

    The time evolution and spatial distributions of transient luminous events (TLEs) are the key parameters for identifying the relationship between TLEs and parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scale of TLEs is typically less than a few milliseconds, new imaging techniques that enable us to capture images with a high time resolution of < 1 ms are needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. The high-speed II-CMOS camera captures images at 8,300 frames per second (fps), corresponding to a time resolution of 120 μs. The high-vision three-CCD camera captures high-quality, true-color images of TLEs at a 1920x1080 pixel size and a frame rate of 30 fps. During the two observation flights, we detected a total of 28 sprite events and 3 elves. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations over the High Plains in the summer US, installing the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, capturing TLE images of over a hundred events with the high-vision camera and acquiring over 40 simultaneous high-speed images. At the presentation, we will introduce the outlines of the two aircraft campaigns, the characteristics of the time evolution and spatial distributions of TLEs observed in winter Japan, and the initial results of high-speed image data analysis of TLEs in the summer US.

  17. A high resolution IR/visible imaging system for the W7-X limiter

    NASA Astrophysics Data System (ADS)

    Wurden, G. A.; Stephey, L. A.; Biedermann, C.; Jakubowski, M. W.; Dunn, J. P.; Gamradt, M.

    2016-11-01

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m2) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.

  18. A high resolution IR/visible imaging system for the W7-X limiter

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wurden, G. A., E-mail: wurden@lanl.gov; Dunn, J. P.; Stephey, L. A.

    A high-resolution imaging system, consisting of megapixel mid-IR and visible cameras along the same line of sight, has been prepared for the new W7-X stellarator and was operated during Operational Period 1.1 to view one of the five inboard graphite limiters. The radial line of sight, through a large diameter (184 mm clear aperture) uncoated sapphire window, couples a direct viewing 1344 × 784 pixel FLIR SC8303HD camera. A germanium beam-splitter sends visible light to a 1024 × 1024 pixel Allied Vision Technologies Prosilica GX1050 color camera. Both achieve sub-millimeter resolution on the 161 mm wide, inertially cooled, segmented graphite tiles. The IR and visible cameras are controlled via optical fibers over full Camera Link and dual GigE Ethernet (2 Gbit/s data rates) interfaces, respectively. While they are mounted outside the cryostat at a distance of 3.2 m from the limiter, they are close to a large magnetic trim coil and require soft iron shielding. We have taken IR data at 125 Hz to 1.25 kHz frame rates and seen surface temperature increases in excess of 350 °C, especially on leading edges or defect hot spots. The IR camera sees heat-load stripe patterns on the limiter and has been used to infer limiter power fluxes (~1-4.5 MW/m2) during the ECRH heating phase. IR images have also been used calorimetrically between shots to measure equilibrated bulk tile temperature, and hence tile energy inputs (in the range of 30 kJ/tile with 0.6 MW, 6 s heating pulses). Small UFOs can be seen and tracked by the FLIR camera in some discharges. The calibrated visible color camera (100 Hz frame rate) has also been equipped with narrow band C-III and H-alpha filters, to compare with other diagnostics, and is used for absolute particle flux determination from the limiter surface. Sometimes, but not always, hot-spots in the IR are also seen to be bright in C-III light.

  19. Optimized Two-Party Video Chat with Restored Eye Contact Using Graphics Hardware

    NASA Astrophysics Data System (ADS)

    Dumont, Maarten; Rogmans, Sammy; Maesen, Steven; Bekaert, Philippe

    We present a practical system prototype to convincingly restore eye contact between two video chat participants, with a minimal amount of constraints. The proposed six-fold camera setup is easily integrated into the monitor frame, and is used to interpolate an image as if its virtual camera captured the image through a transparent screen. The peer user has a large freedom of movement, resulting in system specifications that enable genuine practical usage. Our software framework harnesses the powerful computational resources inside graphics hardware, and maximizes arithmetic intensity to achieve beyond-real-time performance of up to 42 frames per second for 800×600 resolution images. Furthermore, an optimal set of fine-tuned parameters is presented that optimizes the end-to-end performance of the application to achieve high subjective visual quality, while still allowing further algorithmic advancement without losing real-time capability.

  20. In vivo PET imaging of beta-amyloid deposition in mouse models of Alzheimer's disease with a high specific activity PET imaging agent [(18)F]flutemetamol.

    PubMed

    Snellman, Anniina; Rokka, Johanna; López-Picón, Francisco R; Eskola, Olli; Salmona, Mario; Forloni, Gianluigi; Scheinin, Mika; Solin, Olof; Rinne, Juha O; Haaparanta-Solin, Merja

    2014-01-01

    The purpose of the study was to evaluate the applicability of the (18)F-labelled amyloid imaging positron emission tomography (PET) agent [(18)F]flutemetamol to detect changes in brain beta-amyloid (Aβ) deposition in vivo in APP23, Tg2576 and APPswe-PS1dE9 mouse models of Alzheimer's disease. We expected that the high specific activity of [(18)F]flutemetamol would make it an attractive small animal Aβ imaging agent. [(18)F]flutemetamol uptake in the mouse brain was evaluated in vivo at 9 to 22 months of age with an Inveon Multimodality PET/CT camera (Siemens Medical Solutions USA, Knoxville, TN, USA). Retention in the frontal cortex (FC) was evaluated by Logan distribution volume ratios (DVR) and FC/cerebellum (CB) ratios during the late washout phase (50 to 60 min). [(18)F]flutemetamol binding to Aβ was also evaluated in brain slices by in vitro and ex vivo autoradiography. The amount of Aβ in the brain slices was determined with Thioflavin S and anti-Aβ1-40 immunohistochemistry. In APP23 mice, [(18)F]flutemetamol retention in the FC increased from 9 to 18 months. In younger mice, DVR and FC/CB50-60 were 0.88 (0.81) and 0.88 (0.89) at 9 months (N = 2), and 0.98 (0.93) at 12 months (N = 1), respectively. In older mice, DVR and FC/CB50-60 were 1.16 (1.15) at 15 months (N = 1), 1.13 (1.16) and 1.35 (1.35) at 18 months (N = 2), and 1.05 (1.31) at 21 months (N = 1). In Tg2576 mice, DVR and FC/CB50-60 showed modest increasing trends but also high variability. In APPswe-PS1dE9 mice, DVR and FC/CB50-60 did not increase with age. Thioflavin S and anti-Aβ1-40 positive Aβ deposits were present in all transgenic mice at 19 to 22 months, and they co-localized with [(18)F]flutemetamol binding in the brain slices examined with in vitro and ex vivo autoradiography. Increased [(18)F]flutemetamol retention in the brain was detected in old APP23 mice in vivo. However, the high specific activity of [(18)F]flutemetamol did not provide a notable advantage in Tg2576 and APPswe-PS1dE9 mice compared to the previously evaluated structural analogue [(11)C]PIB. For its practical benefits, [(18)F]flutemetamol imaging with a suitable mouse model like APP23 is an attractive alternative.

  1. In vivo PET imaging of beta-amyloid deposition in mouse models of Alzheimer's disease with a high specific activity PET imaging agent [18F]flutemetamol

    PubMed Central

    2014-01-01

    Background The purpose of the study was to evaluate the applicability of 18F-labelled amyloid imaging positron emission tomography (PET) agent [18F]flutemetamol to detect changes in brain beta-amyloid (Aβ) deposition in vivo in APP23, Tg2576 and APPswe-PS1dE9 mouse models of Alzheimer's disease. We expected that the high specific activity of [18F]flutemetamol would make it an attractive small animal Aβ imaging agent. Methods [18F]flutemetamol uptake in the mouse brain was evaluated in vivo at 9 to 22 months of age with an Inveon Multimodality PET/CT camera (Siemens Medical Solutions USA, Knoxville, TN, USA). Retention in the frontal cortex (FC) was evaluated by Logan distribution volume ratios (DVR) and FC/cerebellum (CB) ratios during the late washout phase (50 to 60 min). [18F]flutemetamol binding to Aβ was also evaluated in brain slices by in vitro and ex vivo autoradiography. The amount of Aβ in the brain slices was determined with Thioflavin S and anti-Aβ1−40 immunohistochemistry. Results In APP23 mice, [18F]flutemetamol retention in the FC increased from 9 to 18 months. In younger mice, DVR and FC/CB50-60 were 0.88 (0.81) and 0.88 (0.89) at 9 months (N = 2), and 0.98 (0.93) at 12 months (N = 1), respectively. In older mice, DVR and FC/CB50-60 were 1.16 (1.15) at 15 months (N = 1), 1.13 (1.16) and 1.35 (1.35) at 18 months (N = 2), and 1.05 (1.31) at 21 months (N = 1). In Tg2576 mice, DVR and FC/CB50-60 showed modest increasing trends but also high variability. In APPswe-PS1dE9 mice, DVR and FC/CB50-60 did not increase with age. Thioflavin S and anti-Aβ1−40 positive Aβ deposits were present in all transgenic mice at 19 to 22 months, and they co-localized with [18F]flutemetamol binding in the brain slices examined with in vitro and ex vivo autoradiography. Conclusions Increased [18F]flutemetamol retention in the brain was detected in old APP23 mice in vivo. However, the high specific activity of [18F]flutemetamol did not provide a notable advantage in Tg2576 and APPswe-PS1dE9 mice compared to the previously evaluated structural analogue [11C]PIB. For its practical benefits, [18F]flutemetamol imaging with a suitable mouse model like APP23 is an attractive alternative. PMID:25977876

  2. Iodine 125 Imaging in Mice Using NaI(Tl)/Flat Panel PMT Integral Assembly

    NASA Astrophysics Data System (ADS)

    Cinti, M. N.; Majewski, S.; Williams, M. B.; Bachmann, C.; Cominelli, F.; Kundu, B. K.; Stolin, A.; Popov, V.; Welch, B. L.; De Vincentis, G.; Bennati, P.; Betti, M.; Ridolfi, S.; Pani, R.

    2007-06-01

    Radiolabeled agents that bind to specific receptors have shown great promise in diagnosing and characterizing tumor cell biology. In vivo imaging of gene transcription and protein expression represents another area of interest. The radioisotope 125I is commercially available as a label for molecular probes and is utilized by researchers in small animal studies. We propose an advanced imaging detector based on a planar NaI(Tl) integral assembly with a Hamamatsu Flat Panel Photomultiplier (MA-PMT), representing one of the best trade-offs between spatial resolution and detection efficiency. We characterized the imaging performance of this planar detector in comparison with a gamma camera based on a pixellated scintillator. We also tested the in vivo imaging capability by acquiring images of mice as part of a study of inflammatory bowel disease (IBD). In this study, four 25 g mice with an IBD-like phenotype (SAMP1/YitFc) were injected with 375, 125, 60 and 30 μCi of 125I-labelled antibody against mucosal vascular addressin cell adhesion molecule (MAdCAM-1), which is up-regulated in the presence of inflammation. Two mice without bowel inflammation were injected with 150 and 60 μCi of the labeled anti-MAdCAM-1 antibody as controls. To better evaluate the performance of the integral assembly detector, we also acquired mouse images with a dual-modality (X- and gamma-ray) camera dedicated to small animal imaging. The results from this new detector are encouraging: images of SAMP1/YitFc mice injected with 30 μCi of activity show inflammation throughout the intestinal tract, with the disease very well defined at two hours post-injection.

  3. High-speed optical 3D sensing and its applications

    NASA Astrophysics Data System (ADS)

    Watanabe, Yoshihiro

    2016-12-01

    This paper reviews high-speed optical 3D sensing technologies for obtaining the 3D shape of a target using a camera. The sensing speeds in focus range from 100 to 1,000 fps, exceeding normal camera frame rates, which are typically 30 fps. In particular, contactless, active, and real-time systems are introduced. Three example applications of this type of sensing technology are also introduced: surface reconstruction from time-sequential depth images, high-speed 3D user interaction, and high-speed digital archiving.

  4. (abstract) Realization of a Faster, Cheaper, Better Mission and Its New Paradigm Star Tracker, the Advanced Stellar Compass

    NASA Technical Reports Server (NTRS)

    Eisenman, Allan Read; Liebe, Carl Christian; Joergensen, John Lief; Jensen, Gunnar Bent

    1997-01-01

    The first Danish satellite, Ørsted, will be launched in August of 1997. The scientific objective of Ørsted is to perform a precision mapping of the Earth's magnetic field. Attitude data for the payload and the satellite are provided by the Advanced Stellar Compass (ASC) star tracker. The ASC consists of a CCD star camera and a capable microprocessor which operates by comparing the star image frames taken by the camera to its internal star catalogs.

  5. -V2 plane on the Hubble Space Telescope

    NASA Image and Video Library

    2002-03-03

    STS109-E-5104 (3 March 2002) --- The Hubble Space Telescope is seen in the cargo bay of the Space Shuttle Columbia. Each present set of solar array panels will be replaced during one of the space walks planned for the coming week. The crew aimed various cameras, including the digital still camera used for this frame, out the shuttle's aft flight deck windows to take a series of survey type photos, the first close-up images of the telescope since December of 1999.

  6. -V2 plane on the Hubble Space Telescope

    NASA Image and Video Library

    2002-03-03

    STS109-E-5102 (3 March 2002) --- The Hubble Space Telescope is seen in the cargo bay of the Space Shuttle Columbia. Each present set of solar array panels will be replaced during one of the space walks planned for the coming week. The crew aimed various cameras, including the digital still camera used for this frame, out the shuttle's aft flight deck windows to take a series of survey type photos, the first close-up images of the telescope since December of 1999.

  7. Optical Mapping of Membrane Potential and Epicardial Deformation in Beating Hearts.

    PubMed

    Zhang, Hanyu; Iijima, Kenichi; Huang, Jian; Walcott, Gregory P; Rogers, Jack M

    2016-07-26

    Cardiac optical mapping uses potentiometric fluorescent dyes to image membrane potential (Vm). An important limitation of conventional optical mapping is that contraction is usually arrested pharmacologically to prevent motion artifacts from obscuring Vm signals. However, these agents may alter electrophysiology, and by abolishing contraction, also prevent optical mapping from being used to study coupling between electrical and mechanical function. Here, we present a method to simultaneously map Vm and epicardial contraction in the beating heart. Isolated perfused swine hearts were stained with di-4-ANEPPS and fiducial markers were glued to the epicardium for motion tracking. The heart was imaged at 750 Hz with a video camera. Fluorescence was excited with cyan or blue LEDs on alternating camera frames, thus providing a 375-Hz effective sampling rate. Marker tracking enabled the pixel(s) imaging any epicardial site within the marked region to be identified in each camera frame. Cyan- and blue-elicited fluorescence have different sensitivities to Vm, but other signal features, primarily motion artifacts, are common. Thus, taking the ratio of fluorescence emitted by a motion-tracked epicardial site in adjacent frames removes artifacts, leaving Vm (excitation ratiometry). Reconstructed Vm signals were validated by comparison to monophasic action potentials and to conventional optical mapping signals. Binocular imaging with additional video cameras enabled marker motion to be tracked in three dimensions. From these data, epicardial deformation during the cardiac cycle was quantified by computing finite strain fields. We show that the method can simultaneously map Vm and strain in a left-sided working heart preparation and can image changes in both electrical and mechanical function 5 min after the induction of regional ischemia. By allowing high-resolution optical mapping in the absence of electromechanical uncoupling agents, the method relieves a long-standing limitation of optical mapping and has potential to enhance new studies in coupled cardiac electromechanics. Copyright © 2016 Biophysical Society. Published by Elsevier Inc. All rights reserved.
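
    The ratiometric step lends itself to a compact illustration. A minimal sketch, assuming cyan/blue excitation strictly alternates frame-by-frame as described (array names are ours, not the authors'):

    ```python
    # Fluorescence elicited on alternating frames by two excitation wavelengths
    # shares motion artifacts but differs in voltage sensitivity, so a
    # frame-pair ratio cancels the common (motion) term: excitation ratiometry.
    import numpy as np

    def ratiometric_vm(traces: np.ndarray) -> np.ndarray:
        """traces: (T, N) fluorescence of N tracked epicardial sites sampled at
        750 Hz with alternating excitation. Returns a 375 Hz ratio signal."""
        cyan = traces[0::2].astype(float)   # cyan-excited frames
        blue = traces[1::2].astype(float)   # blue-excited frames
        n = min(len(cyan), len(blue))
        # Common multiplicative artifacts (motion, illumination) divide out;
        # the residual ratio tracks membrane potential Vm.
        return cyan[:n] / blue[:n]
    ```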

  8. Hardware/Software Issues for Video Guidance Systems: The Coreco Frame Grabber

    NASA Technical Reports Server (NTRS)

    Bales, John W.

    1996-01-01

    The F64 frame grabber is a high-performance video image acquisition and processing board utilizing the TMS320C40 and TMS34020 processors. The hardware is designed for the 16-bit ISA bus and supports multiple digital or analog cameras. It has an acquisition rate of 40 million pixels per second, with a variable sampling frequency of 510 kHz to 40 MHz. The board has a 4 MB frame buffer memory expandable to 32 MB, and has simultaneous acquisition and processing capability. It supports both VGA and RGB displays, and accepts all analog and digital video input standards.

  9. Smart Camera Technology Increases Quality

    NASA Technical Reports Server (NTRS)

    2004-01-01

    When it comes to real-time image processing, everyone is an expert. People begin processing images at birth and rapidly learn to control their responses through the real-time processing of the human visual system. The human eye captures an enormous amount of information in the form of light images. In order to keep the brain from becoming overloaded with all the data, portions of an image are processed at a higher resolution than others, such as a traffic light changing colors. In the same manner, image processing products strive to extract the information stored in light in the most efficient way possible. Digital cameras available today capture millions of pixels worth of information from incident light. However, at frame rates of more than a few per second, existing digital interfaces are overwhelmed. All the user can do is store several frames to memory until that memory is full, after which subsequent information is lost. New technology pairs existing digital interface technology with an off-the-shelf complementary metal oxide semiconductor (CMOS) imager to provide more than 500 frames per second of specialty image processing. The result is a cost-effective detection system unlike any other.

  10. Data analysis for GOPEX image frames

    NASA Technical Reports Server (NTRS)

    Levine, B. M.; Shaik, K. S.; Yan, T.-Y.

    1993-01-01

    The data analysis based on the image frames received at the Solid State Imaging (SSI) camera of the Galileo Optical Experiment (GOPEX) demonstration conducted between 9-16 Dec. 1992 is described. A laser uplink was successfully established between the ground and the Galileo spacecraft during its second Earth-gravity-assist phase in December 1992. SSI camera frames were acquired which contained images of detected laser pulses transmitted from the Table Mountain Facility (TMF), Wrightwood, California, and the Starfire Optical Range (SOR), Albuquerque, New Mexico. Laser pulse data were processed using standard image-processing techniques at the Multimission Image Processing Laboratory (MIPL) for preliminary pulse identification and to produce public release images. Subsequent image analysis corrected for background noise to measure received pulse intensities. Data were plotted to obtain histograms on a daily basis and were then compared with theoretical results derived from applicable weak-turbulence and strong-turbulence considerations. Processing steps are described and the theories are compared with the experimental results. Quantitative agreement was found in both turbulence regimes, and better agreement would have been found had more laser pulses been received. Future experiments should consider methods to reliably measure low-intensity pulses and, through experimental planning, to locate pulse positions geometrically with greater certainty.

  11. Vehicle-triggered video compression/decompression for fast and efficient searching in large video databases

    NASA Astrophysics Data System (ADS)

    Bulan, Orhan; Bernal, Edgar A.; Loce, Robert P.; Wu, Wencheng

    2013-03-01

    Video cameras are widely deployed along city streets, interstate highways, traffic lights, stop signs and toll booths by entities that perform traffic monitoring and law enforcement. The videos captured by these cameras are typically compressed and stored in large databases. Performing a rapid search for a specific vehicle within a large database of compressed videos is often required and can be time-critical, even a matter of life or death. In this paper, we propose video compression and decompression algorithms that enable fast and efficient vehicle or, more generally, event searches in large video databases. The proposed algorithm selects reference frames (i.e., I-frames) based on a vehicle having been detected at a specified position within the scene being monitored while compressing a video sequence. A search for a specific vehicle in the compressed video stream is then performed across the reference frames only, which does not require decompression of the full video sequence as in traditional search algorithms. Our experimental results on videos captured on a local road show that the proposed algorithm significantly reduces the search space (thus reducing time and computational resources) in vehicle search tasks within compressed video streams, particularly those captured in light traffic conditions.
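
    The payoff of vehicle-triggered I-frame placement is that a search can decode key frames only. A sketch of that decode-side shortcut using PyAV (our illustration, under the assumption of a codec-compatible container; matches_query is a placeholder for any appearance-matching routine):

    ```python
    import av  # pip install av

    def search_iframes(path, matches_query):
        """Return presentation timestamps (s) of key frames matching the query."""
        hits = []
        with av.open(path) as container:
            stream = container.streams.video[0]
            stream.codec_context.skip_frame = "NONKEY"  # decode key (I) frames only
            for frame in container.decode(stream):
                if matches_query(frame.to_ndarray(format="rgb24")):
                    hits.append(frame.time)
        return hits
    ```

    Because only I-frames are decoded, search cost scales with the number of vehicle-triggered reference frames rather than the full frame count.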

  12. Full-scale high-speed "Edgerton" retroreflective shadowgraphy of gunshots

    NASA Astrophysics Data System (ADS)

    Settles, Gary

    2005-11-01

    Almost half a century ago, H. E. "Doc" Edgerton demonstrated a simple and elegant direct-shadowgraph technique for imaging large-scale events like explosions and gunshots. Only a retroreflective screen, flashlamp illumination, and an ordinary view camera were required. Retroreflective shadowgraphy has seen occasional use since then, but its unique combination of large scale, simplicity and portability has barely been tapped. It functions well in environments hostile to most optical diagnostics, such as full-scale outdoor daylight ballistics and explosives testing. Here, shadowgrams cast upon a 2.4 m square retroreflective screen are imaged by a Photron Fastcam APX-RS digital camera that is capable of megapixel image resolution at 3,000 frames/sec, and up to 250,000 frames/sec at lower resolution. Microsecond frame exposures are used to examine the external ballistics of several firearms, including a high-powered rifle, an AK-47 rifle, and several pistols and revolvers. Muzzle blast phenomena and the mechanism of gunpowder residue deposition on the shooter's hands are clearly visualized. In particular, observing the firing of a pistol with and without a silencer (suppressor) suggests that some of the muzzle blast energy is converted by the silencer into supersonic jet noise.

  13. Exogenic olivine on Vesta from Dawn Framing Camera color data

    NASA Astrophysics Data System (ADS)

    Nathues, Andreas; Hoffmann, Martin; Schäfer, Michael; Thangjam, Guneshwar; Le Corre, Lucille; Reddy, Vishnu; Christensen, Ulrich; Mengel, Kurt; Sierks, Holger; Vincent, Jean-Baptist; Cloutis, Edward A.; Russell, Christopher T.; Schäfer, Tanja; Gutierrez-Marques, Pablo; Hall, Ian; Ripken, Joachim; Büttner, Irene

    2015-09-01

    In this paper we present the results of a global survey of olivine-rich lithologies on (4) Vesta. We investigated Dawn Framing Camera (FC) High Altitude Mapping Orbit (HAMO) color cubes (∼60 m/pixel resolution) by using a method described in Thangjam et al. (Thangjam, G., Nathues, A., Mengel, K., Hoffmann, M., Schäfer, M., Reddy, V., Cloutis, E.A., Christensen, U., Sierks, H., Le Corre, L., Vincent, J.-B., Russell, C.T. [2014b]. Meteorit. Planet. Sci. arXiv:1408.4687 [astro-ph.EP]). In total we identified 15 impact craters exhibiting olivine-rich (>40 wt.% ol) outcrops on their inner walls, some showing olivine-rich material also in their ejecta and floors. Olivine-rich sites are concentrated in the Bellicia, Arruntia and Pomponia region on Vesta's northern hemisphere. From our multi-color and stratigraphic analysis, we conclude that most, if not all, of the olivine-rich material identified is of exogenic origin, i.e. remnants of A- and/or S-type projectiles. The olivine-rich lithologies in the north are possibly ejecta of the ∼90 km diameter Albana crater. We cannot draw a final conclusion on their relative stratigraphic succession, but it seems that the dark material (Nathues, A., Hoffmann, M., Cloutis, E.A., Schäfer, M., Reddy, V., Christensen, U., Sierks, H., Thangjam, G.S., Le Corre, L., Mengel, K., Vincent, J.-B., Russell, C.T., Prettyman, T., Schmedemann, N., Kneissl, T., Raymond, C., Gutierrez-Marques, P., Hall, I., Büttner, I. [2014b]. Icarus (239, 222-237)) and the olivine-rich lithologies are of a similar age. The origin of some potential olivine-rich sites in the Rheasilvia basin and at crater Portia is ambiguous, i.e. these are either of endogenic or exogenic origin. However, the small number and size of these sites led us to conclude that olivine-rich mantle material, containing more than 40 wt.% of olivine, is basically absent on the present surface of Vesta. In combination with recent impact models of Veneneia and Rheasilvia (Clenet, H., Jutzi, M., Barrat, J.-A., Gillet, Ph. [2014]. Lunar Planet Sci. 45, #1349; Jutzi, M., Asphaug, E., Gillet, P., Barrat, J.-A., Benz, W. [2013]. Nature 494, 207-210), which predict an excavation depth of up to 80 km, we are confident that the crust-mantle boundary is significantly deeper than predicted by most evolution models (30 km; Mittlefehldt, D.W. [2014]. Asteroid 4 Vesta: A Fully Differentiated Dwarf Planet. NASA Technical Reports Server (20140004857.pdf)) or, alternatively, the olivine content of the (upper) mantle is lower than our detection limit, which would lead to the conclusion that Vesta's parent material was already depleted in olivine compared to CI meteorites.

  14. Correction of projective distortion in long-image-sequence mosaics without prior information

    NASA Astrophysics Data System (ADS)

    Yang, Chenhui; Mao, Hongwei; Abousleman, Glen; Si, Jennie

    2010-04-01

    Image mosaicking is the process of piecing together multiple video frames or still images from a moving camera to form a wide-area or panoramic view of the scene being imaged. Mosaics have widespread applications in many areas such as security surveillance, remote sensing, geographical exploration, agricultural field surveillance, virtual reality, digital video, and medical image analysis, among others. When mosaicking a large number of still images or video frames, the quality of the resulting mosaic is compromised by projective distortion. That is, during the mosaicking process, the image frames that are transformed and pasted to the mosaic become significantly scaled down and appear out of proportion with respect to the mosaic. As more frames continue to be transformed, important target information in the frames can be lost since the transformed frames become too small, which eventually makes it impossible to continue further. Some projective distortion correction techniques make use of prior information such as GPS information embedded within the image, or camera internal and external parameters. Alternatively, this paper proposes a new algorithm to reduce the projective distortion without using any prior information whatsoever. Based on the analysis of the projective distortion, we approximate the projective matrix that describes the transformation between image frames using an affine model. Using singular value decomposition, we can deduce the affine model scaling factor, which is usually very close to 1. By resetting the image scale of the affine model to 1, the transformed image size remains unchanged. Even though the proposed correction introduces some error in the image matching, this error is typically acceptable and, more importantly, the final mosaic preserves the original image size after transformation. We demonstrate the effectiveness of this new correction algorithm on two real-world unmanned air vehicle (UAV) sequences. The proposed method is shown to be effective and suitable for real-time implementation.
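
    One plausible reading of the SVD step, as a sketch (our illustration, not the authors' code): take the affine 2×2 part of the inter-frame transform, estimate its overall scale from the singular values, and divide that scale out so the pasted frame keeps its size.

    ```python
    import numpy as np

    def normalize_scale(H: np.ndarray) -> np.ndarray:
        """H: 3x3 inter-frame transform. Returns H with its global scale reset to 1."""
        A = H[:2, :2]                    # affine (linear) part of the transform
        _, s, _ = np.linalg.svd(A)
        scale = np.sqrt(s[0] * s[1])     # geometric-mean scale factor, usually ~1
        Hn = H.copy()
        Hn[:2, :2] = A / scale           # remove global scaling, keep rotation/shear
        return Hn
    ```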

  15. 1920x1080 pixel color camera with progressive scan at 50 to 60 frames per second

    NASA Astrophysics Data System (ADS)

    Glenn, William E.; Marcinka, John W.

    1998-09-01

    For over a decade, the broadcast industry, the film industry and the computer industry have had a long-range objective to originate high definition images with progressive scan. This produces images with better vertical resolution and much fewer artifacts than interlaced scan. Computers almost universally use progressive scan. The broadcast industry has resisted switching from interlace to progressive because no cameras were available in that format with the 1920 × 1080 resolution that had obtained international acceptance for high definition program production. The camera described in this paper produces an output in that format derived from two 1920 × 1080 CCD sensors produced by Eastman Kodak.

  16. Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms

    PubMed Central

    Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg

    2013-01-01

    Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387

  17. KSC-05PD-0565

    NASA Technical Reports Server (NTRS)

    2005-01-01

    KENNEDY SPACE CENTER, FLA. In the Vehicle Assembly Building at NASA's Kennedy Space Center, a digital still camera has been mounted in the External Tank (ET) umbilical well on the aft end of Space Shuttle Discovery. The camera is being used to obtain and downlink high-resolution images of the disconnect point on the ET following ET separation from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  18. KSC-05PD-0562

    NASA Technical Reports Server (NTRS)

    2005-01-01

    KENNEDY SPACE CENTER, FLA. In the Vehicle Assembly Building at NASA's Kennedy Space Center, workers check the digital still camera they will mount in the External Tank (ET) umbilical well on the aft end of Space Shuttle Discovery. The camera is being used to obtain and downlink high-resolution images of the disconnect point on the ET following the tank's separation from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  19. KSC-05PD-0564

    NASA Technical Reports Server (NTRS)

    2005-01-01

    KENNEDY SPACE CENTER, FLA. In the Vehicle Assembly Building at NASA's Kennedy Space Center, a worker mounts a digital still camera in the External Tank (ET) umbilical well on the aft end of Space Shuttle Discovery. The camera is being used to obtain and downlink high-resolution images of the disconnect point on the ET following the ET separation from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  20. KSC-05PD-0561

    NASA Technical Reports Server (NTRS)

    2005-01-01

    KENNEDY SPACE CENTER, FLA. In the Vehicle Assembly Building at NASA's Kennedy Space Center, workers prepare a digital still camera they will mount in the External Tank (ET) umbilical well on the aft end of Space Shuttle Discovery. The camera is being used to obtain and downlink high-resolution images of the disconnect point on the ET following its separation from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  1. KSC-05PD-0563

    NASA Technical Reports Server (NTRS)

    2005-01-01

    KENNEDY SPACE CENTER, FLA. In the Vehicle Assembly Building at NASA's Kennedy Space Center, workers prepare a digital still camera they will mount in the External Tank (ET) umbilical well on the aft end of Space Shuttle Discovery. The camera is being used to obtain and downlink high-resolution images of the disconnect point on the ET following the ET separation from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  2. Preliminary analysis on faint luminous lightning events recorded by multiple high speed cameras

    NASA Astrophysics Data System (ADS)

    Alves, J.; Saraiva, A. V.; Pinto, O.; Campos, L. Z.; Antunes, L.; Luz, E. S.; Medeiros, C.; Buzato, T. S.

    2013-12-01

    The objective of this work is the study of faint luminous events produced by lightning flashes that were recorded simultaneously by multiple high-speed cameras during previous RAMMER (Automated Multi-camera Network for Monitoring and Study of Lightning) campaigns. The RAMMER network is composed of three fixed cameras and one mobile color camera separated by, on average, distances of 13 kilometers. They were located in the Paraiba Valley (in the cities of São José dos Campos and Caçapava), SP, Brazil, arranged in a quadrilateral shape centered on the São José dos Campos region. This configuration allowed RAMMER to view a thunderstorm from different angles, registering the same lightning flashes simultaneously with multiple cameras. Each RAMMER sensor is composed of a triggering system and a Phantom high-speed camera version 9.1, set to operate at a frame rate of 2,500 frames per second with a Nikkor lens (model AF-S DX 18-55 mm 1:3.5-5.6 G on the stationary sensors, and model AF-S ED 24 mm 1:1.4 on the mobile sensor). All videos were GPS (Global Positioning System) time-stamped. For this work we used a data set collected on four days of manual RAMMER operation during the 2012 and 2013 campaigns. On February 18th the data set comprises 15 flashes recorded by two cameras and 4 flashes recorded by three cameras. On February 19th a total of 5 flashes was registered by two cameras and 1 flash by three cameras. On February 22nd we obtained 4 flashes registered by two cameras. Finally, on March 6th two cameras recorded 2 flashes. The analysis in this study proposes an evaluation methodology for faint luminous lightning events, such as continuing current. Problems in the temporal measurement of the continuing current can introduce imprecision into the optical analysis, so this work aims to evaluate the effect of distance on this parameter with this preliminary data set. In the cases that include the color camera we analyzed the RGB channels (red, green, blue), compared them with the data provided by the black-and-white cameras for the same event, and examined the influence of these parameters on the luminosity intensity of the flashes. In two peculiar cases, the data obtained at one site showed a stroke, some continuing current during the interval between strokes, and then a subsequent stroke; however, the other site showed that the subsequent stroke was in fact an M-component, since the continuing current had not vanished after its parent stroke. These events would have received a dubious classification based only on visual analysis of high-speed video from a single site, and they are analyzed in this work.

  3. Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting

    NASA Astrophysics Data System (ADS)

    Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang

    In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser's Three way vanitas (2006). The tableau contains a carefully chosen, complex arrangement of objects, including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau, from a virtual camera behind each mirror and symmetric to the artist's viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.

  4. Clementine Observes the Moon, Solar Corona, and Venus

    NASA Image and Video Library

    1999-06-12

    In 1994, during its flight, NASA's Clementine spacecraft returned images of the Moon. In addition to the geologic mapping cameras, the Clementine spacecraft also carried two Star Tracker cameras for navigation. These lightweight (0.3 kg) cameras kept the spacecraft on track by constantly observing the positions of stars, reminiscent of the age-old seafaring tradition of sextant/star navigation. These navigation cameras were also used to take some spectacular wide-angle images of the Moon. In this picture the Moon is seen illuminated solely by light reflected from the Earth--Earthshine! The bright glow on the lunar horizon is caused by light from the solar corona; the Sun is just behind the lunar limb. Caught in this image is the planet Venus at the top of the frame. http://photojournal.jpl.nasa.gov/catalog/PIA00434

  5. High-resolution hyperspectral ground mapping for robotic vision

    NASA Astrophysics Data System (ADS)

    Neuhaus, Frank; Fuchs, Christian; Paulus, Dietrich

    2018-04-01

    Recently released hyperspectral cameras use large, mosaiced filter patterns to capture different ranges of the light's spectrum in each of the camera's pixels. Spectral information is sparse, as it is not fully available in each location. We propose an online method that avoids explicit demosaicing of camera images by fusing raw, unprocessed, hyperspectral camera frames inside an ego-centric ground surface map. It is represented as a multilayer heightmap data structure, whose geometry is estimated by combining a visual odometry system with either dense 3D reconstruction or 3D laser data. We use a publicly available dataset to show that our approach is capable of constructing an accurate hyperspectral representation of the surface surrounding the vehicle. We show that in many cases our approach increases spatial resolution over a demosaicing approach, while providing the same amount of spectral information.

  6. Vesta Surface Comes into View

    NASA Image and Video Library

    2011-06-13

    This image from the framing camera aboard NASA's Dawn spacecraft shows surface details beginning to resolve as the spacecraft closes in on the giant asteroid Vesta on June 1, 2011, from a distance of about 300,000 miles (483,000 kilometers).

  7. ISS seen during flyaround

    NASA Image and Video Library

    2001-02-16

    STS98-E-5310 (16 February 2001) --- Sporting an important new component in the Destiny laboratory (near center of frame), the International Space Station (ISS) is backdropped against the blackness of space following undocking. The photo was taken with a digital still camera.

  8. Side by Side Views of a Dark Hill

    NASA Image and Video Library

    2011-09-02

    NASA's Dawn spacecraft obtained these side-by-side views of a dark hill on the surface of asteroid Vesta with its framing camera on August 19, 2011. The images have a resolution of about 260 meters per pixel.

  9. Topography of Troughs on Vesta

    NASA Image and Video Library

    2011-08-23

    This view of the topography of asteroid Vesta's surface is composed of several images obtained with the clear filter in the framing camera on NASA's Dawn spacecraft on August 6, 2011. The image has a resolution of about 260 meters per pixel.

  10. Detection of inter-frame forgeries in digital videos.

    PubMed

    K, Sitara; Mehtre, B M

    2018-05-26

    Videos are acceptable as evidence in a court of law, provided their authenticity and integrity are scientifically validated. Videos recorded by surveillance systems are susceptible to malicious alteration of visual content by perpetrators, locally or remotely. Such malicious alterations of video content (called video forgeries) are categorized into inter-frame and intra-frame forgeries. In this paper, we propose inter-frame forgery detection techniques using tamper traces from the spatio-temporal and compressed domains. Pristine videos containing frames recorded during a sudden camera zooming event may be wrongly classified as tampered videos, leading to an increase in false positives. To address this issue, we propose a method for zooming detection and incorporate it into video tampering detection. Frame shuffling detection, which had not been explored so far, is also addressed in our work. Our method is capable of differentiating various inter-frame tamper events and localizing them in the temporal domain. The proposed system is tested on 23,586 videos, of which 2,346 are pristine and the rest are candidate inter-frame forged videos. Experimental results show that we have successfully detected frame shuffling with encouraging accuracy rates. We have achieved improved accuracy in forgery detection for frame insertion, frame deletion and frame duplication. Copyright © 2018. Published by Elsevier B.V.
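
    As a generic illustration of the spatio-temporal idea (not the authors' method): inter-frame forgeries such as insertion or deletion disturb the normally smooth similarity between consecutive frames, so outliers in a frame-to-frame correlation series flag candidate tamper points.

    ```python
    import numpy as np

    def tamper_candidates(frames: np.ndarray, k: float = 4.0) -> np.ndarray:
        """frames: (T, H, W) grayscale video. Returns indices i where the
        (i, i+1) frame pair is anomalously dissimilar."""
        X = frames.reshape(len(frames), -1).astype(float)
        sims = np.array([np.corrcoef(X[i], X[i + 1])[0, 1]
                         for i in range(len(X) - 1)])
        z = (sims - sims.mean()) / (sims.std() + 1e-12)  # standardize the series
        return np.where(z < -k)[0]  # unusually dissimilar adjacent pairs
    ```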

  11. Northeast View from Pathfinder Lander

    NASA Image and Video Library

    1997-11-04

    This panorama of the region to the northeast of the lander was constructed to support the Sojourner Rover Team's plans to conduct an "autonomous traverse" to explore the terrain away from the lander after science objectives in the lander vicinity had been met. The large, relatively bright surface in the foreground, about 10 meters (33 feet) from the spacecraft, in this scene is "Baker's Bench." The large, elongated rock left of center in the middle distance is "Zaphod." This view was produced by combining 8 individual "Superpan" scenes from the left and right eyes of the IMP camera. Each frame consists of 8 individual frames (left eye) and 7 frames (right eye) taken with different color filters that were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. http://photojournal.jpl.nasa.gov/catalog/PIA01000

  12. Temporal compressive imaging for video

    NASA Astrophysics Data System (ADS)

    Zhou, Qun; Zhang, Linxia; Ke, Jun

    2018-01-01

    In many situations, imagers are required to have higher imaging speed, for example in gunpowder blasting analysis and in observing high-speed biological phenomena. However, measuring high-speed video is a challenge for camera design, especially in the infrared spectrum. In this paper, we reconstruct a high-frame-rate video from compressive video measurements using temporal compressive imaging (TCI) with a temporal compression ratio T=8. This means that 8 unique high-speed temporal frames are obtained from a single compressive frame using a reconstruction algorithm; equivalently, the video frame rate is increased by 8 times. Two methods, the two-step iterative shrinkage/thresholding (TwIST) algorithm and the Gaussian mixture model (GMM) method, are used for reconstruction. To reduce reconstruction time and memory usage, each frame of size 256×256 is divided into patches of size 8×8. The influence of different coded masks on reconstruction is discussed. The reconstruction qualities using TwIST and GMM are also compared.
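
    The TCI forward model is compact enough to state directly. A minimal sketch, assuming binary per-pixel temporal masks (our illustration; the mask design in the paper may differ):

    ```python
    # Each compressive measurement is a per-frame-coded sum of T high-speed
    # frames; reconstruction (TwIST, GMM) inverts this model patch-by-patch.
    import numpy as np

    T = 8
    frames = np.random.rand(T, 256, 256)          # stand-in for the true high-speed video
    masks = np.random.rand(T, 256, 256) > 0.5     # binary temporal coding masks
    measurement = (masks * frames).sum(axis=0)    # one coded frame encodes T frames

    # Patch-based recovery then estimates the 8 frames from 8x8 patches of
    # `measurement`, trading a large global inverse problem for many small ones.
    ```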

  13. Earth Observations taken by Expedition 41 crewmember

    NASA Image and Video Library

    2014-09-17

    ISS041-E-016740 (17 Sept. 2014) --- One of the Expedition 41 crew members aboard the Earth-orbiting International Space Station exposed this Sept. 17 nocturnal scene featuring most of the largest cities on the central eastern seaboard. Even at 221 nautical miles above Earth, the 28mm focal length on the still camera was able to pick up detail in the image, for example, Central Park on Manhattan at right frame. The nation's capital is very near frame center.

  14. Large Area Field of View for Fast Temporal Resolution Astronomy

    NASA Astrophysics Data System (ADS)

    Covarrubias, Ricardo A.

    2018-01-01

    Scientific CMOS (sCMOS) technology is especially relevant for high temporal resolution astronomy, combining high resolution and a large field of view with very fast frame rates, without sacrificing ultra-low-noise performance. Solar astronomy, near-Earth object detection, space debris tracking, transient observations and wavefront sensing are among the many applications in which this technology can be utilized. Andor Technology is currently developing a next-generation, very large area sCMOS camera with extremely low noise, rapid frame rates, high resolution and wide dynamic range.

  15. A Fast MEANSHIFT Algorithm-Based Target Tracking System

    PubMed Central

    Sun, Jian

    2012-01-01

    Tracking moving targets in complex scenes using an active video camera is a challenging task. Tracking accuracy and efficiency are two key yet generally incompatible aspects of a Target Tracking System (TTS). A compromise scheme is studied in this paper. A fast mean-shift-based target tracking scheme is designed and realized, which is robust to partial occlusion and changes in object appearance. Physical simulation shows that the image signal processing speed exceeds 50 frames/s. PMID:22969397
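
    A minimal mean-shift tracking loop in the spirit of the abstract, using OpenCV's built-in meanShift on a back-projected color histogram (our illustration, not the authors' optimized pipeline; names and parameters are ours):

    ```python
    import cv2

    def track(frames, init_window, hist):
        """frames: iterable of BGR images; init_window: (x, y, w, h);
        hist: HSV hue histogram of the target, e.g. from cv2.calcHist."""
        window = init_window
        term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
        for frame in frames:
            hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
            # Likelihood image: how well each pixel matches the target histogram.
            backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
            # Shift the window toward the local density maximum of the likelihood.
            _, window = cv2.meanShift(backproj, window, term)
            yield window
    ```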

  16. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the measured camera pose using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 s for each pose estimate), which can be improved by implementation in C++. Error analysis produced a 3-mm distance error and a 2.5-degree orientation error on average. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of endoscope intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.

  17. Early forest fire detection using principal component analysis of infrared video

    NASA Astrophysics Data System (ADS)

    Saghri, John A.; Radjabi, Ryan; Jacobs, John T.

    2011-09-01

    A land-based early forest fire detection scheme which exploits the infrared (IR) temporal signature of a fire plume is described. Unlike common land-based and/or satellite-based techniques, which rely on measurement and discrimination of the fire plume directly from its infrared and/or visible reflectance imagery, this scheme is based on exploitation of the fire plume's temporal signature, i.e., temperature fluctuations over the observation period. The method is simple and relatively inexpensive to implement, and the false alarm rate is expected to be lower than that of existing methods. Land-based infrared (IR) cameras are installed in a step-stare-mode configuration in potential fire-prone areas. The sequence of IR video frames from each camera is digitally processed to determine if there is a fire within the camera's field of view (FOV). The process involves applying a principal component transformation (PCT) to each nonoverlapping sequence of video frames from the camera to produce a corresponding sequence of temporally-uncorrelated principal component (PC) images. Since the pixels that form a fire plume exhibit statistically similar temporal variation (i.e., have a unique temporal signature), PCT conveniently renders the footprint/trace of the fire plume in low-order PC images. The PC image which best reveals the trace of the fire plume is then selected and spatially filtered via simple threshold and median filter operations to remove background clutter, such as traces of tree branches moving in the wind.
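
    The PCT step is standard principal component analysis applied along the time axis. A minimal sketch under that reading (our illustration, not the authors' code):

    ```python
    # Treat each frame as a variable and each pixel's temporal profile as the
    # observations, decorrelate over time, and inspect the resulting PC
    # "images" for the fire plume's common temporal signature.
    import numpy as np

    def pct_images(video: np.ndarray) -> np.ndarray:
        """video: (T, H, W) IR frame sequence -> (T, H, W) PC images."""
        T, H, W = video.shape
        X = video.reshape(T, -1).astype(float)
        X -= X.mean(axis=1, keepdims=True)     # center each frame over pixels
        C = np.cov(X)                          # T x T temporal covariance
        _, vecs = np.linalg.eigh(C)
        pcs = vecs[:, ::-1].T @ X              # project, largest eigenvalue first
        return pcs.reshape(T, H, W)
    ```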

  18. Illumination-compensated non-contact imaging photoplethysmography via dual-mode temporally coded illumination

    NASA Astrophysics Data System (ADS)

    Amelard, Robert; Scharfenberger, Christian; Wong, Alexander; Clausi, David A.

    2015-03-01

    Non-contact camera-based imaging photoplethysmography (iPPG) is useful for measuring heart rate in conditions where contact devices are problematic due to issues such as mobility, comfort, and sanitation. Existing iPPG methods analyse the light-tissue interaction of either active or passive (ambient) illumination. Many active iPPG methods assume the incident ambient light is negligible relative to the active illumination, resulting in high power requirements, while many passive iPPG methods assume near-constant ambient conditions. These assumptions can only be satisfied in environments with controlled illumination and thus constrain the use of such devices. To increase the number of possible applications of iPPG devices, we propose a dual-mode active iPPG system that is robust to ambient illumination variations. Our system uses a temporally-coded illumination sequence that is synchronized with the camera to measure both active and ambient illumination interaction for determining heart rate. By subtracting the ambient contribution, the remaining illumination data can be attributed to the controlled illuminant. Our device comprises a camera and an LED illuminant controlled by a microcontroller. The microcontroller drives the temporal code, synchronizing the frame captures and illumination timing at the hardware level. By simulating changes in ambient light conditions, experimental results show our device is able to assess heart rate accurately in challenging lighting conditions. By varying the temporal code, we demonstrate the trade-off between camera frame rate and ambient light compensation for optimal blood pulse detection.
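
    A minimal sketch of the simplest such temporal code, an on/off alternation, illustrating how subtraction isolates the active contribution (our illustration; the actual device drives the code in hardware and may use a different sequence):

    ```python
    # With the LED synchronized to the camera so it is ON in even-indexed
    # frames and OFF in odd-indexed frames, subtracting adjacent frames
    # cancels the slowly varying ambient term, leaving the LED's
    # tissue interaction for pulse extraction.
    import numpy as np

    def active_component(frames: np.ndarray) -> np.ndarray:
        """frames: (T, H, W) sequence with LED on in frames 0, 2, 4, ..."""
        on = frames[0::2].astype(float)
        off = frames[1::2].astype(float)
        n = min(len(on), len(off))
        return on[:n] - off[:n]  # ambient cancels; remainder is the LED contribution
    ```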

  19. Staring at Saturn

    NASA Image and Video Library

    2016-09-15

    NASA's Cassini spacecraft stared at Saturn for nearly 44 hours on April 25 to 27, 2016, to obtain this movie showing just over four Saturn days. With Cassini's orbit being moved closer to the planet in preparation for the mission's 2017 finale, scientists took this final opportunity to capture a long movie in which the planet's full disk fit into a single wide-angle camera frame. Visible at top is the giant hexagon-shaped jet stream that surrounds the planet's north pole. Each side of this huge shape is slightly wider than Earth. The resolution of the 250 natural color wide-angle camera frames comprising this movie is 512x512 pixels, rather than the camera's full resolution of 1024x1024 pixels. Cassini's imaging cameras have the ability to take reduced-size images like these in order to decrease the amount of data storage space required for an observation. The spacecraft began acquiring this sequence of images just after it obtained the images to make a three-panel color mosaic. When it began taking images for this movie sequence, Cassini was 1,847,000 miles (2,973,000 kilometers) from Saturn, with an image scale of 221 miles (355 kilometers) per pixel. When it finished gathering the images, the spacecraft had moved 171,000 miles (275,000 kilometers) closer to the planet, with an image scale of 200 miles (322 kilometers) per pixel. A movie is available at http://photojournal.jpl.nasa.gov/catalog/PIA21047

  20. The electromagnetic interference of mobile phones on the function of a γ-camera.

    PubMed

    Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid

    2014-03-01

    The aim of the present study was to evaluate whether the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of Tc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera ranged from 0% to 100% per mobile phone. EMI was observed mainly in the first seconds of ringing and then diminished in the following frames. Mobile phones are portable sources of electromagnetic radiation, and their potential to interfere with the function of SPECT γ-cameras can adversely affect the quality of the acquired images.

  1. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll, pitch, and yaw axes of the video camera, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
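
    A minimal sketch of the spot-isolation and disparity steps described above, assuming 8-bit grayscale frames captured with the laser off and then on (names and the threshold are illustrative):

    ```python
    import numpy as np

    def laser_spot_disparity(frame_off, frame_on, ref_xy, thresh=30):
        """Difference the laser-off and laser-on frames to remove common
        background pixels, then measure the spot centroid's offset from a
        fixed reference point in the video frame."""
        diff = frame_on.astype(np.int32) - frame_off.astype(np.int32)
        ys, xs = np.nonzero(diff > thresh)
        if xs.size == 0:
            return None                                # no spot detected
        centroid = np.array([xs.mean(), ys.mean()])    # (x, y) in pixels
        return centroid - np.asarray(ref_xy, dtype=float)

    # Under a simple triangulation model (an assumption here, not the
    # patent's full analysis), range scales as
    # baseline * focal_length_px / disparity_px.
    ```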

  2. Use and validation of mirrorless digital single-lens reflex camera for recording of vitreoretinal surgeries in high definition

    PubMed Central

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133

  3. Use and validation of mirrorless digital single-lens reflex camera for recording of vitreoretinal surgeries in high definition.

    PubMed

    Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish

    2018-01-01

    The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching.

  4. Simulated Flyover of Mars Canyon Map Animation

    NASA Image and Video Library

    2014-12-12

    This frame from an animation simulates a flyover of a portion of a Martian canyon detailed in a geological map produced by the U.S. Geological Survey and based on observations by the HiRISE camera on NASA's Mars Reconnaissance Orbiter.

  5. Spirit Look Ahead After Sol 1866 Drive

    NASA Image and Video Library

    2009-07-16

    This scene combines three frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the 1,866th Martian day, or sol, of Spirit's mission on Mars (April 3, 2009). It spans 120 degrees, with south at the center.

  6. Spirit Look Ahead on Sol 1869

    NASA Image and Video Library

    2009-07-16

    This scene combines three frames taken by the navigation camera on NASA's Mars Exploration Rover Spirit during the 1,869th Martian day, or sol, of Spirit's mission on Mars (April 6, 2009). It spans 120 degrees, with south at the center.

  7. Juno Approach to the Earth-Moon System

    NASA Image and Video Library

    2013-12-10

    This frame from a movie was captured by a star tracker camera on NASA's Jupiter-bound Juno spacecraft. It was taken over several days as Juno approached Earth for a close flyby that would send the spacecraft onward to the giant planet.

  8. ARNICA, the NICMOS 3 imaging camera of TIRGO.

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Hunt, L.; Stanga, R.

    ARNICA (ARcetri Near Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 μm that Arcetri Observatory has designed and built as a general facility for the TIRGO telescope (1.5 m diameter, f/20) located at Gornergrat (Switzerland). The scale is 1″ per pixel, with sky coverage of more than 4 arcmin × 4 arcmin on the NICMOS 3 (256×256 pixels, 40 μm side) detector array. The camera is remotely controlled by a PC 486, connected to the array control electronics via a fiber-optics link. A C-language package, running under MS-DOS on the PC 486, acquires and stores the frames, and controls the timing of the array. The camera is intended for imaging of large extragalactic and Galactic fields; a large effort has been dedicated to exploring the possibility of achieving precise photometric measurements in the J, H, and K astronomical bands, with very promising results.

  9. a Prompt Methodology to Georeference Complex Hypogea Environments

    NASA Astrophysics Data System (ADS)

    Troisi, S.; Baiocchi, V.; Del Pizzo, S.; Giannone, F.

    2017-02-01

    Today, complex underground structures and facilities occupy a wide space beneath our cities, and most of them are unsurveyed; cable ducts and drainage systems are no exception. Furthermore, several inspection operations must be performed in critical air conditions that prevent, or at least complicate, a conventional survey. In this scenario, a prompt methodology to survey and georeference such facilities is often indispensable. A vision-based approach is proposed in this paper; the methodology provides a 3D model of the environment and the path followed by the camera using conventional photogrammetric/structure-from-motion software tools. The key role is played by the camera lens: a fisheye system was employed to obtain a very wide field of view (FOV) and therefore high overlap among the frames. The camera geometry corresponds to a forward motion along the camera axis. Consequently, to avoid instability in the bundle adjustment algorithm, a preliminary calibration of the camera was carried out. A specific case study is reported together with the accuracy achieved.

  10. Computational Spectroscopy of Polycyclic Aromatic Hydrocarbons In Support of Laboratory Astrophysics

    NASA Technical Reports Server (NTRS)

    Tan, Xiaofeng; Salama, Farid

    2006-01-01

    Polycyclic aromatic hydrocarbons (PAHs) are strong candidates for the molecular carriers of the unidentified infrared bands (UIR) and the diffuse interstellar bands (DIBs). In order to test the PAH hypothesis, we have systematically measured the vibronic spectra of a number of jet-cooled neutral and ionized PAHs in the near-ultraviolet (UV) to visible spectral range using cavity ring-down spectroscopy. To support this experimental effort, we have carried out theoretical studies of the spectra obtained in our measurements. Ab initio and (time-dependent) density-functional theory calculations are performed to obtain the geometries, energetics, vibrational frequencies, transition dipole moments, and normal coordinates of these PAH molecules. Franck-Condon (FC) calculations and/or vibronic calculations are then performed using the calculated normal coordinates and vibrational frequencies to simulate the vibronic spectra. It is found that vibronic interactions in these conjugated pi-electron systems are often strong enough to cause significant deviations from the Born-Oppenheimer (BO) approximation. For vibronic transitions that are well described by the BO approximation, the vibronic band profiles are simulated by calculating the rotational structure of the vibronic transitions. Vibronic oscillator strengths are calculated in the frame of the FC approximation from the electronic transition dipole moments and the FC factors. This computational effort, together with our experimental measurements, provides, for the first time, powerful tools for comparison with space-based data and, hence, a powerful approach to understanding the spectroscopy of interstellar PAH analogs and the nature of the UIR bands and DIBs.
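
    For the last step, a standard textbook form of this relation (consistent with the FC approximation described, though not necessarily the authors' exact working expression) is, in atomic units,

    $$ f_{0 \to v} \;=\; \tfrac{2}{3}\, \Delta E_{0 \to v}\, \bigl|\boldsymbol{\mu}_e\bigr|^{2}\, q_v, \qquad q_v \;=\; \bigl|\langle \chi_{v} \mid \chi_{0} \rangle\bigr|^{2}, $$

    where $\boldsymbol{\mu}_e$ is the electronic transition dipole moment, $\Delta E_{0 \to v}$ the vibronic transition energy, and $q_v$ the Franck-Condon factor of the band.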

  11. Barnacle Bill in Super Resolution from Insurance Panorama

    NASA Technical Reports Server (NTRS)

    1998-01-01

    Barnacle Bill is a small rock immediately west-northwest of the Mars Pathfinder lander and was the first rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This image shows super resolution techniques applied to the first APXS target rock, which was never imaged with the rover's forward cameras. Super resolution was applied to help address questions about the texture of this rock and what it might tell us about its mode of origin.

    This view of Barnacle Bill was produced by combining the 'Insurance Pan' frames taken while the IMP camera was still in its stowed position on Sol 2. The composite color frames that make up this anaglyph were produced for both the right and left eyes of the IMP. The right-eye composite consists of 5 frames taken with different color filters; the left-eye composite consists of only 1 frame. The resultant image from each eye was enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. These panchromatic frames were then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars.

    The anaglyph view was produced by combining the left with the right eye color composite frames by assigning the left eye composite view to the red color plane and the right eye composite view to the green and blue color planes (cyan), to produce a stereo anaglyph mosaic. This mosaic can be viewed in 3-D on your computer monitor or in color print form by wearing red-blue 3-D glasses.
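
    The channel assignment described above is straightforward to reproduce; a minimal sketch, assuming two aligned (H, W, 3) uint8 color composites:

    ```python
    import numpy as np

    def make_anaglyph(left_rgb, right_rgb):
        """Red/cyan stereo anaglyph: the left-eye composite supplies the red
        plane, the right-eye composite the green and blue (cyan) planes."""
        out = np.empty_like(left_rgb)
        out[..., 0] = left_rgb[..., 0]     # red   <- left eye
        out[..., 1] = right_rgb[..., 1]    # green <- right eye
        out[..., 2] = right_rgb[..., 2]    # blue  <- right eye
        return out                         # view with red-blue 3-D glasses
    ```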

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals.

  12. Soft X-ray studies on MST: Measuring the effects of toroidicity on tearing mode phase and installation of a multi-energy camera

    NASA Astrophysics Data System (ADS)

    Vanmeter, Patrick; Reusch, Lisa; Franz, Paolo; Sarff, John; Goetz, John; Delgado-Aparicio, Louis; den Hartog, Daniel

    2017-10-01

    The soft X-ray tomography (SXT) system on MST uses four cameras in a double-filter configuration to measure the emitted brightness along forty distinct lines of sight. These measurements can then be inverted to determine the emissivity, which depends on physical properties such as temperature, density, and impurity content. The SXR emissivity should correspond to the structure of the magnetic field; however, there is a discrepancy between the phase of the emissivity inversions and magnetic field reconstructions when using the typical cylindrical approximation to interpret the signal from the toroidal magnetics array. This discrepancy was measured for two distinct plasma conditions using all four SXT cameras, with results supporting the interpretation that it emerges from physical effects of the toroidal geometry. In addition, a new soft X-ray measurement system based on the PILATUS3 photon counting detector will be installed on MST. Emitted photons are counted by an array of pixels with individually adjustable energy cutoffs, giving the device more spectral information than the double-filter system. This material is based upon work supported by the U.S. Department of Energy, Office of Science, Office of Fusion Energy Sciences program under Award Numbers DE-FC02-05ER54814 and DE-SC0015474.

  13. A new approach to the form and position error measurement of the auto frame surface based on laser

    NASA Astrophysics Data System (ADS)

    Wang, Hua; Li, Wei

    2013-03-01

    The auto frame is a very large workpiece, with length up to 12 meters and width up to 2 meters, so measuring it by independent manual operation is inconvenient and not automated. In this paper we propose a new approach for reconstructing the 3D model of such a large workpiece, especially the auto truck frame, based on multiple pulsed lasers, for the purpose of measuring its form and position errors. Each area of concern requires only one high-speed camera and two lasers. The approach is fast, high-precision, and economical.

  14. Real-time image sequence segmentation using curve evolution

    NASA Astrophysics Data System (ADS)

    Zhang, Jun; Liu, Weisong

    2001-04-01

    In this paper, we describe a novel approach to image sequence segmentation and its real-time implementation. This approach uses the 3D structure tensor to produce a more robust frame-difference signal and uses curve evolution to extract whole objects. Our algorithm is implemented on a standard PC running the Windows operating system, with video capture from a USB camera that is a standard Windows video capture device. Using the standard Windows video I/O functionality, our segmentation software is highly portable and easy to maintain and upgrade. In its current implementation on a 400 MHz Pentium, the system performs segmentation at 5 frames/sec with a frame resolution of 160 by 120.
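
    A minimal sketch of the 3D structure tensor used as a robust frame-difference signal, assuming a grayscale (T, H, W) stack; the curve-evolution stage is omitted and the names are illustrative:

    ```python
    import numpy as np
    from scipy import ndimage

    def structure_tensor_3d(video, sigma=1.5):
        """Six distinct components of the smoothed 3D structure tensor
        J = G_sigma * (grad I . grad I^T) for a (T, H, W) grayscale stack."""
        It, Iy, Ix = np.gradient(video.astype(np.float64))
        pairs = {"tt": It * It, "yy": Iy * Iy, "xx": Ix * Ix,
                 "ty": It * Iy, "tx": It * Ix, "yx": Iy * Ix}
        return {k: ndimage.gaussian_filter(v, sigma) for k, v in pairs.items()}

    def motion_measure(J, eps=1e-9):
        """Temporal gradient energy relative to the tensor trace; large values
        flag moving pixels more robustly than a raw frame difference."""
        return J["tt"] / (J["tt"] + J["yy"] + J["xx"] + eps)
    ```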

  15. Comprehensive Evaluation of Stand-Off Biometrics Techniques for Enhanced Surveillance during Major Events

    DTIC Science & Technology

    2011-02-01

    transactions. Analysts used frame counts to measure the duration for which the Test Subject interacted with the iris recognition system camera, from the ... Figure 20: Frame Extracted from HD CCTV Video ... the eyes are located and used as a frame of reference. Once the eyes are located, the face image can be rotated clockwise or counter-clockwise to ...

  16. Algorithm design for a gun simulator based on image processing

    NASA Astrophysics Data System (ADS)

    Liu, Yu; Wei, Ping; Ke, Jun

    2015-08-01

    In this paper, an algorithm is designed for shooting games under strong background light. Six LEDs are uniformly distributed on the edge of a game machine screen, located at the four corners and in the middle of the top and bottom edges. Three LEDs are lit in the odd frames, and the other three are lit in the even frames. The simulator is furnished with one camera, which is used to obtain an image of the LEDs by applying an inter-frame difference between the even and odd frames. In the resulting images, the six LEDs appear as six bright spots. To obtain the LEDs' coordinates rapidly, we propose a method based on the area of the bright spots. After calibrating the camera based on a pinhole model, four equations can be found using the relationship between the image coordinate system and the world coordinate system under perspective transformation. The center point of the image of the LEDs is taken to be the virtual shooting point. The perspective transformation matrix is applied to the coordinates of the center point to obtain the virtual shooting point's coordinates in the world coordinate system. When a game player shoots at a target about two meters away, the coordinate error calculated with the method discussed in this paper is less than 10 mm. We obtain 65 coordinate results per second, which meets the requirement of a real-time system, showing that the algorithm is reliable and effective.
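
    The detection stage can be sketched as follows, assuming consecutive odd/even grayscale frames as NumPy arrays; the threshold and blob-area test are illustrative values, not the paper's:

    ```python
    import numpy as np
    from scipy import ndimage

    def led_centroids(odd_frame, even_frame, thresh=40, min_area=5):
        """Inter-frame difference isolates the blinking LEDs; each bright
        blob's area-filtered centroid gives one LED image coordinate."""
        diff = np.abs(odd_frame.astype(np.int32) - even_frame.astype(np.int32))
        labels, n = ndimage.label(diff > thresh)
        centroids = []
        for i in range(1, n + 1):
            ys, xs = np.nonzero(labels == i)
            if xs.size >= min_area:          # area test rejects noise specks
                centroids.append((xs.mean(), ys.mean()))
        return centroids
    ```

    The detected centroids then feed the pinhole-model calibration, and the resulting perspective transformation maps the image center point into world coordinates.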

  17. Optical Design of the Camera for Transiting Exoplanet Survey Satellite (TESS)

    NASA Technical Reports Server (NTRS)

    Chrisp, Michael; Clark, Kristin; Primeau, Brian; Dalpiaz, Michael; Lennon, Joseph

    2015-01-01

    The optical design of the wide-field-of-view refractive camera (34-degree diagonal field) for the TESS payload is described. This fast f/1.4 cryogenic camera, operating at -75 °C, has no vignetting, for maximum light gathering within the size and weight constraints. Four of these cameras capture full frames of star images for photometric searches of planet crossings. The optical design evolution, from the initial Petzval design, took advantage of Forbes aspheres to develop a hybrid design form. This maximized the correction from the two aspherics, reducing the average spot size by sixty percent in the final design. An external long-wavelength-pass filter was replaced by an internal filter coating on a lens to save weight, and the coating has been fabricated to meet the specifications. The stray light requirements were met by an extended lens hood baffle design, giving the necessary off-axis attenuation.

  18. Cranz-Schardin camera with a large working distance for the observation of small scale high-speed flows.

    PubMed

    Skupsch, C; Chaves, H; Brücker, C

    2011-08-01

    The Cranz-Schardin camera utilizes a Q-switched Nd:YAG laser and four single-CCD cameras. The laser provides light pulses with energies in the range of 25 mJ and durations of about 5 ns. The laser light is converted to incoherent light by Rhodamine-B fluorescence dye in a cuvette; the laser beam's coherence is intentionally broken in order to avoid speckle. Four light fibers collect the fluorescence light and are used for illumination. Different light fiber lengths enable a delay of illumination between consecutive images. The chosen interframe time is 25 ns, corresponding to 40 × 10^6 frames per second. As an example, the camera is applied to observe the bow shock in front of a water jet propagating in air at supersonic speed. The initial phase of the formation of the jet structure is recorded.
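
    The fiber-delay scheme is easy to quantify. Assuming a silica-fiber group index of n ≈ 1.46 (an assumption; the abstract does not state it), the 25 ns interframe time corresponds to a length difference between consecutive fiber channels of about

    $$ \Delta L \;=\; \frac{c\,\Delta t}{n} \;\approx\; \frac{(3\times 10^{8}\ \mathrm{m/s})(25\times 10^{-9}\ \mathrm{s})}{1.46} \;\approx\; 5.1\ \mathrm{m}. $$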

  19. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model helps the student understand the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors, their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of lighting, optics, and sensor characteristics are considered.

  20. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  1. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV, and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC), and Institut d'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout, and an internally mounted cold block. At the flight CCD temperature of -20 °C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultraviolet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured to perform several tests intended to verify the QE, gain, read noise, and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV, and X-ray science cameras at MSFC.

  2. How many pixels does it take to make a good 4"×6" print? Pixel count wars revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2011-01-01

    In the early 1980's the future of conventional silver-halide photographic systems was of great concern due to the potential introduction of electronic imaging systems, then typified by the Sony Mavica analog electronic camera. The focus was on the quality of film-based systems as expressed in the equivalent number of pixels and bits per pixel, and on how many pixels would be required to create an equivalent-quality image from a digital camera. It was found that 35-mm frames, for ISO 100 color negative film, contained equivalent pixels of 12 microns for a total of 18 million pixels per frame (6 million pixels per layer) with about 6 bits of information per pixel; the introduction of new emulsion technology, tabular AgX grains, increased the value to 8 bits per pixel. Higher ISO speed films had larger equivalent pixels and fewer pixels per frame, but retained the 8 bits per pixel. Further work found that a high-quality 3.5" x 5.25" print could be obtained from a three-layer system containing 1300 x 1950 pixels per layer, or about 7.6 million pixels in all. In short, it became clear that once a digital camera contained about 6 million pixels (in a single layer using a color filter array and appropriate image processing), digital systems would challenge and replace conventional film-based systems for the consumer market. By 2005 this became the reality. Since 2005 there has been a "pixel war" raging amongst digital camera makers. The question arises of just how many pixels are required, and whether all pixels are equal. This paper will provide a practical look at how many pixels are needed for a good print based on the form factor of the sensor (sensor size) and the effective optical modulation transfer function (optical spread function) of the camera lens. Is it better to have 16 million 5.7-micron pixels or 6 million 7.8-micron pixels? How do intrinsic (no electronic boost) ISO speed and exposure latitude vary with pixel size? A systematic review of these issues is provided within the context of image quality and ISO speed models developed over the last 15 years.
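
    As a worked illustration of the title question, scaling the paper's own pixel density to the larger print (this figure is not quoted in the abstract): the 1300 × 1950 pixels per layer found sufficient for a 3.5" × 5.25" print correspond to

    $$ \frac{1300\ \text{px}}{3.5\ \text{in}} \approx 371\ \text{px/in}, \qquad (4 \times 371)\,(6 \times 371) \approx 1484 \times 2226 \approx 3.3\ \text{MP}, $$

    so a good 4" × 6" print needs roughly 3.3 million pixels per color layer at the same print density.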

  3. Six-degrees-of-freedom sensing based on pictures taken by single camera.

    PubMed

    Zhongke, Li; Yong, Wang; Yongyuan, Qin; Peijun, Lu

    2005-02-01

    Two six-degrees-of-freedom sensing methods are presented. In the first method, three laser beams are employed to set up a Cartesian frame on a rigid body, and a screen is adopted to form diffuse spots. In the second method, two superimposed grid screens and two laser beams are used. A CCD camera is used to take photographs in both methods. Both approaches provide a simple, error-free method for continuously recording the positions and attitudes of a rigid body in motion.

  4. STS-116 MS Fuglesang uses digital camera on the STBD side of the S0 Truss during EVA 4

    NASA Image and Video Library

    2006-12-19

    S116-E-06882 (18 Dec. 2006) --- European Space Agency (ESA) astronaut Christer Fuglesang, STS-116 mission specialist, uses a digital still camera during the mission's fourth session of extravehicular activity (EVA) while Space Shuttle Discovery was docked with the International Space Station. Astronaut Robert L. Curbeam Jr. (out of frame), mission specialist, worked in tandem with Fuglesang, using specially-prepared, tape-insulated tools, to guide the array wing neatly inside its blanket box during the 6-hour, 38-minute spacewalk.

  5. Computerized lateral-shear interferometer

    NASA Astrophysics Data System (ADS)

    Hasegan, Sorin A.; Jianu, Angela; Vlad, Valentin I.

    1998-07-01

    A lateral-shear interferometer, coupled with a computer for laser wavefront analysis, is described. A CCD camera is used to transfer the fringe images through a frame-grabber into a PC. 3D phase maps are obtained by fringe pattern processing using a new algorithm for direct spatial reconstruction of the optical phase. The program describes the phase maps by Zernike polynomials, yielding an analytical description of the wavefront aberration. A compact lateral-shear interferometer has been built using a laser diode as the light source, a CCD camera, and a rechargeable battery supply, which allows measurements in situ if necessary.
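
    The Zernike description amounts to a linear least-squares fit over the unit disk. A minimal sketch for an unwrapped (H, W) phase map, using a small Noll-normalized basis; this is illustrative, not the paper's algorithm:

    ```python
    import numpy as np

    def zernike_basis(rho, theta):
        """First six Zernike polynomials on the unit disk (Noll normalization):
        piston, tip, tilt, defocus, and the two astigmatism terms."""
        return np.stack([
            np.ones_like(rho),                       # piston
            2 * rho * np.cos(theta),                 # tip
            2 * rho * np.sin(theta),                 # tilt
            np.sqrt(3) * (2 * rho**2 - 1),           # defocus
            np.sqrt(6) * rho**2 * np.sin(2 * theta), # oblique astigmatism
            np.sqrt(6) * rho**2 * np.cos(2 * theta), # vertical astigmatism
        ], axis=-1)

    def fit_zernike(phase):
        """Least-squares Zernike coefficients for an unwrapped phase map."""
        H, W = phase.shape
        y, x = np.mgrid[-1:1:H * 1j, -1:1:W * 1j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        inside = rho <= 1.0
        A = zernike_basis(rho[inside], theta[inside])
        coeffs, *_ = np.linalg.lstsq(A, phase[inside], rcond=None)
        return coeffs
    ```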

  6. Views of the starboard P6 Truss solar array during STS-97

    NASA Image and Video Library

    2000-12-05

    STS097-702-070 (3 December 2000) --- An astronaut inside Endeavour's crew cabin used a handheld 70mm camera to expose this frame of the International Space Station's starboard solar array wing panel, backdropped against an Earth horizon scene.

  7. 4. VAL PARTIAL ELEVATION SHOWING LAUNCHER BRIDGE ON SUPPORTS, LAUNCHER ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    4. VAL PARTIAL ELEVATION SHOWING LAUNCHER BRIDGE ON SUPPORTS, LAUNCHER SLAB, SUPPORT CARRIAGE, CONCRETE 'A' FRAME STRUCTURE AND CAMERA TOWER LOOKING SOUTHEAST. - Variable Angle Launcher Complex, Variable Angle Launcher, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  8. Optical frequency comb profilometry using a single-pixel camera composed of digital micromirror devices.

    PubMed

    Pham, Quang Duc; Hayasaki, Yoshio

    2015-01-01

    We demonstrate an optical frequency comb profilometer with a single-pixel camera to measure the position and profile of an object's surface over a range that extends far beyond the optical wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform the profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was therefore increased to 0.87×10^5. It was found from the experiments and computer simulations that the improvement derived from the higher modulation contrast of the digital micromirror devices. The frame rate was also increased, to 20 Hz.
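
    The quoted dynamic range follows directly from the numbers in the abstract: the 1 GHz modulation sets a synthetic wavelength

    $$ \Lambda = \frac{c}{f} = \frac{3\times 10^{8}\ \mathrm{m/s}}{10^{9}\ \mathrm{Hz}} = 0.30\ \mathrm{m}, \qquad \frac{\Lambda}{\delta z} = \frac{0.30\ \mathrm{m}}{3.4\ \mu\mathrm{m}} \approx 0.88\times 10^{5}, $$

    consistent with the reported 0.87×10^5.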

  9. KSC-04PD-1812

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  10. KSC-04PD-1813

    NASA Technical Reports Server (NTRS)

    2004-01-01

    KENNEDY SPACE CENTER, FLA. In the Orbiter Processing Facility, an External Tank (ET) digital still camera is positioned into the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis to determine if it fits properly. NASA is pursuing use of the camera, beginning with the Shuttle's Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  11. KSC-04pd1813

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, an External Tank (ET) digital still camera is positioned into the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis to determine if it fits properly. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  12. KSC-04pd1812

    NASA Image and Video Library

    2004-09-17

    KENNEDY SPACE CENTER, FLA. - In the Orbiter Processing Facility, United Space Alliance worker Craig Meyer fits an External Tank (ET) digital still camera in the right-hand liquid oxygen umbilical well on Space Shuttle Atlantis. NASA is pursuing use of the camera, beginning with the Shuttle’s Return To Flight, to obtain and downlink high-resolution images of the ET following separation of the ET from the orbiter after launch. The Kodak camera will record 24 images, at one frame per 1.5 seconds, on a flash memory card. After orbital insertion, the crew will transfer the images from the memory card to a laptop computer. The files will then be downloaded through the Ku-band system to the Mission Control Center in Houston for analysis.

  13. Super Resolution Image of Yogi

    NASA Technical Reports Server (NTRS)

    1997-01-01

    Yogi is a meter-size rock about 5 meters northwest of the Mars Pathfinder lander and was the second rock visited by the Sojourner Rover's alpha proton X-ray spectrometer (APXS) instrument. This mosaic shows super resolution techniques applied to the second APXS target rock, which was poorly illuminated in the rover's forward camera view taken before the instrument was deployed. Super resolution was applied to help address questions about the texture of this rock and what it might tell us about its mode of origin.

    This mosaic of Yogi was produced by combining four 'Super Pan' frames taken with the IMP camera. This composite color mosaic consists of 7 frames from the right eye, taken with different color filters, which were enlarged by 500% and then co-added using Adobe Photoshop to produce, in effect, a super-resolution panchromatic frame that is sharper than an individual frame would be. This panchromatic frame was then colorized with the red, green, and blue filtered images from the same sequence. The color balance was adjusted to approximate the true color of Mars. Shadows were processed separately from the rest of the rock and combined with the rest of the scene to bring out details in the shadow of Yogi that would be too dark to view at the same time as the sunlit surfaces.

    Mars Pathfinder is the second in NASA's Discovery program of low-cost spacecraft with highly focused science goals. The Jet Propulsion Laboratory, Pasadena, CA, developed and manages the Mars Pathfinder mission for NASA's Office of Space Science, Washington, D.C. JPL is a division of the California Institute of Technology (Caltech).

  14. Monocular Stereo Measurement Using High-Speed Catadioptric Tracking

    PubMed Central

    Hu, Shaopeng; Matsumoto, Yuji; Takaki, Takeshi; Ishii, Idaku

    2017-01-01

    This paper presents a novel concept of real-time catadioptric stereo tracking using a single ultrafast mirror-drive pan-tilt active vision system that can simultaneously switch between hundreds of different views in a second. By accelerating video-shooting, computation, and actuation at the millisecond-granularity level for time-division multithreaded processing in ultrafast gaze control, the active vision system can function virtually as two or more tracking cameras with different views. It enables a single active vision system to act as virtual left and right pan-tilt cameras that can simultaneously shoot a pair of stereo images for the same object to be observed at arbitrary viewpoints by switching the direction of the mirrors of the active vision system frame by frame. We developed a monocular galvano-mirror-based stereo tracking system that can switch between 500 different views in a second, and it functions as a catadioptric active stereo with left and right pan-tilt tracking cameras that can virtually capture 8-bit color 512×512 images each operating at 250 fps to mechanically track a fast-moving object with a sufficient parallax for accurate 3D measurement. Several tracking experiments for moving objects in 3D space are described to demonstrate the performance of our monocular stereo tracking system. PMID:28792483
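
    For the 3D measurement step, the virtual left/right views reduce to ordinary stereo triangulation once rectified. A minimal sketch under a rectified pinhole-pair idealization (not the paper's full catadioptric geometry):

    ```python
    import numpy as np

    def triangulate_depth(x_left, x_right, baseline_m, focal_px):
        """Depth from horizontal disparity for a rectified stereo pair.
        x_left, x_right: target x-coordinates (px) in the two virtual views."""
        disparity = np.asarray(x_left, float) - np.asarray(x_right, float)
        return focal_px * baseline_m / disparity   # depth in meters

    # Example: f = 1200 px, baseline = 0.3 m, disparity = 24 px -> Z = 15 m.
    ```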

  15. Engine flow visualization using a copper vapor laser

    NASA Technical Reports Server (NTRS)

    Regan, Carolyn A.; Chun, Kue S.; Schock, Harold J., Jr.

    1987-01-01

    A flow visualization system has been developed to determine the air flow within the combustion chamber of a motored, axisymmetric engine. The engine has been equipped with a transparent quartz cylinder, allowing complete optical access to the chamber. A 40-Watt copper vapor laser is used as the light source. Its beam is focused down to a sheet approximately 1 mm thick. The light plane is passed through the combustion chamber, and illuminates oil particles which were entrained in the intake air. The light scattered off of the particles is recorded by a high speed rotating prism movie camera. A movie is then made showing the air flow within the combustion chamber for an entire four-stroke engine cycle. The system is synchronized so that a pulse generated by the camera triggers the laser's thyratron. The camera is run at 5,000 frames per second; the trigger drives one laser pulse per frame. This paper describes the optics used in the flow visualization system, the synchronization circuit, and presents results obtained from the movie. This is believed to be the first published study showing a planar observation of airflow in a four-stroke piston-cylinder assembly. These flow visualization results have been used to interpret flow velocity measurements previously obtained with a laser Doppler velocimetry system.

  16. Overview of diagnostic performance and results for the first operation phase in Wendelstein 7-X (invited).

    PubMed

    Krychowiak, M; Adnan, A; Alonso, A; Andreeva, T; Baldzuhn, J; Barbui, T; Beurskens, M; Biel, W; Biedermann, C; Blackwell, B D; Bosch, H S; Bozhenkov, S; Brakel, R; Bräuer, T; Brotas de Carvalho, B; Burhenn, R; Buttenschön, B; Cappa, A; Cseh, G; Czarnecka, A; Dinklage, A; Drews, P; Dzikowicka, A; Effenberg, F; Endler, M; Erckmann, V; Estrada, T; Ford, O; Fornal, T; Frerichs, H; Fuchert, G; Geiger, J; Grulke, O; Harris, J H; Hartfuß, H J; Hartmann, D; Hathiramani, D; Hirsch, M; Höfel, U; Jabłoński, S; Jakubowski, M W; Kaczmarczyk, J; Klinger, T; Klose, S; Knauer, J; Kocsis, G; König, R; Kornejew, P; Krämer-Flecken, A; Krawczyk, N; Kremeyer, T; Książek, I; Kubkowska, M; Langenberg, A; Laqua, H P; Laux, M; Lazerson, S; Liang, Y; Liu, S C; Lorenz, A; Marchuk, A O; Marsen, S; Moncada, V; Naujoks, D; Neilson, H; Neubauer, O; Neuner, U; Niemann, H; Oosterbeek, J W; Otte, M; Pablant, N; Pasch, E; Sunn Pedersen, T; Pisano, F; Rahbarnia, K; Ryć, L; Schmitz, O; Schmuck, S; Schneider, W; Schröder, T; Schuhmacher, H; Schweer, B; Standley, B; Stange, T; Stephey, L; Svensson, J; Szabolics, T; Szepesi, T; Thomsen, H; Travere, J-M; Trimino Mora, H; Tsuchiya, H; Weir, G M; Wenzel, U; Werner, A; Wiegel, B; Windisch, T; Wolf, R; Wurden, G A; Zhang, D; Zimbal, A; Zoletnik, S

    2016-11-01

    Wendelstein 7-X, a superconducting optimized stellarator built in Greifswald, Germany, started its first plasmas with the last closed flux surface (LCFS) defined by 5 uncooled graphite limiters in December 2015. At the end of the 10-week experimental campaign (OP1.1), more than 20 independent diagnostic systems were in operation, allowing detailed studies of many interesting plasma phenomena. For example, fast neutral gas manometers supported by video cameras (including one fast-frame camera with frame rates of tens of kHz), as well as visible cameras with different interference filters whose fields of view cover all ten half-modules of the stellarator, discovered a MARFE-like radiation zone on the inboard side of machine module 4. This structure is presumably triggered by an inadvertent plasma-wall interaction in module 4, resulting in a high impurity influx that terminates some discharges by radiation cooling. The main plasma parameters achieved in OP1.1 exceeded predicted values in discharges of lengths reaching 6 s. Although OP1.1 was characterized by short pulses, many of the diagnostics are already designed for quasi-steady-state operation with 30 min discharges heated by 10 MW of ECRH. An overview of diagnostic performance for OP1.1 is given, including some highlights from the physics campaigns.

  17. Real-time machine vision system using FPGA and soft-core processor

    NASA Astrophysics Data System (ADS)

    Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad

    2012-06-01

    This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling, and feature extraction modules were modeled at the Register Transfer (RT) level and synthesized for implementation on field-programmable gate arrays (FPGA). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at the rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization, and power consumption for the designed system. The FPGA-based machine vision system that we propose has a high frame rate, low latency, and much lower power consumption than commercially available smart camera solutions.

  18. Foreground extraction for moving RGBD cameras

    NASA Astrophysics Data System (ADS)

    Junejo, Imran N.; Ahmed, Naveed

    2017-02-01

    In this paper, we propose a simple method to perform foreground extraction for a moving RGBD camera. These cameras have now been available for quite some time, and their popularity is primarily due to their low cost and ease of availability. Although the field of foreground extraction or background subtraction has been explored by computer vision researchers for a long time, depth-based subtraction is relatively new and has not yet been extensively addressed. Most current methods make heavy use of geometric reconstruction, making the solutions quite restrictive. In this paper, we make novel use of RGB and RGBD data: from the RGB frame, we extract corner features (FAST) and then represent these features with the histogram of oriented gradients (HoG) descriptor. We train a non-linear SVM on these descriptors. During the test phase, we make use of the fact that the foreground object has a distinct depth ordering with respect to the rest of the scene. That is, we use the positively classified FAST features on the test frame to initiate a region growing that obtains an accurate segmentation of the foreground object from just the RGBD data. We demonstrate the proposed method on synthetic datasets, with encouraging quantitative and qualitative results.
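
    A minimal sketch of the detection-and-classification stage using OpenCV and a scikit-learn-style classifier; the window size, FAST threshold, and the pre-trained `svm` are assumptions for illustration, and the depth-based region growing is only indicated in a comment:

    ```python
    import cv2
    import numpy as np

    WIN = (64, 64)  # HOG window around each corner (assumed size)
    HOG = cv2.HOGDescriptor(WIN, (16, 16), (8, 8), (8, 8), 9)

    def foreground_seeds(gray, svm):
        """FAST corners -> HOG descriptors -> SVM; positives seed the
        depth-based region growing that extracts the foreground object."""
        fast = cv2.FastFeatureDetector_create(threshold=25)
        seeds = []
        for kp in fast.detect(gray, None):
            x, y = int(kp.pt[0]), int(kp.pt[1])
            patch = gray[y - WIN[1] // 2:y + WIN[1] // 2,
                         x - WIN[0] // 2:x + WIN[0] // 2]
            if patch.shape != (WIN[1], WIN[0]):
                continue                        # corner too close to the border
            desc = HOG.compute(patch).ravel()
            if svm.predict([desc])[0] == 1:     # classified as foreground
                seeds.append((x, y))
        return seeds  # grow regions in the depth map from these seed pixels
    ```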

  19. Temporally consistent segmentation of point clouds

    NASA Astrophysics Data System (ADS)

    Owens, Jason L.; Osteen, Philip R.; Daniilidis, Kostas

    2014-06-01

    We consider the problem of generating temporally consistent point cloud segmentations from streaming RGB-D data, where every incoming frame extends existing labels to new points or contributes new labels while maintaining the labels for pre-existing segments. Our approach generates an over-segmentation based on voxel cloud connectivity, where a modified k-means algorithm selects supervoxel seeds and associates similar neighboring voxels to form segments. Given the data stream from a potentially mobile sensor, we solve for the camera transformation between consecutive frames using a joint optimization over point correspondences and image appearance. The aligned point cloud may then be integrated into a consistent model coordinate frame. Previously labeled points are used to mask incoming points from the new frame, while new and previous boundary points extend the existing segmentation. We evaluate the algorithm on newly-generated RGB-D datasets.

  20. Geologic Mapping of Ejecta Deposits in Oppia Quadrangle, Asteroid (4) Vesta

    NASA Technical Reports Server (NTRS)

    Garry, W. Brent; Williams, David A.; Yingst, R. Aileen; Mest, Scott C.; Buczkowski, Debra L.; Tosi, Federico; Schafer, Michael; LeCorre, Lucille; Reddy, Vishnu; Jaumann, Ralf; et al.

    2014-01-01

    Oppia Quadrangle Av-10 (288-360 deg E, +/- 22 deg) is a junction of key geologic features that preserve a rough history of Asteroid (4) Vesta and serves as a case study in using geologic mapping to define a relative geologic timescale. Clear-filter images, stereo-derived topography, slope maps, and multispectral color-ratio images from the Framing Camera on NASA's Dawn spacecraft served as basemaps to create a geologic map and investigate the spatial and temporal relationships of the local stratigraphy. Geologic mapping reveals that the oldest map unit within Av-10 is the cratered highlands terrain, which possibly represents original crustal material on Vesta that was then excavated by one or more impacts to form the basin Feralia Planitia. The Saturnalia Fossae and Divalia Fossae ridge-and-trough terrains intersect the wall of Feralia Planitia, indicating that this impact basin is older than both the Veneneia and Rheasilvia impact structures and represents pre-Veneneian crustal material. Two of the youngest geologic features in Av-10 are Lepida (approximately 45 km diameter) and Oppia (approximately 40 km diameter), impact craters that formed on the northern and southern walls of Feralia Planitia and each cross-cut a trough terrain. The ejecta blanket of Oppia is mapped as 'dark mantle' material because it appears dark orange in the Framing Camera 'Clementine-type' color-ratio image and has a diffuse, gradational contact distributed to the south across the rim of Rheasilvia. Mapping of surface material that appears light orange in the Framing Camera 'Clementine-type' color-ratio image as 'light mantle' material supports previous interpretations of an impact ejecta origin. Some light mantle deposits are easily traced to nearby source craters, but other deposits may represent distal ejecta deposits (emplaced greater than 5 crater radii away) in a microgravity environment.
