Sample records for camera images fc

  1. Comparison of Fundus Autofluorescence Between Fundus Camera and Confocal Scanning Laser Ophthalmoscope–based Systems

    PubMed Central

    Park, Sung Pyo; Siringo, Frank S.; Pensec, Noelle; Hong, In Hwan; Sparrow, Janet; Barile, Gaetano; Tsang, Stephen H.; Chang, Stanley

    2015-01-01

    BACKGROUND AND OBJECTIVE To compare fundus autofluorescence (FAF) imaging via fundus camera (FC) and confocal scanning laser ophthalmoscope (cSLO). PATIENTS AND METHODS FAF images were obtained with a digital FC (530 to 580 nm excitation) and a cSLO (488 nm excitation). Two authors evaluated correlation of autofluorescence pattern, atrophic lesion size, and image quality between the two devices. RESULTS In 120 eyes, the autofluorescence pattern correlated in 86% of lesions. By lesion subtype, correlation rates were 100% in hemorrhage, 97% in geographic atrophy, 82% in flecks, 75% in drusen, 70% in exudates, 67% in pigment epithelial detachment, 50% in fibrous scars, and 33% in macular hole. The mean lesion size in geographic atrophy was 4.57 ± 2.3 mm² via cSLO and 3.81 ± 1.94 mm² via FC (P < .0001). Image quality favored cSLO in 71 eyes. CONCLUSION FAF images were highly correlated between the FC and cSLO. Nonetheless, the two devices differed: multiple image capture and confocal optics yielded higher image contrast with the cSLO, although acquisition and exposure times were longer. PMID:24221461

  2. Reliability and discriminatory power of methods for dental plaque quantification

    PubMed Central

    RAGGIO, Daniela Prócida; BRAGA, Mariana Minatel; RODRIGUES, Jonas Almeida; FREITAS, Patrícia Moreira; IMPARATO, José Carlos Pettorossi; MENDES, Fausto Medeiros

    2010-01-01

    Objective This in situ study evaluated the discriminatory power and reliability of methods of dental plaque quantification and the relationship between visual indices (VI) and fluorescence camera (FC) to detect plaque. Material and Methods Six volunteers used palatal appliances with six bovine enamel blocks presenting different stages of plaque accumulation. The presence of plaque with and without disclosing was assessed using VI. Images were obtained with the FC and a digital camera in both conditions, and the area covered by plaque was assessed. Examinations were done by two independent examiners. Data were analyzed by Kruskal-Wallis and Kappa tests to compare the different sample conditions and to assess inter-examiner reproducibility. Results Some methods presented adequate reproducibility. The Turesky index and the assessment of the area covered by disclosed plaque in the FC images presented the highest discriminatory powers. Conclusions The Turesky index and FC images of disclosed plaque provide good reliability and discriminatory power in quantifying dental plaque. PMID:20485931

  3. 4 Vesta in Color: High Resolution Mapping from Dawn Framing Camera Images

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Sierks, H.; Christensen, U.; Hoffmann, M.; Schroeder, S. E.; Vincent, J. B.; McSween, H. Y.; Denevi, B. W.

    2011-01-01

    Rotational surface variations on asteroid 4 Vesta have been known from ground-based and HST observations, and they have been interpreted as evidence of compositional diversity. NASA's Dawn mission entered orbit around Vesta on July 16, 2011 for a year-long global characterization. The framing cameras (FC) onboard the Dawn spacecraft will image the asteroid in one clear (broad) and seven narrow-band filters covering the wavelength range between 0.4-1.0 microns. We present color mapping results from the Dawn FC observations of Vesta obtained during Survey orbit (approx. 3000 km) and the High-Altitude Mapping Orbit (HAMO) (approx. 950 km). Our aim is to create global color maps of Vesta using multispectral FC images to identify the spatial extent of compositional units and link them with other available data sets to extract the basic mineralogy. While the VIR spectrometer onboard Dawn has higher spectral resolution (864 channels), allowing precise mineralogical assessment of Vesta's surface, the FC has three times higher spatial resolution in any given orbital phase. In an effort to extract maximum information from FC data we have developed algorithms using laboratory spectra of pyroxenes and HED meteorites to derive parameters associated with the 1-micron absorption band wing. These parameters will help map the global distribution of compositionally related units on Vesta's surface. Interpretation of these units will involve the integration of FC and VIR data.
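    As a rough illustration of the kind of band parameter mentioned in this record (not the Dawn team's actual algorithm), a simple per-pixel quantity sensitive to the 1-micron band wing can be computed from co-registered, photometrically corrected FC filter images; the filter wavelengths and the ratio form below are assumptions.

    ```python
    import numpy as np

    def band_tilt_parameter(r750, r920, r980):
        """Hypothetical illustration: per-pixel parameters sensitive to the
        1-micron pyroxene band wing, computed from three co-registered,
        photometrically corrected FC reflectance images at roughly 0.75,
        0.92 and 0.98 micron (filter choice and formulas are assumptions)."""
        r750 = np.asarray(r750, dtype=float)
        r920 = np.asarray(r920, dtype=float)
        r980 = np.asarray(r980, dtype=float)
        # "Band depth"-like ratio: 0.75-um shoulder over band interior
        depth_920 = 1.0 - r920 / r750
        # Slope of the long-wavelength band wing between 0.92 and 0.98 micron
        wing_slope = (r980 - r920) / (0.98 - 0.92)
        return depth_920, wing_slope
    ```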

  4. Reevaluating Surface Composition of Asteroid (4) Vesta by Comparing HED Spectral Data with Dawn Framing Camera (FC) Observations

    NASA Astrophysics Data System (ADS)

    Giebner, T.; Jaumann, R.; Schröder, S.

    2016-08-01

    This master's thesis project reevaluates previous findings on asteroid (4) Vesta's surface composition by using Dawn FC filter image ratios in a new way in order to identify HED (howardite, eucrite, diogenite) lithologies on the surface.

  5. Origin of Dark Material on VESTA from DAWN FC Data: Remnant Carbonaceous Chondrite Impactors

    NASA Technical Reports Server (NTRS)

    Reddy, V.; LeCorre, L.; Nathues, A.; Mittlefehldt, David W.; Cloutis, E. A.; OBrien, D. P.; Durda, D. D.; Bottke, W. F.; Buczkowski, D.; Scully, J. E. C.

    2012-01-01

    NASA's Dawn spacecraft entered orbit around asteroid (4) Vesta in July 2011 for a yearlong mapping orbit. The surface of Vesta, as imaged by the Dawn Framing Camera (FC), is unlike that of any asteroid we have visited so far with a spacecraft. Albedo and color variations on Vesta are the most diverse in the asteroid belt, with a majority of these linked to distinct compositional units on the asteroid's surface. FC discovered dark material on Vesta. These low-albedo surface features were first observed during the Rotational Characterization 3 phase at a resolution of approx. 487 m/pixel. Here we explore the composition and possible meteoritical analogs for the dark material on Vesta.

  6. CANDU in-reactor quantitative visual-based inspection techniques

    NASA Astrophysics Data System (ADS)

    Rochefort, P. A.

    2009-02-01

    This paper describes two separate visual-based inspection procedures used at CANDU nuclear power generating stations. The techniques are quantitative in nature and are delivered and operated in highly radioactive environments with restrictive access, in one case submerged. Visual-based inspections at stations are typically qualitative in nature. For example, a video system will be used to search for a missing component, inspect for a broken fixture, or locate areas of excessive corrosion in a pipe. In contrast, the methods described here are used to measure characteristic component dimensions that in one case ensure ongoing safe operation of the reactor and in the other support reactor refurbishment. CANDU reactors are Pressurized Heavy Water Reactors (PHWR). The reactor vessel is a horizontal cylindrical low-pressure calandria tank approximately 6 m in diameter and length, containing heavy water as a neutron moderator. Inside the calandria, 380 horizontal fuel channels (FC) are supported at each end by integral end-shields. Each FC holds 12 fuel bundles. The primary heat transport heavy water flows through the FC pressure tube, removing the heat from the fuel bundles and delivering it to the steam generator. The general design of the reactor governs both the type of measurements that are required and the methods to perform the measurements. The first inspection procedure is a method to remotely measure the gap between FC and other in-core horizontal components. The technique involves delivering vertically a module with a high-radiation-resistant camera and lighting into the core of a shutdown but fuelled reactor. The measurement is done using a line-of-sight technique between the components and requires compensation for image perspective and viewing elevation. The second inspection procedure measures flaws within the reactor's end-shield FC calandria tube rolled joint area. The FC calandria tube (the outer shell of the FC) is sealed by rolling its ends into the rolled joint area. During reactor refurbishment, the original FC calandria tubes are removed, potentially scratching the rolled joint area and thereby compromising the seal with the new FC calandria tube. The procedure involves delivering an inspection module having a radiation-resistant camera, standard lighting, and a structured lighting projector. The surface is inspected by rotating the module within the rolled joint area. If a flaw is detected, its depth and width are gauged from the profile variation of the structured lighting in a captured image. As well, the diameter profile of the area is measured from the analysis of a series of captured circumferential images of the structured lighting profiles on the surface.
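    The flaw-depth measurement described in this record follows standard structured-light triangulation. A minimal sketch, assuming a single projected line, a camera viewing along the surface normal, and a known projection angle (none of which are specified in the record), is:

    ```python
    import math

    def flaw_depth_from_profile_shift(shift_pixels, mm_per_pixel, projection_angle_deg):
        """Generic laser-line triangulation estimate (illustrative only; the
        actual inspection-module geometry and calibration are not described
        in the record). A surface depression displaces the projected line
        laterally in the image; with the camera viewing along the surface
        normal and the line projected at projection_angle_deg from that
        normal, depth ~= shift * mm_per_pixel / tan(projection_angle)."""
        shift_mm = shift_pixels * mm_per_pixel
        return shift_mm / math.tan(math.radians(projection_angle_deg))

    # Hypothetical numbers: a 6-pixel shift at 0.05 mm/pixel with a
    # 30-degree projection angle corresponds to a depth of ~0.52 mm.
    print(round(flaw_depth_from_profile_shift(6, 0.05, 30.0), 2))
    ```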

  7. Iodine 125 Imaging in Mice Using NaI(Tl)/Flat Panel PMT Integral Assembly

    NASA Astrophysics Data System (ADS)

    Cinti, M. N.; Majewski, S.; Williams, M. B.; Bachmann, C.; Cominelli, F.; Kundu, B. K.; Stolin, A.; Popov, V.; Welch, B. L.; De Vincentis, G.; Bennati, P.; Betti, M.; Ridolfi, S.; Pani, R.

    2007-06-01

    Radiolabeled agents that bind to specific receptors have shown great promise in diagnosing and characterizing tumor cell biology. In vivo imaging of gene transcription and protein expression represents another area of interest. The radioisotope 125I is commercially available as a label for molecular probes and is utilized by researchers in small animal studies. We propose an advanced imaging detector based on a planar NaI(Tl) integral assembly with a Hamamatsu Flat Panel Photomultiplier (MA-PMT), representing one of the best trade-offs between spatial resolution and detection efficiency. We characterized the imaging performance of this planar detector in comparison with a gamma camera based on a pixellated scintillator. We also tested the in vivo imaging capability by acquiring images of mice as part of a study of inflammatory bowel disease (IBD). In this study, four 25-g mice with an IBD-like phenotype (SAMP1/YitFc) were injected with 375, 125, 60 and 30 μCi of 125I-labelled antibody against mucosal vascular addressin cell adhesion molecule (MAdCAM-1), which is up-regulated in the presence of inflammation. Two mice without bowel inflammation were injected with 150 and 60 μCi of the labeled anti-MAdCAM-1 antibody as controls. To better evaluate the performance of the integral assembly detector, we also acquired mouse images with a dual-modality (X and gamma ray) camera dedicated to small animal imaging. The results from this new detector are notable: images of SAMP1/YitFc mice injected with 30 μCi activity show inflammation throughout the intestinal tract, with the disease very well defined at two hours post-injection.

  8. High-resolution Ceres Low Altitude Mapping Orbit Atlas derived from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Roatsch, Th.; Kersten, E.; Matz, K.-D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2017-06-01

    The Dawn spacecraft Framing Camera (FC) acquired over 31,300 clear filter images of Ceres with a resolution of about 35 m/pixel during the eleven cycles of the Low Altitude Mapping Orbit (LAMO) phase between December 16, 2015 and August 8, 2016. We ortho-rectified the images from the first four cycles and produced a global, high-resolution, uncontrolled photomosaic of Ceres. This global mosaic is the basis for a high-resolution Ceres atlas that consists of 62 tiles mapped at a scale of 1:250,000. The nomenclature used in this atlas was proposed by the Dawn team and was approved by the International Astronomical Union (IAU). The full atlas is available to the public through the Dawn Geographical Information System (GIS) web page [http://dawngis.dlr.de/atlas] and will become available through the NASA Planetary Data System (PDS) (http://pdssbn.astro.umd.edu/).

  9. Pool boiling of ethanol and FC-72 on open microchannel surfaces

    NASA Astrophysics Data System (ADS)

    Kaniowski, Robert; Pastuszko, Robert

    2018-06-01

    The paper presents experimental investigations into pool boiling heat transfer for open microchannel surfaces. Parallel microchannels fabricated by machining were about 0.3 mm wide, 0.2 to 0.5 mm deep, and spaced every 0.1 mm. The experiments were carried out for ethanol and FC-72 at atmospheric pressure. The image acquisition speed was 493 fps (at a resolution of 400 × 300 pixels with a Photonfocus PHOT MV-D1024-160-CL camera). Visualization investigations aimed to identify nucleation sites and flow patterns and to determine the bubble departure diameter and frequency at various superheats. The primary factors in the increase of the heat transfer coefficient with increasing heat flux were a growing number of active pores and an increased departure frequency. Heat transfer coefficients obtained in this study were noticeably higher than those from a smooth surface.

  10. Camera Trajectory from Wide Baseline Images

    NASA Astrophysics Data System (ADS)

    Havlena, M.; Torii, A.; Pajdla, T.

    2008-09-01

    Camera trajectory estimation, which is closely related to the structure from motion computation, is one of the fundamental tasks in computer vision. Reliable camera trajectory estimation plays an important role in 3D reconstruction, self localization, and object recognition. There are essential issues for a reliable camera trajectory estimation, for instance, choice of the camera and its geometric projection model, camera calibration, image feature detection and description, and robust 3D structure computation. Most approaches rely on classical perspective cameras because of the simplicity of their projection models and ease of their calibration. However, classical perspective cameras offer only a limited field of view, and thus occlusions and sharp camera turns may cause consecutive frames to look completely different when the baseline becomes longer. This makes image feature matching very difficult (or impossible), and camera trajectory estimation fails under such conditions. These problems can be avoided if omnidirectional cameras, e.g. a fish-eye lens convertor, are used. The hardware which we are using in practice is a combination of a Nikon FC-E9 mounted via a mechanical adaptor onto a Kyocera Finecam M410R digital camera. The Nikon FC-E9 is a megapixel omnidirectional add-on convertor with a 180° view angle which provides images of photographic quality. The Kyocera Finecam M410R delivers 2272×1704 images at 3 frames per second. The resulting combination yields a circular view of diameter 1600 pixels in the image. Since consecutive frames of the omnidirectional camera often share a common region in 3D space, image feature matching is often feasible. On the other hand, the calibration of these cameras is non-trivial and is crucial for the accuracy of the resulting 3D reconstruction. We calibrate omnidirectional cameras off-line using a state-of-the-art technique and Mičušík's two-parameter model, which links the radius r of an image point to the angle θ of its corresponding ray w.r.t. the optical axis as θ = ar / (1 + br²). After a successful calibration, we know the correspondence of the image points to the 3D optical rays in the coordinate system of the camera. The following steps aim at finding the transformation between the camera and the world coordinate systems, i.e. the pose of the camera in the 3D world, using 2D image matches. For computing 3D structure, we construct a set of tentative matches by detecting different affine covariant feature regions, including MSER, Harris Affine, and Hessian Affine, in the acquired images. These features are an alternative to the popular SIFT features and work comparably in our situation. Parameters of the detectors are chosen to limit the number of regions to 1-2 thousand per image. The detected regions are assigned local affine frames (LAF) and transformed into standard positions w.r.t. their LAFs. Discrete Cosine Descriptors are computed for each region in the standard position. Finally, mutual distances of all regions in one image and all regions in the other image are computed as the Euclidean distances of their descriptors, and tentative matches are constructed by selecting the mutually closest pairs. As opposed to methods using short baseline images, simpler image features which are not affine covariant cannot be used, because the viewpoint can change considerably between consecutive frames.
Furthermore, feature matching has to be performed on the whole frame because no assumptions on the proximity of the consecutive projections can be made for wide baseline images. This makes feature detection, description, and matching much more time-consuming than for short baseline images and limits the usage to low frame rate sequences when operating in real time. Robust 3D structure can be computed by RANSAC, which searches for the largest subset of the set of tentative matches which is, within a predefined threshold ε, consistent with an epipolar geometry. We use ordered sampling, as suggested in the literature, to draw 5-tuples from the list of tentative matches ordered ascendingly by the distance of their descriptors, which may help to reduce the number of samples in RANSAC. From each 5-tuple, relative orientation is computed by solving the 5-point minimal relative orientation problem for calibrated cameras. Often, there are several models supported by a large number of matches. Thus the chance that the correct model, even if it has the largest support, will be found by running a single RANSAC is small. Previous work suggested generating models by randomized sampling, as in RANSAC, but using soft (kernel) voting for a parameter instead of looking for the maximal support. The best model is then selected as the one with the parameter closest to the maximum in the accumulator space. In our case, we vote in a two-dimensional accumulator for the estimated camera motion direction. However, unlike that work, we do not cast votes directly from each sampled epipolar geometry but from the best epipolar geometries recovered by the ordered sampling of RANSAC. With our technique, we could go up to 98.5% contamination by mismatches with effort comparable to what simple RANSAC needs for 84% contamination. The relative camera orientation with the motion direction closest to the maximum in the voting space is finally selected. As already mentioned in the first paragraph, camera trajectory estimates have a wide range of uses. In earlier work we introduced a technique for measuring the size of the camera translation relative to the observed scene, which uses the dominant apical angle computed at the reconstructed scene points and is robust against mismatches. The experiments demonstrated that the measure can be used to improve the robustness of camera path computation and object recognition for methods which use a geometric constraint, e.g. the ground plane, such as in the detection of pedestrians. Using the camera trajectories, perspective cutouts with a stabilized horizon are constructed, and an arbitrary object recognition routine designed to work with images acquired by perspective cameras can be used without any further modifications.
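    The calibration model quoted in this record maps an image point to a 3D viewing ray in the camera frame. A minimal sketch of that mapping, with hypothetical calibration coefficients a and b obtained off-line, is:

    ```python
    import numpy as np

    def pixel_to_ray(u, v, cx, cy, a, b):
        """Map an image point (u, v) to a unit 3D ray in camera coordinates
        using the two-parameter radial model theta = a*r / (1 + b*r**2).
        (cx, cy) is the principal point; a and b come from off-line
        calibration (values here would be assumptions)."""
        du, dv = u - cx, v - cy
        r = np.hypot(du, dv)                  # radial distance from the principal point
        if r == 0.0:
            return np.array([0.0, 0.0, 1.0])  # point on the optical axis
        theta = a * r / (1.0 + b * r**2)      # angle of the ray w.r.t. the optical axis
        # The azimuth of the ray equals the azimuth of the image point.
        return np.array([np.sin(theta) * du / r,
                         np.sin(theta) * dv / r,
                         np.cos(theta)])
    ```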

  11. In vivo PET imaging of beta-amyloid deposition in mouse models of Alzheimer's disease with a high specific activity PET imaging agent [(18)F]flutemetamol.

    PubMed

    Snellman, Anniina; Rokka, Johanna; López-Picón, Francisco R; Eskola, Olli; Salmona, Mario; Forloni, Gianluigi; Scheinin, Mika; Solin, Olof; Rinne, Juha O; Haaparanta-Solin, Merja

    2014-01-01

    The purpose of the study was to evaluate the applicability of (18)F-labelled amyloid imaging positron emission tomography (PET) agent [(18)F]flutemetamol to detect changes in brain beta-amyloid (Aβ) deposition in vivo in APP23, Tg2576 and APPswe-PS1dE9 mouse models of Alzheimer's disease. We expected that the high specific activity of [(18)F]flutemetamol would make it an attractive small animal Aβ imaging agent. [(18)F]flutemetamol uptake in the mouse brain was evaluated in vivo at 9 to 22 months of age with an Inveon Multimodality PET/CT camera (Siemens Medical Solutions USA, Knoxville, TN, USA). Retention in the frontal cortex (FC) was evaluated by Logan distribution volume ratios (DVR) and FC/cerebellum (CB) ratios during the late washout phase (50 to 60 min). [(18)F]flutemetamol binding to Aβ was also evaluated in brain slices by in vitro and ex vivo autoradiography. The amount of Aβ in the brain slices was determined with Thioflavin S and anti-Aβ1-40 immunohistochemistry. In APP23 mice, [(18)F]flutemetamol retention in the FC increased from 9 to 18 months. In younger mice, DVR and FC/CB50-60 were 0.88 (0.81) and 0.88 (0.89) at 9 months (N = 2), and 0.98 (0.93) at 12 months (N = 1), respectively. In older mice, DVR and FC/CB50-60 were 1.16 (1.15) at 15 months (N = 1), 1.13 (1.16) and 1.35 (1.35) at 18 months (N = 2), and 1.05 (1.31) at 21 months (N = 1). In Tg2576 mice, DVR and FC/CB50-60 showed modest increasing trends but also high variability. In APPswe-PS1dE9 mice, DVR and FC/CB50-60 did not increase with age. Thioflavin S and anti-Aβ1-40 positive Aβ deposits were present in all transgenic mice at 19 to 22 months, and they co-localized with [(18)F]flutemetamol binding in the brain slices examined with in vitro and ex vivo autoradiography. Increased [(18)F]flutemetamol retention in the brain was detected in old APP23 mice in vivo. However, the high specific activity of [(18)F]flutemetamol did not provide a notable advantage in Tg2576 and APPswe-PS1dE9 mice compared to the previously evaluated structural analogue [(11)C]PIB. For its practical benefits, [(18)F]flutemetamol imaging with a suitable mouse model like APP23 is an attractive alternative.

  12. In vivo PET imaging of beta-amyloid deposition in mouse models of Alzheimer's disease with a high specific activity PET imaging agent [18F]flutemetamol

    PubMed Central

    2014-01-01

    Background The purpose of the study was to evaluate the applicability of 18F-labelled amyloid imaging positron emission tomography (PET) agent [18F]flutemetamol to detect changes in brain beta-amyloid (Aβ) deposition in vivo in APP23, Tg2576 and APPswe-PS1dE9 mouse models of Alzheimer's disease. We expected that the high specific activity of [18F]flutemetamol would make it an attractive small animal Aβ imaging agent. Methods [18F]flutemetamol uptake in the mouse brain was evaluated in vivo at 9 to 22 months of age with an Inveon Multimodality PET/CT camera (Siemens Medical Solutions USA, Knoxville, TN, USA). Retention in the frontal cortex (FC) was evaluated by Logan distribution volume ratios (DVR) and FC/cerebellum (CB) ratios during the late washout phase (50 to 60 min). [18F]flutemetamol binding to Aβ was also evaluated in brain slices by in vitro and ex vivo autoradiography. The amount of Aβ in the brain slices was determined with Thioflavin S and anti-Aβ1−40 immunohistochemistry. Results In APP23 mice, [18F]flutemetamol retention in the FC increased from 9 to 18 months. In younger mice, DVR and FC/CB50-60 were 0.88 (0.81) and 0.88 (0.89) at 9 months (N = 2), and 0.98 (0.93) at 12 months (N = 1), respectively. In older mice, DVR and FC/CB50-60 were 1.16 (1.15) at 15 months (N = 1), 1.13 (1.16) and 1.35 (1.35) at 18 months (N = 2), and 1.05 (1.31) at 21 months (N = 1). In Tg2576 mice, DVR and FC/CB50-60 showed modest increasing trends but also high variability. In APPswe-PS1dE9 mice, DVR and FC/CB50-60 did not increase with age. Thioflavin S and anti-Aβ1−40 positive Aβ deposits were present in all transgenic mice at 19 to 22 months, and they co-localized with [18F]flutemetamol binding in the brain slices examined with in vitro and ex vivo autoradiography. Conclusions Increased [18F]flutemetamol retention in the brain was detected in old APP23 mice in vivo. However, the high specific activity of [18F]flutemetamol did not provide a notable advantage in Tg2576 and APPswe-PS1dE9 mice compared to the previously evaluated structural analogue [11C]PIB. For its practical benefits, [18F]flutemetamol imaging with a suitable mouse model like APP23 is an attractive alternative. PMID:25977876

  13. Detection of serpentine in exogenic carbonaceous chondrite material on Vesta from Dawn FC data

    NASA Astrophysics Data System (ADS)

    Nathues, Andreas; Hoffmann, Martin; Cloutis, Edward A.; Schäfer, Michael; Reddy, Vishnu; Christensen, Ulrich; Sierks, Holger; Thangjam, Guneshwar Singh; Le Corre, Lucille; Mengel, Kurt; Vincent, Jean-Baptist; Russell, Christopher T.; Prettyman, Tom; Schmedemann, Nico; Kneissl, Thomas; Raymond, Carol; Gutierrez-Marques, Pablo; Hall, Ian; Büttner, Irene

    2014-09-01

    The Dawn mission’s Framing Camera (FC) observed Asteroid (4) Vesta in 2011 and 2012 using seven color filters and one clear filter from different orbits. In the present paper we analyze recalibrated HAMO color cubes (spatial resolution ∼60 m/pixel) with a focus on dark material (DM). We present a definition of highly concentrated DM based on spectral parameters, subsequently map the DM across the Vestan surface, geologically classify DM, study its spectral properties on global and local scales, and finally, compare the FC in-flight color data with laboratory spectra. We have discovered an absorption band centered at 0.72 μm in localities of DM that show the lowest albedo values by using FC data as well as spectral information from Dawn’s imaging spectrometer VIR. Such localities are contained within impact-exposed outcrops on inner crater walls and ejecta material. Comparisons between spectral FC in-flight data, and laboratory spectra of meteorites and mineral mixtures in the wavelength range 0.4-1.0 μm, revealed that the absorption band can be attributed to the mineral serpentine, which is typically present in CM chondrites. Dark material in its purest form is rare on Vesta’s surface and is distributed globally in a non-uniform manner. Our findings confirm the hypothesis of an exogenic origin of the DM by the infall of carbonaceous chondritic material, likely of CM type. It further confirms the hypothesis that most of the DM was deposited by the Veneneia impact.

  14. Resolved spectrophotometric properties of the Ceres surface from Dawn Framing Camera images

    NASA Astrophysics Data System (ADS)

    Schröder, S. E.; Mottola, S.; Carsenty, U.; Ciarniello, M.; Jaumann, R.; Li, J.-Y.; Longobardo, A.; Palmer, E.; Pieters, C.; Preusker, F.; Raymond, C. A.; Russell, C. T.

    2017-05-01

    We present a global spectrophotometric characterization of the Ceres surface using Dawn Framing Camera (FC) images. We identify the photometric model that yields the best results for photometrically correcting images. Corrected FC images acquired on approach to Ceres were assembled into global maps of albedo and color. Generally, albedo and color variations on Ceres are muted. The albedo map is dominated by a large, circular feature in Vendimia Planitia, known from HST images (Li et al., 2006), and dotted by smaller bright features mostly associated with fresh-looking craters. The dominant color variation over the surface is represented by the presence of "blue" material in and around such craters, which has a negative spectral slope over the visible wavelength range when compared to average terrain. We also mapped variations of the phase curve by employing an exponential photometric model, a technique previously applied to asteroid Vesta (Schröder et al., 2013b). The surface of Ceres scatters light differently from Vesta in the sense that the ejecta of several fresh-looking craters may be physically smooth rather than rough. High albedo, blue color, and physical smoothness all appear to be indicators of youth. The blue color may result from the desiccation of ejected material that is similar to the phyllosilicates/water ice mixtures in the experiments of Poch et al. (2016). The physical smoothness of some blue terrains would be consistent with an initially liquid condition, perhaps as a consequence of impact melting of subsurface water ice. We find red terrain (positive spectral slope) near Ernutet crater, where De Sanctis et al. (2017) detected organic material. The spectrophotometric properties of the large Vendimia Planitia feature suggest it is a palimpsest, consistent with the Marchi et al. (2016) impact basin hypothesis. The central bright area in Occator crater, Cerealia Facula, is the brightest on Ceres with an average visual normal albedo of about 0.6 at a resolution of 1.3 km per pixel (six times Ceres average). The albedo of fresh, bright material seen inside this area in the highest resolution images (35 m per pixel) is probably around unity. Cerealia Facula has an unusually steep phase function, which may be due to unresolved topography, high surface roughness, or large average particle size. It has a strongly red spectrum whereas the neighboring, less-bright, Vinalia Faculae are neutral in color. We find no evidence for a diurnal ground fog-type haze in Occator as described by Nathues et al. (2015). We can neither reproduce their findings using the same images, nor confirm them using higher resolution images. FC images have not yet offered direct evidence for present sublimation in Occator.
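    The phase-curve mapping mentioned in this record relies on an exponential photometric model. A minimal sketch, assuming the simple form r(α) = A·exp(−α/d) fitted per surface element by linear least squares on ln r (the exact model and correction procedure of Schröder et al. (2013b) may differ), is:

    ```python
    import numpy as np

    def fit_exponential_phase_curve(phase_deg, reflectance):
        """Fit r(alpha) = A * exp(-alpha / d) via linear least squares on
        ln(r) = ln(A) - alpha / d. Illustrative sketch only; not necessarily
        the exact model applied to the Ceres FC data. Returns (A, d),
        with d in degrees."""
        alpha = np.asarray(phase_deg, dtype=float)
        y = np.log(np.asarray(reflectance, dtype=float))
        slope, intercept = np.polyfit(alpha, y, 1)
        return np.exp(intercept), -1.0 / slope

    # Hypothetical samples of one surface element seen at several phase angles
    A, d = fit_exponential_phase_curve([20, 35, 50, 65], [0.090, 0.078, 0.067, 0.058])
    ```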

  15. Mapping Vesta Equatorial Quadrangle V-8EDL: Various Craters and Giant Grooves

    NASA Astrophysics Data System (ADS)

    Le Corre, L.; Nathues, A.; Reddy, V.; Buczkowski, D.; Denevi, B. W.; Gaffey, M.; Williams, D. A.; Garry, W. B.; Yingst, R.; Jaumann, R.; Pieters, C. M.; Russell, C. T.; Raymond, C. A.

    2011-12-01

    NASA's Dawn spacecraft arrived at asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC; spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR; spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-8EDL, located between -22 and 22 degrees latitude and 144 and 216 degrees East longitude. This quadrangle is dominated by old craters (without any ejecta visible in the clear and color bands), but one small recent crater with a bright ejecta blanket and rays can be seen. The latter has some small, dark units outside and inside the crater rim that could be indicative of impact melt. This quadrangle also contains a set of giant linear grooves running almost parallel to the equator that might have formed subsequent to a large impact. We will use FC mosaics with clear images and false color composites as well as VIR spectroscopy data in order to constrain the geology and identify the nature of each unit present in this quadrangle.

  16. New echocardiographic windows for quantitative determination of aortic regurgitation volume using color Doppler flow convergence and vena contracta

    NASA Technical Reports Server (NTRS)

    Shiota, T.; Jones, M.; Agler, D. A.; McDonald, R. W.; Marcella, C. P.; Qin, J. X.; Zetts, A. D.; Greenberg, N. L.; Cardon, L. A.; Sun, J. P.

    1999-01-01

    Color Doppler images of aortic regurgitation (AR) flow acceleration, flow convergence (FC), and the vena contracta (VC) have been reported to be useful for evaluating severity of AR. However, clinical application of these methods has been limited because of the difficulty in clearly imaging the FC and VC. This study aimed to explore new windows for imaging the FC and VC to evaluate AR volumes in patients and to validate this in animals with chronic AR. Forty patients with AR and 17 hemodynamic states in 4 sheep with strictly quantified AR volumes were evaluated. A Toshiba SSH 380A with a 3.75-MHz transducer was used to image the FC and VC. After routine echo Doppler imaging, patients were repositioned in the right lateral decubitus position, and the FC and VC were imaged from high right parasternal windows. In only 15 of the 40 patients was it possible to image clearly and measure accurately the FC and VC from conventional (left decubitus) apical or parasternal views. In contrast, 31 of 40 patients had clearly imaged FC regions and VCs using the new windows. In patients, AR volumes derived from the FC and VC methods combined with continuous velocity agreed well with each other (r = 0.97, mean difference = -7.9 ml +/- 9.9 ml/beat). In chronic animal model studies, AR volumes derived from both the VC and the FC agreed well with the electromagnetically derived AR volumes (r = 0.92, mean difference = -1.3 +/- 4.0 ml/beat). By imaging from high right parasternal windows in the right decubitus position, complementary use of the FC and VC methods can provide clinically valuable information about AR volumes.

  17. New echocardiographic windows for quantitative determination of aortic regurgitation volume using color Doppler flow convergence and vena contracta.

    PubMed

    Shiota, T; Jones, M; Agler, D A; McDonald, R W; Marcella, C P; Qin, J X; Zetts, A D; Greenberg, N L; Cardon, L A; Sun, J P; Sahn, D J; Thomas, J D

    1999-04-01

    Color Doppler images of aortic regurgitation (AR) flow acceleration, flow convergence (FC), and the vena contracta (VC) have been reported to be useful for evaluating severity of AR. However, clinical application of these methods has been limited because of the difficulty in clearly imaging the FC and VC. This study aimed to explore new windows for imaging the FC and VC to evaluate AR volumes in patients and to validate this in animals with chronic AR. Forty patients with AR and 17 hemodynamic states in 4 sheep with strictly quantified AR volumes were evaluated. A Toshiba SSH 380A with a 3.75-MHz transducer was used to image the FC and VC. After routine echo Doppler imaging, patients were repositioned in the right lateral decubitus position, and the FC and VC were imaged from high right parasternal windows. In only 15 of the 40 patients was it possible to image clearly and measure accurately the FC and VC from conventional (left decubitus) apical or parasternal views. In contrast, 31 of 40 patients had clearly imaged FC regions and VCs using the new windows. In patients, AR volumes derived from the FC and VC methods combined with continuous velocity agreed well with each other (r = 0.97, mean difference = -7.9 ml +/- 9.9 ml/beat). In chronic animal model studies, AR volumes derived from both the VC and the FC agreed well with the electromagnetically derived AR volumes (r = 0.92, mean difference = -1.3 +/- 4.0 ml/beat). By imaging from high right parasternal windows in the right decubitus position, complementary use of the FC and VC methods can provide clinically valuable information about AR volumes.

  18. Mapping Vesta Mid-Latitude Quadrangle V-12EW: Mapping the Edge of the South Polar Structure

    NASA Astrophysics Data System (ADS)

    Hoogenboom, T.; Schenk, P.; Williams, D. A.; Hiesinger, H.; Garry, W. B.; Yingst, R.; Buczkowski, D.; McCord, T. B.; Jaumann, R.; Pieters, C. M.; Gaskell, R. W.; Neukum, G.; Schmedemann, N.; Marchi, S.; Nathues, A.; Le Corre, L.; Roatsch, T.; Preusker, F.; White, O. L.; DeSanctis, C.; Filacchione, G.; Raymond, C. A.; Russell, C. T.

    2011-12-01

    NASA's Dawn spacecraft arrived at asteroid 4 Vesta on July 15, 2011, and is now collecting imaging, spectroscopic, and elemental abundance data during its one-year orbital mission. As part of the geological analysis of the surface, a series of 15 quadrangle maps are being produced based on Framing Camera images (FC; spatial resolution ~65 m/pixel) along with Visible & Infrared Spectrometer data (VIR; spatial resolution ~180 m/pixel) obtained during the High-Altitude Mapping Orbit (HAMO). This poster presentation concentrates on our geologic analysis and mapping of quadrangle V-12EW. This quadrangle is dominated by the arcuate edge of the large 460+ km diameter south polar topographic feature first observed by HST (Thomas et al., 1997). Sparsely cratered, the portion of this feature covered in V-12EW is characterized by arcuate ridges and troughs forming a generalized arcuate pattern. Mapping of this terrain and the transition to areas to the north will be used to test whether this feature has an impact or other (e.g., internal) origin. We are also using FC stereo and VIR images to assess whether there are any compositional differences between this terrain and areas further to the north, and image data to evaluate the distribution and age of young impact craters within the map area. The authors acknowledge the support of the Dawn Science, Instrument and Operations Teams.

  19. Location Distribution Optimization of Photographing Sites for Indoor Panorama Modeling

    NASA Astrophysics Data System (ADS)

    Zhang, S.; Wu, J.; Zhang, Y.; Zhang, X.; Xin, Z.; Liu, J.

    2017-09-01

    Generally, panoramic image modeling is costly and time-consuming because photographs must be captured continuously along the routes to obtain enough coverage, especially in complicated indoor environments. This difficulty hinders wider commercial application of panoramic image modeling. A feasible arrangement of panorama site locations is therefore indispensable, because the locations influence the clarity, the coverage, and the number of panoramic images obtainable with a given device. This paper aims to propose a standard procedure to generate the specific locations and total number of panorama sites in indoor panorama modeling. First, we establish the functional relationship between one panorama site and its objectives, and then apply that relationship to the network of panorama sites. We propose the Distance Clarity functions (FC and Fe), which express the mathematical relationship between clarity and the panorama-to-objective distance or the obstacle distance, and the Distance Buffer function (FB), modified from the traditional buffer method, to generate the coverage of a panorama site. Second, we traverse every point in the feasible area as a possible panorama site and calculate its clarity and coverage together. Finally, we select as few points as possible, satisfying the clarity requirement first and then the coverage requirement. In the experiments, detailed camera lens parameters are given; still, more experimental parameters need to be tried out, given that the relationship between clarity and distance is device dependent. In short, through the functions FC, Fe and FB, the locations of panorama sites can be generated automatically and accurately.
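    The selection step described in this record (traverse candidate points, evaluate clarity and coverage, then choose as few sites as possible) behaves like a greedy set cover. A minimal sketch, with stand-in forms for FC and FB since the record does not give their exact expressions, is:

    ```python
    import math

    def clarity(site, objective, focal_quality=1.0):
        """Stand-in for the Distance Clarity function FC: clarity assumed
        to decay with site-to-objective distance (the actual, device-dependent
        form is not given in the record)."""
        return focal_quality / (1.0 + math.dist(site, objective))

    def covered(site, objective, buffer_radius):
        """Stand-in for the Distance Buffer function FB: an objective counts
        as covered if it lies within a fixed buffer radius of the site."""
        return math.dist(site, objective) <= buffer_radius

    def select_sites(candidates, objectives, clarity_min, buffer_radius):
        """Greedy selection: repeatedly pick the candidate site that covers
        the most still-uncovered objectives with adequate clarity."""
        remaining, chosen = set(range(len(objectives))), []
        while remaining:
            best, best_hits = None, set()
            for c in candidates:
                hits = {i for i in remaining
                        if covered(c, objectives[i], buffer_radius)
                        and clarity(c, objectives[i]) >= clarity_min}
                if len(hits) > len(best_hits):
                    best, best_hits = c, hits
            if best is None:          # no candidate can serve the remaining objectives
                break
            chosen.append(best)
            remaining -= best_hits
        return chosen
    ```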

  20. Using a High-Speed Camera to Measure the Speed of Sound

    ERIC Educational Resources Information Center

    Hack, William Nathan; Baird, William H.

    2012-01-01

    The speed of sound is a physical property that can be measured easily in the lab. However, finding an inexpensive and intuitive way for students to determine this speed has been more involved. The introduction of affordable consumer-grade high-speed cameras (such as the Exilim EX-FC100) makes conceptually simple experiments feasible. Since the…

  1. Targeted imaging of cancer by fluorocoxib C, a near-infrared cyclooxygenase-2 probe

    NASA Astrophysics Data System (ADS)

    Uddin, Md. Jashim; Crews, Brenda C.; Ghebreselasie, Kebreab; Daniel, Cristina K.; Kingsley, Philip J.; Xu, Shu; Marnett, Lawrence J.

    2015-05-01

    Cyclooxygenase-2 (COX-2) is a promising target for the imaging of cancer in a range of diagnostic and therapeutic settings. We report a near-infrared COX-2-targeted probe, fluorocoxib C (FC), for visualization of solid tumors by optical imaging. FC exhibits selective and potent COX-2 inhibition in both purified protein and human cancer cell lines. In vivo optical imaging shows selective accumulation of FC in COX-2-overexpressing human tumor xenografts [1483 head and neck squamous cell carcinoma (HNSCC)] implanted in nude mice, while minimal uptake is detectable in COX-2-negative tumor xenografts (HCT116) or 1483 HNSCC xenografts preblocked with the COX-2-selective inhibitor celecoxib. Time course imaging studies conducted from 3 h to 7 days post-FC injection revealed a marked reduction in nonspecific fluorescent signals with retention of fluorescence in 1483 HNSCC tumors. Thus, use of FC in a delayed imaging protocol offers an approach to increase the imaging signal-to-noise ratio, which should improve cancer detection in multiple preclinical and clinical settings.

  2. Applications of Collisional Radiative Modeling of Helium and Deuterium for Image Tomography Diagnostic of Te, Ne, and ND in the DIII-D Tokamak

    NASA Astrophysics Data System (ADS)

    Munoz Burgos, J. M.; Brooks, N. H.; Fenstermacher, M. E.; Meyer, W. H.; Unterberg, E. A.; Schmitz, O.; Loch, S. D.; Balance, C. P.

    2011-10-01

    We apply new atomic modeling techniques to helium and deuterium for diagnostics in the divertor and scrape-off layer regions. Analysis of tomographically inverted images is useful for validating detachment prediction models and power balances in the divertor. We apply tomographic image inversion from fast tangential cameras of helium and Dα emission at the divertor in order to obtain 2D profiles of Te, Ne, and ND (neutral deuterium density profiles). The accuracy of the atomic models for He I will be cross-checked against Thomson scattering measurements of Te and Ne. This work summarizes several current developments and applications of atomic modeling for diagnostics at the DIII-D tokamak. Supported in part by the US DOE under DE-AC05-06OR23100, DE-FC02-04ER54698, DE-AC52-07NA27344, and DE-AC05-00OR22725.

  3. High-resolution Ceres LAMO atlas derived from Dawn FC images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C.

    2016-12-01

    Introduction: NASA's Dawn spacecraft has been orbiting the dwarf planet Ceres since December 2015 in LAMO (Low Altitude Mapping Orbit), with an altitude of about 400 km, to characterize, for instance, the geology, topography, and shape of Ceres. One of the major goals of this mission phase is the global high-resolution mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). At the time of writing, the framing camera had taken about 27,500 clear filter images in LAMO with a resolution of about 30 m/pixel, at different viewing angles and under different illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information on the Dawn orbit and attitude data and on the topography of the target. A high-resolution shape model was provided by stereo processing of the HAMO dataset; orbit and attitude data are available as reconstructed SPICE data. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself was done onto a reference sphere of Ceres. The final step is the controlled mosaicking of all nadir images to a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:250,000 and will consist of 62 tiles that conform to the quadrangle schema for Venus at 1:5,000,000. A map scale of 1:250,000 is a compromise between the very high resolution in LAMO and a proper map sheet size for the single tiles. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters and to use names of agricultural festivals of the world for other geological features. This proposal was accepted by the IAU, and the team proposed 92 names for geological features to the IAU based on the LAMO mosaic. These feature names will be applied to the map tiles.

  4. High-resolution Ceres HAMO Atlas derived from Dawn FC Images

    NASA Astrophysics Data System (ADS)

    Roatsch, T.; Kersten, E.; Matz, K. D.; Preusker, F.; Scholten, F.; Jaumann, R.; Raymond, C. A.; Russell, C. T.

    2015-12-01

    Introduction: NASA's Dawn spacecraft will orbit the dwarf planet Ceres in August and September 2015 in HAMO (High Altitude Mapping Orbit), with an altitude of about 1,500 km, to characterize, for instance, the geology, topography, and shape of Ceres before it is transferred to the lowest orbit. One of the major goals of this mission phase is the global mapping of Ceres. Data: The Dawn mission is equipped with a framing camera (FC). The framing camera will take about 2600 clear filter images with a resolution of about 120 m/pixel, at different viewing angles and under different illumination conditions. Data Processing: The first step of the processing chain towards the cartographic products is to ortho-rectify the images to the proper scale and map projection type. This process requires detailed information on the Dawn orbit and attitude data and on the topography of the target. Both improved orientation and high-resolution shape models are provided by stereo processing of the HAMO dataset. Ceres' HAMO shape model is used for the calculation of the ray intersection points, while the map projection itself will be done onto a reference sphere for Ceres. The final step is the controlled mosaicking of all nadir images to a global mosaic of Ceres, the so-called basemap. Ceres map tiles: The Ceres atlas will be produced at a scale of 1:750,000 and will consist of 15 tiles that conform to the quadrangle schema for small planets and medium-size icy satellites. A map scale of 1:750,000 guarantees mapping at the highest available Dawn resolution in HAMO. Nomenclature: The Dawn team proposed to the International Astronomical Union (IAU) to use the names of gods and goddesses of agriculture and vegetation from world mythology as names for the craters. This proposal was accepted by the IAU, and the team proposed names for geological features to the IAU based on the HAMO mosaic. These feature names will be applied to the map tiles.

  5. Live Cell Visualization of Multiple Protein-Protein Interactions with BiFC Rainbow.

    PubMed

    Wang, Sheng; Ding, Miao; Xue, Boxin; Hou, Yingping; Sun, Yujie

    2018-05-18

    As one of the most powerful tools to visualize protein-protein interactions (PPIs) in living cells, bimolecular fluorescence complementation (BiFC) has advanced greatly in recent years, including deep tissue imaging with far-red or near-infrared fluorescent proteins and super-resolution imaging with photochromic fluorescent proteins. However, little progress has been made toward simultaneous detection and visualization of multiple PPIs in the same cell, mainly due to spectral crosstalk. In this report, we developed novel BiFC assays based on large-Stokes-shift fluorescent proteins (LSS-FPs) to detect and visualize multiple PPIs in living cells. With their large excitation/emission spectral separation, LSS-FPs can be imaged together with normal-Stokes-shift fluorescent proteins to realize multicolor BiFC imaging using a simple illumination scheme. We further demonstrated BiFC rainbow, combining the newly developed BiFC assays with previously established mCerulean/mVenus-based BiFC assays, to achieve detection and visualization of four PPI pairs in the same cell. Additionally, we show that, with the complete spectral separation of mT-Sapphire and CyOFP1, LSS-FP-based BiFC assays can be readily combined with intensity-based FRET measurement to detect ternary protein complex formation with minimal spectral crosstalk. Thus, our newly developed LSS-FP-based BiFC assays not only expand the fluorescent protein toolbox available for BiFC but also facilitate the detection and visualization of multiple protein complex interactions in living cells.

  6. Geologic Structures in Crater Walls on Vesta

    NASA Technical Reports Server (NTRS)

    Mittlefehldt, David W.; Beck, A. W.; Ammannito, E.; Carsenty, U.; DeSanctis, M. C.; LeCorre, L.; McCoy, T. J.; Reddy, V.; Schroeder, S. E.

    2012-01-01

    The Framing Camera (FC) on the Dawn spacecraft has imaged most of the illuminated surface of Vesta with a resolution of approx. 20 m/pixel through different wavelength filters that allow for identification of lithologic units. The Visible and Infrared Mapping Spectrometer (VIR) has imaged the surface at lower spatial resolution but high spectral resolution from 0.25 to 5 microns, which allows for detailed mineralogical interpretation. The FC has imaged geologic structures in the walls of fresh craters and on scarps on the margin of the Rheasilvia basin that consist of cliff-forming, competent units, either as blocks or semi-continuous layers, hundreds of meters to kilometers below the rims. Different units have different albedos, FC color ratios and VIR spectral characteristics, and different units can be juxtaposed in individual craters. We will describe different examples of these competent units and present preliminary interpretations of the structures. A common occurrence is of blocks several hundred meters in size of high albedo (bright) and low albedo (dark) materials protruding from crater walls. In many examples, dark material deposits lie below coherent bright material blocks. In FC Clementine color ratios, bright material is green, indicating a deeper 1 micron pyroxene absorption band. VIR spectra show these to have deeper and wider 1 and 2 micron pyroxene absorption bands than the average vestan surface. The associated dark material has subdued pyroxene absorption features compared to the average vestan surface. Some dark material deposits are consistent with mixtures of HED materials with carbonaceous chondrites. This would indicate that some dark material deposits in crater walls are megabreccia blocks. The same would hold for bright material blocks found above them. Thus, these are not intact crustal units. Marcia crater is atypical in that the dark material forms a semi-continuous, thin layer immediately below bright material. Bright material occurs as one or more layers. In one region, there is an apparent angular unconformity between the bright material and the dark material, where bright material layers appear to be truncated against the underlying dark layer. One crater within the Rheasilvia basin contains two distinct types of bright materials outcropping on its walls, one like that found elsewhere on Vesta and the other an anomalous block 200 m across. This material has the highest albedo, almost twice the vestan average. Unlike all other bright materials, this block has a subdued 1 micron pyroxene absorption band in FC color ratios. These data indicate that this block represents a distinct vestan lithology that is rarely exposed.

  7. Searching for Faint Companions to Nearby Stars with the Hubble Space Telescope

    NASA Technical Reports Server (NTRS)

    Schroeder, Daniel J.; Golimowski, David A.

    1996-01-01

    A search for faint companions (FCs) to selected stars within 5 pc of the Sun using the Hubble Space Telescope's Planetary Camera (PC) has been initiated. To assess the PC's ability to detect FCs, we have constructed both model and laboratory-simulated images and compared them to actual PC images. We find that the PC's point-spread function (PSF) is 3-4 times brighter over the angular range 2-5 arcsec than the PSF expected for a perfect optical system. Azimuthal variations of the PC's PSF are 10-20 times larger than expected for a perfect PSF. These variations suggest that light is scattered nonuniformly from the surface of the detector. Because the anomalies in the PC's PSF cannot be precisely simulated, subtracting a reference PSF from the PC image is problematic. We have developed a computer algorithm that identifies local brightness anomalies within the PSF as potential FCs. We find that this search algorithm will successfully locate FCs anywhere within the circumstellar field provided that the average pixel signal from the FC is at least 10 sigma above the local background. This detection limit suggests that a comprehensive search for extrasolar Jovian planets with the PC is impractical. However, the PC is useful for detecting other types of substellar objects. With a stellar signal of 10^9 e-, for example, we may detect brown dwarfs as faint as M_I = 16.7 separated by 1 arcsec from alpha Cen A.
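    The detection criterion quoted in this record (a pixel signal at least 10 sigma above the local background) can be sketched as a sliding-window threshold test; the window size and the median/standard-deviation background estimate below are assumptions rather than the authors' exact procedure:

    ```python
    import numpy as np
    from scipy.ndimage import median_filter, generic_filter

    def candidate_companions(image, nsigma=10.0, window=15):
        """Flag pixels whose signal exceeds the local background by at least
        nsigma local standard deviations. Illustrative only: background and
        noise are estimated with a sliding median and standard deviation,
        which is an assumption, not the published algorithm."""
        background = median_filter(image, size=window)
        local_std = generic_filter(image, np.std, size=window)
        return (image - background) > nsigma * local_std
    ```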

  8. Imaging Asteroid 4 Vesta Using the Framing Camera

    NASA Technical Reports Server (NTRS)

    Keller, H. Uwe; Nathues, Andreas; Coradini, Angioletta; Jaumann, Ralf; Jorda, Laurent; Li, Jian-Yang; Mittlefehldt, David W.; Mottola, Stefano; Raymond, C. A.; Schroeder, Stefan E.

    2011-01-01

    The Framing Camera (FC) onboard the Dawn spacecraft serves a dual purpose. Next to its central role as a prime science instrument, it is also used for the complex navigation of the ion-drive spacecraft. The CCD detector, with 1024 by 1024 pixels, provides the stability needed for a multiyear mission and meets the high requirements of photometric accuracy over the wavelength band from 400 to 1000 nm, covered by 7 band-pass filters. Vesta will be observed from 3 orbit stages with image scales of 227, 63, and 17 m/px, respectively. The mapping of Vesta's surface with medium resolution will only be completed during the exit phase, when the north pole will be illuminated. A detailed pointing strategy will cover the surface at least twice at similar phase angles to provide stereo views for reconstruction of the topography. During approach, the phase function of Vesta was determined over a range of angles not accessible from Earth. This is the first step in deriving the photometric function of the surface. Combining the topography based on stereo tie points with the photometry in an iterative procedure will disclose details of the surface morphology at considerably smaller scales than the pixel scale. The 7 color filters are well positioned to provide information on the spectral slope in the visible, the depth of the strong pyroxene absorption band, and their variability over the surface. Cross-calibration with the VIR spectrometer, which extends into the near IR, will provide detailed maps of Vesta's surface mineralogy and physical properties. Georeferencing all these observations will result in a coherent and unique data set. During Dawn's approach and capture, FC has already demonstrated its performance. The strong variation observed by the Hubble Space Telescope can now be correlated with surface units and features. We will report on results obtained from images taken during survey mode covering the whole illuminated surface. Vesta is a planet-like differentiated body, but its surface gravity and escape velocity are comparable to those of other asteroids and hence much smaller than those of the inner planets or

  9. Regolith Depth, Mobility, and Variability on Vesta from Dawn's Low Altitude Mapping Orbit

    NASA Technical Reports Server (NTRS)

    Denevi, B. W.; Coman, E. I.; Blewett, D. T.; Mittlefehldt, D. W.; Buczkowski, D. L.; Combe, J.-P.; De Sanctis, M. C.; Jaumann, R.; Li, J.-Y.; Marchi, S.

    2012-01-01

    Regolith, the fragmental debris layer formed from impact events of all sizes, covers the surface of all asteroids imaged by spacecraft to date. Here we use Framing Camera (FC) images [1] acquired by the Dawn spacecraft [2] from its low-altitude mapping orbit (LAMO) of 210 km (pixel scales of 20 m) to characterize regolith depth, variability, and mobility on Vesta, and to locate areas of especially thin regolith and exposures of competent material. These results will help to evaluate how the surface of this differentiated asteroid has evolved over time, and provide key contextual information for understanding the origin and degree of mixing of the surficial materials for which compositions are estimated [3,4] and the causes of the relative spectral immaturity of the surface [5]. Vestan regolith samples, in the form of howardite meteorites, can be studied in the laboratory to provide complementary constraints on the regolith process [6].

  10. Multimodal description of whole brain connectivity: A comparison of resting state MEG, fMRI, and DWI.

    PubMed

    Garcés, Pilar; Pereda, Ernesto; Hernández-Tamames, Juan A; Del-Pozo, Francisco; Maestú, Fernando; Pineda-Pardo, José Ángel

    2016-01-01

    Structural and functional connectivity (SC and FC) have received much attention over the last decade, as they offer unique insight into the coordination of brain functioning. They are often assessed independently with three imaging modalities: SC using diffusion-weighted imaging (DWI), FC using functional magnetic resonance imaging (fMRI), and magnetoencephalography/electroencephalography (MEG/EEG). DWI provides information about white matter organization, allowing the reconstruction of fiber bundles. fMRI uses blood-oxygenation level-dependent (BOLD) contrast to indirectly map neuronal activation. MEG and EEG are direct measures of neuronal activity, as they are sensitive to the synchronous inputs in pyramidal neurons. Seminal studies have targeted either the electrophysiological substrate of BOLD or the anatomical basis of FC. However, multimodal comparisons have been scarcely performed, and the relation between SC, fMRI-FC, and MEG-FC is still unclear. Here we present a systematic comparison of SC, resting state fMRI-FC, and MEG-FC between cortical regions, by evaluating their similarities at three different scales: global network, node, and hub distribution. We obtained strong similarities between the three modalities, especially for the following pairwise combinations: SC and fMRI-FC; SC and MEG-FC at theta, alpha, beta and gamma bands; and fMRI-FC and MEG-FC in alpha and beta. Furthermore, highest node similarity was found for regions of the default mode network and primary motor cortex, which also presented the highest hubness score. Distance was partially responsible for these similarities since it biased all three connectivity estimates, but not the unique contributor, since similarities remained after controlling for distance. © 2015 Wiley Periodicals, Inc.

  11. Visible Color and Photometry of Bright Materials on Vesta

    NASA Technical Reports Server (NTRS)

    Schroder, S. E.; Li, J. Y.; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn Framing Camera (FC) collected images of the surface of Vesta at a pixel scale of 70 m in the High Altitude Mapping Orbit (HAMO) phase through its clear and seven color filters spanning from 430 nm to 980 nm. The surface of Vesta displays a large diversity in its brightness and colors, evidently related to the diverse geology [1] and mineralogy [2]. Here we report a detailed investigation of the visible colors and photometric properties of the apparently bright materials on Vesta in order to study their origin. The global distribution and the spectroscopy of bright materials are discussed in companion papers [3, 4], and the synthesis results about the origin of Vestan bright materials are reported in [5].

  12. Track-weighted functional connectivity (TW-FC): a tool for characterizing the structural-functional connections in the brain.

    PubMed

    Calamante, Fernando; Masterton, Richard A J; Tournier, Jacques-Donald; Smith, Robert E; Willats, Lisa; Raffelt, David; Connelly, Alan

    2013-04-15

    MRI provides a powerful tool for studying the functional and structural connections in the brain non-invasively. The technique of functional connectivity (FC) exploits the intrinsic temporal correlations of slow spontaneous signal fluctuations to characterise brain functional networks. In addition, diffusion MRI fibre-tracking can be used to study the white matter structural connections. In recent years, there has been considerable interest in combining these two techniques to provide an overall structural-functional description of the brain. In this work we applied the recently proposed super-resolution track-weighted imaging (TWI) methodology to demonstrate how whole-brain fibre-tracking data can be combined with FC data to generate a track-weighted (TW) FC map of FC networks. The method was applied to data from 8 healthy volunteers, and illustrated with (i) FC networks obtained using a seeded connectivity-based analysis (seeding in the precuneus/posterior cingulate cortex, PCC, known to be part of the default mode network), and (ii) with FC networks generated using independent component analysis (in particular, the default mode, attention, visual, and sensory-motor networks). TW-FC maps showed high intensity in white matter structures connecting the nodes of the FC networks. For example, the cingulum bundles show the strongest TW-FC values in the PCC seeded-based analysis, due to their major role in the connection between medial frontal cortex and precuneus/posterior cingulate cortex; similarly the superior longitudinal fasciculus was well represented in the attention network, the optic radiations in the visual network, and the corticospinal tract and corpus callosum in the sensory-motor network. The TW-FC maps highlight the white matter connections associated with a given FC network, and their intensity in a given voxel reflects the functional connectivity of the part of the nodes of the network linked by the structural connections traversing that voxel. They therefore contain a different (and novel) image contrast from that of the images used to generate them. The results shown in this study illustrate the potential of the TW-FC approach for the fusion of structural and functional data into a single quantitative image. This technique could therefore have important applications in neuroscience and neurology, such as for voxel-based comparison studies. Copyright © 2012 Elsevier Inc. All rights reserved.

  13. Structural architecture supports functional organization in the human aging brain at a regionwise and network level.

    PubMed

    Zimmermann, Joelle; Ritter, Petra; Shen, Kelly; Rothmeier, Simon; Schirner, Michael; McIntosh, Anthony R

    2016-07-01

    Functional interactions in the brain are constrained by the underlying anatomical architecture, and structural and functional networks share network features such as modularity. Accordingly, age-related changes of structural connectivity (SC) may be paralleled by changes in functional connectivity (FC). We provide a detailed qualitative and quantitative characterization of the SC-FC coupling in human aging as inferred from resting-state blood oxygen-level dependent functional magnetic resonance imaging and diffusion-weighted imaging in a sample of 47 adults with an age range of 18-82 years. We found that SC and FC decrease with age across most parts of the brain and that there is a distinct age dependency of regionwise SC-FC coupling and network-level SC-FC relations. A specific pattern of SC-FC coupling predicts age more reliably than does regionwise SC or FC alone (r = 0.73, 95% CI = [0.7093, 0.8522]). Hence, our data suggest that regionwise SC-FC coupling can be used to characterize brain changes in aging. Hum Brain Mapp 37:2645-2661, 2016. © 2016 Wiley Periodicals, Inc.

  14. Live visualization of genomic loci with BiFC-TALE

    PubMed Central

    Hu, Huan; Zhang, Hongmin; Wang, Sheng; Ding, Miao; An, Hui; Hou, Yingping; Yang, Xiaojing; Wei, Wensheng; Sun, Yujie; Tang, Chao

    2017-01-01

    Tracking the dynamics of genomic loci is important for understanding the mechanisms of fundamental intracellular processes. However, fluorescent labeling and imaging of such loci in live cells have been challenging. One of the major reasons is the low signal-to-background ratio (SBR) of images mainly caused by the background fluorescence from diffuse full-length fluorescent proteins (FPs) in the living nucleus, hampering the application of live cell genomic labeling methods. Here, combining bimolecular fluorescence complementation (BiFC) and transcription activator-like effector (TALE) technologies, we developed a novel method for labeling genomic loci (BiFC-TALE), which largely reduces the background fluorescence level. Using BiFC-TALE, we demonstrated a significantly improved SBR by imaging telomeres and centromeres in living cells in comparison with the methods using full-length FP. PMID:28074901

  15. Live visualization of genomic loci with BiFC-TALE.

    PubMed

    Hu, Huan; Zhang, Hongmin; Wang, Sheng; Ding, Miao; An, Hui; Hou, Yingping; Yang, Xiaojing; Wei, Wensheng; Sun, Yujie; Tang, Chao

    2017-01-11

    Tracking the dynamics of genomic loci is important for understanding the mechanisms of fundamental intracellular processes. However, fluorescent labeling and imaging of such loci in live cells have been challenging. One of the major reasons is the low signal-to-background ratio (SBR) of images mainly caused by the background fluorescence from diffuse full-length fluorescent proteins (FPs) in the living nucleus, hampering the application of live cell genomic labeling methods. Here, combining bimolecular fluorescence complementation (BiFC) and transcription activator-like effector (TALE) technologies, we developed a novel method for labeling genomic loci (BiFC-TALE), which largely reduces the background fluorescence level. Using BiFC-TALE, we demonstrated a significantly improved SBR by imaging telomeres and centromeres in living cells in comparison with the methods using full-length FP.

  16. The Multi-Spectral Imaging Diagnostic on Alcator C-MOD and TCV

    NASA Astrophysics Data System (ADS)

    Linehan, B. L.; Mumgaard, R. T.; Duval, B. P.; Theiler, C. G.; TCV Team

    2017-10-01

    The Multi-Spectral Imaging (MSI) diagnostic is a new instrument that captures simultaneous spectrally filtered images from a common sight view while maintaining a large étendue and high spatial resolution. The system uses a polychromator layout where each image is sequentially filtered. This procedure yields a high transmission for each spectral channel with minimal vignetting and aberrations. A four-wavelength system was installed on Alcator C-Mod and then moved to TCV. The system uses industrial cameras to simultaneously image the divertor region at 95 frames per second at f/# 2.8 via a coherent fiber bundle (C-Mod) or a lens-based relay optic (TCV). The images are absolutely calibrated and spatially registered enabling accurate measurement of atomic line ratios and absolute line intensities. The images will be used to study divertor detachment by imaging impurities and Balmer series emissions. Furthermore, the large field of view and an ability to support many types of detectors opens the door for other novel approaches to optically measuring plasma with high temporal, spatial, and spectral resolution. Such measurements will allow for the study of Stark broadening and divertor turbulence. Here, we present the first measurements taken with this cavity imaging system. USDoE awards DE-FC02-99ER54512 and award DE-AC05-06OR23100, ORISE, administered by ORAU.

  17. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features.

    PubMed

    Li, Linyi; Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images.

  18. Fuzzy Classification of High Resolution Remote Sensing Scenes Using Visual Attention Features

    PubMed Central

    Xu, Tingbao; Chen, Yun

    2017-01-01

    In recent years the spatial resolutions of remote sensing images have been improved greatly. However, a higher spatial resolution image does not always lead to a better result of automatic scene classification. Visual attention is an important characteristic of the human visual system, which can effectively help to classify remote sensing scenes. In this study, a novel visual attention feature extraction algorithm was proposed, which extracted visual attention features through a multiscale process. And a fuzzy classification method using visual attention features (FC-VAF) was developed to perform high resolution remote sensing scene classification. FC-VAF was evaluated by using remote sensing scenes from widely used high resolution remote sensing images, including IKONOS, QuickBird, and ZY-3 images. FC-VAF achieved more accurate classification results than the others according to the quantitative accuracy evaluation indices. We also discussed the role and impacts of different decomposition levels and different wavelets on the classification accuracy. FC-VAF improves the accuracy of high resolution scene classification and therefore advances the research of digital image analysis and the applications of high resolution remote sensing images. PMID:28761440

  19. The importance of hippocampal dynamic connectivity in explaining memory function in multiple sclerosis.

    PubMed

    van Geest, Quinten; Hulst, Hanneke E; Meijer, Kim A; Hoyng, Lieke; Geurts, Jeroen J G; Douw, Linda

    2018-05-01

    Brain dynamics (i.e., variable strength of communication between areas), even at the scale of seconds, are thought to underlie complex human behavior, such as learning and memory. In multiple sclerosis (MS), memory problems occur often and have so far only been related to "stationary" brain measures (e.g., atrophy, lesions, activation and stationary (s) functional connectivity (FC) over an entire functional scanning session). However, dynamics in FC (dFC) between the hippocampus and the (neo)cortex may be another important neurobiological substrate of memory impairment in MS that has not yet been explored. Therefore, we investigated hippocampal dFC during a functional (f) magnetic resonance imaging (MRI) episodic memory task and its relationship with verbal and visuospatial memory performance outside the MR scanner. Thirty-eight MS patients and 29 healthy controls underwent neuropsychological tests to assess memory function. Imaging (1.5T) was obtained during performance of a memory task. We assessed hippocampal volume, functional activation, and sFC (i.e., FC of the hippocampus with the rest of the brain averaged over the entire scan, using an atlas-based approach). Dynamic FC of the hippocampus was calculated using a sliding window approach. No group differences were found in hippocampal activation, sFC, and dFC. However, stepwise forward regression analyses in patients revealed that lower dFC of the left hippocampus (standardized β = -0.30; p = .021) could explain an additional 7% of variance (53% in total) in verbal memory, in addition to female sex and larger left hippocampal volume. For visuospatial memory, lower dFC of the right hippocampus (standardized β = -0.38; p = .013) could explain an additional 13% of variance (24% in total) in addition to higher sFC of the right hippocampus. Low hippocampal dFC is an important indicator for maintained memory performance in MS, in addition to other hippocampal imaging measures. Hence, brain dynamics may offer new insights into the neurobiological mechanisms underlying memory (dys)function.
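
    The dynamic FC described here is commonly estimated with a sliding-window correlation. The sketch below is a minimal Python illustration of that idea, assuming a seed (e.g., hippocampal) time series and a target time series already extracted from the fMRI data; the window length and function names are placeholders, not the authors' exact settings.

        import numpy as np

        def sliding_window_fc(seed_ts, target_ts, win_len=30, step=1):
            """Correlation between seed and target inside successive windows;
            returns one r value per window position."""
            n = len(seed_ts)
            rs = []
            for start in range(0, n - win_len + 1, step):
                s = seed_ts[start:start + win_len]
                t = target_ts[start:start + win_len]
                rs.append(np.corrcoef(s, t)[0, 1])
            return np.asarray(rs)

        def dfc_variability(seed_ts, target_ts, **kwargs):
            """Dynamic FC is often summarized as the spread of the windowed r values."""
            return np.std(sliding_window_fc(seed_ts, target_ts, **kwargs))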

  20. Anomalous crater Marcia on asteroid 4 Vesta: Spectral signatures and their geological relationship

    NASA Astrophysics Data System (ADS)

    Giebner, T.; Jaumann, R.; Schroeder, S.; Krohn, K.

    2016-12-01

    DAWN Framing Camera (FC) images are used in this study to analyze the diverse spectral signatures of crater Marcia. As the FC offers high spatial resolution as well as several color filters, it is well suited to resolve geological correlations on Vesta's surface. Our approach comprises the analysis of images from four FC filters (F3, F4, F5, and F6) that cover the pyroxene absorption band at 0.9 um and the comparison of Vesta data with HED meteorite spectra. We use the ratios R750/915 (F3/F4) and R965/830 (F5/F6) [nm] to separate HED lithologies spectrally and depict corresponding areas on HAMO mosaics (60 m/px). Additionally, higher resolution LAMO images (20 m/px) are analyzed to reveal the geologic setting. In this work, Marcia is broadly classified into three spectral regions. The first region is located in the northwestern part of the crater as well as in the central peak area and shows the most HED-like signature within the Marcia region. The other two regions, one of which also covers Marcia ejecta, are spectrally further from HED lithologies and likely reflect mixing with more howardite-rich material associated with carbonaceous chondrite clasts and relatively higher OH and H concentrations (e.g., [1], [2], [3]). In general, these other two regions are also associated with thick flow features within the crater, while the HED-like area does not show such prominent flows. Hence, these darker regions seem to reflect post-impact inflow of the weathered howarditic surface regolith. We conclude that the Marcia impactor likely struck through the howarditic regolith and hit the eucritic crust underneath. When the HED-like signature is mapped globally, it resides mostly in the Rheasilvia basin and ejecta blanket, as well as in very young crater ejecta in the equatorial region, consistent with it being a signature of fresh basaltic crust. [1] M. C. De Sanctis et al. (2012b) The Astrophysical Journal Letters, 758:L36 (5pp) [2] T. McCord et al. (2012) Nature 491, 83-86 [3] T. H. Prettyman et al. (2012) Science 338, 242-246

  1. Spectral parameters for Dawn FC color data: Carbonaceous chondrites and aqueous alteration products as potential cerean analog materials

    NASA Astrophysics Data System (ADS)

    Schäfer, Tanja; Nathues, Andreas; Mengel, Kurt; Izawa, Matthew R. M.; Cloutis, Edward A.; Schäfer, Michael; Hoffmann, Martin

    2016-02-01

    We identified a set of spectral parameters based on Dawn Framing Camera (FC) bandpasses, covering the wavelength range 0.4-1.0 μm, for mineralogical mapping of potential chondritic material and aqueous alteration products on dwarf planet Ceres. Our parameters are inferred from laboratory spectra of well-described and clearly classified carbonaceous chondrites representative of a dark component. We additionally investigated the FC signatures of candidate bright materials including carbonates, sulfates and hydroxide (brucite), which can possibly be exposed on the cerean surface by impact craters or plume activity. Several materials mineralogically related to carbonaceous chondrites, including pure ferromagnesian phyllosilicates and serpentinites, were also investigated. We tested the potential of the derived FC parameters for distinguishing between different carbonaceous chondritic materials, and between other plausible cerean surface materials. We found that the major carbonaceous chondrite groups (CM, CO, CV, CK, and CR) are distinguishable using the FC filter ratios 0.56/0.44 μm and 0.83/0.97 μm. The absorption bands of Fe-bearing phyllosilicates at 0.7 and 0.9 μm in terrestrial samples and CM carbonaceous chondrites can be detected by a combination of FC band parameters using the filters at 0.65, 0.75, 0.83, 0.92 and 0.97 μm. This set of parameters serves as a basis to identify and distinguish different lithologies on the cerean surface using FC multispectral data.
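
    A hedged sketch of how ratio parameters such as 0.56/0.44 μm and 0.83/0.97 μm could be computed from co-registered, photometrically corrected FC filter images follows; the array names (f440, f560, f830, f970) are placeholders for the corresponding filter mosaics, and the exact parameter set is defined in the paper itself.

        import numpy as np

        def band_ratio(num_img, den_img, eps=1e-6):
            """Pixel-wise reflectance ratio of two co-registered filter images."""
            return num_img / np.maximum(den_img, eps)

        def ratio_parameters(f440, f560, f830, f970):
            """Two of the ratio parameters discussed above, as pixel maps."""
            return {
                "r_0.56/0.44": band_ratio(f560, f440),   # strength of the UV drop-off
                "r_0.83/0.97": band_ratio(f830, f970),   # proxy for the ~0.9 um band
            }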

  2. Soft computing approach to 3D lung nodule segmentation in CT.

    PubMed

    Badura, P; Pietka, E

    2014-10-01

    This paper presents a novel, multilevel approach to the segmentation of various types of pulmonary nodules in computed tomography studies. It is based on two branches of computational intelligence: fuzzy connectedness (FC) and evolutionary computation. First, the image and auxiliary data are prepared for the 3D FC analysis during the first stage of the algorithm - mask generation. Its main goal is to process specific types of nodules connected to the pleura or vessels. It consists of basic image processing operations as well as dedicated routines for specific nodule cases. Evolutionary computation is performed on the image and seed points in order to shorten the FC analysis and improve its accuracy. After the FC application, the remaining vessels are removed during the postprocessing stage. The method has been validated using the first dataset of studies acquired and described by the Lung Image Database Consortium (LIDC) and by its latest release - the LIDC-IDRI (Image Database Resource Initiative) database. Copyright © 2014 Elsevier Ltd. All rights reserved.
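
    The core fuzzy connectedness idea is that neighboring voxels are linked by an affinity, and an object grows from a seed along paths whose strength is that of their weakest link. The Python below is a minimal 2D illustration using a Dijkstra-like propagation; it is not the authors' multilevel pipeline, and the Gaussian affinity and 4-neighborhood are simplifying assumptions.

        import heapq
        import numpy as np

        def affinity(a, b, sigma=20.0):
            """Fuzzy affinity of two neighboring intensities: high when similar."""
            return np.exp(-((float(a) - float(b)) ** 2) / (2.0 * sigma ** 2))

        def fuzzy_connectedness(img, seed):
            """Connectedness map: for each pixel, the strength of the best path
            from the seed, where a path is as strong as its weakest link."""
            conn = np.zeros(img.shape)
            conn[seed] = 1.0
            heap = [(-1.0, seed)]
            offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]   # 4-neighborhood (2D sketch)
            while heap:
                neg_c, (y, x) = heapq.heappop(heap)
                c = -neg_c
                if c < conn[y, x]:
                    continue
                for dy, dx in offsets:
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < img.shape[0] and 0 <= nx < img.shape[1]:
                        cand = min(c, affinity(img[y, x], img[ny, nx]))
                        if cand > conn[ny, nx]:
                            conn[ny, nx] = cand
                            heapq.heappush(heap, (-cand, (ny, nx)))
            return conn   # threshold this map to obtain the segmented object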

  3. In vivo MRI-based simulation of fatigue process: a possible trigger for human carotid atherosclerotic plaque rupture.

    PubMed

    Huang, Yuan; Teng, Zhongzhao; Sadat, Umar; He, Jing; Graves, Martin J; Gillard, Jonathan H

    2013-04-23

    Atherosclerotic plaque is subjected to repetitive deformation due to arterial pulsatility during each cardiac cycle, and damage may accumulate over time, causing fibrous cap (FC) fatigue, which may ultimately lead to rupture. In this study, we investigate the fatigue process in human carotid plaques using in vivo carotid magnetic resonance (MR) imaging. Twenty-seven patients with atherosclerotic carotid artery disease were included in this study. Multi-sequence, high-resolution MR imaging was performed to depict the plaque structure. Twenty patients had a ruptured FC or ulceration and 7 did not. A modified Paris law was used to govern crack propagation, and the propagation direction was perpendicular to the maximum principal stress at the element node located at the vulnerable site. The predicted crack initiation sites in the 20 patients with an FC defect all matched the locations of the FC defects observed in vivo. Crack length increased rapidly with numerical steps. The natural logarithm of fatigue life decreased linearly with the local FC thickness (R(2) = 0.67). Plaques (n = 7) without an FC defect had a longer fatigue life than those with an FC defect (p = 0.03). The fatigue process seems to explain the development of cracks in the FC, which ultimately lead to plaque rupture.
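
    For readers unfamiliar with the Paris law mentioned above, the sketch below numerically integrates the generic form da/dN = C (ΔK)^m to obtain a fatigue life in cycles. It is only a schematic Python example with placeholder material constants and a simple stress-intensity expression; the study itself uses a modified Paris law driven by patient-specific, MRI-derived stresses.

        import numpy as np

        def fatigue_cycles(a0, a_crit, delta_sigma, C=1e-20, m=3.0, Y=1.12, n_steps=200000):
            """Integrate da/dN = C * (dK)**m with dK = Y * d_sigma * sqrt(pi * a)
            from an initial flaw size a0 to a critical size a_crit (SI units)."""
            a, cycles = a0, 0.0
            da = (a_crit - a0) / n_steps
            for _ in range(n_steps):
                dK = Y * delta_sigma * np.sqrt(np.pi * a)
                cycles += da / (C * dK ** m)
                a += da
            return cycles

        # e.g. a 10 um initial flaw growing to 200 um under a 50 kPa stress range
        print(f"{fatigue_cycles(a0=10e-6, a_crit=200e-6, delta_sigma=50e3):.3g} cycles")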

  4. Atlas-based fuzzy connectedness segmentation and intensity nonuniformity correction applied to brain MRI.

    PubMed

    Zhou, Yongxin; Bai, Jing

    2007-01-01

    A framework that combines atlas registration, fuzzy connectedness (FC) segmentation, and parametric bias field correction (PABIC) is proposed for the automatic segmentation of brain magnetic resonance imaging (MRI). First, the atlas is registered onto the MRI to initialize the subsequent FC segmentation. Original techniques are proposed to estimate the necessary initial parameters of the FC segmentation. The result of the FC segmentation is then used to initialize the PABIC algorithm. Finally, we re-apply the FC technique to the PABIC-corrected MRI to obtain the final segmentation. Thus, we avoid expert human intervention and provide a fully automatic method for brain MRI segmentation. Experiments on both simulated and real MRI images demonstrate the validity of the method, as well as its limitations. Being a fully automatic method, it is expected to find wide applications, such as three-dimensional visualization, radiation therapy planning, and medical database construction.

  5. Segregation of face sensitive areas within the fusiform gyrus using global signal regression? A study on amygdala resting-state functional connectivity.

    PubMed

    Kruschwitz, Johann D; Meyer-Lindenberg, Andreas; Veer, Ilya M; Wackerhagen, Carolin; Erk, Susanne; Mohnke, Sebastian; Pöhland, Lydia; Haddad, Leila; Grimm, Oliver; Tost, Heike; Romanczuk-Seiferth, Nina; Heinz, Andreas; Walter, Martin; Walter, Henrik

    2015-10-01

    The application of global signal regression (GSR) to resting-state functional magnetic resonance imaging data and its usefulness is a widely discussed topic. In this article, we report an observation of segregated distribution of amygdala resting-state functional connectivity (rs-FC) within the fusiform gyrus (FFG) as an effect of GSR in a multi-center-sample of 276 healthy subjects. Specifically, we observed that amygdala rs-FC was distributed within the FFG as distinct anterior versus posterior clusters delineated by positive versus negative rs-FC polarity when GSR was performed. To characterize this effect in more detail, post hoc analyses revealed the following: first, direct overlays of task-functional magnetic resonance imaging derived face sensitive areas and clusters of positive versus negative amygdala rs-FC showed that the positive amygdala rs-FC cluster corresponded best with the fusiform face area, whereas the occipital face area corresponded to the negative amygdala rs-FC cluster. Second, as expected from a hierarchical face perception model, these amygdala rs-FC defined clusters showed differential rs-FC with other regions of the visual stream. Third, dynamic connectivity analyses revealed that these amygdala rs-FC defined clusters also differed in their rs-FC variance across time to the amygdala. Furthermore, subsample analyses of three independent research sites confirmed reliability of the effect of GSR, as revealed by similar patterns of distinct amygdala rs-FC polarity within the FFG. In this article, we discuss the potential of GSR to segregate face sensitive areas within the FFG and furthermore discuss how our results may relate to the functional organization of the face-perception circuit. © 2015 Wiley Periodicals, Inc.

  6. The new grasper-integrated single use flexible cystoscope for double J stent removal: evaluation of image quality, flow and flexibility.

    PubMed

    Talso, M; Emiliani, E; Baghdadi, M; Orosa, A; Servian, P; Barreiro, A; Proietti, S; Traxer, O

    2017-08-01

    A new single-use digital flexible cystoscope (FC), Isiris α from Coloplast®, with an integrated grasper has been developed to perform double J stent removal. There is a lack of data comparing image quality, flexibility and flow between classic cystoscopes and the new Isiris α. Five different FCs were used to compare image quality, field of view, loss of flow and loss of deflection. Two standardized grids, three stones of different composition and a ruler were filmed in four different standardized scenarios. These videos were shown to thirty subjects, who evaluated them. Water outflow was measured in ml/sec in all devices with and without the grasper inside; instrument tip deflection was measured using software. In the subjective analysis of image quality, Isiris α was the second-best scored FC. At a distance of 3 cm, the field of view of Isiris α was the narrowest. Comparing water flow in the different FCs, we observed a decrease in all cystoscopes when the grasper was loaded in the working channel. Isiris α deflection and flow increase when the grasper is activated. In terms of quality of vision and water flow, the FC Isiris α is comparable to the other digital FCs tested. Its field of view is narrower. The results show a valid alternative to the standard procedure for double J stent removal.

  7. Altered Functional Connectivity Following an Inflammatory White Matter Injury in the Newborn Rat: A High Spatial and Temporal Resolution Intrinsic Optical Imaging Study

    PubMed Central

    Guevara, Edgar; Pierre, Wyston C.; Tessier, Camille; Akakpo, Luis; Londono, Irène; Lesage, Frédéric; Lodygensky, Gregory A.

    2017-01-01

    Very preterm newborns have an increased risk of developing an inflammatory cerebral white matter injury that may lead to severe neuro-cognitive impairment. In this study we performed functional connectivity (fc) analysis using resting-state optical imaging of intrinsic signals (rs-OIS) to assess the impact of inflammation on resting-state networks (RSN) in a pre-clinical model of perinatal inflammatory brain injury. Lipopolysaccharide (LPS) or saline injections were administered to postnatal day 3 (P3) rat pups, and optical imaging of intrinsic signals was obtained 3 weeks later. rs-OIS fc seed-based analyses, including spatial extent, were performed. A support vector machine (SVM) was then used to classify rat pups into two categories using fc measures and an artificial neural network (ANN) was implemented to predict lesion size from those same fc measures. A significant decrease in the spatial extent of fc statistical maps was observed in the injured group, across contrasts and seeds (*p = 0.0452 for HbO2 and **p = 0.0036 for HbR). Both machine learning techniques were applied successfully, yielding 92% accuracy in group classification and a significant correlation r = 0.9431 in fractional lesion volume prediction (**p = 0.0020). Our results suggest that fc is altered in the injured newborn brain, showing the long-standing effect of inflammation. PMID:28725174
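
    The group-classification step described above can be prototyped with a standard linear support vector machine. The Python sketch below uses scikit-learn with leave-one-out cross-validation on a placeholder subjects-by-features matrix of fc measures; the data, feature set, and validation scheme are illustrative assumptions, not the authors' exact implementation.

        import numpy as np
        from sklearn.model_selection import cross_val_score, LeaveOneOut
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # X: one row per rat pup, columns are fc measures (e.g. seed-based
        # correlation strengths and spatial extents); y: 0 = saline, 1 = LPS
        rng = np.random.default_rng(1)
        X = rng.normal(size=(24, 6))          # placeholder data
        y = np.array([0] * 12 + [1] * 12)

        clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
        acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
        print(f"leave-one-out accuracy: {acc:.2f}")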

  8. Intrinsic functional connectivity of the brain swallowing network during subliminal esophageal acid stimulation.

    PubMed

    Babaei, A; Siwiec, R M; Kern, M; Douglas Ward, B; Li, S-J; Shaker, R

    2013-12-01

    Intrinsic synchronous fluctuations of the functional magnetic resonance imaging signal are indicative of the underlying 'functional connectivity' (FC) and serve as a technique to study dynamics of the neuronal networks of the human brain. Earlier studies have characterized the functional connectivity of a distributed network of brain regions involved in swallowing, called brain swallowing network (BSN). The potential modulatory effect of esophageal afferent signals on the BSN, however, has not been systematically studied. Fourteen healthy volunteers underwent steady state functional magnetic resonance imaging across three conditions: (i) transnasal catheter placed in the esophagus without infusion; (ii) buffer solution infused at 1 mL/min; and (iii) acidic solution infused at 1 mL/min. Data were preprocessed according to the standard FC analysis pipeline. We determined the correlation coefficient values of pairs of brain regions involved in swallowing and calculated average group FC matrices across conditions. Effects of subliminal esophageal acidification and nasopharyngeal intubation were determined. Subliminal esophageal acid stimulation augmented the overall FC of the right anterior insula and specifically the FC to the left inferior parietal lobule. Conscious stimulation by nasopharyngeal intubation reduced the overall FC of the right posterior insula, particularly the FC to the right prefrontal operculum. The FC of BSN is amenable to modulation by sensory input. The modulatory effect of sensory pharyngoesophageal stimulation on BSN is mainly mediated through changes in the FC of the insula. The alteration induced by subliminal visceral esophageal acid stimulation is in different insular connections compared with that of conscious somatic pharyngeal stimulation. © 2013 John Wiley & Sons Ltd.

  9. Unmanned aerial systems-based remote sensing for monitoring sorghum growth and development

    PubMed Central

    Shafian, Sanaz; Schnell, Ronnie; Bagavathiannan, Muthukumar; Valasek, John; Shi, Yeyin; Olsenholler, Jeff

    2018-01-01

    Unmanned Aerial Vehicles and Systems (UAV or UAS) have become increasingly popular in recent years for agricultural research applications. UAS are capable of acquiring images with high spatial and temporal resolutions that are ideal for applications in agriculture. The objective of this study was to evaluate the performance of a UAS-based remote sensing system for quantification of crop growth parameters of sorghum (Sorghum bicolor L.) including leaf area index (LAI), fractional vegetation cover (fc) and yield. The study was conducted at the Texas A&M Research Farm near College Station, Texas, United States. A fixed-wing UAS equipped with a multispectral sensor was used to collect image data during the 2016 growing season (April–October). Flight missions were successfully carried out at 50 days after planting (DAP; 25 May), 66 DAP (10 June) and 74 DAP (18 June). These flight missions provided image data covering the middle growth period of sorghum with a spatial resolution of approximately 6.5 cm. Field measurements of LAI and fc were also collected. Four vegetation indices were calculated using the UAS images. Among those indices, the normalized difference vegetation index (NDVI) showed the highest correlation with LAI, fc and yield with R2 values of 0.91, 0.89 and 0.58 respectively. Empirical relationships between NDVI and LAI and between NDVI and fc were validated and proved to be accurate for estimating LAI and fc from UAS-derived NDVI values. NDVI determined from UAS imagery acquired during the flowering stage (74 DAP) was found to be the most highly correlated with final grain yield. The observed high correlations between UAS-derived NDVI and the crop growth parameters (fc, LAI and grain yield) suggests the applicability of UAS for within-season data collection of agricultural crops such as sorghum. PMID:29715311
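
    The NDVI used here has a standard definition, (NIR - red) / (NIR + red), and the reported empirical relationships are linear fits between UAS-derived NDVI and field measurements. The Python sketch below shows both steps with illustrative placeholder numbers; the actual coefficients come from the study's field data.

        import numpy as np

        def ndvi(nir, red, eps=1e-6):
            """Normalized difference vegetation index from NIR and red reflectance."""
            return (nir - red) / np.maximum(nir + red, eps)

        # Fit a linear NDVI-to-LAI relation from plot means (placeholder values)
        ndvi_means = np.array([0.45, 0.52, 0.61, 0.70, 0.78])
        lai_field = np.array([1.2, 1.8, 2.6, 3.3, 4.1])
        slope, intercept = np.polyfit(ndvi_means, lai_field, deg=1)

        lai_estimate = slope * 0.66 + intercept   # predict LAI for a plot with NDVI = 0.66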

  10. Nature of the "Orange" Material on Vesta From Dawn

    NASA Technical Reports Server (NTRS)

    LeCorre, L.; Reddy, V.; Schmedemann, N.; Becker, K. J.; O'Brien, D. P.; Yamashita, N.; Peplowski, P. N.; Prettyman, T. H.; Li, J.-Y.; Cloutis, E. A.

    2014-01-01

    From ground-based observations of Vesta, it is well known that the vestan surface has a large variation in albedo. Analysis of images acquired by the Hubble Space Telescope allowed production of the first color maps of Vesta and showed a diverse surface in terms of reflectance. Thanks to images collected by the Dawn spacecraft at Vesta, it became obvious that these specific units observed previously can be linked to geological features. The presence of the darkest material mostly around impact craters and scattered in the western hemisphere has been associated with carbonaceous chondrite contamination [4], whereas the brightest materials are believed to result from exposure of unaltered material from the subsurface of Vesta (in fresh-looking impact crater rims and in Rheasilvia's ejecta and rim remnants). Here we focus on a distinct material characterized by a steep slope in the near-IR relative to all other kinds of materials found on Vesta. It was first detected when combining Dawn Framing Camera (FC) color images in Clementine false-color composites [5] during the Approach phase of the mission (100,000 to 5200 km from Vesta). We investigate the mineralogical and elemental composition of this material and its relationship with the HEDs (Howardite-Eucrite-Diogenite group of meteorites).

  11. A three-dimensional insight into the complexity of flow convergence in mitral regurgitation: adjunctive benefit of anatomic regurgitant orifice area.

    PubMed

    Chandra, Sonal; Salgo, Ivan S; Sugeng, Lissa; Weinert, Lynn; Settlemier, Scott H; Mor-Avi, Victor; Lang, Roberto M

    2011-09-01

    Mitral effective regurgitant orifice area (EROA) derived using the flow convergence (FC) method is used to quantify the severity of mitral regurgitation (MR). However, it is challenging and prone to interobserver variability in complex valvular pathology. We hypothesized that real-time three-dimensional (3D) transesophageal echocardiography (RT3D TEE)-derived anatomic regurgitant orifice area (AROA) can be a reasonable adjunct, irrespective of valvular geometry. Our goals were (1) to determine the regurgitant orifice morphology and distance suitable for FC measurement using 3D computational fluid dynamics and finite element analysis (FEA), and (2) to measure AROA from RT3D TEE and compare it with 2D FC-derived EROA measurements. We studied 61 patients. EROA was calculated from 2D TEE images using the 2D-FC technique, and AROA was obtained from zoomed RT3D TEE acquisitions using prototype software. 3D computational fluid dynamics by FEA were applied to 3D TEE images to determine the effects of mitral valve (MV) orifice geometry on FC pattern. 3D FEA analysis revealed that a central regurgitant orifice is suitable for FC measurements at an optimal distance from the orifice, but complex MV orifices resulting in eccentric jets yielded nonaxisymmetric isovelocity contours close to the orifice, where the assumptions underlying FC are problematic. EROA and AROA measurements correlated well (r = 0.81) with a nonsignificant bias. However, in patients with eccentric MR, the bias was larger than in central MR. Intermeasurement variability was higher for the 2D FC technique than for RT3DE-based measurements. With its superior reproducibility, 3D analysis of the AROA is a useful alternative to quantify MR when 2D FC measurements are challenging.
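
    For context, the 2D FC (proximal isovelocity surface area) calculation that the study compares against assumes a hemispheric isovelocity shell: flow rate is 2*pi*r^2*Va, and EROA is that flow divided by the peak regurgitant velocity. The Python below is a minimal sketch of this textbook relation with illustrative numbers; it is exactly the hemispheric assumption that the authors show breaks down for eccentric jets.

        import math

        def eroa_2d_fc(pisa_radius_cm, aliasing_velocity_cms, peak_mr_velocity_cms):
            """2D flow-convergence EROA: flow through a hemispheric isovelocity
            shell divided by the peak regurgitant (CW Doppler) velocity."""
            flow_rate = 2.0 * math.pi * pisa_radius_cm ** 2 * aliasing_velocity_cms  # mL/s
            return flow_rate / peak_mr_velocity_cms                                  # cm^2

        # e.g. r = 0.9 cm, Va = 40 cm/s, peak MR velocity = 500 cm/s -> ~0.41 cm^2
        print(round(eroa_2d_fc(0.9, 40.0, 500.0), 2))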

  12. A new gamma ray imaging diagnostic for runaway electron studies at DIII-D

    NASA Astrophysics Data System (ADS)

    Cooper, C. M.; Pace, D. C.; Eidietis, N. W.; Paz-Soldan, C.; Commaux, N.; Shiraki, D.; Hollmann, E. M.; Moyer, R. A.; Risov, V.

    2015-11-01

    A new Gamma Ray Imager (GRI) is developed to probe the electron distribution function with 2D spatial resolution during runaway electron (RE) experiments at DIII-D. The diagnostic is sensitive to 0.5 - 50 MeV gamma rays, allowing characterization of the RE distribution function evolution during RE dissipation from pellet injection. The GRI consists of a lead "pinhole camera" mounted on the midplane with 11x11 counter-current tangential chords 20 cm wide that span the vessel. Up to 30 bismuth germanate (BGO) scintillation detectors capture RE Bremsstrahlung radiation. Detectors operate in current saturation mode at 10 MHz, or the flux is attenuated for Pulse Height Analysis (PHA) capable of discriminating up to ~10k pulses per second. Digital signal processing routines combining shaping filters are performed during PHA to reject noise and record gamma ray energy. The GRI setup and PHA algorithms will be described and initial data from experiments will be presented. Work supported by the US DOE under DE-AC05-00OR22725, DE-FG02-07ER54917 & DE-FC02-04ER54698.

  13. Investigation of runaway electron dissipation in DIII-D using a gamma ray imager

    NASA Astrophysics Data System (ADS)

    Lvovskiy, A.; Paz-Soldan, C.; Eidietis, N.; Pace, D.; Taussig, D.

    2017-10-01

    We report the findings of a novel gamma ray imager (GRI) used to study runaway electron (RE) dissipation in the quiescent regime on the DIII-D tokamak. The GRI measures the bremsstrahlung emission by RE, providing information on the RE energy spectrum and distribution across a poloidal cross-section. It consists of a lead pinhole camera illuminating a matrix of BGO detectors placed in the DIII-D mid-plane. The number of detectors was recently doubled to provide better spatial resolution, and additional detector shielding was implemented to reduce un-collimated gamma flux and increase the signal-to-noise ratio. Under varying loop voltage, toroidal magnetic field and plasma density, a non-monotonic RE distribution function has been revealed as a result of the interplay between electric field, synchrotron radiation and collisional damping. A fraction of the high-energy RE population grows, forming a bump in the RE distribution function, while synchrotron radiation decreases. A possible destabilizing effect of the Parail-Pogutse instability on the RE population will also be discussed. Work supported by the US DOE under DE-FC02-04ER54698.

  14. Quantification of instantaneous flow rate and dynamically changing effective orifice area using a geometry independent three-dimensional digital color Doppler method: An in vitro study mimicking mitral regurgitation.

    PubMed

    Li, Xiaokui; Wanitkun, Suthep; Li, Xiang-Ning; Hashimoto, Ikuo; Mori, Yoshiki; Rusk, Rosemary A; Hicks, Shannon E; Sahn, David J

    2002-10-01

    Our study was intended to test the accuracy of a 3-dimensional (3D) digital color Doppler flow convergence (FC) method for assessing the effective orifice area (EOA) in a new dynamic orifice model mimicking a variety of mitral regurgitation. FC surface area methods for detecting EOA have been reported to be useful for quantifying the severity of valvular regurgitation. With our new 3D digital direct FC method, all raw velocity data are available and variable Nyquist limits can be selected for computation of direct FC surface area for computing instantaneous flow rate and temporal change of EOA. A 7.0-MHz multiplane transesophageal probe from an ultrasound system (ATL HDI 5000) was linked and controlled by a computer workstation to provide 3D images. Three differently shaped latex orifices (zigzag, arc, and straight slit, each with cutting-edge length of 1 cm) were used to mimic the dynamic orifice of mitral regurgitation. 3D FC surface computation was performed on parallel slices through the 3D data set at aliasing velocities (14-48 cm/s) selected to maximize the regularity and minimize lateral dropout of the visualized 3D FC at 5 points per cardiac cycle. Using continuous wave velocity for each, 3D-calculated EOA was compared with EOA determined by using continuous wave Doppler and the flow rate from a reference ultrasonic flow meter. Simultaneous digital video images were also recorded to define the actual orifice size for 9 stroke volumes (15-55 mL/beat with maximum flow rates 45-182 mL/s). Over the 9 pulsatile flow states and 3 orifices, 3D FC EOAs (0.05-0.63 cm(2)) from different phases of the cardiac cycle in each pump setting correlated well with reference EOA (r = 0.89-0.92, SEE = 0.027-0.055cm(2)) and they also correlated well with digital video images of the actual orifice peak (r = 0.97-0.98, SEE = 0.016-0.019 cm(2)), although they were consistently smaller, as expected by the contraction coefficient. The digital 3D FC method can accurately predict flow rate, and, thus, EOA (in conjunction with continuous wave Doppler), because it allows direct FC surface measurement despite temporal variability of FC shape.

  15. Relating Structure and Function in the Human Brain: Relative Contributions of Anatomy, Stationary Dynamics, and Non-stationarities

    PubMed Central

    Messé, Arnaud; Rudrauf, David; Benali, Habib; Marrelec, Guillaume

    2014-01-01

    Investigating the relationship between brain structure and function is a central endeavor for neuroscience research. Yet, the mechanisms shaping this relationship largely remain to be elucidated and are highly debated. In particular, the existence and relative contributions of anatomical constraints and dynamical physiological mechanisms of different types remain to be established. We addressed this issue by systematically comparing functional connectivity (FC) from resting-state functional magnetic resonance imaging data with simulations from increasingly complex computational models, and by manipulating anatomical connectivity obtained from fiber tractography based on diffusion-weighted imaging. We hypothesized that FC reflects the interplay of at least three types of components: (i) a backbone of anatomical connectivity, (ii) a stationary dynamical regime directly driven by the underlying anatomy, and (iii) other stationary and non-stationary dynamics not directly related to the anatomy. We showed that anatomical connectivity alone accounts for up to 15% of FC variance; that there is a stationary regime accounting for up to an additional 20% of variance and that this regime can be associated to a stationary FC; that a simple stationary model of FC better explains FC than more complex models; and that there is a large remaining variance (around 65%), which must contain the non-stationarities of FC evidenced in the literature. We also show that homotopic connections across cerebral hemispheres, which are typically improperly estimated, play a strong role in shaping all aspects of FC, notably indirect connections and the topographic organization of brain networks. PMID:24651524

  16. Gender transition affects neural correlates of empathy: A resting state functional connectivity study with ultra high-field 7T MR imaging.

    PubMed

    Spies, M; Hahn, A; Kranz, G S; Sladky, R; Kaufmann, U; Hummer, A; Ganger, S; Kraus, C; Winkler, D; Seiger, R; Comasco, E; Windischberger, C; Kasper, S; Lanzenberger, R

    2016-09-01

    Sex-steroid hormones have repeatedly been shown to influence empathy, which is in turn reflected in resting state functional connectivity (rsFC). Cross-sex hormone treatment in transgender individuals provides the opportunity to examine changes to rsFC over gender transition. We aimed to investigate whether sex-steroid hormones influence rsFC patterns related to unique aspects of empathy, namely emotion recognition and description as well as emotional contagion. RsFC data was acquired with 7Tesla magnetic resonance imaging in 24 male-to-female (MtF) and 33 female-to-male (FtM) transgender individuals before treatment, in addition to 33 male- and 44 female controls. Of the transgender participants, 15 MtF and 20 FtM were additionally assessed after 4 weeks and 4 months of treatment. Empathy scores were acquired at the same time-points. MtF differed at baseline from all other groups and assimilated over the course of gender transition in a rsFC network around the supramarginal gyrus, a region central to interpersonal emotion processing. While changes to sex-steroid hormones did not correlate with rsFC in this network, a sex hormone independent association between empathy scores and rsFC was found. Our results underline that 1) MtF transgender persons demonstrate unique rsFC patterns in a network related to empathy and 2) changes within this network over gender transition are likely related to changes in emotion recognition, -description, and -contagion, and are sex-steroid hormone independent. Copyright © 2016 Elsevier Inc. All rights reserved.

  17. Bimolecular fluorescence complementation: lighting up seven transmembrane domain receptor signalling networks

    PubMed Central

    Rose, Rachel H; Briddon, Stephen J; Holliday, Nicholas D

    2010-01-01

    There is increasing complexity in the organization of seven transmembrane domain (7TM) receptor signalling pathways, and in the ability of their ligands to modulate and direct this signalling. Underlying these events is a network of protein interactions between the 7TM receptors themselves and associated effectors, such as G proteins and β-arrestins. Bimolecular fluorescence complementation, or BiFC, is a technique capable of detecting these protein–protein events essential for 7TM receptor function. Fluorescent proteins, such as those from Aequorea victoria, are split into two non-fluorescent halves, which then tag the proteins under study. On association, these fragments refold and regenerate a mature fluorescent protein, producing a BiFC signal indicative of complex formation. Here, we review the experimental criteria for successful application of BiFC, considered in the context of 7TM receptor signalling events such as receptor dimerization, G protein and β-arrestin signalling. The advantages and limitations of BiFC imaging are compared with alternative resonance energy transfer techniques. We show that the essential simplicity of the fluorescent BiFC measurement allows high-content and advanced imaging applications, and that it can probe more complex multi-protein interactions alone or in combination with resonance energy transfer. These capabilities suggest that BiFC techniques will become ever more useful in the analysis of ligand and 7TM receptor pharmacology at the molecular level of protein–protein interactions. This article is part of a themed section on Imaging in Pharmacology. To view the editorial for this themed section visit http://dx.doi.org/10.1111/j.1476-5381.2010.00685.x PMID:20015298

  18. Variability in Cumulative Habitual Sleep Duration Predicts Waking Functional Connectivity.

    PubMed

    Khalsa, Sakh; Mayhew, Stephen D; Przezdzik, Izabela; Wilson, Rebecca; Hale, Joanne; Goldstone, Aimee; Bagary, Manny; Bagshaw, Andrew P

    2016-01-01

    We examined whether interindividual differences in habitual sleep patterns, quantified as the cumulative habitual total sleep time (cTST) over a 2-week period, were reflected in waking measurements of intranetwork and internetwork functional connectivity (FC) between major nodes of three intrinsically connected networks (ICNs): default mode network (DMN), salience network (SN), and central executive network (CEN). This was a resting state functional magnetic resonance imaging (fMRI) study using seed-based FC analysis combined with 14-d wrist actigraphy, sleep diaries, and subjective questionnaires (N = 33 healthy adults, mean age 34.3, standard deviation ± 11.6 y). Data were statistically analyzed using multiple linear regression. Participants underwent fourteen consecutive days of wrist actigraphy in their home environment and fMRI scanning on day 14 at the Birmingham University Imaging Centre. Seed-based FC analysis was performed on ICNs from resting-state fMRI data, and multiple linear regression analysis was performed for each ICN seed and target. cTST was used to predict FC (controlling for age). cTST was a specific predictor of intranetwork FC when the mesial prefrontal cortex (MPFC) region of the DMN was used as a seed for FC, with a positive correlation between FC and cTST observed. No significant relationship between FC and cTST was seen for any pair of nodes not including the MPFC. Internetwork FC between the DMN (MPFC) and SN (right anterior insula) was also predicted by cTST, with a negative correlation observed between FC and cTST. This study improves understanding of the relationship between intranetwork and internetwork functional connectivity of intrinsically connected networks (ICNs) in relation to habitual sleep quality and duration. The cumulative amount of sleep that participants achieved over a 14-d period was significantly predictive of intranetwork and internetwork functional connectivity of ICNs, an observation that may underlie the link between sleep status and cognitive performance. © 2016 Associated Professional Sleep Societies, LLC.

  19. Ceres' Yellow Spots - Observations with Dawn Framing Camera

    NASA Astrophysics Data System (ADS)

    Schäfer, Michael; Schäfer, Tanja; Cloutis, Edward A.; Izawa, Matthew R. M.; Platz, Thomas; Castillo-Rogez, Julie C.; Hoffmann, Martin; Thangjam, Guneshwar S.; Kneissl, Thomas; Nathues, Andreas; Mengel, Kurt; Williams, David A.; Kallisch, Jan; Ripken, Joachim; Russell, Christopher T.

    2016-04-01

    The Framing Camera (FC) onboard the Dawn spacecraft acquired several spectral data sets of (1) Ceres with increasing spatial resolution (up to 135 m/pixel with nearly global coverage). The FC is equipped with seven color filters (0.4-1.0 μm) plus one panchromatic ('clear') filter [1]. We produced spectral mosaics using photometrically corrected FC color filter images as described in [2]. Even early FC color mosaics obtained during Dawn's approach unexpectedly exhibited quite a diversity of surface materials on Ceres. Besides the ordinary cerean surface material, potentially composed of ammoniated phyllosilicates [3] or some other alteration product of carbonaceous chondrites [4], a large number of bright spots were found on Ceres [5]. These spots are substantially brighter than the average surface (exceeding its triple standard deviation), with the spots within Occator crater being the brightest and most prominent examples (reflectance more than 10 times the average of Ceres). We also observed bright spots that are distinguished by an obvious yellow color. This yellow color appears both in a 'true color' RGB display (R=0.65, G=0.55, B=0.44 μm) and in a false color display (R=0.97, G=0.75, B=0.44 μm) using a linear 2% stretch. Their spectra show a steep red slope between 0.44 and 0.55 μm (UV drop-off). In contrast to these yellow spots, the vast majority of bright spots appear white in the aforementioned color displays and exhibit blue-sloped spectra, except for a shallow UV drop-off. Thus, yellow spots are easily distinguishable from white spots and the remaining cerean surface by their high values in the ratio 0.55/0.44 μm. We found 8 occurrences of yellow spots on Ceres. Most of them (>70 individual spots) occur both inside and outside crater Dantu, where white spots are also found in the immediate vicinity. Besides Dantu, further occurrences with only a few yellow spots were found at craters Ikapati and Gaue. Less definite occurrences are found at 97°E/24°N, 205°E/22°S, 244°E/31°S, 213°E/37.5°S, and at Azacca crater. Often, the yellow spots exhibit well-defined boundaries, but sometimes we found a fainter diffuse yellow tinge around them, enclosing several individual yellow spots. Rarely, they are associated with mass wasting on steep slopes, most notably on the SE crater wall of Dantu. Recently acquired clear filter images with 35 m/pixel resolution indicate that only a small number of yellow spots are situated near craters. These craters could also be interpreted as pits probably formed by exhalation vents. More frequently, we found yellow spots linked to small positive landforms. Only a few of the yellow spots seem to be related to crater floor fractures. As with white bright spots, which were interpreted as evaporite deposits of magnesium-sulfate salts [5], the yellow spots appear to emerge from the sub-surface as a result of material transport, possibly driven by sublimation of ice [5], where vents or cracks penetrate the insulating lag deposits. However, in contrast to the white spots, a different mineralogy seems to have emerged at yellow spots. First comparisons of FC spectra with laboratory spectra indicate pyrite/marcasite as a possible component. The relatively strong UV drop-off may at least indicate some kind of sulfide- or sulfur-bearing mixture. 
As identifications of minerals based on FC spectra are often ambiguous, further investigations with high-resolution data yet to come from Dawn's VIR spectrometer may shed light on the compositional differences between yellow and white bright spots. References: [1] Sierks, H. et al., Space Sci. Rev., 163, 263-327, 2011. [2] Schäfer, M. et al., EPSC, Vol. 10, #488, 2015. [3] De Sanctis, M. C. et al., Nature 528, 241-244, 2015. [4] Schäfer, T. et al., EGU, #12370, 2016. [5] Nathues, A. et al., Nature 528, 237-240, 2015.

  20. Phases of Hyperconnectivity and Hypoconnectivity in the Default Mode and Salience Networks Track with Amyloid and Tau in Clinically Normal Individuals

    PubMed Central

    Hedden, Trey; Mormino, Elizabeth C.; Huijbers, Willem; LaPoint, Molly; Buckley, Rachel F.

    2017-01-01

    Alzheimer's disease (AD) is characterized by two hallmark molecular pathologies: amyloid aβ1–42 and Tau neurofibrillary tangles. To date, studies of functional connectivity MRI (fcMRI) in individuals with preclinical AD have relied on associations with in vivo measures of amyloid pathology. With the recent advent of in vivo Tau-PET tracers it is now possible to extend investigations on fcMRI in a sample of cognitively normal elderly humans to regional measures of Tau. We modeled fcMRI measures across four major cortical association networks [default-mode network (DMN), salience network (SAL), dorsal attention network, and frontoparietal control network] as a function of global cortical amyloid [Pittsburgh Compound B (PiB)-PET] and regional Tau (AV1451-PET) in entorhinal, inferior temporal (IT), and inferior parietal cortex. Results showed that the interaction term between PiB and IT AV1451 was significantly associated with connectivity in the DMN and salience. The interaction revealed that amyloid-positive (aβ+) individuals show increased connectivity in the DMN and salience when neocortical Tau levels are low, whereas aβ+ individuals demonstrate decreased connectivity in these networks as a function of elevated Tau-PET signal. This pattern suggests a hyperconnectivity phase followed by a hypoconnectivity phase in the course of preclinical AD. SIGNIFICANCE STATEMENT This article offers a first look at the relationship between Tau-PET imaging with F18-AV1451 and functional connectivity MRI (fcMRI) in the context of amyloid-PET imaging. The results suggest a nonlinear relationship between fcMRI and both Tau-PET and amyloid-PET imaging. The pattern supports recent conjecture that the AD fcMRI trajectory is characterized by periods of both hyperconnectivity and hypoconnectivity. Furthermore, this nonlinear pattern can account for the sometimes conflicting reports of associations between amyloid and fcMRI in individuals with preclinical Alzheimer's disease. PMID:28314821
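
    The central statistical object in this record is an interaction term: connectivity modeled as a function of amyloid, Tau, and their product. A minimal Python sketch of such a model with statsmodels is shown below; the data frame columns and random placeholder values are illustrative only and do not reproduce the study's measurements.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Placeholder subject-level measures: DMN connectivity, global amyloid
        # (PiB-PET) and inferior temporal Tau (AV1451-PET)
        rng = np.random.default_rng(2)
        df = pd.DataFrame({
            "dmn_fc": rng.normal(size=90),
            "pib": rng.normal(size=90),
            "it_tau": rng.normal(size=90),
        })

        # 'pib * it_tau' expands to pib + it_tau + pib:it_tau; the interaction
        # coefficient captures the amyloid-dependent effect of Tau on connectivity
        model = smf.ols("dmn_fc ~ pib * it_tau", data=df).fit()
        print(model.params["pib:it_tau"], model.pvalues["pib:it_tau"])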

  1. Phases of Hyperconnectivity and Hypoconnectivity in the Default Mode and Salience Networks Track with Amyloid and Tau in Clinically Normal Individuals.

    PubMed

    Schultz, Aaron P; Chhatwal, Jasmeer P; Hedden, Trey; Mormino, Elizabeth C; Hanseeuw, Bernard J; Sepulcre, Jorge; Huijbers, Willem; LaPoint, Molly; Buckley, Rachel F; Johnson, Keith A; Sperling, Reisa A

    2017-04-19

    Alzheimer's disease (AD) is characterized by two hallmark molecular pathologies: amyloid aβ 1-42 and Tau neurofibrillary tangles. To date, studies of functional connectivity MRI (fcMRI) in individuals with preclinical AD have relied on associations with in vivo measures of amyloid pathology. With the recent advent of in vivo Tau-PET tracers it is now possible to extend investigations on fcMRI in a sample of cognitively normal elderly humans to regional measures of Tau. We modeled fcMRI measures across four major cortical association networks [default-mode network (DMN), salience network (SAL), dorsal attention network, and frontoparietal control network] as a function of global cortical amyloid [Pittsburgh Compound B (PiB)-PET] and regional Tau (AV1451-PET) in entorhinal, inferior temporal (IT), and inferior parietal cortex. Results showed that the interaction term between PiB and IT AV1451 was significantly associated with connectivity in the DMN and salience. The interaction revealed that amyloid-positive (aβ + ) individuals show increased connectivity in the DMN and salience when neocortical Tau levels are low, whereas aβ + individuals demonstrate decreased connectivity in these networks as a function of elevated Tau-PET signal. This pattern suggests a hyperconnectivity phase followed by a hypoconnectivity phase in the course of preclinical AD. SIGNIFICANCE STATEMENT This article offers a first look at the relationship between Tau-PET imaging with F 18 -AV1451 and functional connectivity MRI (fcMRI) in the context of amyloid-PET imaging. The results suggest a nonlinear relationship between fcMRI and both Tau-PET and amyloid-PET imaging. The pattern supports recent conjecture that the AD fcMRI trajectory is characterized by periods of both hyperconnectivity and hypoconnectivity. Furthermore, this nonlinear pattern can account for the sometimes conflicting reports of associations between amyloid and fcMRI in individuals with preclinical Alzheimer's disease. Copyright © 2017 the authors 0270-6474/17/374324-09$15.00/0.

  2. In vivo quantitative visualization of hypochlorous acid in the liver using a novel selective two-photon fluorescent probe

    NASA Astrophysics Data System (ADS)

    Wang, Haolu; Jayachandran, Aparna; Gravot, Germain; Liang, Xiaowen; Thorling, Camilla A.; Zhang, Run; Liu, Xin; Roberts, Michael S.

    2016-11-01

    Hypochlorous acid (HOCl) plays a vital role in physiological events and diseases. During hepatic ischemia-reperfusion (I/R) injury, HOCl is generated by neutrophils and diffuses into hepatocytes, causing oxidant stress-mediated injury. Although many probes have been developed to detect HOCl, most were difficult to be distinguished from endogenous fluorophores in intravital imaging and only can be employed under one-photon microscopy. A novel iridium(III) complex-based ferrocene dual-signaling chemosensor (Ir-Fc) was designed and synthesized. Ir-Fc exhibited a strong positive fluorescent response only in the presence of HOCl, whereas negligible fluorescent signals were observed upon the additions of other reactive oxygen/nitrogen species and metal ions. There was a good linear relationship between probe responsive fluorescent intensity and HOCl concentration. Ir-Fc was then intravenously injected into BALB/c mice at the final concentration of 50 μM and the mouse livers were imaged using multiphoton microscopy (MPM). In the I/R liver, reduced autofluorescence was detected by MPM, indicating the hepatocyte necrosis. Remarkable enhancement of red fluorescence was observed in hepatocytes with decreased autofluorescence, indicating the reaction of Ir-Fc with endogenous HOCl molecules. The cellular concentration of HOCl was first calculated based on the intensity of MPM images. No obvious toxic effects were observed in histological examination of major organs after Ir-Fc injection. In summary, Ir-Fc has low cytotoxicity, high specificity to HOCl, and rapid "off-on" fluorescence. It is suitable for dynamic quantitatively monitoring HOCl generation using MPM at the cellular level. This technique can be readily extended to examination of liver diseases and injury.
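
    The quantification step described above rests on the reported linear relationship between probe fluorescence and HOCl concentration. Below is a minimal sketch of such a calibration, fit on assumed standards and inverted to convert a mean region-of-interest intensity into an estimated concentration; all numbers are placeholders, not the probe's actual calibration data.

```python
# A minimal sketch of a linear fluorescence-to-concentration calibration;
# all values are placeholders.
import numpy as np

hocl_uM   = np.array([0, 5, 10, 20, 40, 80])    # standard HOCl concentrations (assumed)
intensity = np.array([3, 9, 16, 30, 57, 110])   # mean probe fluorescence (a.u., assumed)

slope, intercept = np.polyfit(hocl_uM, intensity, 1)   # I = slope * C + intercept

def estimate_hocl(mean_roi_intensity):
    """Invert the calibration line for a region of interest drawn on an MPM image."""
    return (mean_roi_intensity - intercept) / slope

print(f"Estimated [HOCl] at I = 45 a.u.: {estimate_hocl(45):.1f} uM")
```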

  3. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia

    PubMed Central

    Kim, Junghoe; Calhoun, Vince D.; Shim, Eunsoo; Lee, Jong-Hwan

    2015-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. PMID:25987366
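
    The following sketch illustrates the core idea of weight-sparsity control via an L1 penalty in a small fully connected classifier, assuming PyTorch. Layer sizes, the penalty weight, and the toy data are assumptions, and the stacked-autoencoder pre-training described above is omitted.

```python
# A minimal sketch of L1 weight-sparsity control in a small fully connected classifier.
import torch
import torch.nn as nn

n_features = 5050   # e.g., upper triangle of a 101-region FC matrix (assumed size)
model = nn.Sequential(
    nn.Linear(n_features, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 50), nn.ReLU(),
    nn.Linear(50, 2),
)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
l1_lambda = 1e-4    # sparsity strength (assumed)

X = torch.randn(64, n_features)    # toy batch of FC feature vectors
y = torch.randint(0, 2, (64,))     # toy labels (e.g., SZ vs HC)

for epoch in range(5):
    optimizer.zero_grad()
    logits = model(X)
    # Explicit L1 penalty on all weight matrices encourages sparse connections.
    l1 = sum(p.abs().sum() for name, p in model.named_parameters() if "weight" in name)
    loss = criterion(logits, y) + l1_lambda * l1
    loss.backward()
    optimizer.step()
    print(epoch, float(loss))
```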

  4. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, Mi; University of Chinese Academy of Sciences, Beijing 100049; Liu, Lianqing, E-mail: lqliu@sia.cn

    Highlights: •Nanoscale cellular ultra-structures of macrophages were observed. •The binding affinities of FcγRs were measured directly on macrophages. •The nanoscale distributions of FcγRs were mapped on macrophages. -- Abstract: Fc gamma receptors (FcγR), widely expressed on effector cells (e.g., NK cells, macrophages), play an important role in clinical cancer immunotherapy. The binding of FcγRs to the Fc portions of antibodies that are attached to the target cells can activate the antibody-dependent cell-mediated cytotoxicity (ADCC) killing mechanism which leads to the lysis of target cells. In this work, we used atomic force microscopy (AFM) to observe the cellular ultra-structures and measure the biophysical properties (affinity and distribution) of FcγRs on single macrophages in aqueous environments. AFM imaging was used to obtain the topographies of macrophages, revealing the nanoscale cellular fine structures. For molecular interaction recognition, antibody molecules were attached onto AFM tips via a heterobifunctional polyethylene glycol (PEG) crosslinker. With AFM single-molecule force spectroscopy, the binding affinities of FcγRs were quantitatively measured on single macrophages. Adhesion force mapping method was used to localize the FcγRs, revealing the nanoscale distribution of FcγRs on local areas of macrophages. The experimental results can improve our understanding of FcγRs on macrophages; the established approach will facilitate further research on physiological activities involved in antibody-based immunotherapy.

  5. Dawn Maps the Surface Composition of Vesta

    NASA Technical Reports Server (NTRS)

    Prettyman, T.; Palmer, E.; Reedy, R.; Sykes, M.; Yingst, R.; McSween, H.; DeSanctis, M. C.; Capaccinoni, F.; Capria, M. T.; Filacchione, G.; hide

    2011-01-01

    By 7-October-2011, the Dawn mission will have completed Survey orbit and commenced high altitude mapping of 4-Vesta. We present a preliminary analysis of data acquired by Dawn's Framing Camera (FC) and the Visual and InfraRed Spectrometer (VIR) to map mineralogy and surface temperature, and to detect and quantify surficial OH. The radiometric calibration of VIR and FC is described. Background counting data acquired by GRaND are used to determine elemental detection limits from measurements at low altitude, which will commence in November. Geochemical models used in the interpretation of the data are described. Thermal properties, mineral data, and geochemical data are combined to provide constraints on Vesta's formation and thermal evolution, the delivery of exogenic materials, space weathering processes, and the origin of the howardite, eucrite, and diogenite (HED) meteorites.

  6. Correlation Functions Quantify Super-Resolution Images and Estimate Apparent Clustering Due to Over-Counting

    PubMed Central

    Veatch, Sarah L.; Machta, Benjamin B.; Shelby, Sarah A.; Chiang, Ethan N.; Holowka, David A.; Baird, Barbara A.

    2012-01-01

    We present an analytical method using correlation functions to quantify clustering in super-resolution fluorescence localization images and electron microscopy images of static surfaces in two dimensions. We use this method to quantify how over-counting of labeled molecules contributes to apparent self-clustering and to calculate the effective lateral resolution of an image. This treatment applies to distributions of proteins and lipids in cell membranes, where there is significant interest in using electron microscopy and super-resolution fluorescence localization techniques to probe membrane heterogeneity. When images are quantified using pair auto-correlation functions, the magnitude of apparent clustering arising from over-counting varies inversely with the surface density of labeled molecules and does not depend on the number of times an average molecule is counted. In contrast, we demonstrate that over-counting does not give rise to apparent co-clustering in double label experiments when pair cross-correlation functions are measured. We apply our analytical method to quantify the distribution of the IgE receptor (FcεRI) on the plasma membranes of chemically fixed RBL-2H3 mast cells from images acquired using stochastic optical reconstruction microscopy (STORM/dSTORM) and scanning electron microscopy (SEM). We find that apparent clustering of FcεRI-bound IgE is dominated by over-counting labels on individual complexes when IgE is directly conjugated to organic fluorophores. We verify this observation by measuring pair cross-correlation functions between two distinguishably labeled pools of IgE-FcεRI on the cell surface using both imaging methods. After correcting for over-counting, we observe weak but significant self-clustering of IgE-FcεRI in fluorescence localization measurements, and no residual self-clustering as detected with SEM. We also apply this method to quantify IgE-FcεRI redistribution after deliberate clustering by crosslinking with two distinct trivalent ligands of defined architectures, and we evaluate contributions from both over-counting of labels and redistribution of proteins. PMID:22384026
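
    A naive pair auto-correlation estimate of the kind discussed above can be computed directly from pairwise distances, as in the sketch below. It ignores edge corrections and uses synthetic random localizations, so g(r) should hover near 1; the field size and bin widths are placeholders.

```python
# A naive pair auto-correlation estimate g(r) on synthetic 2-D localizations.
import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(1)
L = 1000.0                                # field of view in nm (assumed)
pts = rng.uniform(0, L, size=(2000, 2))   # synthetic localizations

N, A = len(pts), L * L
rho = N / A                               # average surface density
edges = np.arange(0.0, 200.0, 10.0)       # radial bins in nm
counts, _ = np.histogram(pdist(pts), bins=edges)

# Expected pair counts per annulus for an uncorrelated (random) pattern:
expected = 0.5 * N * rho * np.pi * (edges[1:] ** 2 - edges[:-1] ** 2)
g = counts / expected                     # g(r) ~ 1 for a random pattern
print(np.round(g, 2))
```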

  7. Functional Connectivity Measures After Psilocybin Inform a Novel Hypothesis of Early Psychosis

    PubMed Central

    Carhart-Harris, Robin L.

    2013-01-01

    Psilocybin is a classic psychedelic and a candidate drug model of psychosis. This study measured the effects of psilocybin on resting-state network and thalamocortical functional connectivity (FC) using functional magnetic resonance imaging (fMRI). Fifteen healthy volunteers received intravenous infusions of psilocybin and placebo in 2 task-free resting-state scans. Primary analyses focused on changes in FC between the default-mode- (DMN) and task-positive network (TPN). Spontaneous activity in the DMN is orthogonal to spontaneous activity in the TPN, and it is well known that these networks support very different functions (ie, the DMN supports introspection, whereas the TPN supports externally focused attention). Here, independent components and seed-based FC analyses revealed increased DMN-TPN FC and so decreased DMN-TPN orthogonality after psilocybin. Increased DMN-TPN FC has been found in psychosis and meditatory states, which share some phenomenological similarities with the psychedelic state. Increased DMN-TPN FC has also been observed in sedation, as has decreased thalamocortical FC, but here we found preserved thalamocortical FC after psilocybin. Thus, we propose that thalamocortical FC may be related to arousal, whereas DMN-TPN FC is related to the separateness of internally and externally focused states. We suggest that this orthogonality is compromised in early psychosis, explaining similarities between its phenomenology and that of the psychedelic state and supporting the utility of psilocybin as a model of early psychosis. PMID:23044373
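
    For orientation, the between-network connectivity compared above reduces to a correlation between network-averaged time series. The sketch below computes a DMN-TPN coupling value from synthetic ROI time series; the number of ROIs and volumes are placeholders.

```python
# A minimal sketch of a between-network FC value: the correlation between mean DMN
# and mean TPN time series. The ROI time series here are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(2)
n_vols = 300                              # resting-state volumes (assumed)
dmn_rois = rng.normal(size=(n_vols, 8))   # 8 DMN ROI time series (toy)
tpn_rois = rng.normal(size=(n_vols, 8))   # 8 TPN ROI time series (toy)

dmn_tpn_fc = np.corrcoef(dmn_rois.mean(axis=1), tpn_rois.mean(axis=1))[0, 1]
print(f"DMN-TPN FC (r): {dmn_tpn_fc:.3f}")   # values near 0 imply orthogonality
```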

  8. Functional connectivity measures after psilocybin inform a novel hypothesis of early psychosis.

    PubMed

    Carhart-Harris, Robin L; Leech, Robert; Erritzoe, David; Williams, Tim M; Stone, James M; Evans, John; Sharp, David J; Feilding, Amanda; Wise, Richard G; Nutt, David J

    2013-11-01

    Psilocybin is a classic psychedelic and a candidate drug model of psychosis. This study measured the effects of psilocybin on resting-state network and thalamocortical functional connectivity (FC) using functional magnetic resonance imaging (fMRI). Fifteen healthy volunteers received intravenous infusions of psilocybin and placebo in 2 task-free resting-state scans. Primary analyses focused on changes in FC between the default-mode- (DMN) and task-positive network (TPN). Spontaneous activity in the DMN is orthogonal to spontaneous activity in the TPN, and it is well known that these networks support very different functions (ie, the DMN supports introspection, whereas the TPN supports externally focused attention). Here, independent components and seed-based FC analyses revealed increased DMN-TPN FC and so decreased DMN-TPN orthogonality after psilocybin. Increased DMN-TPN FC has been found in psychosis and meditatory states, which share some phenomenological similarities with the psychedelic state. Increased DMN-TPN FC has also been observed in sedation, as has decreased thalamocortical FC, but here we found preserved thalamocortical FC after psilocybin. Thus, we propose that thalamocortical FC may be related to arousal, whereas DMN-TPN FC is related to the separateness of internally and externally focused states. We suggest that this orthogonality is compromised in early psychosis, explaining similarities between its phenomenology and that of the psychedelic state and supporting the utility of psilocybin as a model of early psychosis.

  9. The Development of Human Amygdala Functional Connectivity at Rest from 4 to 23 Years: a cross-sectional study

    PubMed Central

    Gabard-Durnam, Laurel J.; Flannery, Jessica; Goff, Bonnie; Gee, Dylan G.; Humphreys, Kathryn L.; Telzer, Eva; Hare, Todd; Tottenham, Nim

    2014-01-01

    Functional connections (FC) between the amygdala and cortical and subcortical regions underlie a range of affective and cognitive processes. Despite the central role amygdala networks have in these functions, the normative developmental emergence of FC between the amygdala and the rest of the brain is still largely undefined. This study employed amygdala subregion maps and resting-state functional magnetic resonance imaging to characterize the typical development of human amygdala FC from age 4 to 23 years old (n = 58). Amygdala FC with subcortical and limbic regions was largely stable across this developmental period. However, three cortical regions exhibited age-dependent changes in FC: amygdala FC with the medial prefrontal cortex (mPFC) increased with age, while amygdala FC with a region including the insula and superior temporal sulcus decreased with age, and amygdala FC with a region encompassing the parahippocampal gyrus and posterior cingulate also decreased with age. The transition from childhood to adolescence (around age 10 years) marked an important change-point in the nature of amygdala-cortical FC. We distinguished unique developmental patterns of coupling for three amygdala subregions and found particularly robust convergence of FC for all subregions with the mPFC. These findings suggest that there are extensive changes in amygdala-cortical functional connectivity that emerge between childhood and adolescence. PMID:24662579

  10. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration and image rendering; in this context, artifacts and final image resolution are discussed.
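
    The rendering step of a plenoptic camera can be illustrated with a shift-and-sum refocusing of sub-aperture images, as in the sketch below. The light field here is synthetic; decoding and calibrating real Lytro files, as analyzed above, is outside the scope of this example.

```python
# A minimal shift-and-sum refocusing sketch on a synthetic 4-D light field
# L[u, v, y, x] of sub-aperture images. Sizes and shifts are placeholders.
import numpy as np

U = V = 5      # angular samples (assumed)
H = W = 64     # spatial samples (assumed)
lightfield = np.random.rand(U, V, H, W)

def refocus(lf, slope):
    """Sum sub-aperture images shifted in proportion to their (u, v) offset."""
    U, V, H, W = lf.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round((u - U // 2) * slope))
            dx = int(round((v - V // 2) * slope))
            out += np.roll(lf[u, v], shift=(dy, dx), axis=(0, 1))
    return out / (U * V)

image_near = refocus(lightfield, slope=1.0)    # synthetic focus at one depth
image_far = refocus(lightfield, slope=-1.0)    # synthetic focus at another depth
print(image_near.shape, image_far.shape)
```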

  11. Image quality prediction - An aid to the Viking lander imaging investigation on Mars

    NASA Technical Reports Server (NTRS)

    Huck, F. O.; Wall, S. D.

    1976-01-01

    Image quality criteria and image quality predictions are formulated for the multispectral panoramic cameras carried by the Viking Mars landers. Image quality predictions are based on expected camera performance, Mars surface radiance, and lighting and viewing geometry (fields of view, Mars lander shadows, solar day-night alternation), and are needed in diagnosis of camera performance, in arriving at a preflight imaging strategy, and revision of that strategy should the need arise. Landing considerations, camera control instructions, camera control logic, aspects of the imaging process (spectral response, spatial response, sensitivity), and likely problems are discussed. Major concerns include: degradation of camera response by isotope radiation, uncertainties in lighting and viewing geometry and in landing site local topography, contamination of camera window by dust abrasion, and initial errors in assigning camera dynamic ranges (gains and offsets).

  12. How to characterize terrains on 4 Vesta using Dawn Framing Camera color bands?

    NASA Astrophysics Data System (ADS)

    Le Corre, Lucille; Reddy, Vishnu; Nathues, Andreas; Cloutis, Edward A.

    2011-12-01

    We present methods for terrain classification on 4 Vesta using Dawn Framing Camera (FC) color information derived from laboratory spectra of HED meteorites and other Vesta-related assemblages. Color and spectral parameters have been derived using publicly available spectra of these analog materials to identify the best criteria for distinguishing various terrains. We list the relevant parameters for identifying eucrites, diogenites, mesosiderites, pallasites, clinopyroxenes and olivine + orthopyroxene mixtures using Dawn FC color cubes. Pseudo Band I minima derived by fitting a low-order polynomial to the color data are found to be useful for extracting the pyroxene chemistry. Our investigation suggests a good correlation (R2 = 0.88) between laboratory-measured ferrosilite (Fs) pyroxene chemistry and that derived from pseudo Band I minima using equations from Burbine et al. (Burbine, T.H., Buchanan, P.C., Dolkar, T., Binzel, R.P. [2009]. Planetary Science 44, 1331-1341). The pyroxene chemistry information is a complementary terrain classification capability besides the color ratios. We also investigated the effects of exogenous material (i.e., CM2 carbonaceous chondrites) on the spectra of HEDs using laboratory mixtures of these materials. Our results are the basis for an automated software pipeline that will allow us to classify terrains on 4 Vesta efficiently.
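
    A minimal version of the pseudo Band I minimum derivation mentioned above is sketched below: a low-order polynomial is fit to reflectances at FC band centers near 1 micrometer and its vertex is taken as the band minimum. The wavelengths are only approximate filter centers, the reflectances are placeholders, and the published Burbine et al. (2009) calibration for Fs content is not reproduced here.

```python
# A minimal sketch of a pseudo Band I minimum from a low-order polynomial fit.
import numpy as np

wavelengths = np.array([0.75, 0.83, 0.92, 0.96])   # approximate FC band centers (um)
reflectance = np.array([0.26, 0.22, 0.19, 0.20])   # placeholder band reflectances

a, b, c = np.polyfit(wavelengths, reflectance, 2)  # quadratic (low-order) fit
band1_minimum = -b / (2 * a)                       # vertex of the fitted parabola
print(f"Pseudo Band I minimum: {band1_minimum:.3f} um")
```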

  13. Detection of figure and caption pairs based on disorder measurements

    NASA Astrophysics Data System (ADS)

    Faure, Claudie; Vincent, Nicole

    2010-01-01

    Figures inserted in documents mediate a kind of information for which the visual modality is more appropriate than the text. A complete understanding of a figure often necessitates reading its caption or establishing a relationship with the main text using a numbered figure identifier which is replicated in the caption and in the main text. A figure and its caption are closely related; they constitute single multimodal components (FC-pairs) that Document Image Analysis cannot extract with text and graphics segmentation. We propose a method to go further than the graphics and text segmentation in order to extract FC-pairs without performing a full labelling of the page components. Horizontal and vertical text lines are detected in the pages. The graphics are associated with selected text lines to initiate the detector of FC-pairs. Spatial and visual disorders are introduced to define a layout model in terms of properties. This model makes it possible to cope with most of the numerous spatial arrangements of graphics and text lines. The detector of FC-pairs performs operations in order to eliminate the layout disorder and assigns a quality value to each FC-pair. The processed documents were collected in medic@, the digital historical collection of the BIUM (Bibliothèque InterUniversitaire Médicale). A first set of 98 pages constitutes the design set. Then 298 pages were collected to evaluate the system. The reported performance is the result of the full process, from the binarisation of the digital images to the detection of FC-pairs.

  14. Functional brain connectivity is predictable from anatomic network's Laplacian eigen-structure.

    PubMed

    Abdelnour, Farras; Dayan, Michael; Devinsky, Orrin; Thesen, Thomas; Raj, Ashish

    2018-05-15

    How structural connectivity (SC) gives rise to functional connectivity (FC) is not fully understood. Here we mathematically derive a simple relationship between SC measured from diffusion tensor imaging, and FC from resting state fMRI. We establish that SC and FC are related via (structural) Laplacian spectra, whereby FC and SC share eigenvectors and their eigenvalues are exponentially related. This gives, for the first time, a simple and analytical relationship between the graph spectra of structural and functional networks. Laplacian eigenvectors are shown to be good predictors of functional eigenvectors and networks based on independent component analysis of functional time series. A small number of Laplacian eigenmodes are shown to be sufficient to reconstruct FC matrices, serving as basis functions. This approach is fast, and requires no time-consuming simulations. It was tested on two empirical SC/FC datasets, and was found to significantly outperform generative model simulations of coupled neural masses. Copyright © 2018. Published by Elsevier Inc.
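
    The spectral mapping described above can be written in a few lines: decompose the structural Laplacian and exponentiate its eigenvalues to predict FC. The sketch below uses a random toy SC matrix and an assumed decay parameter, not the paper's fitted values.

```python
# A minimal sketch of the spectral SC-to-FC mapping via the graph Laplacian.
import numpy as np

rng = np.random.default_rng(3)
n = 20
sc = rng.random((n, n))
sc = 0.5 * (sc + sc.T)            # symmetric toy structural connectivity
np.fill_diagonal(sc, 0)

laplacian = np.diag(sc.sum(axis=1)) - sc
eigvals, eigvecs = np.linalg.eigh(laplacian)

beta = 0.5                        # decay parameter (assumed)
fc_pred = eigvecs @ np.diag(np.exp(-beta * eigvals)) @ eigvecs.T
print(fc_pred.shape)              # predicted FC shares the SC Laplacian's eigenvectors
```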

  15. Motion Estimation Utilizing Range Detection-Enhanced Visual Odometry

    NASA Technical Reports Server (NTRS)

    Morris, Daniel Dale (Inventor); Chang, Hong (Inventor); Friend, Paul Russell (Inventor); Chen, Qi (Inventor); Graf, Jodi Seaborn (Inventor)

    2016-01-01

    A motion determination system is disclosed. The system may receive a first and a second camera image from a camera, the first camera image received earlier than the second camera image. The system may identify corresponding features in the first and second camera images. The system may receive range data comprising at least one of a first and a second range data from a range detection unit, corresponding to the first and second camera images, respectively. The system may determine first positions and the second positions of the corresponding features using the first camera image and the second camera image. The first positions or the second positions may be determined by also using the range data. The system may determine a change in position of the machine based on differences between the first and second positions, and a VO-based velocity of the machine based on the determined change in position.
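
    Once range data convert matched image features into 3-D points, the frame-to-frame change in position can be recovered with a least-squares rigid alignment. The sketch below shows a standard Kabsch-style solution on synthetic points; feature matching, outlier rejection, and the specifics of the patented system above are not covered.

```python
# A minimal sketch of recovering motion from matched 3-D feature positions (Kabsch).
import numpy as np

def rigid_transform(p_prev, p_curr):
    """Return rotation R and translation t mapping p_prev onto p_curr (Nx3 arrays)."""
    c_prev, c_curr = p_prev.mean(axis=0), p_curr.mean(axis=0)
    h = (p_prev - c_prev).T @ (p_curr - c_curr)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = c_curr - r @ c_prev
    return r, t

rng = np.random.default_rng(4)
pts_prev = rng.random((50, 3)) * 10.0   # 3-D feature positions at the earlier frame
true_t = np.array([0.5, 0.0, 0.1])      # simulated machine motion (pure translation)
pts_curr = pts_prev + true_t

R, t = rigid_transform(pts_prev, pts_curr)
print(np.round(t, 3))                   # recovered translation, ~[0.5, 0.0, 0.1]
```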

  16. Design upgrades to the DIII-D gamma ray imager

    NASA Astrophysics Data System (ADS)

    Lvovskiy, A.; Cooper, C. M.; Eidietis, N. W.; Pace, D.; Paz-Soldan, C.

    2016-10-01

    Generation of runaway electrons (RE) in tokamak disruptions can damage plasma-facing components. RE studies are necessary in order to provide a reliable mechanism of RE mitigation. For that task a gamma ray imager (GRI) has been developed for DIII-D. It measures the bremsstrahlung emission from REs, providing information on the RE energy spectrum and RE distribution across a poloidal cross-section. The GRI consists of a lead pinhole camera illuminating a 2D array of 30 BGO detectors placed in the DIII-D mid-plane. First results showed successful measurements of RE energy spectra in the range 1-60 MeV with a time resolution of 100 μs. They have been obtained in the low-flux quiescent RE regime via pulse-height analysis. The measurements in the high gamma flux post-disruption RE regime showed strong signal saturation. Here we present GRI design upgrades towards signal attenuation and better detector shielding, including Monte Carlo N-Particle (MCNP) modeling of GRI irradiation, as well as improved calibration techniques and options to improve electronic noise rejection. Work supported by US DOE under DE-AC05-06OR23100 and DE-FC02-04ER54698.

  17. Altered resting-state hippocampal and caudate functional networks in patients with obstructive sleep apnea.

    PubMed

    Song, Xiaopeng; Roy, Bhaswati; Kang, Daniel W; Aysola, Ravi S; Macey, Paul M; Woo, Mary A; Yan-Go, Frisca L; Harper, Ronald M; Kumar, Rajesh

    2018-05-10

    Brain structural injury and metabolic deficits in the hippocampus and caudate nuclei may contribute to cognitive and emotional deficits found in obstructive sleep apnea (OSA) patients. If such contributions exist, resting-state interactions of these subcortical sites with cortical areas mediating affective symptoms and cognition should be disturbed. Our aim was to examine resting-state functional connectivity (FC) of the hippocampus and caudate to other brain areas in OSA relative to control subjects, and to relate these changes to mood and neuropsychological scores. We acquired resting-state functional magnetic resonance imaging (fMRI) data from 70 OSA and 89 healthy controls using a 3.0-Tesla magnetic resonance imaging scanner, and assessed psychological and behavioral functions, as well as sleep issues. After standard fMRI data preprocessing, FC maps were generated for bilateral hippocampi and caudate nuclei, and compared between groups (ANCOVA; covariates, age and gender). Obstructive sleep apnea subjects showed significantly higher levels of anxiety and depressive symptoms over healthy controls. In OSA subjects, the hippocampus showed disrupted FC with the thalamus, para-hippocampal gyrus, medial and superior temporal gyrus, insula, and posterior cingulate cortex. Left and right caudate nuclei showed impaired FC with the bilateral inferior frontal gyrus and right angular gyrus. In addition, altered limbic-striatal-cortical FC in OSA showed relationships with behavioral and neuropsychological variables. The compromised hippocampal-cortical FC in OSA may underlie depression and anxious mood levels in OSA, while impaired caudate-cortical FC may indicate deficits in reward processing and cognition. These findings provide insights into the neural mechanisms underlying the comorbidity of mood and cognitive deficits in OSA. © 2018 The Authors. Brain and Behavior published by Wiley Periodicals, Inc.

  18. Brain Activity in Patients With Adductor Spasmodic Dysphonia Detected by Functional Magnetic Resonance Imaging.

    PubMed

    Kiyuna, Asanori; Kise, Norimoto; Hiratsuka, Munehisa; Kondo, Shunsuke; Uehara, Takayuki; Maeda, Hiroyuki; Ganaha, Akira; Suzuki, Mikio

    2017-05-01

    Spasmodic dysphonia (SD) is considered a focal dystonia. However, the detailed pathophysiology of SD remains unclear, despite the detection of abnormal activity in several brain regions. The aim of this study was to clarify the pathophysiological background of SD. This is a case-control study. Both task-related brain activity measured by functional magnetic resonance imaging by reading the five-digit numbers and resting-state functional connectivity (FC) measured by 150 T2-weighted echo planar images acquired without any task were investigated in 12 patients with adductor SD and in 16 healthy controls. The patients with SD showed significantly higher task-related brain activation in the left middle temporal gyrus, left thalamus, bilateral primary motor area, bilateral premotor area, bilateral cerebellum, bilateral somatosensory area, right insula, and right putamen compared with the controls. Region of interest voxel FC analysis revealed many FC changes within the cerebellum-basal ganglia-thalamus-cortex loop in the patients with SD. Of the significant connectivity changes between the patients with SD and the controls, the FC between the left thalamus and the left caudate nucleus was significantly correlated with clinical parameters in SD. The higher task-related brain activity in the insula and cerebellum was consistent with previous neuroimaging studies, suggesting that these areas are one of the unique characteristics of phonation-induced brain activity in SD. Based on FC analysis and their significant correlations with clinical parameters, the basal ganglia network plays an important role in the pathogenesis of SD. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.

  19. Deep neural network with weight sparsity control and pre-training extracts hierarchical features and enhances classification performance: Evidence from whole-brain resting-state functional connectivity patterns of schizophrenia.

    PubMed

    Kim, Junghoe; Calhoun, Vince D; Shim, Eunsoo; Lee, Jong-Hwan

    2016-01-01

    Functional connectivity (FC) patterns obtained from resting-state functional magnetic resonance imaging data are commonly employed to study neuropsychiatric conditions by using pattern classifiers such as the support vector machine (SVM). Meanwhile, a deep neural network (DNN) with multiple hidden layers has shown its ability to systematically extract lower-to-higher level information of image and speech data from lower-to-higher hidden layers, markedly enhancing classification accuracy. The objective of this study was to adopt the DNN for whole-brain resting-state FC pattern classification of schizophrenia (SZ) patients vs. healthy controls (HCs) and identification of aberrant FC patterns associated with SZ. We hypothesized that the lower-to-higher level features learned via the DNN would significantly enhance the classification accuracy, and proposed an adaptive learning algorithm to explicitly control the weight sparsity in each hidden layer via L1-norm regularization. Furthermore, the weights were initialized via stacked autoencoder based pre-training to further improve the classification performance. Classification accuracy was systematically evaluated as a function of (1) the number of hidden layers/nodes, (2) the use of L1-norm regularization, (3) the use of the pre-training, (4) the use of framewise displacement (FD) removal, and (5) the use of anatomical/functional parcellation. Using FC patterns from anatomically parcellated regions without FD removal, an error rate of 14.2% was achieved by employing three hidden layers and 50 hidden nodes with both L1-norm regularization and pre-training, which was substantially lower than the error rate from the SVM (22.3%). Moreover, the trained DNN weights (i.e., the learned features) were found to represent the hierarchical organization of aberrant FC patterns in SZ compared with HC. Specifically, pairs of nodes extracted from the lower hidden layer represented sparse FC patterns implicated in SZ, which was quantified by using kurtosis/modularity measures and features from the higher hidden layer showed holistic/global FC patterns differentiating SZ from HC. Our proposed schemes and reported findings attained by using the DNN classifier and whole-brain FC data suggest that such approaches show improved ability to learn hidden patterns in brain imaging data, which may be useful for developing diagnostic tools for SZ and other neuropsychiatric disorders and identifying associated aberrant FC patterns. Copyright © 2015 Elsevier Inc. All rights reserved.
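
    For context, the whole-brain FC input consumed by such classifiers is typically the vectorized upper triangle of a region-by-region correlation matrix. The sketch below builds that feature vector from synthetic parcellated time series; the parcel count and scan length are placeholder assumptions.

```python
# A minimal sketch of building a whole-brain FC feature vector from parcellated time series.
import numpy as np

rng = np.random.default_rng(5)
n_vols, n_regions = 240, 116               # volumes x anatomical parcels (assumed)
ts = rng.normal(size=(n_vols, n_regions))  # stand-in for parcel-averaged BOLD signals

fc_matrix = np.corrcoef(ts, rowvar=False)  # n_regions x n_regions correlation matrix
iu = np.triu_indices(n_regions, k=1)
fc_features = fc_matrix[iu]                # one feature vector per subject
print(fc_features.shape)                   # (6670,) connectivity features
```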

  20. Application of Sensor Fusion to Improve Uav Image Classification

    NASA Astrophysics Data System (ADS)

    Jabari, S.; Fathollahi, F.; Zhang, Y.

    2017-08-01

    Image classification is one of the most important tasks in remote sensing projects, including those based on UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality, which in turn increases the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieve higher-quality images and, accordingly, more accurate classification results.

  1. Real-time three-dimensional color doppler evaluation of the flow convergence zone for quantification of mitral regurgitation: Validation experimental animal study and initial clinical experience

    NASA Technical Reports Server (NTRS)

    Sitges, Marta; Jones, Michael; Shiota, Takahiro; Qin, Jian Xin; Tsujino, Hiroyuki; Bauer, Fabrice; Kim, Yong Jin; Agler, Deborah A.; Cardon, Lisa A.; Zetts, Arthur D.; hide

    2003-01-01

    BACKGROUND: Pitfalls of the flow convergence (FC) method, including 2-dimensional imaging of the 3-dimensional (3D) geometry of the FC surface, can lead to erroneous quantification of mitral regurgitation (MR). This limitation may be mitigated by the use of real-time 3D color Doppler echocardiography (CE). Our objective was to validate a real-time 3D navigation method for MR quantification. METHODS: In 12 sheep with surgically induced chronic MR, 37 different hemodynamic conditions were studied with real-time 3DCE. Using real-time 3D navigation, the radius of the largest hemispherical FC zone was located and measured. MR volume was quantified according to the FC method after observing the shape of FC in 3D space. Aortic and mitral electromagnetic flow probes and meters were balanced against each other to determine reference MR volume. As an initial clinical application study, 22 patients with chronic MR were also studied with this real-time 3DCE-FC method. Left ventricular (LV) outflow tract automated cardiac flow measurement (Toshiba Corp, Tokyo, Japan) and real-time 3D LV stroke volume were used to quantify the reference MR volume (MR volume = 3DLV stroke volume - automated cardiac flow measurement). RESULTS: In the sheep model, a good correlation and agreement were seen between MR volume by real-time 3DCE and electromagnetic (y = 0.77x + 1.48, r = 0.87, P <.001, delta = -0.91 +/- 2.65 mL). In patients, real-time 3DCE-derived MR volume also showed a good correlation and agreement with the reference method (y = 0.89x - 0.38, r = 0.93, P <.001, delta = -4.8 +/- 7.6 mL). CONCLUSIONS: Real-time 3DCE can capture the entire FC image, permitting geometric recognition of the FC zone and reliable MR quantification.
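
    For reference, the hemispheric flow convergence (proximal isovelocity surface area) relations that such measurements commonly rely on are summarized below in their standard textbook form, with r the radius of the hemispherical FC zone, V_a the aliasing velocity, V_max the peak regurgitant velocity, and VTI the regurgitant velocity-time integral; this is not a restatement of the study's exact computation.

```latex
% Standard hemispheric flow convergence (PISA) relations (textbook form):
Q_{\mathrm{reg}} = 2\pi r^{2} V_{a}, \qquad
\mathrm{EROA} = \frac{Q_{\mathrm{reg}}}{V_{\max}}, \qquad
V_{\mathrm{reg}} = \mathrm{EROA} \times \mathrm{VTI}
```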

  2. Tree Canopy Light Interception Estimates in Almond and a Walnut Orchards Using Ground, Low Flying Aircraft, and Satellite Based Methods to Improve Irrigation Scheduling Programs

    NASA Technical Reports Server (NTRS)

    Rosecrance, Richard C.; Johnson, Lee; Soderstrom, Dominic

    2016-01-01

    Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.
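
    A common way to turn NDVI into fractional cover, of the general form used in such comparisons, is a linear scaling between bare-soil and full-canopy NDVI, as sketched below. The endpoint values are placeholders, not SIMS operational coefficients.

```python
# A minimal sketch of a linear NDVI-to-Fc scaling between assumed endpoints.
import numpy as np

ndvi_soil, ndvi_full = 0.15, 0.90          # assumed bare-soil and full-cover NDVI

def fractional_cover(ndvi):
    fc = (ndvi - ndvi_soil) / (ndvi_full - ndvi_soil)
    return np.clip(fc, 0.0, 1.0)

ndvi_orchard = np.array([0.35, 0.55, 0.72])   # example per-pixel Landsat NDVI values
print(fractional_cover(ndvi_orchard))         # estimated fractional canopy cover
```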

  3. Altered striatal intrinsic functional connectivity in pediatric anxiety

    PubMed Central

    Dorfman, Julia; Benson, Brenda; Farber, Madeline; Pine, Daniel; Ernst, Monique

    2016-01-01

    Anxiety disorders are among the most common psychiatric disorders of adolescence. Behavioral and task-based imaging studies implicate altered reward system function, including striatal dysfunction, in adolescent anxiety. However, no study has yet examined alterations of the striatal intrinsic functional connectivity in adolescent anxiety disorders. The current study examines striatal intrinsic functional connectivity (iFC), using six bilateral striatal seeds, among 35 adolescents with anxiety disorders and 36 healthy comparisons. Anxiety is associated with abnormally low iFC within the striatum (e.g., between nucleus accumbens and caudate nucleus), and between the striatum and prefrontal regions, including subgenual anterior cingulate cortex, posterior insula and supplementary motor area. The current findings extend prior behavioral and task-based imaging research, and provide novel data implicating decreased striatal iFC in adolescent anxiety. Alterations of striatal neurocircuitry identified in this study may contribute to the perturbations in the processing of motivational, emotional, interoceptive, and motor information seen in pediatric anxiety disorders. This pattern of the striatal iFC perturbations can guide future research on specific mechanisms underlying anxiety. PMID:27004799

  4. Tree canopy light interception estimates in almond and a walnut orchards using ground, low flying aircraft, and satellite based methods to improve irrigation scheduling programs.

    NASA Astrophysics Data System (ADS)

    Rosecrance, R. C.; Johnson, L.; Soderstrom, D.

    2016-12-01

    Canopy light interception is a main driver of water use and crop yield in almond and walnut production. Fractional green canopy cover (Fc) is a good indicator of light interception and can be estimated remotely from satellite using the normalized difference vegetation index (NDVI) data. Satellite-based Fc estimates could be used to inform crop evapotranspiration models, and hence support improvements in irrigation evaluation and management capabilities. Satellite estimates of Fc in almond and walnut orchards, however, need to be verified before incorporating them into irrigation scheduling or other crop water management programs. In this study, Landsat-based NDVI and Fc from NASA's Satellite Irrigation Management Support (SIMS) were compared with four estimates of canopy cover: 1. light bar measurement, 2. in-situ and image-based dimensional tree-crown analyses, 3. high-resolution NDVI data from low flying aircraft, and 4. orchard photos obtained via Google Earth and processed by an Image J thresholding routine. Correlations between the various estimates are discussed.

  5. Context of Carbonate Rocks in Heavily Eroded Martian Terrain

    NASA Technical Reports Server (NTRS)

    2008-01-01

    The color coding on this composite image of an area about 20 kilometers (12 miles) wide on Mars is based on infrared spectral information interpreted as evidence of various minerals present. Carbonate, which is indicative of a wet and non-acidic history, occurs in very small patches of exposed rock appearing green in this color representation, such as near the lower right corner.

    The scene is heavily eroded terrain to the west of a small canyon in the Nili Fossae region of Mars. It was one of the first areas where researchers on the Compact Reconnaissance Imaging Spectrometer for Mars (CRISM) science team detected carbonate in Mars rocks. The spectral information comes from infrared imaging by CRISM, one of six science instruments on NASA's Mars Reconnaissance Orbiter. That coloring is overlaid on a grayscale image from the same orbiter's Context Camera.

    The uppermost capping rock unit (purple) is underlain successively by banded olivine-bearing rocks (yellow) and rocks bearing iron-magnesium smectite clay (blue). Where the olivine is a greenish hue, it has been partially altered by interaction with water. The carbonate and olivine occupy the same level in the stratigraphy, and it is thought that the carbonate formed by aqueous alteration of olivine. The channel running from upper left to lower right through the image and eroding into the layers of bedrock testifies to the past presence of water in this region. That some of the channels are closely associated with carbonate (lower right) indicates that waters interacting with the carbonate were neutral to alkaline because acidic waters would have dissolved the carbonate.

    Information for the color coding came from CRISM images catalogued as FRT0000B438, FRT0000A4FC, and FRT00003E12. This composite was made using 2.38-micrometer-wavelength data as red, 1.80 micrometer as green, and 1.15 micrometer as blue.
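
    Building such a false-color composite amounts to stretching each band and assigning it to an RGB channel, as sketched below on synthetic arrays standing in for the three band images; the percentile stretch is an assumed choice.

```python
# A minimal sketch of assembling a false-color composite from three single-band images.
import numpy as np

rng = np.random.default_rng(6)
band_238um, band_180um, band_115um = (rng.random((256, 256)) for _ in range(3))

def stretch(band, lo=2, hi=98):
    """Simple percentile stretch to the 0-1 range."""
    p_lo, p_hi = np.percentile(band, [lo, hi])
    return np.clip((band - p_lo) / (p_hi - p_lo), 0.0, 1.0)

rgb = np.dstack([stretch(band_238um), stretch(band_180um), stretch(band_115um)])
print(rgb.shape)   # (256, 256, 3) composite ready for display or overlay
```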

    The base black-and-white image, acquired at a resolution of 5 meters (16 feet) per pixel, is catalogued as CTX P03_002176_2024_XI_22N283W_070113 by the Context Camera science team.

    NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, manages the Mars Reconnaissance Orbiter for the NASA Science Mission Directorate, Washington. Lockheed Martin Space Systems, Denver, is the prime contractor for the project and built the spacecraft. The Johns Hopkins University Applied Physics Laboratory led the effort to build the CRISM instrument and operates CRISM in coordination with an international team of researchers from universities, government and the private sector. Malin Space Science Systems, San Diego, provided and operates the Context Camera.

  6. Tyrosine phosphorylation and association of Syk with Fc gamma RII in monocytic THP-1 cells.

    PubMed Central

    Ghazizadeh, S; Bolen, J B; Fleit, H B

    1995-01-01

    Although the cytoplasmic portion of the low-affinity receptor for immunoglobulin G, Fc gamma RII, does not contain a kinase domain, rapid tyrosine phosphorylation of intracellular substrates occurs in response to aggregation of the receptor. The use of specific tyrosine kinase inhibitors has suggested that these phosphorylations are required for subsequent cellular responses. We previously demonstrated the coprecipitation of a tyrosine kinase activity with Fc gamma RII, suggesting that non-receptor tyrosine kinases might associate with the cytoplasmic domain of Fc gamma RII. Anti-receptor immune complex kinase assays revealed the coprecipitation of several phosphoproteins, most notably p56/53lyn, an Src-family protein tyrosine kinase (PTK), and a 72 kDa phosphoprotein. Here we identify the 72 kDa Fc gamma RII-associated protein as p72syk (Syk), a member of a newly described family of non-receptor PTKs. A rapid and transient tyrosine phosphorylation of Syk was observed following Fc gamma RII activation. Syk was also tyrosyl-phosphorylated following aggregation of the high-affinity Fc gamma receptor, Fc gamma RI. The Fc gamma RI activation did not result in association of Syk with Fc gamma RII, implying that distinct pools of Syk are activated upon aggregation of each receptor in a localized manner. These results demonstrate a physical association between Syk and Fc gamma RII and suggest that the molecules involved in Fc gamma RII signalling are very similar to the ones utilized by multichain immune recognition receptors such as the B-cell antigen receptor and the high-affinity IgE receptor. PMID:7530449

  7. Dynamic functional-structural coupling within acute functional state change phases: Evidence from a depression recognition study.

    PubMed

    Bi, Kun; Hua, Lingling; Wei, Maobin; Qin, Jiaolong; Lu, Qing; Yao, Zhijian

    2016-02-01

    Dynamic functional-structural connectivity (FC-SC) coupling might reflect the flexibility by which SC relates to functional connectivity (FC). However, during the dynamic acute state change phases of FC, the relationship between FC and SC may be distinctive and embody the abnormality inherent in depression. This study investigated the depression-related inter-network FC-SC coupling within particular dynamic acute state change phases of FC. Magnetoencephalography (MEG) and diffusion tensor imaging (DTI) data were collected from 26 depressive patients (13 women) and 26 age-matched controls (13 women). We constructed functional brain networks based on MEG data and structural networks from DTI data. The dynamic connectivity regression algorithm was used to identify the state change points of a time series of inter-network FC. The time periods of FC that contained change points were partitioned into types of dynamic phases (acute rising phase, acute falling phase, acute rising and falling phase, and abrupt FC variation phase) to explore the inter-network FC-SC coupling. The selected FC-SC couplings were then fed into the support vector machine (SVM) for depression recognition. The best discrimination accuracy was 82.7% (P=0.0069) with FC-SC couplings, particularly in the acute rising phase of FC. Within the FC phases of interest, the significant discriminative network pair was related to the salience network vs ventral attention network (SN-VAN) (P=0.0126) during the early rising phase (70-170 ms). This study suffers from a small sample size, and the individual acute length of the state change phases was not considered. The increased values of significant discriminative vectors of FC-SC coupling in depression suggested that the capacity to process negative emotion might be more directly related to SC abnormality and be indicative of more stringent and less dynamic brain function in SN-VAN, especially in the acute rising phase of FC. We demonstrated that depressive brain dysfunctions could be better characterized by reduced FC-SC coupling flexibility in this particular phase. Copyright © 2015 Elsevier B.V. All rights reserved.
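
    The coupling values fed to the classifier above are essentially correlations between vectorized FC and SC over the same node pairs within a time window. The sketch below computes one such value on synthetic matrices; the toy sizes and the choice of a Spearman correlation are assumptions.

```python
# A minimal sketch of one FC-SC coupling value on toy connectivity matrices.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(10)
n = 30
fc = np.corrcoef(rng.normal(size=(100, n)), rowvar=False)   # windowed FC (toy)
sc = rng.random((n, n))
sc = 0.5 * (sc + sc.T)                                      # DTI-based SC (toy)

iu = np.triu_indices(n, k=1)                                # same node pairs in both matrices
coupling, _ = spearmanr(fc[iu], sc[iu])
print(f"FC-SC coupling (Spearman rho): {coupling:.3f}")
```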

  8. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  9. A comparison of select image-compression algorithms for an electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    This effort is a study of image-compression algorithms for an electronic still camera. An electronic still camera can record and transmit high-quality images without the use of film, because images are stored digitally in computer memory. However, high-resolution images contain an enormous amount of information, and will strain the camera's data-storage system. Image compression will allow more images to be stored in the camera's memory. For the electronic still camera, a compression algorithm that produces a reconstructed image of high fidelity is most important. Efficiency of the algorithm is the second priority. High fidelity and efficiency are more important than a high compression ratio. Several algorithms were chosen for this study and judged on fidelity, efficiency and compression ratio. The transform method appears to be the best choice. At present, the method is compressing images to a ratio of 5.3:1 and producing high-fidelity reconstructed images.
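
    The transform method favored above can be illustrated with a single 8x8 block: transform, coarsely quantize, and invert, as sketched below. The threshold is a placeholder for a real quantization table, and entropy coding is omitted.

```python
# A minimal sketch of block-transform compression on one 8x8 block (DCT-based).
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(7)
block = rng.integers(0, 256, size=(8, 8)).astype(float)   # one 8x8 image block

coeffs = dctn(block, norm="ortho")
coeffs[np.abs(coeffs) < 20.0] = 0.0        # crude quantization (assumed threshold)
reconstructed = idctn(coeffs, norm="ortho")

kept = np.count_nonzero(coeffs)
print(f"kept {kept}/64 coefficients, "
      f"max abs error {np.abs(block - reconstructed).max():.1f}")
```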

  10. Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras

    NASA Astrophysics Data System (ADS)

    Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro

    2018-03-01

    Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.

  11. Megapixel mythology and photospace: estimating photospace for camera phones from large image sets

    NASA Astrophysics Data System (ADS)

    Hultgren, Bror O.; Hertel, Dirk W.

    2008-01-01

    It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel numbers. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either by direct measurement of subjective quality or by photospace-weighting of objective attributes. The population of a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
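
    Populating a photospace distribution amounts to accumulating a two-dimensional histogram over estimated subject illumination and camera-subject distance, as sketched below. The per-image estimates, ranges, and bin edges are synthetic placeholders.

```python
# A minimal sketch of a photospace distribution as a 2-D histogram.
import numpy as np

rng = np.random.default_rng(8)
lux = 10 ** rng.uniform(0, 4, size=5000)             # estimated subject illumination (lux)
distance = 10 ** rng.uniform(-0.5, 1.5, size=5000)   # estimated subject distance (m)

lux_edges = np.logspace(0, 4, 9)
dist_edges = np.logspace(-0.5, 1.5, 9)
photospace, _, _ = np.histogram2d(lux, distance, bins=[lux_edges, dist_edges])
photospace = photospace / photospace.sum()           # fraction of pictures per cell
print(photospace.shape)
```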

  12. Brain processing of rectal sensation in adolescents with functional defecation disorders and healthy controls.

    PubMed

    Mugie, S M; Koppen, I J N; van den Berg, M M; Groot, P F C; Reneman, L; de Ruiter, M B; Benninga, M A

    2018-03-01

    Decreased sensation of urge to defecate is often reported by children with functional constipation (FC) and functional nonretentive fecal incontinence (FNRFI). The aim of this cross-sectional study was to evaluate cerebral activity in response to rectal distension in adolescents with FC and FNRFI compared with healthy controls (HCs). We included 15 adolescents with FC, 10 adolescents with FNRFI, and 15 young adult HCs. Rectal barostat was performed prior to functional magnetic resonance imaging (fMRI) to determine individual pressure thresholds for urge sensation. Subjects received 2 sessions of 5 × 30 seconds of barostat stimulation during the acquisition of blood oxygenation level-dependent fMRI. Functional magnetic resonance imaging signal differences were analyzed using SPM8 in Matlab. Functional constipation and FNRFI patients had higher thresholds for urgency than HCs (P < .001). During rectal distension, FC patients showed activation in the anterior cingulate cortex, dorsolateral prefrontal cortex, inferior parietal lobule, and putamen. No activations were observed in controls and FNRFI patients. Functional nonretentive fecal incontinence patients showed deactivation in the hippocampus, parahippocampal gyrus, fusiform gyrus (FFG), lingual gyrus, posterior parietal cortex, and precentral gyrus. In HCs, deactivated areas were detected in the hippocampus, amygdala, FFG, insula, thalamus, precuneus, and primary somatosensory cortex. In contrast, no regions with significant deactivation were detected in FC patients. Children with FC differ from children with FNRFI and HCs with respect to patterns of cerebral activation and deactivation during rectal distension. Functional nonretentive fecal incontinence patients seem to resemble HCs when it comes to brain processing of rectal distension. © 2017 John Wiley & Sons Ltd.

  13. Interhemispheric connectivity in amyotrophic lateral sclerosis: A near-infrared spectroscopy and diffusion tensor imaging study.

    PubMed

    Kopitzki, Klaus; Oldag, Andreas; Sweeney-Reed, Catherine M; Machts, Judith; Veit, Maria; Kaufmann, Jörn; Hinrichs, Hermann; Heinze, Hans-Jochen; Kollewe, Katja; Petri, Susanne; Mohammadi, Bahram; Dengler, Reinhard; Kupsch, Andreas R; Vielhaber, Stefan

    2016-01-01

    The aim of the present study was to investigate potential impairment of non-motor areas in amyotrophic lateral sclerosis (ALS) using near-infrared spectroscopy (NIRS) and diffusion tensor imaging (DTI). In particular, we evaluated whether homotopic resting-state functional connectivity (rs-FC) of non-motor associated cortical areas correlates with clinical parameters and disease-specific degeneration of the corpus callosum (CC) in ALS. Interhemispheric homotopic rs-FC was assessed in 31 patients and 30 healthy controls (HCs) for 8 cortical sites, from prefrontal to occipital cortex, using NIRS. DTI was performed in a subgroup of 21 patients. All patients were evaluated for cognitive dysfunction in the executive, memory, and visuospatial domains. ALS patients displayed an altered spatial pattern of correlation between homotopic rs-FC values when compared to HCs (p = 0.000013). In patients without executive dysfunction, a strong correlation existed between the rate of motor decline and homotopic rs-FC of the anterior temporal lobes (ATLs) (ρ = -0.85, p = 0.0004). Furthermore, antero-temporal homotopic rs-FC correlated with fractional anisotropy in the central corpus callosum (CC), corticospinal tracts (CSTs), and forceps minor as determined by DTI (p < 0.05). The present study further supports involvement of non-motor areas in ALS. Our results render homotopic rs-FC as assessed by NIRS a potential clinical marker for disease progression rate in ALS patients without executive dysfunction and a potential anatomical marker for ALS-specific degeneration of the CC and CSTs.

  14. Paired split-plot designs of multireader multicase studies.

    PubMed

    Chen, Weijie; Gong, Qi; Gallas, Brandon D

    2018-07-01

    The widely used multireader multicase ROC study design for comparing imaging modalities is the fully crossed (FC) design: every reader reads every case of both modalities. We investigate paired split-plot (PSP) designs that may allow for reduced cost and increased flexibility compared with the FC design. In the PSP design, case images from two modalities are read by the same readers, so that the readings are paired across modalities. However, within each modality, not every reader reads every case. Instead, both the readers and the cases are partitioned into a fixed number of groups and each group of readers reads its own group of cases, i.e., a split-plot design. Using a U-statistic-based variance analysis for AUC (i.e., area under the ROC curve), we show analytically that precision can be gained by the PSP design as compared with the FC design with the same number of readers and readings. Equivalently, we show that the PSP design can achieve the same statistical power as the FC design with a reduced number of readings. The trade-off for the increased precision in the PSP design is the cost of collecting a larger number of truth-verified patient cases than the FC design. This means that one can trade off between different sources of cost and choose a least burdensome design. We provide a validation study to show that the iMRMC software can be reliably used for analyzing data from both FC and PSP designs. Finally, we demonstrate the advantages of the PSP design with a reader study comparing full-field digital mammography with screen-film mammography.
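
    As a rough illustration of the two ingredients described above, the sketch below randomly partitions readers and cases into groups (the pairing across modalities comes from reusing the same assignment for both) and computes the empirical, U-statistic form of the AUC. The grouping procedure and the numbers are illustrative assumptions, not the iMRMC implementation.

```python
import numpy as np

def split_plot_assignment(n_readers, n_cases, n_groups, seed=0):
    """Randomly partition readers and cases into n_groups groups.

    In a paired split-plot design each reader group reads only its own
    case group, and the same assignment is reused for both modalities,
    so the readings stay paired across modalities (illustrative sketch).
    """
    rng = np.random.default_rng(seed)
    reader_groups = np.array_split(rng.permutation(n_readers), n_groups)
    case_groups = np.array_split(rng.permutation(n_cases), n_groups)
    return reader_groups, case_groups

def empirical_auc(scores_diseased, scores_nondiseased):
    """U-statistic (Mann-Whitney) estimate of the area under the ROC curve."""
    d = np.asarray(scores_diseased)[:, None]
    n = np.asarray(scores_nondiseased)[None, :]
    return np.mean((d > n) + 0.5 * (d == n))

# Toy example: 6 readers, 120 cases, 3 groups.
readers, cases = split_plot_assignment(6, 120, 3)
print([g.tolist() for g in readers])
print(empirical_auc([0.9, 0.8, 0.7], [0.2, 0.4, 0.75]))   # 8/9 ≈ 0.889
```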

  15. Geologic Map of the Northern Hemisphere of Vesta

    NASA Astrophysics Data System (ADS)

    Hiesinger, Harald; Ruesch, Ottaviano; Blewett, Dave T.; Buczkowski, Debra L.; Scully, Jennifer; Williams, Dave A.; Aileen Yingst, R.; Russell, Chris T.; Raymond, Carol A.

    2013-04-01

    For more than a year, the NASA Dawn mission acquired Framing Camera (FC) images from orbit around Vesta. The surface of the asteroid was completely imaged [1] before Dawn left for its next target, the asteroid Ceres. In an early phase of the mission, the southern and equatorial regions were imaged, allowing the production of several geologic quadrangle maps [2]. During the second High Altitude Mapping Orbit (HAMO-2), the northern hemisphere became illuminated and visible. Here we present the first geologic map of the northern vestan hemisphere, from 21°N to 85°N, derived mainly from HAMO-2 observations. Detailed studies of specific geologic features within this hemisphere are presented elsewhere [e.g., 3,4]. For our geologic map we used high-resolution FC images [5] with ~20 m/pixel from the Low Altitude Mapping Orbit (LAMO), which unfortunately only cover the southern part of the study area (21°N to 45°N). For areas farther north, LAMO images are supplemented with HAMO-2 images, which have a pixel scale of about 70 m/pixel. During the departure phase, images of the north pole area were acquired at even lower spatial resolution. Due to observational constraints, considerable shadowing is present north of 75°. From these data, an albedo mosaic and a stereo-photogrammetric digital terrain model [6] were produced, which serve as the basis for our geologic map. For the geologic mapping at a scale of 1:500,000, all data were incorporated into a Geographic Information System (ArcGIS). We have identified several geologic units within the study area, including cratered highland material (ch) and the Saturnalia Formation (Sf), which is characterized by large-scale ridges and troughs, presumably associated with the south polar Veneneia impact [7]. In addition, we mapped undifferentiated crater material (uc), discontinuous ejecta material (dem), and dark/bright crater material and dark/bright crater ray material (dc/bc and dcr/bcr). We will present a detailed description of the geologic units and their relative stratigraphy [8]. References: [1] Russell C. T. et al. (2012) GSA Ann. Meet., 152-1. [2] Yingst R. A. et al. (2012) EGU, Gen. Ass., 6225. [3] Blewett D. T. et al. (2012) GSA Ann. Meet., 152-9. [4] Scully J. (2012) DPS Meet. 44, #207.08. [5] Sierks H. et al. (2011) Space Sci Rev. [6] Preusker et al. (2012) LPSC 43, #2012. [7] Jaumann et al. (2012) Science Vol. 336, pp. 687-690. [8] Hiesinger H. et al. (2013) LPSC 44, #2582.

  16. Effectiveness of fluorescence-based methods to detect in situ demineralization and remineralization on smooth surfaces.

    PubMed

    Moriyama, C M; Rodrigues, J A; Lussi, A; Diniz, M B

    2014-01-01

    This study aimed to evaluate the effectiveness of fluorescence-based methods (DIAGNOdent, LF; DIAGNOdent pen, LFpen; and VistaProof fluorescence camera, FC) in detecting demineralization and remineralization on smooth surfaces in situ. Ten volunteers wore acrylic palatal appliances, each containing 6 enamel blocks that were demineralized for 14 days by exposure to a 20% sucrose solution; 3 of the blocks were then remineralized for 7 days with a fluoride dentifrice. The 60 enamel blocks were evaluated by two examiners using LF, LFpen, and FC at baseline and after demineralization, and 30 blocks were evaluated after remineralization. The blocks were then submitted to surface microhardness (SMH) and cross-sectional microhardness analysis, and the integrated loss of surface hardness (ΔKHN) was calculated. The intraclass correlation coefficient for interexaminer reproducibility ranged from 0.21 (FC) to 0.86 (LFpen). SMH, LF, and LFpen values presented significant differences among the three phases. However, FC fluorescence values showed no significant differences between the demineralization and remineralization phases. Fluorescence values for baseline, demineralized, and remineralized enamel were, respectively, 5.4 ± 1.0, 9.2 ± 2.2, and 7.0 ± 1.5 for LF; 10.5 ± 2.0, 15.0 ± 3.2, and 12.5 ± 2.9 for LFpen; and 1.0 ± 0.0, 1.0 ± 0.1, and 1.0 ± 0.1 for FC. SMH and ΔKHN showed significant differences between the demineralization and remineralization phases. There was a negative and significant correlation between SMH and both LF and LFpen in the remineralization phase. In conclusion, the LF and LFpen devices were effective in detecting demineralization and remineralization provoked in situ on smooth surfaces.
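
    The abstract does not spell out how ΔKHN was computed; a common convention is the area between the sound-enamel hardness and the lesion's hardness-depth profile, which the hedged sketch below implements by trapezoidal integration over hypothetical values.

```python
import numpy as np

def integrated_hardness_loss(depth_um, khn_lesion, khn_sound):
    """Integrated loss of surface hardness (ΔKHN, in KHN·µm).

    Computed here as the area between the sound-enamel Knoop hardness and
    the lesion hardness profile over depth, via trapezoidal integration.
    This is one common definition; the exact procedure used in the study
    is not stated in the abstract.
    """
    loss = np.clip(khn_sound - np.asarray(khn_lesion, dtype=float), 0, None)
    return np.trapz(loss, depth_um)

depth = np.array([10, 30, 50, 70, 90, 110])        # µm below the surface
lesion = np.array([120, 180, 250, 310, 340, 350])   # hypothetical KHN values
print(integrated_hardness_loss(depth, lesion, khn_sound=350.0))
```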

  17. Image Mosaicking Approach for a Double-Camera System in the GaoFen2 Optical Remote Sensing Satellite Based on the Big Virtual Camera.

    PubMed

    Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng

    2017-06-20

    The linear array push broom imaging mode is widely used for high-resolution optical satellites (HROS). Using double cameras attached to a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High-accuracy image mosaicking is the key factor in the geometric quality of the complete stitched satellite imagery. This paper proposes a high-accuracy image mosaicking approach based on the big virtual camera (BVC) for the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward projection and backward projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The approach leverages the concept of the big virtual camera to obtain a stitched image and the corresponding high-accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
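
    As a much-simplified illustration of the back-projection/forward-projection resampling step, the sketch below uses toy affine mappings as stand-ins for the rigorous sensor model and RFM; all geometries and numbers are hypothetical.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def make_affine(a):
    """Toy 'camera model': a 2x3 affine mapping image (row, col) -> ground (x, y)."""
    a = np.asarray(a, dtype=float)
    inv = np.linalg.inv(a[:, :2])
    def forward(rc):                      # image coords -> ground coords
        return a[:, :2] @ rc + a[:, 2]
    def backward(xy):                     # ground coords -> image coords
        return inv @ (xy - a[:, 2])
    return forward, backward

def resample_to_virtual(strip, real_backward, virt_forward, out_shape):
    """For each virtual-detector pixel: project to ground via the virtual
    camera, project back into the real strip, and bilinearly resample."""
    coords = np.empty((2,) + out_shape)
    for r in range(out_shape[0]):
        for c in range(out_shape[1]):
            ground = virt_forward(np.array([r, c], dtype=float))
            coords[:, r, c] = real_backward(ground)
    return map_coordinates(strip, coords, order=1, cval=0.0)

# One real strip whose geometry is offset by 5 pixels in the column
# direction relative to the virtual detector (hypothetical numbers).
strip = np.random.rand(100, 80)
_, real_bwd = make_affine([[1.0, 0.0, 0.0], [0.0, 1.0, 5.0]])
virt_fwd, _ = make_affine([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
virtual_image = resample_to_virtual(strip, real_bwd, virt_fwd, (100, 80))
```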

  18. Methods for identification of images acquired with digital cameras

    NASA Astrophysics Data System (ADS)

    Geradts, Zeno J.; Bijhold, Jurrien; Kieft, Martijn; Kurosawa, Kenji; Kuroki, Kenro; Saitoh, Naoki

    2001-02-01

    The court asked us whether it is possible to determine if an image has been made with a specific digital camera. This question has to be answered in child pornography cases, where evidence is needed that a certain picture has been made with a specific camera. We have looked into different methods of examining the cameras to determine whether a specific image has been made with a specific camera: defects in CCDs, the file formats that are used, noise introduced by the pixel arrays, and watermarking in images used by the camera manufacturer.
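
    One of the listed cues, pixel-array noise, is commonly exploited by averaging high-frequency residuals of many images into a per-camera fingerprint and correlating it with a query image's residual. The sketch below illustrates that idea on synthetic data; the simple Gaussian denoiser and all parameters are assumptions, not the authors' procedure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def noise_residual(img, sigma=2.0):
    """High-frequency residual: image minus a smoothed version of itself."""
    img = img.astype(float)
    return img - gaussian_filter(img, sigma)

def camera_fingerprint(images):
    """Average the residuals of many images from the same camera."""
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    a = a - a.mean()
    b = b - b.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a * a) * np.sum(b * b)))

# Synthetic demo: a fixed per-pixel pattern plays the role of sensor noise.
rng = np.random.default_rng(1)
pattern = 0.02 * rng.standard_normal((64, 64))
ref_imgs = [0.5 + pattern + 0.05 * rng.standard_normal((64, 64)) for _ in range(20)]
query = 0.5 + pattern + 0.05 * rng.standard_normal((64, 64))
other = 0.5 + 0.05 * rng.standard_normal((64, 64))

fp = camera_fingerprint(ref_imgs)
print(correlation(fp, noise_residual(query)))  # noticeably higher
print(correlation(fp, noise_residual(other)))  # near zero
```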

  19. Cross-hemispheric functional connectivity in the human fetal brain.

    PubMed

    Thomason, Moriah E; Dassanayake, Maya T; Shen, Stephen; Katkuri, Yashwanth; Alexis, Mitchell; Anderson, Amy L; Yeo, Lami; Mody, Swati; Hernandez-Andrade, Edgar; Hassan, Sonia S; Studholme, Colin; Jeong, Jeong-Won; Romero, Roberto

    2013-02-20

    Compelling evidence indicates that psychiatric and developmental disorders are generally caused by disruptions in the functional connectivity (FC) of brain networks. Events occurring during development, and in particular during fetal life, have been implicated in the genesis of such disorders. However, the developmental timetable for the emergence of neural FC during human fetal life is unknown. We present the results of resting-state functional magnetic resonance imaging performed in 25 healthy human fetuses in the second and third trimesters of pregnancy (24 to 38 weeks of gestation). We report the presence of bilateral fetal brain FC and regional and age-related variation in FC. Significant bilateral connectivity was evident in half of the 42 areas tested, and the strength of FC between homologous cortical brain regions increased with advancing gestational age. We also observed medial to lateral gradients in fetal functional brain connectivity. These findings improve understanding of human fetal central nervous system development and provide a basis for examining the role of insults during fetal life in the subsequent development of disorders in neural FC.

  20. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for a microlens-based plenoptic camera. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera, plenoptic cameras with arbitrarily arranged microlenses, and plenoptic cameras with different sizes of microlenses. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method achieves a higher degree of automation than previously published methods.
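
    A minimal sketch of the white-image center-detection step, using thresholding and connected-component centroids as a stand-in for the paper's morphology-based search; the synthetic white image and all parameters are illustrative.

```python
import numpy as np
from skimage import filters, measure

def microlens_centers(white_image):
    """Locate microlens image centers on a white (flat-field) raw image.

    Threshold the bright microlens spots, label connected components, and
    return their intensity-weighted centroids sorted roughly row by row.
    A simplified stand-in for the morphology-based search in the paper.
    """
    thresh = filters.threshold_otsu(white_image)
    labels = measure.label(white_image > thresh)
    centers = [r.weighted_centroid for r in
               measure.regionprops(labels, intensity_image=white_image)]
    # Sort by row, then column, to recover the approximate grid order.
    return sorted(centers, key=lambda c: (round(c[0]), c[1]))

# Synthetic white image: a grid of bright Gaussian spots on a dark background.
yy, xx = np.mgrid[0:120, 0:120]
white = np.zeros((120, 120))
for cy in range(10, 120, 20):
    for cx in range(10, 120, 20):
        white += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 18.0)
print(microlens_centers(white)[:5])
```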

  1. Modulated electron-multiplied fluorescence lifetime imaging microscope: all-solid-state camera for fluorescence lifetime imaging.

    PubMed

    Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted

    2012-12-01

    We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
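
    For context, frequency-domain FLIM commonly converts the measured phase shift and modulation depth into lifetimes via the standard single-exponential relations; the sketch below applies those textbook formulas (they are generic, not specific to the MEM-FLIM camera).

```python
import numpy as np

def fd_flim_lifetimes(phase_rad, modulation, mod_freq_hz):
    """Phase and modulation lifetimes from frequency-domain FLIM data.

    Standard single-exponential relations:
        tau_phi = tan(phi) / omega
        tau_m   = sqrt(1/m**2 - 1) / omega
    with omega = 2*pi*f.  For a single-exponential decay the two agree.
    """
    omega = 2 * np.pi * mod_freq_hz
    tau_phi = np.tan(phase_rad) / omega
    tau_m = np.sqrt(1.0 / modulation**2 - 1.0) / omega
    return tau_phi, tau_m

# A 4 ns lifetime measured at 40 MHz: phi = atan(omega*tau),
# m = 1 / sqrt(1 + (omega*tau)**2); both estimates recover ~4e-9 s.
omega = 2 * np.pi * 40e6
tau = 4e-9
phi, m = np.arctan(omega * tau), 1 / np.sqrt(1 + (omega * tau) ** 2)
print(fd_flim_lifetimes(phi, m, 40e6))
```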

  2. Evaluating and predicting the effectiveness of farmland consolidation on improving agricultural productivity in China.

    PubMed

    Fan, Yeting; Jin, Xiaobin; Xiang, Xiaomin; Gan, Le; Yang, Xuhong; Zhang, Zhihong; Zhou, Yinkang

    2018-01-01

    Food security has always been a focal issue in China. Farmland consolidation (FC) is regarded by the Chinese government as a critical way to increase the quantity and improve the quality of farmland in order to ensure food security. FC projects have been launched nationwide; however, few studies have focused on evaluating the effectiveness of FC at a national scale. An efficient way to evaluate the effectiveness of FC in improving agricultural productivity in China is therefore needed, and it is critical for future national land consolidation planning. In this study, we selected 7505 FC projects completed between 2006 and 2013 with good-quality Normalized Difference Vegetation Index (NDVI) data as samples to evaluate the effectiveness of FC. We used time-series Moderate Resolution Imaging Spectroradiometer NDVI from 2001 to 2013 to extract four indicators characterizing the agricultural productivity change of the 4442 FC projects completed between 2006 and 2010: productivity level (PL), productivity variation (PV), productivity potential (PP), and multi-cropping index (MI). On this basis, we predicted the same four characteristics for the 3063 FC projects completed between 2011 and 2013 using Support Vector Machines (SVM). We found that FC was overall effective in improving agricultural productivity between 2006 and 2013 in China, especially in upgrading PL and improving PP. The positive effect was more prominent in southeast and eastern China. It is noteworthy that 27.30% of all 7505 projects were still ineffective in upgrading PL, the elementary improvement of agricultural productivity. Finally, we propose that location-specific factors should be taken into consideration when launching FC projects and that diverse financial sources are needed to support FC. The results provide a reference for the government to arrange FC projects reasonably and to formulate land consolidation planning in a way that better improves the effectiveness of FC.
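
    A hedged sketch of the prediction step: fitting a support vector regressor to NDVI-derived features of completed projects and applying it to later projects. The feature set, kernel, hyperparameters, and data below are purely illustrative assumptions, not the study's configuration.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical training set: NDVI-derived features for projects completed
# 2006-2010 (rows) and the observed post-consolidation productivity level (PL).
rng = np.random.default_rng(0)
X_train = rng.random((4442, 6))          # e.g. pre-FC NDVI statistics per project
y_train = X_train @ np.array([0.4, 0.2, 0.1, 0.1, 0.1, 0.1]) \
          + 0.05 * rng.standard_normal(4442)

# Fit one SVR per indicator (PL shown); repeat for PV, PP, and MI.
model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
model.fit(X_train, y_train)

# Predict the indicator for projects completed 2011-2013.
X_new = rng.random((3063, 6))
pl_pred = model.predict(X_new)
print(pl_pred[:5])
```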

  3. Evaluating and predicting the effectiveness of farmland consolidation on improving agricultural productivity in China

    PubMed Central

    Xiang, Xiaomin; Gan, Le; Yang, Xuhong; Zhang, Zhihong; Zhou, Yinkang

    2018-01-01

    Food security has always been a focal issue in China. Farmland consolidation (FC) is regarded by the Chinese government as a critical way to increase the quantity and improve the quality of farmland in order to ensure food security. FC projects have been launched nationwide; however, few studies have focused on evaluating the effectiveness of FC at a national scale. An efficient way to evaluate the effectiveness of FC in improving agricultural productivity in China is therefore needed, and it is critical for future national land consolidation planning. In this study, we selected 7505 FC projects completed between 2006 and 2013 with good-quality Normalized Difference Vegetation Index (NDVI) data as samples to evaluate the effectiveness of FC. We used time-series Moderate Resolution Imaging Spectroradiometer NDVI from 2001 to 2013 to extract four indicators characterizing the agricultural productivity change of the 4442 FC projects completed between 2006 and 2010: productivity level (PL), productivity variation (PV), productivity potential (PP), and multi-cropping index (MI). On this basis, we predicted the same four characteristics for the 3063 FC projects completed between 2011 and 2013 using Support Vector Machines (SVM). We found that FC was overall effective in improving agricultural productivity between 2006 and 2013 in China, especially in upgrading PL and improving PP. The positive effect was more prominent in southeast and eastern China. It is noteworthy that 27.30% of all 7505 projects were still ineffective in upgrading PL, the elementary improvement of agricultural productivity. Finally, we propose that location-specific factors should be taken into consideration when launching FC projects and that diverse financial sources are needed to support FC. The results provide a reference for the government to arrange FC projects reasonably and to formulate land consolidation planning in a way that better improves the effectiveness of FC. PMID:29874258

  4. Digital camera with apparatus for authentication of images produced from an image file

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1993-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key by the processor may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating at any time the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match, since even a one-bit change in the image file will cause its hash to be totally different from the secure hash.
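
    The hash-sign-verify flow described above can be illustrated with standard public-key tooling; the sketch below uses the Python cryptography package and RSA-PSS purely as a stand-in for the camera's embedded hardware keys and hashing, not as the patent's implementation.

```python
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.exceptions import InvalidSignature

# Stand-in for the camera's embedded private key and published public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

def sign_image(image_bytes: bytes) -> bytes:
    """Camera side: hash the image file and sign the hash (done in hardware
    in the patent; here via RSA-PSS with SHA-256 for illustration)."""
    return private_key.sign(
        image_bytes,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )

def verify_image(image_bytes: bytes, signature: bytes) -> bool:
    """Verifier side: recompute the hash and check it against the signature."""
    try:
        public_key.verify(
            signature, image_bytes,
            padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                        salt_length=padding.PSS.MAX_LENGTH),
            hashes.SHA256(),
        )
        return True
    except InvalidSignature:
        return False

image = b"...raw image file bytes..."
sig = sign_image(image)
print(verify_image(image, sig))                        # True
print(verify_image(image + b"one flipped bit", sig))   # False
```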

  5. Preliminary Geological Map of the Ac-H-12 Toharu Quadrangle of Ceres: An Integrated Mapping Study Using Dawn Spacecraft Data

    NASA Astrophysics Data System (ADS)

    Mest, S. C.; Williams, D. A.; Crown, D. A.; Yingst, R. A.; Buczkowski, D.; Schenk, P.; Scully, J. E. C.; Jaumann, R.; Roatsch, T.; Preusker, F.; Platz, T.; Nathues, A.; Hoffmann, M.; Schäfer, M.; Marchi, S.; De Sanctis, M. C.; Russell, C. T.; Raymond, C. A.

    2015-12-01

    We are using recent data from the Dawn spacecraft to map the geology of the Ac-H-12 Toharu Quadrangle (21-66°S, 90-180°E) of the dwarf planet Ceres in order to examine its surface geology and understand its geologic history. At the time of this writing, mapping was performed on Framing Camera (FC) mosaics from late Approach (1.3 km/px) and Survey (415 m/px) orbits, including clear filter and color images and digital terrain models derived from stereo images. Images from the High Altitude Mapping Orbit (140 m/px) will be used to refine the map in Fall 2015, followed by the Low Altitude Mapping Orbit (35 m/px) starting in December 2015. The quad is named after crater Toharu (87 km diameter; 49°S, 155°E). The southern rim of Kerwan basin (284 km diameter), preserved as a low-relief scarp, is visible along the northern edge of the quad. The quad exhibits smooth terrain in the north and more heavily cratered terrain in the south. The smooth terrain forms nearly flat-lying plains in some areas, such as on the floor and to the southeast of Kerwan, and overlies hummocky materials in other areas. These smooth materials extend over a much broader area outside of the quad and appear to contain some of the lowest crater densities on Ceres. Impact craters exhibit a range of sizes and preservation styles. Smaller craters (<40 km) generally appear morphologically "fresh", and their rims are nearly circular and raised above the surrounding terrain. Larger craters, such as Toharu, appear more degraded, exhibiting irregularly shaped, sometimes scalloped, rim structures and debris lobes on their floors. Numerous craters (>20 km) contain central mounds; at current FC resolution, it is difficult to discern if these are primary structures (i.e., central peaks) or secondary features. Support of the Dawn Instrument, Operations, & Science Teams is acknowledged. This work is supported by grants from NASA, DLR and MPG.

  6. Plenoptic camera image simulation for reconstruction algorithm verification

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim

    2014-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. In the first, the camera image is focused onto the lenslet array, which is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. In the second plenoptic form, the lenslet array relays the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to produce a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.

  7. Exploring Formation Models for Ceres Tholi and Montes

    NASA Astrophysics Data System (ADS)

    Ruesch, O.; Platz, T.; McFadden, L. A.; Hiesinger, H.; Schenk, P.; Sykes, M. V.; Schmidt, B. E.; Buczkowski, D.; Thangjam, G.; Raymond, C. A.; Russell, C. T.

    2015-12-01

    Dawn Framing Camera (FC) images of Ceres' surface revealed tholi and montes, i.e., positive relief features with sub-circular to irregular basal shapes and varying height-to-diameter ratios and flank slopes. These domes and mounts are tentatively interpreted as volcanic constructs [1]. Alternative formation mechanisms, e.g., uplift by diapirism or shallow intrusions [e.g., 2], could also have led to the observed features, with different geological implications. Local digital elevation models derived from FC images reveal that the largest dome on Ceres (near Rongo crater) has a ~100 km wide base, concave-downward margins with slopes of 10°-20°, a relatively flat top reaching altitudes of ~5 km relative to the surroundings, and a summit pit chain of putative endogenic origin. Another relevant mons on Ceres is a cone-shaped relief (10°S/316°E) with a ~30x20 km base, reaching a height of ~5 km relative to its surroundings; its flank slopes approach a concave-upward shape. These constructs are located in a complex geological area with resurfaced units showing onlap contacts. Because of the varying morphometries of these reliefs, we explore several physical models of volcanic constructs, e.g., steep-sided domes and shield volcanoes. The physical models are based on radially spreading viscous gravity currents with a free upper surface [e.g., 3, 4]. Testing of formation scenarios will exploit recently developed methods, such as time-variable viscosity and fixed-volume models [5], and constant flow rate models [6]. We aim to provide constraints on viable emplacement mechanisms for the different reliefs. [1] Platz et al. (2015), EPSC abstract 915, vol. 10; [2] Fagents, S.A. (2003), JGR, vol. 108, E12, 5139; [3] Huppert, H. (1982), J. Fluid Mech., vol. 121, pp. 43-58; [4] Lacey et al. (1981), EPSL, vol. 54, pp. 139-143; [5] Glaze et al. (2012), LPSC abstract 1074; [6] Glaze et al. (2015), LPSC abstract 1326.

  8. Geologic Mapping of the Av-11 Pinaria Quadrangle of Asteroid 4 Vesta

    NASA Astrophysics Data System (ADS)

    Schenk, P.; Hoogenboom, T.; Williams, D.; Yingst, R. A.; Jaumann, R.; Gaskell, R.; Preusker, F.; Nathues, A.; Roatsch, T.

    2012-04-01

    As part of Dawn's orbital mapping investigation of Vesta, the Science Team is conducting geologic mapping of the surface in the form of 15 quadrangle maps, including quadrangle Av-11 (Pinaria). The base map is a monochrome Framing Camera (FC) mosaic at ~70 m/pixel, supplemented by Digital Terrain Models (DTM) and FC color ratio images, both at ~250 m/pixel, slope and contour maps, and Visible and Infrared (VIR) hyperspectral images. Av-11 straddles the 45-degree longitude in the south polar region and is dominated by the rim of the ~505 km south polar topographic feature, Rheasilvia. Relatively sparsely cratered, Av-11 is dominated by a 20 km high rim scarp (Matronalia Rupes) and by arcuate ridges and troughs forming a radial to spiral pattern across the basin floor. Primary geologic features of Av-11 include the following: ridge-and-groove terrain radiating arcuately from the central mound unit, interpreted to be structural disruption of the basin floor associated with basin formation. The largest crater in Av-11 is Pinaria (37 km); mass wasting deposits are observed on its floor. Secondary crater chains and fields are also evident. Mass wasting observed along the Rheasilvia rim scarp and in the largest craters indicates that scarp failure is a significant process. Parallel fault scarps mark a deposit of slumped debris at the base of the 20 km high Matronalia Rupes, which may have formed during or shortly after basin excavation. We interpret most of these deposits as slump material emplaced as a result of basin formation and collapse. Lobate materials are characterized by lineations and lobate scarps and are interpreted as Rheasilvia ejecta deposits outside the Rheasilvia rim (the smoothest terrain on Vesta), consistent with formation by ejecta. Partial burial of older craters near the edge of these deposits is also observed.

  9. Near-infrared fluorescence cholangiography with indocyanine green for biliary atresia. Real-time imaging during the Kasai procedure: a pilot study.

    PubMed

    Hirayama, Yutaka; Iinuma, Yasushi; Yokoyama, Naoyuki; Otani, Tetsuya; Masui, Daisuke; Komatsuzaki, Naoko; Higashidate, Naruki; Tsuruhisa, Shiori; Iida, Hisataka; Nakaya, Kengo; Naito, Shinichi; Nitta, Koju; Yagi, Minoru

    2015-12-01

    Hepatoportoenterostomy (HPE) with the Kasai procedure is the treatment of choice for biliary atresia (BA) as the initial surgery. However, the appropriate level of dissection of the fibrous cone (FC) of the porta hepatis (PH) is frequently unclear, and the procedure sometimes results in unsuccessful outcomes. Recently, indocyanine green near-infrared fluorescence imaging (ICG-FCG) has been developed as a form of real-time cholangiography. We applied this technique in five patients with BA to visualize the biliary flow at the PH intraoperatively. ICG was injected intravenously the day before surgery as a liver function test, and the liver was observed with a near-infrared camera system during the operation; the patients' feces were also examined. In all patients, the whole liver fluoresced diffusely with ICG-containing stagnant bile, whereas no extrahepatic structures fluoresced. The ICG fluorescence pattern of the PH after dissection of the FC was classified into three types: spotty fluorescence, one patient; diffuse weak fluorescence, three patients; and diffuse strong fluorescence, one patient. In all five patients, the feces evacuated after HPE showed distinct fluorescent spots, although feces obtained before surgery showed no fluorescence. One patient with diffuse strong fluorescence who did not achieve JF underwent living related liver transplantation six months after the initial HPE procedure. The other four patients, including the three cases involving diffuse weak fluorescence and the one case involving spotty fluorescence, showed weak fluorescence compared to that of the surrounding liver surface. We were able to detect the presence of bile excretion at the time of HPE intraoperatively and successfully evaluated the extent of bile excretion using this new technique. Furthermore, the ICG-FCG findings may provide information leading to a new classification and could potentially serve as an indicator predicting clinical outcomes after HPE.

  10. Preliminary study on filamentous particle distribution in septic tank effluent and their impact on filter cake development.

    PubMed

    Spychała, Marcin; Nieć, Jakub; Pawlak, Maciej

    2013-01-01

    In this paper, a preliminary study on the impact of filamentous particles (FP) in septic tank effluent (STE) on filter cake (FC) development is presented. The number, length, and diameter of FPs (on average 30 particles/cm3, 451 µm, and 121 µm, respectively) were measured using microscope image analysis of STE samples condensed with a vacuum evaporation set. The results showed that 0.73% of the volatile suspended solids (VSS) mass in the STE occurs in the form of FPs; no correlation between FP total mass and VSS was found. An experiment with a layer of FPs simulated by ground toilet paper showed the impact of this layer (4.89 mg/cm2) on wastewater hydraulic conductivity (for an FC with FPs (FC-FP), hydraulic conductivity was seven times lower than for the FC without the FP layer) and on outflow quality (a lower concentration of organic matter, expressed as chemical oxygen demand (COD), in effluent from the FC-FP filter than from the FC filter: 618 and 732 g O2/m3, respectively). Despite the relatively small amount of FPs in STE solids (as a volume fraction), they play an important role in FC development due to their relatively high length and low degradability. The relatively small pores of the FC containing FPs (FC-FP) probably caused small-particle blocking and a decrease in permeability.

  11. Development of bimolecular fluorescence complementation using rsEGFP2 for detection and super-resolution imaging of protein-protein interactions in live cells

    PubMed Central

    Wang, Sheng; Ding, Miao; Chen, Xuanze; Chang, Lei; Sun, Yujie

    2017-01-01

    Direct visualization of protein-protein interactions (PPIs) at high spatial and temporal resolution in live cells is crucial for understanding the intricate and dynamic behaviors of signaling protein complexes. Recently, bimolecular fluorescence complementation (BiFC) assays have been combined with super-resolution imaging techniques, including PALM and SOFI, to visualize PPIs at nanometer spatial resolution. RESOLFT nanoscopy has been proven to be a powerful live-cell super-resolution imaging technique. To detect and visualize PPIs in live cells with high temporal and spatial resolution, we developed a BiFC assay using split rsEGFP2, a highly photostable and reversibly photoswitchable fluorescent protein previously developed for RESOLFT nanoscopy. Combined with parallelized RESOLFT microscopy, we demonstrated the high spatiotemporal resolving capability of the rsEGFP2-based BiFC assay by specifically detecting and visualizing the heterodimerization interactions between Bcl-xL and Bak, as well as the dynamics of the complex on the mitochondrial membrane, in live cells. PMID:28663931

  12. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the performance of the ultrahigh-sensitivity camera. The other tested the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscope tip to the target was varied, and endoscopic images were acquired at each setting for comparison. In the first experiment, a 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the image quality of the two cameras was comparable. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescence images under high illumination, in the field of laparoscopic surgery.

  13. An evolution of image source camera attribution approaches.

    PubMed

    Jahanirad, Mehdi; Wahab, Ainuddin Wahid Abdul; Anuar, Nor Badrul

    2016-05-01

    Camera attribution plays an important role in digital image forensics by providing the evidence and distinguishing characteristics of the origin of the digital image. It allows the forensic analyser to find the possible source camera which captured the image under investigation. However, in real-world applications, these approaches have faced many challenges due to the large set of multimedia data publicly available through photo sharing and social network sites, captured under uncontrolled conditions and subjected to a variety of hardware and software post-processing operations. Moreover, the legal system only accepts the forensic analysis of digital image evidence if the applied camera attribution techniques are unbiased, reliable, nondestructive and widely accepted by experts in the field. The aim of this paper is to investigate the evolutionary trend of image source camera attribution approaches from fundamentals to practice, in particular, with the application of image processing and data mining techniques. Extracting implicit knowledge from images using intrinsic image artifacts for source camera attribution requires a structured image mining process. In this paper, we attempt to provide an introductory tutorial on the image processing pipeline, to determine the general classification of the features corresponding to different components for source camera attribution. The article also reviews source camera attribution techniques more comprehensively in the domain of image forensics, together with a classification of ongoing developments within the specified area. The classification of the existing source camera attribution approaches is presented based on specific parameters, such as the colour image processing pipeline, hardware- and software-related artifacts and the methods to extract such artifacts. The more recent source camera attribution approaches, which have not yet gained sufficient attention among image forensics researchers, are also critically analysed and further categorised into four different classes, namely, optical aberrations based, sensor camera fingerprints based, processing statistics based and processing regularities based. Furthermore, this paper investigates the challenging problems and the proposed strategies of such schemes, based on the suggested taxonomy, to plot an evolution of the source camera attribution approaches with respect to the subjective optimisation criteria over the last decade. The optimisation criteria were determined based on the strategies proposed to increase the detection accuracy, robustness and computational efficiency of source camera brand, model or device attribution. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  14. Composite video and graphics display for multiple camera viewing system in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1991-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  15. Composite video and graphics display for camera viewing systems in robotics and teleoperation

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor); Venema, Steven C. (Inventor)

    1993-01-01

    A system for real-time video image display for robotics or remote-vehicle teleoperation is described that has at least one robot arm or remotely operated vehicle controlled by an operator through hand-controllers, and one or more television cameras and optional lighting elements. The system has at least one television monitor for display of a television image from a selected camera and the ability to select one of the cameras for image display. Graphics are generated with icons of cameras and lighting elements for display surrounding the television image to provide the operator information on: the location and orientation of each camera and lighting element; the region of illumination of each lighting element; the viewed region and range of focus of each camera; which camera is currently selected for image display for each monitor; and when the controller coordinates for said robot arms or remotely operated vehicles have been transformed to correspond to coordinates of a selected or nonselected camera.

  16. Learning normalized inputs for iterative estimation in medical image segmentation.

    PubMed

    Drozdzal, Michal; Chartrand, Gabriel; Vorontsov, Eugene; Shakeri, Mahsa; Di Jorio, Lisa; Tang, An; Romero, Adriana; Bengio, Yoshua; Pal, Chris; Kadoury, Samuel

    2018-02-01

    In this paper, we introduce a simple yet powerful pipeline for medical image segmentation that combines Fully Convolutional Networks (FCNs) with Fully Convolutional Residual Networks (FC-ResNets). We propose and examine a design that takes particular advantage of recent advances in the understanding of both Convolutional Neural Networks and ResNets. Our approach focuses on the importance of a trainable pre-processing step when using FC-ResNets, and we show that a low-capacity FCN model can serve as a pre-processor to normalize medical input data. In our image segmentation pipeline, we use FCNs to obtain normalized images, which are then iteratively refined by means of an FC-ResNet to generate a segmentation prediction. As in other fully convolutional approaches, our pipeline can be used off-the-shelf on different image modalities. We show that, using this pipeline, we achieve state-of-the-art performance on the challenging Electron Microscopy benchmark when compared to other 2D methods. We improve segmentation results on CT images of liver lesions compared with standard FCN methods. Moreover, when applying our 2D pipeline to a challenging 3D MRI prostate segmentation challenge, we reach results that are competitive even with 3D methods. The obtained results illustrate the strong potential and versatility of the pipeline, achieving accurate segmentations on a variety of image modalities and different anatomical regions. Copyright © 2017 Elsevier B.V. All rights reserved.
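
    A toy PyTorch sketch of the two-stage idea described above: a low-capacity fully convolutional pre-processor feeding a residual refinement network. The layer counts, widths, and class names are assumptions for illustration, not the architecture from the paper.

```python
import torch
import torch.nn as nn

class TinyFCN(nn.Module):
    """Low-capacity fully convolutional pre-processor that 'normalizes'
    the raw input (illustrative stand-in for the paper's FCN stage)."""
    def __init__(self, in_ch=1, width=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, width, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(width, in_ch, 3, padding=1),
        )
    def forward(self, x):
        return self.net(x)

class ResBlock(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.conv1 = nn.Conv2d(ch, ch, 3, padding=1)
        self.conv2 = nn.Conv2d(ch, ch, 3, padding=1)
        self.relu = nn.ReLU(inplace=True)
    def forward(self, x):
        return self.relu(x + self.conv2(self.relu(self.conv1(x))))

class FCResNetRefiner(nn.Module):
    """Residual network that refines the normalized image into a
    segmentation map (a toy version of the FC-ResNet stage)."""
    def __init__(self, in_ch=1, width=32, n_blocks=4, n_classes=2):
        super().__init__()
        self.stem = nn.Conv2d(in_ch, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResBlock(width) for _ in range(n_blocks)])
        self.head = nn.Conv2d(width, n_classes, 1)
    def forward(self, x):
        return self.head(self.blocks(self.stem(x)))

pre, refiner = TinyFCN(), FCResNetRefiner()
ct_slice = torch.randn(1, 1, 128, 128)        # fake single-channel image
logits = refiner(pre(ct_slice))               # (1, n_classes, 128, 128)
print(logits.shape)
```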

  17. Amygdala functional disconnection with the prefrontal-cingulate-temporal circuit in chronic tinnitus patients with depressive mood.

    PubMed

    Chen, Yu-Chen; Bo, Fan; Xia, Wenqing; Liu, Shenghua; Wang, Peng; Su, Wen; Xu, Jin-Jing; Xiong, Zhenyu; Yin, Xindao

    2017-10-03

    Chronic tinnitus is often accompanied by depressive symptoms, which may arise from aberrant functional coupling between the amygdala and the cerebral cortex. To explore this hypothesis, resting-state functional magnetic resonance imaging (fMRI) was used to investigate disrupted amygdala-cortical functional connectivity (FC) in chronic tinnitus patients with depressive mood. Chronic tinnitus patients with depressive mood (n=20), patients without depressive mood (n=20), and well-matched healthy controls (n=23) underwent resting-state fMRI scanning. Amygdala-cortical FC was characterized using a seed-based whole-brain correlation method, and bilateral amygdala FC was compared among the three groups. Compared to non-depressed patients, depressive tinnitus patients showed decreased amygdala FC with the prefrontal cortex and anterior cingulate cortex, as well as increased amygdala FC with the postcentral gyrus and lingual gyrus. Relative to healthy controls, depressive tinnitus patients showed decreased amygdala FC with the superior and middle temporal gyrus, anterior and posterior cingulate cortex, and prefrontal cortex, as well as increased amygdala FC with the postcentral gyrus and lingual gyrus. The current study identified, for the first time, abnormal resting-state amygdala FC with the prefrontal-cingulate-temporal circuit in chronic tinnitus patients with depressive mood, providing novel insight into the underlying neuropathological mechanisms of tinnitus-induced depressive disorder. Copyright © 2017 Elsevier Inc. All rights reserved.
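
    A minimal sketch of the seed-based whole-brain correlation step: the mean time series of a seed mask is correlated with every voxel's time series to form an FC map. The synthetic data and array layout are assumptions, not the authors' processing pipeline.

```python
import numpy as np

def seed_based_fc(voxel_ts, seed_mask):
    """Seed-based functional connectivity map.

    voxel_ts:  array of shape (n_voxels, n_timepoints) of BOLD time series.
    seed_mask: boolean array of length n_voxels marking the seed (e.g. amygdala).
    Returns the Pearson correlation of each voxel with the mean seed signal.
    """
    seed = voxel_ts[seed_mask].mean(axis=0)
    vt = voxel_ts - voxel_ts.mean(axis=1, keepdims=True)
    s = seed - seed.mean()
    num = vt @ s
    den = np.sqrt((vt ** 2).sum(axis=1) * (s ** 2).sum())
    return num / den

rng = np.random.default_rng(0)
ts = rng.standard_normal((500, 200))       # 500 voxels, 200 volumes (synthetic)
mask = np.zeros(500, dtype=bool)
mask[:10] = True                           # hypothetical seed voxels
fc_map = seed_based_fc(ts, mask)           # maps can then be compared across groups
print(fc_map.shape, fc_map[mask].mean())
```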

  18. Study of the influence of over-the-counter vitamin supplement intake on urine fluorescence to optimize cancer detection by fluorescence cystoscopy

    NASA Astrophysics Data System (ADS)

    Zellweger, Matthieu; Martoccia, Carla; Mengin, Matthieu; Iselin, Christophe; Bergh, Hubert van den; Wagnières, Georges

    2015-06-01

    Fluorescence cystoscopy (FC) efficiently enhances the detection and improves the therapeutic management of early bladder cancer. During an FC, about 150 ml of water is needed to inflate the bladder. This water quickly mixes with urine, which can be fluorescent. If the bladder washout fluid (BWF) becomes fluorescent, the FC images are frequently degraded. Unfortunately, it is unclear which elements of the diet may contribute to this background fluorescence. We propose to start this exploration with over-the-counter (OTC) vitamin supplements. To this end, we measured excitation-emission matrices of urine samples and the kinetics of changes in urine fluorescence obtained from nine healthy volunteers before, during, and after intake of a commercially available OTC vitamin supplement. The pharmacokinetics show that the BWF fluorescence values reach a maximum 8 to 10 h after vitamin intake. They decrease over the following half-day and reach values close to baseline ~1 day afterward. Based on these results, we conclude that, in order to avoid degradation of the fluorescence images, it is likely best that the intake of OTC vitamin supplements be avoided during the week preceding an FC.

  19. Digital Camera with Apparatus for Authentication of Images Produced from an Image File

    NASA Technical Reports Server (NTRS)

    Friedman, Gary L. (Inventor)

    1996-01-01

    A digital camera equipped with a processor for authentication of images produced from an image file taken by the digital camera is provided. The digital camera processor has embedded therein a private key unique to it, and the camera housing has a public key that is so uniquely related to the private key that digital data encrypted with the private key may be decrypted using the public key. The digital camera processor comprises means for calculating a hash of the image file using a predetermined algorithm, and second means for encrypting the image hash with the private key, thereby producing a digital signature. The image file and the digital signature are stored in suitable recording means so they will be available together. Apparatus for authenticating the image file as being free of any alteration uses the public key for decrypting the digital signature, thereby deriving a secure image hash identical to the image hash produced by the digital camera and used to produce the digital signature. The authenticating apparatus calculates from the image file an image hash using the same algorithm as before. By comparing this last image hash with the secure image hash, authenticity of the image file is determined if they match. Other techniques to address time-honored methods of deception, such as attaching false captions or inducing forced perspectives, are included.

  20. Image Alignment for Multiple Camera High Dynamic Range Microscopy.

    PubMed

    Eastwood, Brian S; Childs, Elisabeth C

    2012-01-09

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera.
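
    A simplified sketch in the spirit of the approach above: each exposure is converted to an approximate radiant power image (assuming a linear, calibrated response), ORB descriptors are matched, and a homography is estimated with OpenCV. The file names, exposure times, and the choice of ORB are assumptions for illustration, not the descriptor used in the paper.

```python
import cv2
import numpy as np

def to_radiant_power(img, exposure_s):
    """Approximate radiant power image for a linear (calibrated) camera:
    divide the raw intensities by the exposure time."""
    return img.astype(np.float32) / exposure_s

def align_exposures(img_a, exp_a, img_b, exp_b):
    """Estimate the homography mapping image B onto image A using ORB
    features extracted from log radiant-power images."""
    def prep(img, exp):
        rad = np.log1p(to_radiant_power(img, exp))
        rad = cv2.normalize(rad, None, 0, 255, cv2.NORM_MINMAX)
        return rad.astype(np.uint8)

    orb = cv2.ORB_create(2000)
    ka, da = orb.detectAndCompute(prep(img_a, exp_a), None)
    kb, db = orb.detectAndCompute(prep(img_b, exp_b), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(da, db)
    src = np.float32([kb[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([ka[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Usage (hypothetical file names and exposure times):
# a = cv2.imread("short_exposure.png", cv2.IMREAD_GRAYSCALE)
# b = cv2.imread("long_exposure.png", cv2.IMREAD_GRAYSCALE)
# H = align_exposures(a, 1/500, b, 1/30)
# warped = cv2.warpPerspective(b, H, (a.shape[1], a.shape[0]))
```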

  1. Image Alignment for Multiple Camera High Dynamic Range Microscopy

    PubMed Central

    Eastwood, Brian S.; Childs, Elisabeth C.

    2012-01-01

    This paper investigates the problem of image alignment for multiple camera high dynamic range (HDR) imaging. HDR imaging combines information from images taken with different exposure settings. Combining information from multiple cameras requires an alignment process that is robust to the intensity differences in the images. HDR applications that use a limited number of component images require an alignment technique that is robust to large exposure differences. We evaluate the suitability for HDR alignment of three exposure-robust techniques. We conclude that image alignment based on matching feature descriptors extracted from radiant power images from calibrated cameras yields the most accurate and robust solution. We demonstrate the use of this alignment technique in a high dynamic range video microscope that enables live specimen imaging with a greater level of detail than can be captured with a single camera. PMID:22545028

  2. Pre-flight and On-orbit Geometric Calibration of the Lunar Reconnaissance Orbiter Camera

    NASA Astrophysics Data System (ADS)

    Speyerer, E. J.; Wagner, R. V.; Robinson, M. S.; Licht, A.; Thomas, P. C.; Becker, K.; Anderson, J.; Brylow, S. M.; Humm, D. C.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) consists of two imaging systems that provide multispectral and high resolution imaging of the lunar surface. The Wide Angle Camera (WAC) is a seven color push-frame imager with a 90° field of view in monochrome mode and 60° field of view in color mode. From the nominal 50 km polar orbit, the WAC acquires images with a nadir ground sampling distance of 75 m for each of the five visible bands and 384 m for the two ultraviolet bands. The Narrow Angle Camera (NAC) consists of two identical cameras capable of acquiring images with a ground sampling distance of 0.5 m from an altitude of 50 km. The LROC team geometrically calibrated each camera before launch at Malin Space Science Systems in San Diego, California and the resulting measurements enabled the generation of a detailed camera model for all three cameras. The cameras were mounted and subsequently launched on the Lunar Reconnaissance Orbiter (LRO) on 18 June 2009. Using a subset of the over 793,000 NAC and 207,000 WAC images of illuminated terrain collected between 30 June 2009 and 15 December 2013, we improved the interior and exterior orientation parameters for each camera, including the addition of a wavelength dependent radial distortion model for the multispectral WAC. These geometric refinements, along with refined ephemeris, enable seamless projections of NAC image pairs with a geodetic accuracy better than 20 meters and sub-pixel precision and accuracy when orthorectifying WAC images.
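
    The calibration details are not given in the abstract; as a generic illustration of what a wavelength-dependent radial distortion correction can look like, here is a Brown-style model with hypothetical per-band coefficients (not the actual LROC camera model).

```python
import numpy as np

def undistort_radial(u, v, cx, cy, k1, k2=0.0):
    """Generic Brown-style radial distortion correction (illustrative only;
    not the actual LROC WAC camera model).

    (u, v): distorted pixel coordinates; (cx, cy): principal point;
    k1, k2: radial coefficients, which can be made wavelength dependent
    by supplying a different k1 per spectral band.
    """
    x, y = u - cx, v - cy
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return cx + x * scale, cy + y * scale

# Hypothetical per-band coefficients: each wavelength gets its own k1,
# mimicking a wavelength-dependent distortion model.
k1_per_band = {415e-9: -2.0e-9, 566e-9: -2.2e-9, 689e-9: -2.4e-9}
u_c, v_c = undistort_radial(900.0, 20.0, cx=512.0, cy=7.0, k1=k1_per_band[566e-9])
print(u_c, v_c)
```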

  3. Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products

    NASA Astrophysics Data System (ADS)

    Williams, Don; Burns, Peter D.

    2007-01-01

    There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest many existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, and image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and by image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity, and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.

  4. NV-CMOS HD camera for day/night imaging

    NASA Astrophysics Data System (ADS)

    Vogelsong, T.; Tower, J.; Sudol, Thomas; Senko, T.; Chodelka, D.

    2014-06-01

    SRI International (SRI) has developed a new multi-purpose day/night video camera with low-light imaging performance comparable to an image intensifier, while offering the size, weight, ruggedness, and cost advantages enabled by the use of SRI's NV-CMOS HD digital image sensor chip. The digital video output is ideal for image enhancement, sharing with others through networking, video capture for data analysis, or fusion with thermal cameras. The camera provides Camera Link output with HD/WUXGA resolution of 1920 x 1200 pixels operating at 60 Hz. Windowing to smaller sizes enables operation at higher frame rates. High sensitivity is achieved through the use of backside illumination, providing high Quantum Efficiency (QE) across the visible and near-infrared (NIR) bands (peak QE >90%), as well as projected low-noise (<2 e-) readout. Power consumption is minimized in the camera, which operates from a single 5 V supply. The NV-CMOS HD camera provides a substantial reduction in size, weight, and power (SWaP), ideal for SWaP-constrained day/night imaging platforms such as UAVs, ground vehicles, and fixed-mount surveillance, and may be reconfigured for mobile soldier operations such as night vision goggles and weapon sights. In addition, the camera with the NV-CMOS HD imager is suitable for high-performance digital cinematography/broadcast systems, biofluorescence/microscopy imaging, day/night security and surveillance, and other high-end applications which require HD video imaging with high sensitivity and wide dynamic range. The camera comes with an array of lens mounts including C-mount and F-mount. The latest test data from the NV-CMOS HD camera will be presented.

  5. Alteration of the Intra- and Cross- Hemisphere Posterior Default Mode Network in Frontal Lobe Glioma Patients.

    PubMed

    Zhang, Haosu; Shi, Yonghong; Yao, Chengjun; Tang, Weijun; Yao, Demin; Zhang, Chenxi; Wang, Manning; Wu, Jinsong; Song, Zhijian

    2016-06-01

    Patients with frontal lobe gliomas often experience neurocognitive dysfunction before surgery, which affects the default mode network (DMN) to different degrees. This study quantitatively analyzed this effect from the perspective of cerebral hemispheric functional connectivity (FC). We collected resting-state fMRI data from 20 frontal lobe glioma patients before treatment and 20 healthy controls. All of the patients and controls were right-handed. After pre-processing the images, FC maps were built from a seed defined in the left or right posterior cingulate cortex (PCC) to target regions determined in the left or right temporal-parietal junction (TPJ), respectively. Intra- and cross-group statistical comparisons of FC strength were performed. The conclusions were as follows: (1) the intra-hemisphere FC strength values between the PCC and TPJ on the left and right were decreased in patients compared with controls; and (2) the correlation coefficients between the FC pairs in the patients were increased compared with the corresponding controls. When all of the patients were grouped by their tumor's hemispheric location, (3) the FC of the subgroups showed that the dominant hemisphere was vulnerable to glioma, and (4) the FC in the dominant hemisphere showed a significant correlation with WHO grade.

  6. Toward an image compression algorithm for the high-resolution electronic still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    Taking pictures with a camera that uses a digital recording medium instead of film has the advantage of recording and transmitting images without the use of a darkroom or a courier. However, high-resolution images contain an enormous amount of information and strain data-storage systems. Image compression will allow multiple images to be stored in the High-Resolution Electronic Still Camera. The camera is under development at Johnson Space Center. Fidelity of the reproduced image and compression speed are of paramount importance. Lossless compression algorithms are fast and faithfully reproduce the image, but their compression ratios will be unacceptably low due to noise in the front end of the camera. Future efforts will include exploring methods that will reduce the noise in the image and increase the compression ratio.

  7. Contralesional homotopic activity negatively influences functional recovery after stroke (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bauer, Adam Q.; Kraft, Andrew; Baxter, Grant A.; Bruchas, Michael R.; Lee, Jin-Moo; Culver, Joseph P.

    2017-02-01

    Recent fcMRI studies examining spontaneous brain activity after stroke have revealed disrupted global patterns of functional connectivity (FC). Interestingly, acute interhemispheric homotopic FC has been shown to be predictive of recovery potential. While substantial indirect evidence also suggests that homotopic brain activity may directly impact recovery, results in humans are extremely varied. A better understanding of how activity within networks functionally connected to lesioned tissue influences brain plasticity might improve therapeutic strategies. We combine cell-type specific optogenetic targeting with optical intrinsic signal (OIS) imaging to assess the effects of homotopic contralesional activity (specifically in excitatory CamKIIa pyramidal neurons) on FC, cortical remapping, and behavior after stroke. Thirty-one mice were housed in enriched cages for the experiment. OIS imaging was performed before, 1, and 4 weeks after photothrombosis of left forepaw somatosensory cortex (S1fp). On day 1 after stroke, 17 mice were subjected to chronic, intermittent optical stimulation of right S1fp for 10 min, 5 days/week for 4 weeks. New cortical representations of left S1fp appeared in non-stimulated mice at week 1, but not in stimulated mice (p=0.005). Evoked responses were comparable in both groups at week 4 (p=0.57). Homotopic FC between left and right S1fp regions was equally reduced in both groups (p=0.012) at week 1. However, in non-stimulated mice, behavioral performance and FC between right S1fp and left perilesional S1 cortex were significantly higher by 4 weeks compared to stimulated mice (p=0.009). Our results suggest that increased homotopic, contralesional activity in excitatory neurons negatively influences spontaneous recovery following ischemic stroke.

  8. Ventrolateral Motor Thalamus Abnormal Connectivity in Essential Tremor Before and After Thalamotomy: A Resting-State Functional Magnetic Resonance Imaging Study.

    PubMed

    Tuleasca, Constantin; Najdenovska, Elena; Régis, Jean; Witjas, Tatiana; Girard, Nadine; Champoudry, Jérôme; Faouzi, Mohamed; Thiran, Jean-Philippe; Cuadra, Meritxell Bach; Levivier, Marc; Van De Ville, Dimitri

    2018-05-01

    To evaluate functional connectivity (FC) of the ventrolateral thalamus, a common target for drug-resistant essential tremor (ET), resting-state data were analyzed before and 1 year after stereotactic radiosurgical thalamotomy and compared against healthy controls (HCs). In total, 17 consecutive patients with ET and 10 HCs were enrolled. The tremor network was investigated using the ventrolateral ventral (VLV) thalamic nucleus as the region of interest, extracted with automated segmentation from pretherapeutic diffusion magnetic resonance imaging. Temporal correlations of the VLV at the whole-brain level were evaluated by comparing drug-naïve patients with ET with HCs, and longitudinally, 1 year after stereotactic radiosurgical thalamotomy. One year after thalamotomy, the MR signature was always located inside the VLV and did not correlate with any of the FC measures (P > 0.05), suggesting that the longitudinal changes in VLV FC occurred independently of the MR signature volume. Pretherapeutic patients with ET displayed altered VLV FC with the left primary sensory-motor cortex, pedunculopontine nucleus, dorsal anterior cingulate, left visual association, and left superior parietal areas. Pretherapeutic negative FC with the primary somatosensory cortex and pedunculopontine nucleus correlated with poorer baseline tremor scores (Spearman = 0.04 and 0.01). The longitudinal analysis displayed changes within the right dorsal attention (frontal eye-fields and posterior parietal) and salience (anterior insula) networks, as well as in areas involved in hand movement planning or language production. Our results demonstrated that patients with ET and HCs differ in their left VLV FC to primary somatosensory and supplementary motor, visual association, or brainstem areas (pedunculopontine nucleus). Longitudinal changes display reorganization of the dorsal attention and salience networks after thalamotomy. Besides serving as an attentional gateway, these networks are also known for their major role in facilitating rapid access to the motor system. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or when the object is further away from the observer, increasing the recording device resolution helps little to improve the quality of the image. Many sophisticated methods to correct the distorted images have been invented, such as using a known feature on or near the target object to perform a deconvolution process, or use of adaptive optics. However, most of the methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an innovative approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object as well as "superimposed" turbulence at slightly different angles. By performing several steps of image reconstruction, turbulence effects will be suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky image algorithmic analysis with fewer frames, which is more efficient. The details of our modified plenoptic camera and image processing algorithms are introduced. The proposed method can be applied to coherently illuminated objects as well as incoherently illuminated objects. Our result shows that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" by ordinary cameras is not achievable.
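
    One building block mentioned above, selecting a "lucky" frame from a short sequence, can be sketched independently of the plenoptic reconstruction; a common heuristic (assumed here for illustration, not taken from the paper) is to rank frames by the variance of their Laplacian as a sharpness score.

        import numpy as np
        from scipy.ndimage import laplace, gaussian_filter

        def sharpness(frame):
            """Variance of the Laplacian: higher values indicate a sharper, less blurred frame."""
            return laplace(frame.astype(float)).var()

        def lucky_frame(frames):
            """Return the sharpest frame from an iterable of 2-D arrays."""
            frames = list(frames)
            scores = [sharpness(f) for f in frames]
            return frames[int(np.argmax(scores))]

        # Synthetic demo: one sharp frame among blurred copies of the same scene.
        rng = np.random.default_rng(1)
        sharp = rng.random((128, 128))
        sequence = [gaussian_filter(sharp, 2.0), sharp, gaussian_filter(sharp, 1.0)]
        best = lucky_frame(sequence)
        print(np.allclose(best, sharp))   # True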

  10. Automatic source camera identification using the intrinsic lens radial distortion

    NASA Astrophysics Data System (ADS)

    Choi, Kai San; Lam, Edmund Y.; Wong, Kenneth K. Y.

    2006-11-01

    Source camera identification refers to the task of matching digital images with the cameras that are responsible for producing these images. This is an important task in image forensics, which in turn is a critical procedure in law enforcement. Unfortunately, few digital cameras are equipped with the capability of producing watermarks for this purpose. In this paper, we demonstrate that it is possible to achieve a high rate of accuracy in the identification by noting the intrinsic lens radial distortion of each camera. To reduce manufacturing cost, the majority of digital cameras are equipped with lenses having rather spherical surfaces, whose inherent radial distortions serve as unique fingerprints in the images. We extract, for each image, parameters from aberration measurements, which are then used to train and test a support vector machine classifier. We conduct extensive experiments to evaluate the success rate of a source camera identification with five cameras. The results show that this is a viable approach with high accuracy. Additionally, we also present results on how the error rates may change with images captured using various optical zoom levels, as zooming is commonly available in digital cameras.
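
    The classification stage can be sketched with scikit-learn; the feature extraction itself (estimating radial distortion parameters from each image) is outside the scope of this toy example, so the feature matrix below is synthetic and the variable names are hypothetical.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import accuracy_score

        # Hypothetical setup: 5 cameras, 100 images each, 2 radial-distortion
        # parameters (e.g. k1, k2) estimated per image.
        rng = np.random.default_rng(0)
        camera_centers = rng.normal(0, 1, size=(5, 2))          # each camera's distortion "fingerprint"
        X = np.vstack([c + rng.normal(0, 0.1, size=(100, 2)) for c in camera_centers])
        y = np.repeat(np.arange(5), 100)

        X_train, X_test, y_train, y_test = train_test_split(
            X, y, test_size=0.3, random_state=0, stratify=y)

        clf = SVC(kernel="rbf", C=10.0, gamma="scale").fit(X_train, y_train)
        print("identification accuracy:", accuracy_score(y_test, clf.predict(X_test)))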

  11. Measuring Positions of Objects using Two or More Cameras

    NASA Technical Reports Server (NTRS)

    Klinko, Steve; Lane, John; Nelson, Christopher

    2008-01-01

    An improved method of computing positions of objects from digitized images acquired by two or more cameras (see figure) has been developed for use in tracking debris shed by a spacecraft during and shortly after launch. The method is also readily adaptable to such applications as (1) tracking moving and possibly interacting objects in other settings in order to determine causes of accidents and (2) measuring positions of stationary objects, as in surveying. Images acquired by cameras fixed to the ground and/or cameras mounted on tracking telescopes can be used in this method. In this method, processing of image data starts with creation of detailed computer-aided design (CAD) models of the objects to be tracked. By rotating, translating, resizing, and overlaying the models with digitized camera images, parameters that characterize the position and orientation of the camera can be determined. The final position error depends on how well the centroids of the objects in the images are measured; how accurately the centroids are interpolated for synchronization of cameras; and how effectively matches are made to determine rotation, scaling, and translation parameters. The method involves use of the perspective camera model (also denoted the point camera model), which is one of several mathematical models developed over the years to represent the relationships between external coordinates of objects and the coordinates of the objects as they appear on the image plane in a camera. The method also involves extensive use of the affine camera model, in which the distance from the camera to an object (or to a small feature on an object) is assumed to be much greater than the size of the object (or feature), resulting in a truly two-dimensional image. The affine camera model does not require advance knowledge of the positions and orientations of the cameras. This is because ultimately, positions and orientations of the cameras and of all objects are computed in a coordinate system attached to one object as defined in its CAD model.
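
    For the perspective (point) camera model mentioned above, the position of a feature seen by two calibrated cameras can be recovered by linear triangulation; the sketch below is a generic direct linear transform, not the article's specific CAD-model-matching procedure, and the camera matrices are made up for the demo.

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of one point.
            P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) pixel coordinates.
            Returns the 3-D point in world coordinates."""
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, Vt = np.linalg.svd(A)
            X = Vt[-1]
            return X[:3] / X[3]

        # Demo: two simple cameras observing the point (1, 2, 10).
        K = np.array([[800.0, 0, 320], [0, 800, 240], [0, 0, 1]])
        P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
        P2 = K @ np.hstack([np.eye(3), np.array([[-0.5], [0], [0]])])   # camera shifted along x
        Xw = np.array([1.0, 2.0, 10.0, 1.0])
        x1 = (P1 @ Xw)[:2] / (P1 @ Xw)[2]
        x2 = (P2 @ Xw)[:2] / (P2 @ Xw)[2]
        print(triangulate(P1, P2, x1, x2))   # approximately [1. 2. 10.]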

  12. Localization of neonatal Fc receptor for IgG in aggregated lymphoid nodules area in abomasum of Bactrian camels (Camelus bactrianus) of different ages.

    PubMed

    Zhang, Wang-Dong; Wang, Wen-Hui; Li, Shu-Xian; Jia, Shuai; Zhang, Xue-Feng; Cao, Ting-Ting

    2016-10-20

    The neonatal Fc receptor (FcRn) plays a crucial role in transporting IgG and associated antigens across polarized epithelial barriers in mucosal immunity. However, FcRn expression had not been characterized in the aggregated lymphoid nodules area (ALNA) of the abomasum, a unique and important mucosal immune structure found only in Bactrian camels. In the present study, 27 Alashan Bactrian camels were divided into five age groups: fetus (10-13 months of gestation), young (1-2 years), pubertal (3-5 years), middle-aged (6-16 years) and old (17-20 years). FcRn expression was observed and analyzed in detail using histology, immunohistochemistry, micro-image analysis and statistical methods. The results showed that FcRn was expressed in the mucosal epithelial cells of the ALNA from the fetal to the old group, although the expression level declined rapidly in the old group; moreover, after the ALNA matured, the FcRn expression level in the non-follicle-associated epithelium (non-FAE) was significantly higher than that in the FAE (P < 0.05). In addition, FcRn was also expressed in the vessel endothelium, smooth muscle tissue, and the macrophages and dendritic cells (DCs) of secondary lymphoid follicles (sLFs). These findings demonstrate that FcRn was expressed mainly in the non-FAE, the effector sites of mucosal immunity, although it was also expressed in the FAE, the inductive sites, and in DCs and macrophages in sLFs of Bactrian camels of all ages. The results provide strong evidence that IgG (including HCAb) can participate in mucosal immune responses and tolerance in the ALNA of Bactrian camels through FcRn-mediated transmembrane transport.

  13. Shifted intrinsic connectivity of central executive and salience network in borderline personality disorder

    PubMed Central

    Doll, Anselm; Sorg, Christian; Manoliu, Andrei; Wöller, Andreas; Meng, Chun; Förstl, Hans; Zimmer, Claus; Wohlschläger, Afra M.; Riedl, Valentin

    2013-01-01

    Borderline personality disorder (BPD) is characterized by “stable instability” of emotions and behavior and their regulation. This emotional and behavioral instability corresponds with a neurocognitive triple network model of psychopathology, which suggests that aberrant emotional saliency and cognitive control is associated with aberrant interaction across three intrinsic connectivity networks [i.e., the salience network (SN), default mode network (DMN), and central executive network (CEN)]. The objective of the current study was to investigate whether and how such triple network intrinsic functional connectivity (iFC) is changed in patients with BPD. We acquired resting-state functional magnetic resonance imaging (rs-fMRI) data from 14 patients with BPD and 16 healthy controls. High-model order independent component analysis was used to extract spatiotemporal patterns of ongoing, coherent blood-oxygen-level-dependent signal fluctuations from rs-fMRI data. Main outcome measures were iFC within networks (intra-iFC) and between networks (i.e., network time course correlation inter-iFC). Aberrant intra-iFC was found in patients’ DMN, SN, and CEN, consistent with previous findings. While patients’ inter-iFC of the CEN was decreased, inter-iFC of the SN was increased. In particular, a balance index reflecting the relationship of CEN- and SN-inter-iFC across networks was strongly shifted from CEN to SN connectivity in patients. Results provide first preliminary evidence for aberrant triple network iFC in BPD. Our data suggest a shift of inter-network iFC from networks involved in cognitive control to those of emotion-related activity in BPD, potentially reflecting the persistent instability of emotion regulation in patients. PMID:24198777

  14. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study

    PubMed Central

    Holschneider, Daniel P.; Wang, Zhuo; Pang, Raina D.

    2014-01-01

    Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [14C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow (CBF) was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas–findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain architecture. PMID:24966831

  15. Functional connectivity-based parcellation and connectome of cortical midline structures in the mouse: a perfusion autoradiography study.

    PubMed

    Holschneider, Daniel P; Wang, Zhuo; Pang, Raina D

    2014-01-01

    Rodent cortical midline structures (CMS) are involved in emotional, cognitive and attentional processes. Tract tracing has revealed complex patterns of structural connectivity demonstrating connectivity-based integration and segregation for the prelimbic, cingulate area 1, retrosplenial dysgranular cortices dorsally, and infralimbic, cingulate area 2, and retrosplenial granular cortices ventrally. Understanding of CMS functional connectivity (FC) remains more limited. Here we present the first subregion-level FC analysis of the mouse CMS, and assess whether fear results in state-dependent FC changes analogous to what has been reported in humans. Brain mapping using [(14)C]-iodoantipyrine was performed in mice during auditory-cued fear conditioned recall and in controls. Regional cerebral blood flow (CBF) was analyzed in 3-D images reconstructed from brain autoradiographs. Regions-of-interest were selected along the CMS anterior-posterior and dorsal-ventral axes. In controls, pairwise correlation and graph theoretical analyses showed strong FC within each CMS structure, strong FC along the dorsal-ventral axis, with segregation of anterior from posterior structures. Seed correlation showed FC of anterior regions to limbic/paralimbic areas, and FC of posterior regions to sensory areas-findings consistent with functional segregation noted in humans. Fear recall increased FC between the cingulate and retrosplenial cortices, but decreased FC between dorsal and ventral structures. In agreement with reports in humans, fear recall broadened FC of anterior structures to the amygdala and to somatosensory areas, suggesting integration and processing of both limbic and sensory information. Organizational principles learned from animal models at the mesoscopic level (brain regions and pathways) will not only critically inform future work at the microscopic (single neurons and synapses) level, but also have translational value to advance our understanding of human brain architecture.

  16. Earth elevation map production and high resolution sensing camera imaging analysis

    NASA Astrophysics Data System (ADS)

    Yang, Xiubin; Jin, Guang; Jiang, Li; Dai, Lu; Xu, Kai

    2010-11-01

    A digital elevation map of the Earth, which affects space camera imaging, was produced and its effect on imaging was analyzed. Based on the image-motion matching error allowed by the TDI CCD integration stages, a statistical experimental method (the Monte Carlo method) is used to calculate the distribution histogram of the Earth's elevation within an image-motion compensation model that includes changes in satellite attitude, orbital angular rate, latitude, longitude and orbital inclination. Elevation information for the Earth's surface is then read from SRTM data, and the elevation map produced for aerospace electronic cameras is compressed and spliced. Elevation data are retrieved from flash memory according to the latitude and longitude of the imaging point; when the point falls between two stored samples, linear interpolation is used, which better accommodates rugged mountain and hill terrain. Finally, a deviation framework and the camera controller are used to characterize pointing-angle errors, and a TDI CCD camera simulation system based on an object-point-to-image-point model is used to analyze the imaging MTF and a cross-correlation similarity measure; the simulation accumulates the horizontal and vertical pixel offsets exceeded by the TDI CCD imaging to simulate camera imaging when the satellite attitude stability changes. The process is practical: it effectively limits the camera memory required and achieves good precision in matching the TDI CCD camera to the required image-motion velocity during imaging.
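
    The elevation lookup with linear interpolation described above can be illustrated with a small bilinear interpolation routine over a gridded elevation tile; the grid spacing, tile contents and function names below are assumptions for illustration, not the flight software.

        import numpy as np

        def elevation_at(dem, lat0, lon0, dlat, dlon, lat, lon):
            """Bilinearly interpolate a gridded DEM tile.
            dem[i, j] holds the elevation at latitude lat0 + i*dlat, longitude lon0 + j*dlon."""
            i = (lat - lat0) / dlat
            j = (lon - lon0) / dlon
            i0, j0 = int(np.floor(i)), int(np.floor(j))
            di, dj = i - i0, j - j0
            z00, z01 = dem[i0, j0], dem[i0, j0 + 1]
            z10, z11 = dem[i0 + 1, j0], dem[i0 + 1, j0 + 1]
            return (z00 * (1 - di) * (1 - dj) + z01 * (1 - di) * dj
                    + z10 * di * (1 - dj) + z11 * di * dj)

        # Tiny 3-arcsecond (~90 m) tile starting at 40 N, 116 E.
        dem = np.array([[500., 520., 540.],
                        [510., 530., 550.],
                        [520., 540., 560.]])
        print(elevation_at(dem, 40.0, 116.0, 1 / 1200, 1 / 1200, 40.0007, 116.0011))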

  17. Technique for improving the quality of images from digital cameras using ink-jet printers and smoothed RGB transfer curves

    NASA Astrophysics Data System (ADS)

    Sampat, Nitin; Grim, John F.; O'Hara, James E.

    1998-04-01

    The digital camera market is growing at an explosive rate. At the same time, the quality of photographs printed on ink-jet printers continues to improve. Most consumer cameras are designed with the monitor as the target output device and not the printer. When a user is printing his images from a camera, he/she needs to optimize the camera and printer combination in order to maximize image quality. We describe the details of one such method for improving image quality using an AGFA digital camera and an ink-jet printer combination. Using Adobe PhotoShop, we generated optimum red, green and blue transfer curves that match the scene content to the printer's output capabilities. Application of these curves to the original digital image resulted in a print with more shadow detail, no loss of highlight detail, a smoother tone scale, and more saturated colors. The prints also exhibited an improved tonal scale and were visually more pleasing than those captured and printed without any 'correction'. While we report the results for one camera-printer combination, we tested this technique on numerous digital camera and printer combinations and in each case produced a better-looking image. We also discuss the problems we encountered in implementing this technique.
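
    A transfer curve of the kind generated in Photoshop can be applied programmatically as a per-channel look-up table; the control points below are made-up placeholders, not the curves derived for the AGFA camera and ink-jet printer in the paper.

        import numpy as np

        def apply_transfer_curves(rgb_u8, curves):
            """Apply per-channel transfer curves given as (input, output) control points.
            rgb_u8: HxWx3 uint8 image; curves: dict with 'r', 'g', 'b' lists of control points."""
            out = np.empty_like(rgb_u8)
            levels = np.arange(256)
            for ch, key in enumerate("rgb"):
                xs, ys = zip(*curves[key])
                lut = np.interp(levels, xs, ys).astype(np.uint8)   # piecewise-linear look-up table
                out[..., ch] = lut[rgb_u8[..., ch]]
            return out

        # Hypothetical curves that lift shadows while protecting highlights.
        curves = {"r": [(0, 10), (128, 140), (255, 255)],
                  "g": [(0, 8),  (128, 135), (255, 255)],
                  "b": [(0, 12), (128, 138), (255, 252)]}
        img = (np.random.default_rng(0).random((4, 4, 3)) * 255).astype(np.uint8)
        print(apply_transfer_curves(img, curves).shape)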

  18. Hierarchical brain tissue segmentation and its application in multiple sclerosis and Alzheimer's disease

    NASA Astrophysics Data System (ADS)

    Lei, Tianhu; Udupa, Jayaram K.; Moonis, Gul; Schwartz, Eric; Balcer, Laura

    2005-04-01

    Based on Fuzzy Connectedness (FC) object delineation principles and algorithms, a hierarchical brain tissue segmentation technique has been developed for MR images. After MR image background intensity inhomogeneity correction and intensity standardization, three FC objects for cerebrospinal fluid (CSF), gray matter (GM), and white matter (WM) are generated via FC object delineation, and an intracranial (IC) mask is created via morphological operations. Then, the IC mask is decomposed into parenchymal (BP) and CSF masks, while the BP mask is separated into WM and GM masks. WM mask is further divided into pure and dirty white matter masks (PWM and DWM). In Multiple Sclerosis studies, a severe white matter lesion (LS) mask is defined from DWM mask. Based on the segmented brain tissue images, a histogram-based method has been developed to find disease-specific, image-based quantitative markers for characterizing the macromolecular manifestation of the two diseases. These same procedures have been applied to 65 MS (46 patients and 19 normal subjects) and 25 AD (15 patients and 10 normal subjects) data sets, each of which consists of FSE PD- and T2-weighted MR images. Histograms representing standardized PD and T2 intensity distributions and their numerical parameters provide an effective means for characterizing the two diseases. The procedures are systematic, nearly automated, robust, and the results are reproducible.

  19. Recovery of directed intracortical connectivity from fMRI data

    NASA Astrophysics Data System (ADS)

    Gilson, Matthieu; Ritter, Petra; Deco, Gustavo

    2016-06-01

    The brain exhibits complex spatio-temporal patterns of activity. In particular, its baseline activity at rest has a specific structure: imaging techniques (e.g., fMRI, EEG and MEG) show that cortical areas experience correlated fluctuations, which is referred to as functional connectivity (FC). The present study relies on our recently developed model in which intracortical white-matter connections shape noise-driven fluctuations to reproduce FC observed in experimental data (here fMRI BOLD signal). Here noise has a functional role and represents the variability of neural activity. The model also incorporates anatomical information obtained using diffusion tensor imaging (DTI), which estimates the density of white-matter fibers (structural connectivity, SC). After optimization to match empirical FC, the model provides an estimation of the efficacies of these fibers, which we call effective connectivity (EC). EC differs from SC, as EC not only accounts for the density of neural fibers, but also the concentration of synapses formed at their end, the type of neurotransmitters associated and the excitability of target neural populations. In summary, the model combines anatomical SC and activity FC to evaluate what drives the neural dynamics, embodied in EC. EC can then be analyzed using graph theory to understand how it generates FC and to seek for functional communities among cortical areas (parcellation of 68 areas). We find that intracortical connections are not symmetric, which affects the dynamic range of cortical activity (i.e., variety of states it can exhibit).
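
    In the class of linear noise-driven models referred to above, the model FC follows from the connectivity through a Lyapunov equation: for dx/dt = J x + noise with input covariance Sigma, the stationary covariance C solves J C + C J^T + Sigma = 0. The sketch below is a simplified stand-in for the authors' optimization, with a made-up Jacobian; it only computes the forward step from connectivity to model FC.

        import numpy as np
        from scipy.linalg import solve_continuous_lyapunov

        def model_fc(jacobian, sigma):
            """Stationary covariance and correlation (FC) of dx/dt = J x + noise."""
            cov = solve_continuous_lyapunov(jacobian, -sigma)   # solves J C + C J^T = -Sigma
            d = np.sqrt(np.diag(cov))
            return cov, cov / np.outer(d, d)

        # Toy 3-region example: asymmetric connectivity on top of leaky local dynamics.
        ec = np.array([[0.0, 0.4, 0.0],
                       [0.1, 0.0, 0.3],
                       [0.0, 0.2, 0.0]])
        tau = 1.0
        J = -np.eye(3) / tau + ec
        Sigma = np.eye(3) * 0.5          # variance of the driving noise
        cov, fc = model_fc(J, Sigma)
        print(np.round(fc, 3))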

  20. Underconnectivity of the superior temporal sulcus predicts emotion recognition deficits in autism

    PubMed Central

    Woolley, Daniel G.; Steyaert, Jean; Di Martino, Adriana; Swinnen, Stephan P.; Wenderoth, Nicole

    2014-01-01

    Neurodevelopmental disconnections have been assumed to cause behavioral alterations in autism spectrum disorders (ASDs). Here, we combined measurements of intrinsic functional connectivity (iFC) from resting-state functional magnetic resonance imaging (fMRI) with task-based fMRI to explore whether altered activity and/or iFC of the right posterior superior temporal sulcus (pSTS) mediates deficits in emotion recognition in ASD. Fifteen adults with ASD and 15 matched-controls underwent resting-state and task-based fMRI, during which participants discriminated emotional states from point light displays (PLDs). Intrinsic FC of the right pSTS was further examined using 584 (278 ASD/306 controls) resting-state data of the Autism Brain Imaging Data Exchange (ABIDE). Participants with ASD were less accurate than controls in recognizing emotional states from PLDs. Analyses revealed pronounced ASD-related reductions both in task-based activity and resting-state iFC of the right pSTS with fronto-parietal areas typically encompassing the action observation network (AON). Notably, pSTS-hypo-activity was related to pSTS-hypo-connectivity, and both measures were predictive of emotion recognition performance with each measure explaining a unique part of the variance. Analyses with the large independent ABIDE dataset replicated reductions in pSTS-iFC to fronto-parietal regions. These findings provide novel evidence that pSTS hypo-activity and hypo-connectivity with the fronto-parietal AON are linked to the social deficits characteristic of ASD. PMID:24078018

  1. Altered functional connectivity of the subthalamus and the bed nucleus of the stria terminalis in obsessive-compulsive disorder.

    PubMed

    Cano, M; Alonso, P; Martínez-Zalacaín, I; Subirà, M; Real, E; Segalàs, C; Pujol, J; Cardoner, N; Menchón, J M; Soriano-Mas, C

    2018-04-01

    The assessment of inter-regional functional connectivity (FC) has allowed for the description of the putative mechanism of action of treatments such as deep brain stimulation (DBS) of the nucleus accumbens in patients with obsessive-compulsive disorder (OCD). Nevertheless, the possible FC alterations of other clinically-effective DBS targets have not been explored. Here we evaluated the FC patterns of the subthalamic nucleus (STN) and the bed nucleus of the stria terminalis (BNST) in patients with OCD, as well as their association with symptom severity. Eighty-six patients with OCD and 104 healthy participants were recruited. A resting-state image was acquired for each participant and a seed-based analysis focused on our two regions of interest was performed using statistical parametric mapping software (SPM8). Between-group differences in FC patterns were assessed with two-sample t test models, while the association between symptom severity and FC patterns was assessed with multiple regression analyses. In comparison with controls, patients with OCD showed: (1) increased FC between the left STN and the right pre-motor cortex, (2) decreased FC between the right STN and the lenticular nuclei, and (3) increased FC between the left BNST and the right frontopolar cortex. Multiple regression analyses revealed a negative association between clinical severity and FC between the right STN and lenticular nucleus. This study provides a neurobiological framework to understand the mechanism of action of DBS on the STN and the BNST, which seems to involve brain circuits related with motor response inhibition and anxiety control, respectively.

  2. The dynamic functional connectome: State-of-the-art and perspectives.

    PubMed

    Preti, Maria Giulia; Bolton, Thomas Aw; Van De Ville, Dimitri

    2017-10-15

    Resting-state functional magnetic resonance imaging (fMRI) has highlighted the rich structure of brain activity in absence of a task or stimulus. A great effort has been dedicated in the last two decades to investigate functional connectivity (FC), i.e. the functional interplay between different regions of the brain, which was for a long time assumed to have stationary nature. Only recently was the dynamic behaviour of FC revealed, showing that on top of correlational patterns of spontaneous fMRI signal fluctuations, connectivity between different brain regions exhibits meaningful variations within a typical resting-state fMRI experiment. As a consequence, a considerable amount of work has been directed to assessing and characterising dynamic FC (dFC), and several different approaches were explored to identify relevant FC fluctuations. At the same time, several questions were raised about the nature of dFC, which would be of interest only if brought back to a neural origin. In support of this, correlations with electroencephalography (EEG) recordings, demographic and behavioural data were established, and various clinical applications were explored, where the potential of dFC could be preliminarily demonstrated. In this review, we aim to provide a comprehensive description of the dFC approaches proposed so far, and point at the directions that we see as most promising for the future developments of the field. Advantages and pitfalls of dFC analyses are addressed, helping the readers to orient themselves through the complex web of available methodologies and tools. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
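
    One of the most common dFC approaches surveyed in such reviews is sliding-window correlation; the minimal sketch below computes a sequence of windowed FC matrices from region time series (the window length and step are arbitrary choices here, not recommendations from the review).

        import numpy as np

        def sliding_window_fc(ts, win_len=60, step=5):
            """ts: array of shape (n_timepoints, n_regions).
            Returns (n_windows, n_regions, n_regions) windowed correlation matrices."""
            n_t, _ = ts.shape
            mats = []
            for start in range(0, n_t - win_len + 1, step):
                window = ts[start:start + win_len]
                mats.append(np.corrcoef(window, rowvar=False))
            return np.array(mats)

        # Synthetic example: 600 time points, 10 regions.
        rng = np.random.default_rng(0)
        ts = rng.standard_normal((600, 10))
        dfc = sliding_window_fc(ts)
        print(dfc.shape)   # (109, 10, 10)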

  3. Effects of global signal regression and subtraction methods on resting-state functional connectivity using arterial spin labeling data.

    PubMed

    Silva, João Paulo Santos; Mônaco, Luciana da Mata; Paschoal, André Monteiro; Oliveira, Ícaro Agenor Ferreira de; Leoni, Renata Ferranti

    2018-05-16

    Arterial spin labeling (ASL) is an established magnetic resonance imaging (MRI) technique that is finding broader applications in functional studies of the healthy and diseased brain. To promote improvement in cerebral blood flow (CBF) signal specificity, many algorithms and imaging procedures, such as subtraction methods, were proposed to eliminate or, at least, minimize noise sources. Therefore, this study addressed the main considerations of how CBF functional connectivity (FC) is changed, regarding resting brain network (RBN) identification and correlations between regions of interest (ROI), by different subtraction methods and removal of residual motion artifacts and global signal fluctuations (RMAGSF). Twenty young healthy participants (13 M/7F, mean age = 25 ± 3 years) underwent an MRI protocol with a pseudo-continuous ASL (pCASL) sequence. Perfusion-based images were obtained using simple, sinc and running subtraction. RMAGSF removal was applied to all CBF time series. Independent Component Analysis (ICA) was used for RBN identification, while Pearson' correlation was performed for ROI-based FC analysis. Temporal signal-to-noise ratio (tSNR) was higher in CBF maps obtained by sinc subtraction, although RMAGSF removal had a significant effect on maps obtained with simple and running subtractions. Neither the subtraction method nor the RMAGSF removal directly affected the identification of RBNs. However, the number of correlated and anti-correlated voxels varied for different subtraction and filtering methods. In an ROI-to-ROI level, changes were prominent in FC values and their statistical significance. Our study showed that both RMAGSF filtering and subtraction method might influence resting-state FC results, especially in an ROI level, consequently affecting FC analysis and its interpretation. Taking our results and the whole discussion together, we understand that for an exploratory assessment of the brain, one could avoid removing RMAGSF to not bias FC measures, but could use sinc subtraction to minimize low-frequency contamination. However, CBF signal specificity and frequency range for filtering purposes still need to be assessed in future studies. Copyright © 2018 Elsevier Inc. All rights reserved.
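
    To make the subtraction schemes concrete, the sketch below implements simple (pairwise) and running (surround) subtraction of an interleaved control/label ASL series; this is a generic textbook-style formulation under the assumption that even frames are control and odd frames are label, not the exact pipeline or sinc interpolation used in the study.

        import numpy as np

        def simple_subtraction(series):
            """Pairwise control-minus-label difference for an interleaved series
            (even indices = control, odd indices = label). series shape: (n_frames, ...)."""
            return series[0::2] - series[1::2]

        def running_subtraction(series):
            """Surround (running) subtraction: each label frame is subtracted from the
            average of its two neighbouring control frames, preserving temporal resolution."""
            diffs = []
            for k in range(1, series.shape[0] - 1, 2):          # label frames
                diffs.append(0.5 * (series[k - 1] + series[k + 1]) - series[k])
            return np.array(diffs)

        # Synthetic interleaved series: 40 frames of a 16x16 slice.
        rng = np.random.default_rng(0)
        asl = rng.standard_normal((40, 16, 16))
        print(simple_subtraction(asl).shape, running_subtraction(asl).shape)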

  4. Neural Network of Body Representation Differs between Transsexuals and Cissexuals

    PubMed Central

    Lin, Chia-Shu; Ku, Hsiao-Lun; Chao, Hsiang-Tai; Tu, Pei-Chi; Li, Cheng-Ta; Cheng, Chou-Ming; Su, Tung-Ping; Lee, Ying-Chiao; Hsieh, Jen-Chuen

    2014-01-01

    Body image is the internal representation of an individual’s own physical appearance. Individuals with gender identity disorder (GID), commonly referred to as transsexuals (TXs), are unable to form a satisfactory body image due to the dissonance between their biological sex and gender identity. We reasoned that changes in the resting-state functional connectivity (rsFC) network would neurologically reflect such experiential incongruence in TXs. Using graph theory-based network analysis, we investigated the regional changes of the degree centrality of the rsFC network. The degree centrality is an index of the functional importance of a node in a neural network. We hypothesized that three key regions of the body representation network, i.e., the primary somatosensory cortex, the superior parietal lobule and the insula, would show a higher degree centrality in TXs. Twenty-three pre-treatment TXs (11 male-to-female and 12 female-to-male TXs) as one psychosocial group and 23 age-matched healthy cissexual control subjects (CISs, 11 males and 12 females) were recruited. Resting-state functional magnetic resonance imaging was performed, and binarized rsFC networks were constructed. The TXs demonstrated a significantly higher degree centrality in the bilateral superior parietal lobule and the primary somatosensory cortex. In addition, the connectivity between the right insula and the bilateral primary somatosensory cortices was negatively correlated with the selfness rating of their desired genders. These data indicate that the key components of body representation manifest in TXs as critical function hubs in the rsFC network. The negative association may imply a coping mechanism that dissociates bodily emotion from body image. The changes in the functional connectome may serve as representational markers for the dysphoric bodily self of TXs. PMID:24465785
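
    The degree centrality measure used above can be sketched directly from a binarized correlation matrix: threshold the region-by-region correlations and count, for each node, how many edges survive. The threshold and the synthetic data below are illustrative only.

        import numpy as np

        def degree_centrality(ts, threshold=0.3):
            """ts: (n_timepoints, n_regions) array of regional time series.
            Returns the binary-graph degree of each region after thresholding correlations."""
            r = np.corrcoef(ts, rowvar=False)
            np.fill_diagonal(r, 0.0)                 # ignore self-connections
            adjacency = (r > threshold).astype(int)  # binarized rsFC network
            return adjacency.sum(axis=0)

        rng = np.random.default_rng(0)
        ts = rng.standard_normal((300, 90))          # e.g. 90 atlas regions
        print(degree_centrality(ts)[:10])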

  5. Event-Driven Random-Access-Windowing CCD Imaging System

    NASA Technical Reports Server (NTRS)

    Monacos, Steve; Portillo, Angel; Ortiz, Gerardo; Alexander, James; Lam, Raymond; Liu, William

    2004-01-01

    A charge-coupled-device (CCD) based high-speed imaging system, called a real-time, event-driven (RARE) camera, is undergoing development. This camera is capable of readout from multiple subwindows [also known as regions of interest (ROIs)] within the CCD field of view. Both the sizes and the locations of the ROIs can be controlled in real time and can be changed at the camera frame rate. The predecessor of this camera was described in High-Frame-Rate CCD Camera Having Subwindow Capability (NPO-30564) NASA Tech Briefs, Vol. 26, No. 12 (December 2002), page 26. The architecture of the prior camera requires tight coupling between camera control logic and an external host computer that provides commands for camera operation and processes pixels from the camera. This tight coupling limits the attainable frame rate and functionality of the camera. The design of the present camera loosens this coupling to increase the achievable frame rate and functionality. From a host computer perspective, the readout operation in the prior camera was defined on a per-line basis; in this camera, it is defined on a per-ROI basis. In addition, the camera includes internal timing circuitry. This combination of features enables real-time, event-driven operation for adaptive control of the camera. Hence, this camera is well suited for applications requiring autonomous control of multiple ROIs to track multiple targets moving throughout the CCD field of view. Additionally, by eliminating the need for control intervention by the host computer during the pixel readout, the present design reduces ROI-readout times to attain higher frame rates. This camera (see figure) includes an imager card consisting of a commercial CCD imager and two signal-processor chips. The imager card converts transistor/transistor-logic (TTL)-level signals from a field programmable gate array (FPGA) controller card. These signals are transmitted to the imager card via a low-voltage differential signaling (LVDS) cable assembly. The FPGA controller card is connected to the host computer via a standard peripheral component interface (PCI).

  6. You are here: Earth as seen from Mars

    NASA Image and Video Library

    2004-03-11

    This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. The inset shows a combination of four panoramic camera images zoomed in on Earth. The arrow points to Earth. Earth was too faint to be detected in images taken with the panoramic camera's color filters. http://photojournal.jpl.nasa.gov/catalog/PIA05547

  7. The sequence measurement system of the IR camera

    NASA Astrophysics Data System (ADS)

    Geng, Ai-hui; Han, Hong-xia; Zhang, Hai-bo

    2011-08-01

    IR cameras are now widely used in optoelectronic tracking, optoelectronic measurement, fire control and optoelectronic countermeasure systems, but the output timing of most IR cameras used in practice is complex and the timing documents supplied with them are not detailed. Because downstream continuous image transmission and image processing systems need the detailed timing of the IR camera, a timing measurement system for IR cameras was designed and a detailed measurement procedure for the cameras in use was developed. The system combines FPGA programming with online observation using the SignalTap tool to obtain the precise timing of the IR camera's output signal and to supply a detailed timing document to the continuous image transmission and image processing systems. The measurement system consists of a CameraLink input interface, an LVDS input interface, an FPGA, a CameraLink output interface and related parts, with the FPGA as the key component. Both CameraLink-style and LVDS-style video signals can be accepted, and because image processing and image memory cards usually use CameraLink as their input interface, the output of the measurement system was also designed as a CameraLink interface. The system therefore performs the timing measurement and at the same time acts as an interface converter for some cameras. Inside the FPGA, the timing measurement program, pixel clock adjustment, SignalTap file configuration and SignalTap online observation are integrated to measure the IR camera precisely. The measurement program, written in Verilog and combined with SignalTap online observation, counts the number of lines in one frame and the number of pixels in one line, and determines the line offset and row offset of the image. Given the complex timing of the IR camera's output signal, the system accurately measures the timing of the cameras applied in the project, supplies a detailed timing document to downstream systems such as the image processing and image transmission systems, and gives the concrete parameters of fval, lval, pixclk, line offset and row offset. Experiments show that the timing measurement system obtains precise results and works stably, laying a foundation for the downstream systems.

  8. Curvilinear locus coeruleus functional connectivity trajectories over the adult lifespan: a 7T MRI study.

    PubMed

    Jacobs, Heidi I L; Müller-Ehrenberg, Lisa; Priovoulos, Nikos; Roebroeck, Alard

    2018-05-24

    The locus coeruleus (LC) plays a crucial role in modulating several higher order cognitive functions via its widespread projections to the entire brain. We set out to investigate the hypothesis that LC functional connectivity (FC) may fluctuate nonlinearly with age and explored its relation to memory function. To that end, 49 cognitively healthy individuals (19-74 years) underwent ultra-high-resolution 7T resting-state functional magnetic resonance imaging and cognitive testing. FC patterns from the LC to regions of the isodendritic core network and cortical regions were examined using region of interest-to-region of interest analyses. Curvilinear patterns with age were observed for FC between the left LC and cortical regions and the nucleus basalis of Meynert. A linear negative association with age was observed for FC between the LC and the ventral tegmental area. Higher levels of FC between the LC and nucleus basalis of Meynert or ventral tegmental area were associated with lower memory performance from the age of 40 years onward. Thus, different LC-FC patterns early in life can signal subtle memory deficits. Furthermore, these results highlight the importance of intact interactions between neurotransmitter systems for optimal cognitive aging. Copyright © 2018 Elsevier Inc. All rights reserved.

  9. Investigating the Origin of Bright Materials on Vesta: Synthesis, Conclusions, and Implications

    NASA Technical Reports Server (NTRS)

    Li, Jian-Yang; Mittlefehldt, D. W.; Pieters, C. M.; De Sanctis, M. C.; Schroder, S. E.; Hiesinger, H.; Blewett, D. T.; Russell, C. T.; Raymond, C. A.; Keller, H. U.

    2012-01-01

    The Dawn spacecraft started orbiting the second largest asteroid (4) Vesta in August 2011, revealing the details of its surface at an unprecedented pixel scale as small as approx.70 m in Framing Camera (FC) clear and color filter images and approx.180 m in the Visible and Infrared Spectrometer (VIR) data in its first two science orbits, the Survey Orbit and the High Altitude Mapping Orbit (HAMO) [1]. The surface of Vesta displays the greatest diversity in terms of geology and mineralogy of all asteroids studied in detail [2, 3]. While the albedo of Vesta of approx.0.38 in the visible wavelengths [4, 5] is one of the highest among all asteroids, the surface of Vesta shows the largest variation of albedos found on a single asteroid, with geometric albedos ranging at least from approx.0.10 to approx.0.67 in HAMO images [5]. There are many distinctively bright and dark areas observed on Vesta, associated with various geological features and showing remarkably different forms. Here we report our initial attempt to understand the origin of the areas that are distinctively brighter than their surroundings. The dark materials on Vesta clearly are different in origin from bright materials and are reported in a companion paper [6].

  10. Mars Descent Imager for Curiosity

    NASA Image and Video Library

    2010-07-19

    A pocketknife provides scale for this image of the Mars Descent Imager camera; the camera will fly on the Curiosity rover of NASA Mars Science Laboratory mission. Malin Space Science Systems, San Diego, Calif., supplied the camera for the mission.

  11. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

    A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather monitoring cameras responds to the demand for monitoring of sudden weather changes. The new WILLIAM cameras can process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, together with a spatially-variant model of the imaging system's aberrations based on Zernike polynomials.
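
    A field-dependent aberration model of the kind mentioned can be built from Zernike polynomials evaluated over the pupil; the sketch below evaluates a few low-order Noll-normalized terms on a unit disk and lets their coefficients vary with field position. The linear field dependence and all coefficient values are assumptions for illustration, not the WILLIAM model itself.

        import numpy as np

        def zernike_low_order(rho, theta):
            """A few Noll-normalized Zernike terms on the unit pupil (rho <= 1)."""
            return {
                "tilt_x":   2.0 * rho * np.cos(theta),
                "tilt_y":   2.0 * rho * np.sin(theta),
                "defocus":  np.sqrt(3.0) * (2.0 * rho**2 - 1.0),
                "astig_0":  np.sqrt(6.0) * rho**2 * np.cos(2.0 * theta),
                "astig_45": np.sqrt(6.0) * rho**2 * np.sin(2.0 * theta),
            }

        def wavefront(rho, theta, field_xy, coeffs_at_center, field_gradient):
            """Spatially-variant wavefront: each coefficient varies linearly with field radius."""
            terms = zernike_low_order(rho, theta)
            w = np.zeros_like(rho)
            for i, z in enumerate(terms.values()):
                c = coeffs_at_center[i] + field_gradient[i] * np.hypot(*field_xy)
                w += c * z
            return w

        # Pupil grid and an off-axis field point.
        y, x = np.mgrid[-1:1:128j, -1:1:128j]
        rho, theta = np.hypot(x, y), np.arctan2(y, x)
        w = wavefront(rho, theta, field_xy=(0.3, 0.1),
                      coeffs_at_center=[0.0, 0.0, 0.05, 0.01, 0.0],
                      field_gradient=[0.0, 0.0, 0.02, 0.03, 0.01])
        print(w[rho <= 1].std())   # spread (std) of the wavefront inside the pupil, in waves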

  12. Altered intra- and inter-network functional coupling of resting-state networks associated with motor dysfunction in stroke.

    PubMed

    Zhao, Zhiyong; Wu, Jie; Fan, Mingxia; Yin, Dazhi; Tang, Chaozheng; Gong, Jiayu; Xu, Guojun; Gao, Xinjie; Yu, Qiurong; Yang, Hao; Sun, Limin; Jia, Jie

    2018-04-24

    Motor functions are supported through functional integration across the extended motor system network. Individuals following stroke often show deficits on motor performance requiring coordination of multiple brain networks; however, how connectivity patterns across these networks change after stroke remained unclear. This study aimed to investigate the changes in intra- and inter-network functional connectivity (FC) of multiple networks following stroke and further correlate FC with motor performance. Thirty-three left subcortical chronic stroke patients and 34 healthy controls underwent resting-state functional magnetic resonance imaging. Eleven resting-state networks were identified via independent component analysis (ICA). Compared with healthy controls, the stroke group showed abnormal FC within the motor network (MN), visual network (VN), dorsal attention network (DAN), and executive control network (ECN). Additionally, the FC values of the ipsilesional inferior parietal lobule (IPL) within the ECN were negatively correlated with the Fugl-Meyer Assessment (FMA) scores (hand + wrist). With respect to inter-network interactions, the ipsilesional frontoparietal network (FPN) decreased FC with the MN and DAN; the contralesional FPN decreased FC with the ECN, but it increased FC with the default mode network (DMN); and the posterior DMN decreased FC with the VN. In sum, this study demonstrated the coexistence of intra- and inter-network alterations associated with motor-visual attention and high-order cognitive control function in chronic stroke, which might provide insights into brain network plasticity following stroke. © 2018 Wiley Periodicals, Inc.

  13. Label-Free Ferrocene-Loaded Nanocarrier Engineering for In Vivo Cochlear Drug Delivery and Imaging.

    PubMed

    Youm, Ibrahima; Musazzi, Umberto M; Gratton, Michael Anne; Murowchick, James B; Youan, Bi-Botti C

    2016-10-01

    It is hypothesized that ferrocene (FC)-loaded nanocarriers (FC-NCs) are safe label-free contrast agents for cochlear biodistribution studies by transmission electron microscopy (TEM). To test this hypothesis, after engineering, the poly(epsilon-caprolactone)/polyglycolide NCs are tested for stability with various types and ratios of sugar cryoprotectants during freeze-drying. Their physicochemical properties are characterized by UV-visible spectroscopy, dynamic light scattering, Fourier transform infrared spectroscopy, and scanning electron microscopy coupled with energy dispersive X-ray spectroscopy (SEM/EDS). The biodistribution of the FC-NCs in the cochlear tissue after intratympanic injection in guinea pigs is visualized by TEM. Auditory brainstem responses are measured before and after 4-day treatments. The FC-NCs have a mean diameter of 153.4 ± 8.7 nm, a drug association efficiency of 85.5 ± 11.2%, and a zeta potential of -22.1 ± 1.1 mV (n = 3). The incorporation of FC into the NCs is confirmed by Fourier transform infrared spectroscopy and SEM/EDS spectra. Lactose (3:1 ratio, v/v) is the most effective stabilizer in a 12-day study. The administered NCs are visible by TEM in the scala media cells of the cochlea. Based on auditory brainstem response data, FC-NCs do not adversely affect hearing. Considering the electron-dense, radioactive, and magnetic properties of the iron inside FC, FC-NCs are a promising nanotemplate for future inner ear theranostics. Copyright © 2016 American Pharmacists Association®. Published by Elsevier Inc. All rights reserved.

  14. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at the level of, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering that cameras can monitor both above- and below-canopy phenology and snow.
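
    A typical camera-derived colour signal in phenology studies is the green chromatic coordinate (GCC) averaged over a region of interest; the short sketch below shows the computation for a series of RGB images. The synthetic images and the absence of any ROI are placeholders, not the network's actual processing chain.

        import numpy as np

        def gcc(rgb_image, roi=None):
            """Green chromatic coordinate G / (R + G + B), averaged over an optional ROI mask."""
            img = rgb_image.astype(float)
            r, g, b = img[..., 0], img[..., 1], img[..., 2]
            gcc_map = g / np.clip(r + g + b, 1e-6, None)
            return gcc_map[roi].mean() if roi is not None else gcc_map.mean()

        # Synthetic daily images: a canopy greening up over 30 days.
        rng = np.random.default_rng(0)
        series = []
        for day in range(30):
            img = rng.integers(60, 120, size=(64, 64, 3)).astype(float)
            img[..., 1] += day * 2.0                      # green channel increases through spring
            series.append(gcc(img))
        print(np.round(series[:5], 3), "->", round(series[-1], 3))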

  15. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
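
    The background-elimination step described above amounts to differencing a pre-illumination frame against the frame containing the laser spot and taking the centroid of what remains; the sketch below is a generic version of that idea with synthetic frames, not the patented system's implementation.

        import numpy as np

        def laser_spot_centroid(frame_off, frame_on, threshold=30):
            """Locate the laser spot by subtracting the pre-illumination frame and
            computing the intensity-weighted centroid of the residual pixels."""
            diff = frame_on.astype(float) - frame_off.astype(float)
            diff[diff < threshold] = 0.0                  # discard pixels common to both frames
            if diff.sum() == 0:
                return None
            rows, cols = np.indices(diff.shape)
            return (rows * diff).sum() / diff.sum(), (cols * diff).sum() / diff.sum()

        # Synthetic target scene with and without the laser spot.
        rng = np.random.default_rng(0)
        background = rng.integers(0, 40, size=(240, 320)).astype(np.uint8)
        lit = background.copy()
        lit[118:123, 201:206] = 255                       # bright 5x5 laser spot
        print(laser_spot_centroid(background, lit))       # approximately (120.0, 203.0)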

  16. White Matter Structural Connectivity Is Not Correlated to Cortical Resting-State Functional Connectivity over the Healthy Adult Lifespan.

    PubMed

    Tsang, Adrian; Lebel, Catherine A; Bray, Signe L; Goodyear, Bradley G; Hafeez, Moiz; Sotero, Roberto C; McCreary, Cheryl R; Frayne, Richard

    2017-01-01

    Structural connectivity (SC) of white matter (WM) and functional connectivity (FC) of cortical regions undergo changes in normal aging. As WM tracts form the underlying anatomical architecture that connects regions within resting state networks (RSNs), it is intuitive to expect that SC and FC changes with age are correlated. Studies that investigated the relationship between SC and FC in normal aging are rare, and have mainly compared between groups of elderly and younger subjects. The objectives of this work were to investigate linear SC and FC changes across the healthy adult lifespan, and to define relationships between SC and FC measures within seven whole-brain large scale RSNs. Diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) data were acquired from 177 healthy participants (male/female = 69/108; aged 18-87 years). Forty cortical regions across both hemispheres belonging to seven template-defined RSNs were considered. Mean diffusivity (MD), fractional anisotropy (FA), mean tract length, and number of streamlines derived from DTI data were used as SC measures, delineated using deterministic tractography, within each RSN. Pearson correlation coefficients of rs-fMRI-obtained BOLD signal time courses between cortical regions were used as FC measure. SC demonstrated significant age-related changes in all RSNs (decreased FA, mean tract length, number of streamlines; and increased MD), and significant FC decrease was observed in five out of seven networks. Among the networks that showed both significant age related changes in SC and FC, however, SC was not in general significantly correlated with FC, whether controlling for age or not. The lack of observed relationship between SC and FC suggests that measures derived from DTI data that are commonly used to infer the integrity of WM microstructure are not related to the corresponding changes in FC within RSNs. The possible temporal lag between SC and FC will need to be addressed in future longitudinal studies to better elucidate the links between SC and FC changes in normal aging.

  17. White Matter Structural Connectivity Is Not Correlated to Cortical Resting-State Functional Connectivity over the Healthy Adult Lifespan

    PubMed Central

    Tsang, Adrian; Lebel, Catherine A.; Bray, Signe L.; Goodyear, Bradley G.; Hafeez, Moiz; Sotero, Roberto C.; McCreary, Cheryl R.; Frayne, Richard

    2017-01-01

    Structural connectivity (SC) of white matter (WM) and functional connectivity (FC) of cortical regions undergo changes in normal aging. As WM tracts form the underlying anatomical architecture that connects regions within resting state networks (RSNs), it is intuitive to expect that SC and FC changes with age are correlated. Studies that investigated the relationship between SC and FC in normal aging are rare, and have mainly compared between groups of elderly and younger subjects. The objectives of this work were to investigate linear SC and FC changes across the healthy adult lifespan, and to define relationships between SC and FC measures within seven whole-brain large scale RSNs. Diffusion tensor imaging (DTI) and resting-state functional MRI (rs-fMRI) data were acquired from 177 healthy participants (male/female = 69/108; aged 18–87 years). Forty cortical regions across both hemispheres belonging to seven template-defined RSNs were considered. Mean diffusivity (MD), fractional anisotropy (FA), mean tract length, and number of streamlines derived from DTI data were used as SC measures, delineated using deterministic tractography, within each RSN. Pearson correlation coefficients of rs-fMRI-obtained BOLD signal time courses between cortical regions were used as FC measure. SC demonstrated significant age-related changes in all RSNs (decreased FA, mean tract length, number of streamlines; and increased MD), and significant FC decrease was observed in five out of seven networks. Among the networks that showed both significant age related changes in SC and FC, however, SC was not in general significantly correlated with FC, whether controlling for age or not. The lack of observed relationship between SC and FC suggests that measures derived from DTI data that are commonly used to infer the integrity of WM microstructure are not related to the corresponding changes in FC within RSNs. The possible temporal lag between SC and FC will need to be addressed in future longitudinal studies to better elucidate the links between SC and FC changes in normal aging. PMID:28572765

  18. A digital gigapixel large-format tile-scan camera.

    PubMed

    Ben-Ezra, M

    2011-01-01

    Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, a large-format camera's large image plane can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
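
    Focal-stack processing of the kind described can be sketched as a per-pixel "sharpest slice wins" merge; this simplified version (ignoring the magnification variations the camera has to handle) scores local sharpness with a Laplacian and picks, for each pixel, the slice where that score is highest.

        import numpy as np
        from scipy.ndimage import laplace, uniform_filter, gaussian_filter

        def merge_focal_stack(stack):
            """stack: (n_slices, H, W) grayscale focal stack.
            Returns an extended-depth-of-field image by picking the locally sharpest slice per pixel."""
            sharpness = np.array([uniform_filter(np.abs(laplace(s.astype(float))), size=9)
                                  for s in stack])
            best = np.argmax(sharpness, axis=0)            # index of the sharpest slice at each pixel
            rows, cols = np.indices(best.shape)
            return stack[best, rows, cols]

        # Synthetic 3-slice stack: the same scene at different amounts of defocus blur.
        rng = np.random.default_rng(0)
        scene = rng.random((128, 128))
        stack = np.stack([gaussian_filter(scene, s) for s in (0.0, 1.5, 3.0)])
        print(merge_focal_stack(stack).shape)              # (128, 128)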

  19. Geometric rectification of camera-captured document images.

    PubMed

    Liang, Jian; DeMenthon, Daniel; Doermann, David

    2008-04-01

    Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.

  20. L-band radar sensing of soil moisture. [Kern County, California

    NASA Technical Reports Server (NTRS)

    Chang, A. T. C.; Atwater, S.; Salomonson, V. V.; Estes, J. E.; Simonett, D. S.; Bryan, M. L.

    1980-01-01

    The performance of an L-band, 25 cm wavelength imaging synthetic aperture radar was assessed for soil moisture determination, and the temporal variability of radar returns from a number of agricultural fields was studied. A series of three overflights was accomplished over an agricultural test site in Kern County, California. Soil moisture samples were collected from bare fields at nine sites at depths of 0-2, 2-5, 5-15, and 15-30 cm. These gravimetric measurements were converted to percent of field capacity for correlation to the radar return signal. The initial signal film was optically correlated and scanned to produce image data numbers. These numbers were then converted to relative return power by linear interpolation of the noise power wedge which was introduced in 5 dB steps into the original signal film before and after each data run. Results of correlations between the relative return power and percent of field capacity (FC) demonstrate that the relative return power from this imaging radar system is responsive to the amount of soil moisture in bare fields. The signal returned from dry (15% FC) and wet (130% FC) fields where furrowing is parallel to the radar beam differs by about 10 dB.
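
    The conversion described here, from optically scanned image data numbers to relative return power, amounts to interpolating each data number against the 5 dB calibration wedge and then correlating the result with percent of field capacity. A sketch under assumed wedge and field values follows; the numbers are illustrative, not the study's calibration data.

```python
import numpy as np

# Hypothetical calibration wedge: data numbers recorded for known 5 dB power steps
wedge_dn = np.array([20, 45, 80, 120, 165, 210])
wedge_db = np.array([0, 5, 10, 15, 20, 25], dtype=float)   # relative return power, dB

def dn_to_db(dn):
    """Convert image data numbers to relative return power by linear interpolation."""
    return np.interp(dn, wedge_dn, wedge_db)

# Hypothetical bare-field samples: data number vs. percent of field capacity (FC)
field_dn = np.array([60, 95, 130, 150, 190])
field_fc = np.array([15, 40, 70, 95, 130])                 # % of field capacity
power_db = dn_to_db(field_dn)
r = np.corrcoef(power_db, field_fc)[0, 1]
print(f"return power range: {power_db.min():.1f}-{power_db.max():.1f} dB, r = {r:.2f}")
```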

  1. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

    System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane consisting of four ...

  2. The Art of Astrophotography

    NASA Astrophysics Data System (ADS)

    Morison, Ian

    2017-02-01

    1. Imaging star trails; 2. Imaging a constellation with a DSLR and tripod; 3. Imaging the Milky Way with a DSLR and tracking mount; 4. Imaging the Moon with a compact camera or smartphone; 5. Imaging the Moon with a DSLR; 6. Imaging the Pleiades Cluster with a DSLR and small refractor; 7. Imaging the Orion Nebula, M42, with a modified Canon DSLR; 8. Telescopes and their accessories for use in astroimaging; 9. Towards stellar excellence; 10. Cooling a DSLR camera to reduce sensor noise; 11. Imaging the North American and Pelican Nebulae; 12. Combating light pollution - the bane of astrophotographers; 13. Imaging planets with an astronomical video camera or Canon DSLR; 14. Video imaging the Moon with a webcam or DSLR; 15. Imaging the Sun in white light; 16. Imaging the Sun in the light of its H-alpha emission; 17. Imaging meteors; 18. Imaging comets; 19. Using a cooled 'one shot colour' camera; 20. Using a cooled monochrome CCD camera; 21. LRGB colour imaging; 22. Narrow band colour imaging; Appendix A. Telescopes for imaging; Appendix B. Telescope mounts; Appendix C. The effects of the atmosphere; Appendix D. Auto guiding; Appendix E. Image calibration; Appendix F. Practical aspects of astroimaging.

  3. Comparison and evaluation of datasets for off-angle iris recognition

    NASA Astrophysics Data System (ADS)

    Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut

    2016-05-01

    In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. Comparison of frontal and off-angle iris images shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. We therefore developed an iris image acquisition platform using two cameras, where one camera captures the frontal iris image and the other captures the iris image from an off-angle view. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is smaller than in the one-camera setup, with differences ranging from 0.05 to 0.001. These results show that, to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from one another.
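
    The comparison metric used here, the Hamming distance between iris codes, is conventionally computed as a fractional distance over the bits that are valid (unmasked) in both codes. A generic sketch is shown below; it is not the authors' implementation, and the bit arrays are placeholders.

```python
import numpy as np

def hamming_distance(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance between two binary iris codes,
    counting only bits that are valid in both masks."""
    valid = mask_a & mask_b
    if valid.sum() == 0:
        raise ValueError("no overlapping valid bits")
    return np.count_nonzero((code_a ^ code_b) & valid) / valid.sum()

rng = np.random.default_rng(2)
frontal = rng.integers(0, 2, 2048, dtype=np.uint8).astype(bool)
off_angle = frontal.copy()
flip = rng.random(2048) < 0.15           # simulate 15% disagreement due to gaze angle
off_angle[flip] = ~off_angle[flip]
mask = np.ones(2048, dtype=bool)
print(f"HD = {hamming_distance(frontal, off_angle, mask, mask):.3f}")
```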

  4. Use of Fc-Engineered Antibodies as Clearing Agents to Increase Contrast During PET

    PubMed Central

    Swiercz, Rafal; Chiguru, Srinivas; Tahmasbi, Amir; Ramezani, Saleh M.; Hao, Guiyang; Challa, Dilip K.; Lewis, Matthew A.; Kulkarni, Padmakar V.; Sun, Xiankai; Ober, Raimund J.; Mason, Ralph P.; Ward, E. Sally

    2015-01-01

    Despite promise for the use of antibodies as molecular imaging agents in PET, their long in vivo half-lives result in poor contrast and radiation damage to normal tissue. This study describes an approach to overcome these limitations. Methods: Mice bearing human epidermal growth factor receptor type 2 (HER2)–overexpressing tumors were injected with radiolabeled (124I, 125I) HER2-specific antibody (pertuzumab). Pertuzumab injection was followed 8 h later by the delivery of an engineered, antibody-based inhibitor of the receptor, FcRn. Biodistribution analyses and PET were performed at 24 and 48 h after pertuzumab injection. Results: The delivery of the engineered, antibody-based FcRn inhibitor (or Abdeg, for antibody that enhances IgG degradation) results in improved tumor-to-blood ratios, reduced systemic exposure to radiolabel, and increased contrast during PET. Conclusion: Abdegs have considerable potential as agents to stringently regulate antibody dynamics in vivo, resulting in increased contrast during molecular imaging with PET. PMID:24868106

  5. Imaging of the Fibrous Cap in Atherosclerotic Carotid Plaque

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Saba, Luca, E-mail: lucasaba@tiscali.i; Potters, Fons; Lugt, Aad van der

    2010-08-15

    In the last two decades, a substantial number of articles have been published to provide diagnostic solutions for patients with carotid atherosclerotic disease. These articles have resulted in a shift of opinion regarding the identification of stroke risk in patients with carotid atherosclerotic disease. In the recent past, the degree of carotid artery stenosis was the sole determinant for performing carotid intervention (carotid endarterectomy or carotid stenting) in these patients. We now know that the degree of stenosis is only one marker for future cerebrovascular events. If one wants to determine the risk of these events more accurately, other parameters must be taken into account; among these parameters are plaque composition, presence and state of the fibrous cap (FC), intraplaque haemorrhage, plaque ulceration, and plaque location. In particular, the FC is an important structure for the stability of the plaque, and its rupture is highly associated with a recent history of transient ischaemic attack or stroke. The subject of this review is imaging of the FC.

  6. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, as they have the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed with the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centres are adequately, but not very strongly, connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration of IGI, but there are still radial symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even though these were described as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding cameras of both blocks show the same trend, but, as usual for block adjustments with self-calibration, they still show significant differences. Based on the very high number of image points, the remaining image residuals can be safely determined by overlaying and averaging the image residuals according to their image coordinates. The size of the systematic image errors not covered by the used additional parameters is in the range of a square mean of 0.1 pixels, corresponding to 0.6 μm. They are not the same for both blocks, but show some similarities for corresponding cameras. In general, bundle block adjustment with a satisfactory set of additional parameters, checked against remaining systematic errors, is required to exploit the full geometric potential of the penta camera. Especially for object points on facades, often visible in only two images taken with a limited base length, the correct handling of systematic image errors is important. At least in the analyzed data sets, the self-calibration of sub-cameras by bundle block adjustment suffers from the correlation of the inner to the exterior orientation due to missing crossing flight directions. As usual, the systematic image errors differ from block to block, even without the influence of the correlation with the exterior orientation.
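
    The residual-averaging step mentioned above (overlaying the image residuals of all images by their image coordinates and averaging per grid cell to expose systematic errors not absorbed by the additional parameters) can be sketched as simple 2D binning. The grid size and data below are placeholders, not the benchmark data.

```python
import numpy as np

def average_residuals(xy, residuals, image_size, grid=(20, 20)):
    """xy: (N, 2) image coordinates; residuals: (N, 2) x/y residuals in pixels.
    Returns per-cell mean residual vectors over a grid covering the image."""
    gx = np.clip((xy[:, 0] / image_size[0] * grid[0]).astype(int), 0, grid[0] - 1)
    gy = np.clip((xy[:, 1] / image_size[1] * grid[1]).astype(int), 0, grid[1] - 1)
    mean_res = np.zeros((grid[1], grid[0], 2))
    counts = np.zeros((grid[1], grid[0]))
    for cx, cy, r in zip(gx, gy, residuals):
        mean_res[cy, cx] += r
        counts[cy, cx] += 1
    counts[counts == 0] = 1                  # avoid division by zero in empty cells
    return mean_res / counts[..., None]

rng = np.random.default_rng(8)
xy = rng.uniform(0, [7000, 5000], size=(10000, 2))          # hypothetical point coordinates
res = rng.normal(0, 0.3, size=(10000, 2)) + 0.1 * np.sin(xy[:, :1] / 7000 * np.pi)
systematic = average_residuals(xy, res, image_size=(7000, 5000))
print(systematic.shape)                      # (20, 20, 2) mean residual field in pixels
```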

  7. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs a complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging, to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform the image scan without the difficulty of translating the receiver above the target, as traditional LLS imaging systems require. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically, under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling-shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate narrow zones on the target in good correspondence with the exposure lines during the rolling of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and non-scanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
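
    The ROI-based scan described above can be pictured as exposing five horizontal sensor strips in sequence, each under synchronized fan-beam illumination, and stitching the strips back into one frame. The sketch below only illustrates the assembly logic with hypothetical frames; a real acquisition would reconfigure the ROI between exposures through the camera vendor's SDK.

```python
import numpy as np

def assemble_roi_scan(strip_frames, n_strips=5):
    """strip_frames: list of full-size frames, each exposed while only one
    horizontal ROI strip was illuminated. Returns the stitched image."""
    h, w = strip_frames[0].shape
    edges = np.linspace(0, h, n_strips + 1, dtype=int)
    out = np.zeros((h, w), dtype=strip_frames[0].dtype)
    for i, frame in enumerate(strip_frames):
        out[edges[i]:edges[i + 1], :] = frame[edges[i]:edges[i + 1], :]
    return out

rng = np.random.default_rng(3)
frames = [rng.random((480, 640)) for _ in range(5)]   # hypothetical captures
scan_image = assemble_roi_scan(frames)
print(scan_image.shape)
```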

  8. Characterizing Resting-State Brain Function Using Arterial Spin Labeling

    PubMed Central

    Jann, Kay; Wang, Danny J.J.

    2015-01-01

    Abstract Arterial spin labeling (ASL) is an increasingly established magnetic resonance imaging (MRI) technique that is finding broader applications in studying the healthy and diseased brain. This review addresses the use of ASL to assess brain function in the resting state. Following a brief technical description, we discuss the use of ASL in the following main categories: (1) resting-state functional connectivity (FC) measurement: the use of ASL-based cerebral blood flow (CBF) measurements as an alternative to the blood oxygen level-dependent (BOLD) technique to assess resting-state FC; (2) the link between network CBF and FC measurements: the use of network CBF as a surrogate of the metabolic activity within corresponding networks; and (3) the study of resting-state dynamic CBF-BOLD coupling and cerebral metabolism: the use of dynamic CBF information obtained using ASL to assess dynamic CBF-BOLD coupling and oxidative metabolism in the resting state. In addition, we summarize some future challenges and interesting research directions for ASL, including slice-accelerated (multiband) imaging as well as the effects of motion and other physiological confounds on perfusion-based FC measurement. In summary, this work reviews the state-of-the-art of ASL and establishes it as an increasingly viable MRI technique with high translational value in studying resting-state brain function. PMID:26106930

  9. New opportunities for quality enhancing of images captured by passive THz camera

    NASA Astrophysics Data System (ADS)

    Trofimov, Vyacheslav A.; Trofimov, Vladislav V.

    2014-10-01

    As is well known, a passive THz camera allows concealed objects to be seen without contact with a person, and the camera is not hazardous to the person. Obviously, the efficiency of using a passive THz camera depends on its temperature resolution. This characteristic determines the detection capability for concealed objects: the minimal size of the object, the maximal detection distance, and the image quality. Computer processing of the THz image may improve the image quality many times over without any additional engineering effort. Therefore, developing modern computer codes for application to THz images is an urgent problem. Using appropriate new methods, one may expect a temperature resolution that would allow a banknote in a person's pocket to be seen without any real contact. Modern algorithms for computer processing of THz images also allow an object inside the human body to be seen via its temperature trace on the human skin. This circumstance substantially enhances the opportunities for applying passive THz cameras to counterterrorism problems. We demonstrate the detection capabilities achieved at present, both for concealed objects and for clothing components, through computer processing of images captured by passive THz cameras manufactured by various companies. Another important result discussed in the paper is the observation of THz radiation emitted by an incandescent lamp and of an image reflected from a ceramic floorplate. We consider images produced by passive THz cameras manufactured by Microsemi Corp., ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images considered in this paper were developed by the Russian part of the author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.

  10. Functional Connectivity of the Dorsal Attention Network Predicts Selective Attention in 4-7 year-old Girls.

    PubMed

    Rohr, Christiane S; Vinette, Sarah A; Parsons, Kari A L; Cho, Ivy Y K; Dimond, Dennis; Benischek, Alina; Lebel, Catherine; Dewey, Deborah; Bray, Signe

    2017-09-01

    Early childhood is a period of profound neural development and remodeling during which attention skills undergo rapid maturation. Attention networks have been extensively studied in the adult brain, yet relatively little is known about changes in early childhood, and their relation to cognitive development. We investigated the association between age and functional connectivity (FC) within the dorsal attention network (DAN) and the association between FC and attention skills in early childhood. Functional magnetic resonance imaging data was collected during passive viewing in 44 typically developing female children between 4 and 7 years whose sustained, selective, and executive attention skills were assessed. FC of the intraparietal sulcus (IPS) and the frontal eye fields (FEF) was computed across the entire brain and regressed against age. Age was positively associated with FC between core nodes of the DAN, the IPS and the FEF, and negatively associated with FC between the DAN and regions of the default-mode network. Further, controlling for age, FC between the IPS and FEF was significantly associated with selective attention. These findings add to our understanding of early childhood development of attention networks and suggest that greater FC within the DAN is associated with better selective attention skills. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please e-mail: journals.permissions@oup.com.
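
    Two computations recur in this record: seed-based FC between core DAN nodes (correlating the IPS time course with the FEF time course) and regressing that FC against age. A minimal sketch with synthetic time courses is shown below; the region names, weights, and data are placeholders, not the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
n_subj, n_tr = 44, 200

ages = rng.uniform(4, 7, n_subj)
fc_ips_fef = np.empty(n_subj)
for i, age in enumerate(ages):
    shared = rng.normal(size=n_tr)               # shared signal; weight grows with age
    w = 0.2 + 0.1 * (age - 4)
    ips = w * shared + rng.normal(size=n_tr)     # hypothetical IPS time course
    fef = w * shared + rng.normal(size=n_tr)     # hypothetical FEF time course
    fc_ips_fef[i] = np.corrcoef(ips, fef)[0, 1]  # seed-based FC between the two nodes

# FC values are typically Fisher z-transformed before group-level regression
z_fc = np.arctanh(fc_ips_fef)
slope, intercept, r, p, se = stats.linregress(ages, z_fc)
print(f"FC-age association: r = {r:.2f}, p = {p:.3g}")
```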

  11. How Many Pixels Does It Take to Make a Good 4"×6" Print? Pixel Count Wars Revisited

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    Digital still cameras emerged following the introduction of the Sony Mavica analog prototype camera in 1981. These early cameras produced poor image quality and did not challenge film cameras for overall quality. By 1995, digital still cameras in expensive SLR formats had 6 mega-pixels and produced high-quality images (with significant image processing). In 2005, significant improvement in image quality was apparent, and lower prices for digital still cameras (DSCs) started a rapid decline in film usage and film camera sales. By 2010, film usage was mostly limited to professionals and the motion picture industry. The rise of DSCs was marked by a “pixel war” in which the driving feature of the cameras was the pixel count, where even moderate-cost (~$120) DSCs would have 14 mega-pixels. The improvement of CMOS technology pushed this trend of lower prices and higher pixel counts. Only the single-lens reflex cameras had large sensors and large pixels. The drive for smaller pixels hurt the quality aspects of the final image (sharpness, noise, speed, and exposure latitude). Only today are camera manufacturers starting to reverse course and produce DSCs with larger sensors and pixels. This paper will explore why larger pixels and sensors are key to the future of DSCs.

  12. Light field rendering with omni-directional camera

    NASA Astrophysics Data System (ADS)

    Todoroki, Hiroshi; Saito, Hideo

    2003-06-01

    This paper presents an approach to capture the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture a wide circumference. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror mounted above it, so that we can capture the luminosity of the environment over 360 degrees of circumference in one image. We apply the light field method, which is one technique of Image-Based Rendering (IBR), to generate the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect many view-direction images in the light field. Thus our method allows the user to explore a wide scene and achieve a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our lab's interior environment with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.

  13. A telephoto camera system with shooting direction control by gaze detection

    NASA Astrophysics Data System (ADS)

    Teraya, Daiki; Hachisu, Takumi; Yendo, Tomohiro

    2015-05-01

    For safe driving, it is important for the driver to check traffic conditions, such as traffic lights or traffic signs, as early as possible. If an on-vehicle camera takes images of the important objects needed to understand traffic conditions from a long distance and shows them to the driver, the driver can understand the traffic conditions earlier. To take clear images of distant objects, the focal length of the camera must be long. When the focal length is long, however, the on-vehicle camera does not have a field of view wide enough to check traffic conditions. Therefore, in order to obtain the necessary images from a long distance, the camera must have a long focal length and a controllable shooting direction. In a previous study, the driver indicated the shooting direction on a displayed image taken by a wide-angle camera, and a direction-controllable camera took a telescopic image and displayed it to the driver. However, that study used a touch panel to indicate the shooting direction, which can disturb driving. We therefore propose a telephoto camera system for driving support whose shooting direction is controlled by the driver's gaze, to avoid disturbing driving. The proposed system is composed of a gaze detector and an active telephoto camera whose shooting direction is controlled. We adopt a detection method that does not require the driver to wear anything, to avoid hindering driving. The gaze detector measures the driver's gaze by image processing. The shooting direction of the active telephoto camera is controlled by galvanometer scanners, and the direction can be switched within a few milliseconds. We confirmed by experiments that the proposed system takes images of the region straight ahead at which the subject is gazing.

  14. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.

  15. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  16. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for the OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of a 5-mm and a 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising to be applied to our augmented reality visualization system for laparoscopic surgery.
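
    For reference, the "well-accepted OpenCV camera calibration method" cited in these records follows the standard multi-image chessboard workflow sketched below (assuming a board with 9 by 6 inner corners and a hypothetical image folder). This is a generic outline of the OpenCV approach, not the rdCalib tool or the authors' exact pipeline.

```python
import glob
import cv2
import numpy as np

pattern = (9, 6)                                     # inner corners of the chessboard
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)  # board coordinates

obj_points, img_points, image_size = [], [], None
for path in glob.glob("calib_images/*.png"):         # hypothetical image folder
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if not found:
        continue
    corners = cv2.cornerSubPix(
        gray, corners, (11, 11), (-1, -1),
        (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
    obj_points.append(objp)
    img_points.append(corners)
    image_size = gray.shape[::-1]

# Intrinsic matrix K and distortion coefficients estimated from all detected views
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, image_size, None, None)
print("RMS reprojection error:", rms)
```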

  17. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, which has been estimated based on the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach. Both model-based methods show significant advantages compared to the curve fitting method. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused and thus finding stereo correspondences is enhanced. In contrast to monocular visual odometry approaches, due to the calibration of the individual depth maps, the scale of the scene can be observed. Furthermore, due to the light-field information better tracking capabilities compared to the monocular case can be expected. As a result, the depth information gained by the plenoptic camera based visual odometry algorithm proposed in this paper has superior accuracy and reliability compared to the depth estimated from a single light-field image.
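
    The "Kalman-like" update described above, fusing a new virtual-depth observation with an existing estimate according to their variances, reduces to an inverse-variance weighted average. The sketch below shows that update step in isolation; the variable names and numbers are illustrative, not the authors' implementation.

```python
def fuse_depth(d_est, var_est, d_obs, var_obs):
    """Kalman-like fusion of an existing depth estimate with a new observation.
    Each value is weighted by the inverse of its variance."""
    gain = var_est / (var_est + var_obs)
    d_new = d_est + gain * (d_obs - d_est)        # updated virtual depth
    var_new = (1.0 - gain) * var_est              # variance always shrinks
    return d_new, var_new

# Hypothetical estimates of the same scene point from successive micro-images
depth, var = 2.4, 0.30
for d_obs, v_obs in [(2.6, 0.20), (2.5, 0.10)]:
    depth, var = fuse_depth(depth, var, d_obs, v_obs)
    print(f"depth = {depth:.3f}, variance = {var:.3f}")
```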

  18. High-Resolution Mars Camera Test Image of Moon Infrared

    NASA Image and Video Library

    2005-09-13

    This crescent view of Earth's Moon in infrared wavelengths comes from a camera test by NASA's Mars Reconnaissance Orbiter spacecraft on its way to Mars. The image was taken by the High Resolution Imaging Science Experiment camera on Sept. 8, 2005.

  19. Design of a MATLAB(registered trademark) Image Comparison and Analysis Tool for Augmentation of the Results of the Ann Arbor Distortion Test

    DTIC Science & Technology

    2016-06-25

    The equipment used in this procedure includes: Ann Arbor distortion tester with 50-line grating reticule, IQeye 720 digital video camera with 12 ... and import them into MATLAB. In order to digitally capture images of the distortion in an optical sample, an IQeye 720 video camera with a 12 ... video camera and Ann Arbor distortion tester. Figure 8. Computer interface for capturing images seen by the IQeye 720 camera. Once an image was ...

  20. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

    target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image ... fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The ... moving target detection and classification. Subject terms: image fusion, target detection, moving cameras, IR camera, EO camera.

  1. Operation and Performance of the Mars Exploration Rover Imaging System on the Martian Surface

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Litwin, Todd; Herkenhoff, Ken

    2005-01-01

    This slide presentation details the Mars Exploration Rover (MER) imaging system. Over 144,000 images have been gathered from all Mars missions, with 83.5% of them gathered by MER. Each Rover has 9 cameras (Navcam, front and rear Hazcams, Pancam, Microscopic Imager, Descent Camera, Engineering Camera, Science Camera) and produces 1024 x 1024 (1 Megapixel) images in the same format. All onboard image processing code is implemented in flight software and includes extensive processing capabilities such as autoexposure, flat field correction, image orientation, thumbnail generation, subframing, and image compression. Ground image processing is done at the Jet Propulsion Laboratory's Multimission Image Processing Laboratory using Video Image Communication and Retrieval (VICAR), while stereo processing (left/right pairs) is provided for raw images, radiometric correction, solar energy maps, triangulation (Cartesian 3-spaces), and slope maps.

  2. Blinded evaluation of the effects of high definition and magnification on perceived image quality in laryngeal imaging.

    PubMed

    Otto, Kristen J; Hapner, Edie R; Baker, Michael; Johns, Michael M

    2006-02-01

    Advances in commercial video technology have improved office-based laryngeal imaging. This study investigates the perceived image quality of a true high-definition (HD) video camera and the effect of magnification on laryngeal videostroboscopy. We performed a prospective, dual-armed, single-blinded analysis of a standard laryngeal videostroboscopic examination comparing 3 separate add-on camera systems: a 1-chip charge-coupled device (CCD) camera, a 3-chip CCD camera, and a true 720p (progressive scan) HD camera. Displayed images were controlled for magnification and image size (20-inch [50-cm] display, red-green-blue, and S-video cable for 1-chip and 3-chip cameras; digital visual interface cable and HD monitor for HD camera). Ten blinded observers were then asked to rate the following 5 items on a 0-to-100 visual analog scale: resolution, color, ability to see vocal fold vibration, sense of depth perception, and clarity of blood vessels. Eight unblinded observers were then asked to rate the difference in perceived resolution and clarity of laryngeal examination images when displayed on a 10-inch (25-cm) monitor versus a 42-inch (105-cm) monitor. A visual analog scale was used. These monitors were controlled for actual resolution capacity. For each item evaluated, randomized block design analysis demonstrated that the 3-chip camera scored significantly better than the 1-chip camera (p < .05). For the categories of color and blood vessel discrimination, the 3-chip camera scored significantly better than the HD camera (p < .05). For magnification alone, observers rated the 42-inch monitor statistically better than the 10-inch monitor. The expense of new medical technology must be judged against its added value. This study suggests that HD laryngeal imaging may not add significant value over currently available video systems, in perceived image quality, when a small monitor is used. Although differences in clarity between standard and HD cameras may not be readily apparent on small displays, a large display size coupled with HD technology may impart improved diagnosis of subtle vocal fold lesions and vibratory anomalies.

  3. Full-frame, high-speed 3D shape and deformation measurements using stereo-digital image correlation and a single color high-speed camera

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2017-08-01

    Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve full-field 3D shape and deformation on the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing its spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-side drum, demonstrated the effectiveness and accuracy of the proposed technique.
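
    The channel-separation step described above (recovering the two optical paths from the red and blue channels of a single color frame after removing color crosstalk) can be sketched as a per-pixel linear unmixing. The 2x2 crosstalk matrix below is a hypothetical example; in practice it would be measured, for instance by imaging each illumination path alone. This is a generic illustration, not the authors' correction method.

```python
import numpy as np

# Hypothetical measured crosstalk: rows = recorded (R, B), columns = true (R, B) paths
crosstalk = np.array([[0.95, 0.08],
                      [0.06, 0.92]])
unmix = np.linalg.inv(crosstalk)

def separate_channels(color_frame):
    """color_frame: (H, W, 3) RGB image from the single high-speed color camera.
    Returns the crosstalk-corrected red-path and blue-path sub-images."""
    observed = np.stack([color_frame[..., 0], color_frame[..., 2]], axis=-1)  # R, B
    corrected = observed @ unmix.T           # per-pixel linear unmixing
    return corrected[..., 0], corrected[..., 1]

frame = np.random.default_rng(5).random((480, 640, 3))    # placeholder frame
red_img, blue_img = separate_channels(frame)
print(red_img.shape, blue_img.shape)
```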

  4. A longitudinal model for functional connectivity networks using resting-state fMRI.

    PubMed

    Hart, Brian; Cribben, Ivor; Fiecas, Mark

    2018-06-04

    Many neuroimaging studies collect functional magnetic resonance imaging (fMRI) data in a longitudinal manner. However, the current fMRI literature lacks a general framework for analyzing functional connectivity (FC) networks in fMRI data obtained from a longitudinal study. In this work, we build a novel longitudinal FC model using a variance components approach. First, for all subjects' visits, we account for the autocorrelation inherent in the fMRI time series data using a non-parametric technique. Second, we use a generalized least squares approach to estimate 1) the within-subject variance component shared across the population, 2) the baseline FC strength, and 3) the FC's longitudinal trend. Our novel method for longitudinal FC networks seeks to account for the within-subject dependence across multiple visits, the variability due to the subjects being sampled from a population, and the autocorrelation present in fMRI time series data, while restricting the number of parameters in order to make the method computationally feasible and stable. We develop a permutation testing procedure to draw valid inference on group differences in the baseline FC network and change in FC over longitudinal time between a set of patients and a comparable set of controls. To examine performance, we run a series of simulations and apply the model to longitudinal fMRI data collected from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. Overall, we found no difference in the global FC network between Alzheimer's disease patients and healthy controls, but did find differing local aging patterns in the FC between the left hippocampus and the posterior cingulate cortex. Copyright © 2018 Elsevier Inc. All rights reserved.

  5. Local activity determines functional connectivity in the resting human brain: a simultaneous FDG-PET/fMRI study.

    PubMed

    Riedl, Valentin; Bienkowska, Katarzyna; Strobel, Carola; Tahmasian, Masoud; Grimmer, Timo; Förster, Stefan; Friston, Karl J; Sorg, Christian; Drzezga, Alexander

    2014-04-30

    Over the last decade, synchronized resting-state fluctuations of blood oxygenation level-dependent (BOLD) signals between remote brain areas [so-called BOLD resting-state functional connectivity (rs-FC)] have gained enormous relevance in systems and clinical neuroscience. However, the neural underpinnings of rs-FC are still incompletely understood. Using simultaneous positron emission tomography/magnetic resonance imaging we here directly investigated the relationship between rs-FC and local neuronal activity in humans. Computational models suggest a mechanistic link between the dynamics of local neuronal activity and the functional coupling among distributed brain regions. Therefore, we hypothesized that the local activity (LA) of a region at rest determines its rs-FC. To test this hypothesis, we simultaneously measured both LA (glucose metabolism) and rs-FC (via synchronized BOLD fluctuations) during conditions of eyes closed or eyes open. During eyes open, LA increased in the visual system and the salience network (i.e., cingulate and insular cortices), and the pattern of elevated LA coincided almost exactly with the spatial pattern of increased rs-FC. Specifically, the voxelwise regional profile of LA in these areas strongly correlated with the regional pattern of rs-FC among the same regions (e.g., LA in primary visual cortex accounts for ∼50%, and LA in anterior cingulate accounts for ∼20%, of rs-FC with the visual system). These data provide the first direct evidence in humans that local neuronal activity determines BOLD FC at rest. Beyond its relevance for the neuronal basis of coherent BOLD signal fluctuations, our procedure may translate into clinical research, particularly to investigate potentially aberrant links between local dynamics and remote functional coupling in patients with neuropsychiatric disorders.

  6. Simultaneous resting-state functional MRI and electroencephalography recordings of functional connectivity in patients with schizophrenia.

    PubMed

    Kirino, Eiji; Tanaka, Shoji; Fukuta, Mayuko; Inami, Rie; Arai, Heii; Inoue, Reiichi; Aoki, Shigeki

    2017-04-01

    It remains unclear how functional connectivity (FC) may be related to specific cognitive domains in neuropsychiatric disorders. Here we used simultaneous resting-state functional magnetic resonance imaging (rsfMRI) and electroencephalography (EEG) recording in patients with schizophrenia, to evaluate FC within and outside the default mode network (DMN). Our study population included 14 patients with schizophrenia and 15 healthy control participants. From all participants, we acquired rsfMRI data, and simultaneously recorded EEG data using an MR-compatible amplifier. We analyzed the rsfMRI-EEG data, and used the CONN toolbox to calculate the FC between regions of interest. We also performed between-group comparisons of standardized low-resolution electromagnetic tomography-based intracortical lagged coherence for each EEG frequency band. FC within the DMN, as measured by rsfMRI and EEG, did not significantly differ between groups. Analysis of rsfMRI data showed that FC between the right posterior inferior temporal gyrus and medial prefrontal cortex was stronger among patients with schizophrenia compared to control participants. Analysis of FC within the DMN using rsfMRI and EEG data revealed no significant differences between patients with schizophrenia and control participants. However, rsfMRI data revealed over-modulated FC between the medial prefrontal cortex and right posterior inferior temporal gyrus in patients with schizophrenia compared to control participants, suggesting that the patients had altered FC, with higher correlations across nodes within and outside of the DMN. Further studies using simultaneous rsfMRI and EEG are required to determine whether altered FC within the DMN is associated with schizophrenia. © 2016 The Authors. Psychiatry and Clinical Neurosciences published by John Wiley & Sons Australia, Ltd on behalf of Japanese Society of Psychiatry and Neurology.

  7. Comparison of myocardial perfusion imaging between the new high-speed gamma camera and the standard anger camera.

    PubMed

    Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira

    2013-01-01

    Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
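
    The Bland-Altman limits of agreement used here to compare the CZT and standard cameras are computed from the paired differences as bias ± 1.96 × SD of the differences. A short sketch with placeholder values follows; the numbers are illustrative only.

```python
import numpy as np

def bland_altman(a, b):
    """Return bias and 95% limits of agreement for paired measurements a and b
    (e.g., a perfusion or function metric from the CZT vs. the standard camera)."""
    diff = np.asarray(a, float) - np.asarray(b, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired values from the two cameras
czt      = [62, 55, 71, 48, 66, 59]
standard = [60, 57, 69, 50, 64, 61]
bias, (lo, hi) = bland_altman(czt, standard)
print(f"bias = {bias:.1f}, limits of agreement = [{lo:.1f}, {hi:.1f}]")
```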

  8. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
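
    Step (1) above, synthetic refocusing of a light field image, is commonly implemented as shift-and-add: each sub-aperture view is translated in proportion to its (u, v) position and a chosen depth parameter, then all views are averaged. The sketch below shows that idea on a toy 4D light field; it is a generic illustration of the technique, not the Raytrix or Northrop Grumman processing chain.

```python
import numpy as np
from scipy import ndimage

def refocus(light_field, alpha):
    """light_field: (U, V, H, W) array of sub-aperture images.
    alpha controls the synthetic focal depth (0 = no shift)."""
    U, V, H, W = light_field.shape
    uc, vc = (U - 1) / 2.0, (V - 1) / 2.0
    acc = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            shift = (alpha * (u - uc), alpha * (v - vc))   # shift grows with aperture offset
            acc += ndimage.shift(light_field[u, v], shift, order=1, mode="nearest")
    return acc / (U * V)

lf = np.random.default_rng(6).random((5, 5, 64, 64))        # toy light field
stack = [refocus(lf, a) for a in np.linspace(-2, 2, 9)]      # focal stack at 9 depths
print(len(stack), stack[0].shape)
```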

  9. An image compression algorithm for a high-resolution digital still camera

    NASA Technical Reports Server (NTRS)

    Nerheim, Rosalee

    1989-01-01

    The Electronic Still Camera (ESC) project will provide for the capture and transmission of high-quality images without the use of film. The image quality will be superior to video and will approach the quality of 35mm film. The camera, which will have the same general shape and handling as a 35mm camera, will be able to send images to Earth in near real-time. Images will be stored in computer memory (RAM) in removable cartridges readable by a computer. To save storage space, the image will be compressed and reconstructed at the time of viewing. Both lossless and lossy image compression algorithms are studied, described, and compared.

  10. Structure-from-Motion for Calibration of a Vehicle Camera System with Non-Overlapping Fields-of-View in an Urban Environment

    NASA Astrophysics Data System (ADS)

    Hanel, A.; Stilla, U.

    2017-05-01

    Vehicle environment cameras observing traffic participants in the area around a car and interior cameras observing the car driver are important data sources for driver intention recognition algorithms. To combine information from both camera groups, a camera system calibration can be performed. Typically, there is no overlapping field-of-view between the environment and interior cameras. Often no marked reference points are available in environments that are large enough to cover a car for the system calibration. In this contribution, a calibration method for a vehicle camera system with non-overlapping camera groups in an urban environment is described. A-priori images of an urban calibration environment taken with an external camera are processed with the structure-from-motion method to obtain an environment point cloud. Images of the vehicle interior, also taken with an external camera, are processed to obtain an interior point cloud. Both point clouds are tied to each other with images from both image sets showing the same real-world objects. The point clouds are transformed into a self-defined vehicle coordinate system describing the vehicle movement. On demand, videos can be recorded with the vehicle cameras during a calibration drive. Poses of the vehicle environment cameras and interior cameras are estimated separately using ground control points from the respective point cloud. All poses of a vehicle camera estimated for different video frames are optimized in a bundle adjustment. In an experiment, a point cloud was created from images of an underground car park, as well as a point cloud of the interior of a Volkswagen test car. Videos from two environment cameras and one interior camera were recorded. Results show that the vehicle camera poses are estimated successfully, especially when the car is not moving. Position standard deviations in the centimeter range can be achieved for all vehicle cameras. Relative distances between the vehicle cameras deviate between one and ten centimeters from tachymeter reference measurements.

  11. The effect of microchannel plate gain depression on PAPA photon counting cameras

    NASA Astrophysics Data System (ADS)

    Sams, Bruce J., III

    1991-03-01

    PAPA (precision analog photon address) cameras are photon-counting imagers which employ microchannel plates (MCPs) for image intensification. They have been used extensively in astronomical speckle imaging. The PAPA camera can produce artifacts when light incident on its MCP is highly concentrated. The effect is exacerbated by adjusting the strobe detection level too low, so that the camera accepts very small MCP pulses. The artifacts can occur even at low total count rates if the image has a highly concentrated bright spot. This paper describes how to optimize the PAPA camera electronics and describes six techniques that can avoid or minimize addressing errors.

  12. Multi-focused microlens array optimization and light field imaging study based on Monte Carlo method.

    PubMed

    Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping

    2017-04-03

    Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of the depth of field.

  13. Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chai, Kil-Byoung; Bellan, Paul M.

    2013-12-15

    An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast-decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10⁶ frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.

  14. Space-variant restoration of images degraded by camera motion blur.

    PubMed

    Sorel, Michal; Flusser, Jan

    2008-02-01

    We examine the problem of restoration from multiple images degraded by camera motion blur. We consider scenes with significant depth variations resulting in space-variant blur. The proposed algorithm can be applied if the camera moves along an arbitrary curve parallel to the image plane, without any rotations. The knowledge of camera trajectory and camera parameters is not necessary. At the input, the user selects a region where depth variations are negligible. The algorithm belongs to the group of variational methods that estimate simultaneously a sharp image and a depth map, based on the minimization of a cost functional. To initialize the minimization, it uses an auxiliary window-based depth estimation algorithm. Feasibility of the algorithm is demonstrated by three experiments with real images.

  15. Method used to test the imaging consistency of binocular camera's left-right optical system

    NASA Astrophysics Data System (ADS)

    Liu, Meiying; Wang, Hu; Liu, Jie; Xue, Yaoke; Yang, Shaodong; Zhao, Hui

    2016-09-01

    For a binocular camera, the consistency of the optical parameters of the left and right optical systems is an important factor influencing the overall imaging consistency. Conventional optical system testing procedures lack specifications suitable for evaluating imaging consistency. In this paper, considering the special requirements of binocular optical imaging systems, a method for measuring the imaging consistency of a binocular camera is presented. Based on this method, a measurement system composed of an integrating sphere, a rotary table and a CMOS camera has been established. First, the left and right optical systems capture images at normal exposure time under the same conditions. Second, a contour image is obtained from a multiple-threshold segmentation result, and the boundary is determined using the slope of the contour lines near the pseudo-contour line. Third, a gray-level constraint based on the corresponding coordinates of the left and right images is established, and the imaging consistency is evaluated through the standard deviation σ of the grayscale difference D(x, y) between the left and right optical systems. The experiments demonstrate that the method is suitable for imaging consistency testing of binocular cameras; when the 3σ spread of the gray difference D(x, y) between the left and right optical systems does not exceed 5%, the design requirements are considered to have been achieved. This method is effective and paves the way for imaging consistency testing of binocular cameras.
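
    The consistency metric above comes down to a per-pixel grayscale difference between the registered left and right images, summarized by its standard deviation. A minimal sketch of that final step is given below; it assumes the two images are already registered, normalized to [0, 1], and restricted to a valid overlap region, and all names are illustrative rather than taken from the paper.

```python
import numpy as np

def imaging_consistency(left, right, valid_mask=None):
    """Grayscale difference map D(x, y) and its standard deviation sigma.

    left, right : 2-D arrays of registered grayscale values in [0, 1].
    valid_mask  : optional boolean array selecting the evaluated region.
    """
    D = left.astype(float) - right.astype(float)
    values = D[valid_mask] if valid_mask is not None else D.ravel()
    sigma = values.std()
    # Acceptance criterion from the paper: the 3-sigma spread of D(x, y) should stay within 5%.
    passes = 3.0 * sigma <= 0.05
    return D, sigma, passes
```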

  16. The adaptor protein Crk controls activation and inhibition of natural killer cells.

    PubMed

    Liu, Dongfang; Peterson, Mary E; Long, Eric O

    2012-04-20

    Natural killer (NK) cell inhibitory receptors recruit tyrosine phosphatases to prevent activation, induce phosphorylation and dissociation of the small adaptor Crk from cytoskeleton scaffold complexes, and maintain NK cells in a state of responsiveness to subsequent activation events. How Crk contributes to inhibition is unknown. We imaged primary NK cells over lipid bilayers carrying IgG1 Fc to stimulate CD16 and human leukocyte antigen (HLA)-E to inhibit through receptor CD94-NKG2A. HLA-E alone induced Crk phosphorylation in NKG2A(+) NK cells. At activating synapses with Fc alone, Crk was required for the movement of Fc microclusters and their ability to trigger activation signals. At inhibitory synapses, HLA-E promoted central accumulation of both Fc and phosphorylated Crk and blocked the Fc-induced buildup of F-actin. We propose a unified model for inhibitory receptor function: Crk phosphorylation prevents essential Crk-dependent activation signals and blocks F-actin network formation, thereby reducing constraints on subsequent engagement of activation receptors. Copyright © 2012 Elsevier Inc. All rights reserved.

  17. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).

  18. Imaging of breast cancer with mid- and long-wave infrared camera.

    PubMed

    Joro, R; Lääperi, A-L; Dastidar, P; Soimakallio, S; Kuukasjärvi, T; Toivonen, T; Saaristo, R; Järvenpää, R

    2008-01-01

    In this novel study, the breasts of 15 women with palpable breast cancer were preoperatively imaged with three technically different infrared (IR) cameras - microbolometer (MB), quantum well (QWIP) and photovoltaic (PV) - to compare their ability to differentiate breast cancer from normal tissue. The IR images were processed; data for frequency analysis were collected from dynamic IR images by pixel-based analysis, and a selectively windowed regional analysis was carried out on each image, exploiting the angiogenesis and nitric oxide production of cancer tissue, which cause vasomotor and cardiogenic frequency differences compared to normal tissue. Our results show that the GaAs QWIP camera and the InSb PV camera demonstrate the frequency difference between normal and cancerous breast tissue, the PV camera more clearly. With selected image processing operations, more detailed frequency analyses could be applied to the suspicious area. The MB camera was not suitable for tissue differentiation, as the difference between noise and effective signal was unsatisfactory.

  19. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.

  20. Webcam network and image database for studies of phenological changes of vegetation and snow cover in Finland, image time series from 2014 to 2016

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Aurela, Mika; Böttcher, Kristin; Kolari, Pasi; Loehr, John; Karhu, Jouni; Linkosalmi, Maiju; Melih Tanis, Cemal; Tuovinen, Juha-Pekka; Nadir Arslan, Ali

    2018-01-01

    In recent years, monitoring of the status of ecosystems using low-cost web (IP) or time lapse cameras has received wide interest. With broad spatial coverage and high temporal resolution, networked cameras can provide information about snow cover and vegetation status, serve as ground truths to Earth observations and be useful for gap-filling of cloudy areas in Earth observation time series. Networked cameras can also play an important role in supplementing laborious phenological field surveys and citizen science projects, which also suffer from observer-dependent observation bias. We established a network of digital surveillance cameras for automated monitoring of phenological activity of vegetation and snow cover in the boreal ecosystems of Finland. Cameras were mounted at 14 sites, each site having 1-3 cameras. Here, we document the network, basic camera information and access to images in the permanent data repository (http://www.zenodo.org/communities/phenology_camera/). Individual DOI-referenced image time series consist of half-hourly images collected between 2014 and 2016 (https://doi.org/10.5281/zenodo.1066862). Additionally, we present an example of a colour index time series derived from images from two contrasting sites.
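
    As an illustration of the kind of colour index that can be derived from such camera images, the sketch below computes the green chromatic coordinate (GCC) over a region of interest for one image. The paper does not spell out its exact index, so the choice of GCC, the file handling and the names here are assumptions.

```python
import numpy as np
from PIL import Image

def green_chromatic_coordinate(path, roi=None):
    """Mean green chromatic coordinate GCC = G / (R + G + B) over a region of interest.

    roi : optional (row_start, row_stop, col_start, col_stop) tuple in pixel coordinates.
    """
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=float)
    if roi is not None:
        r0, r1, c0, c1 = roi
        rgb = rgb[r0:r1, c0:c1]
    total = rgb.sum(axis=2)
    gcc = rgb[..., 1] / np.where(total > 0, total, np.nan)   # green share of total brightness
    return np.nanmean(gcc)

# A half-hourly time series is then just this function mapped over the archived images.
```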

  1. The imaging system design of three-line LMCCD mapping camera

    NASA Astrophysics Data System (ADS)

    Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da

    2011-08-01

    This paper first introduces the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. It then describes several pivotal designs of the imaging system, including the focal plane module, video signal processing, the imaging system controller, and synchronous photography between the forward, nadir and backward cameras and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are presented. The precision of synchronous photography between the forward, nadir and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the nadir camera's line-matrix CCD satisfies the buffer requirements of the LMCCD focal plane module; the SNR of each CCD image measured in the laboratory is better than 95 under typical working conditions (solar incidence angle of 30°, ground reflectivity of 0.3); and the temperature of the focal plane module is kept below 30°C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane temperature control and SNR, guaranteeing the precision needed for satellite photogrammetry.

  2. Automated Spectral System for Terrain Classification, Mineralogy of Vesta from the Dawn Framing Cameras

    NASA Astrophysics Data System (ADS)

    Reddy, V.; Le Corre, L.; Nathues, A.; Hall, I.; Gutierrez-Marques, P.; Hoffmann, M.

    2011-10-01

    The Dawn mission will rendezvous with asteroid (4) Vesta in July 2011. We have developed a set of equations for extracting mean pyroxene chemistry (Ferrosilite and Wollastonite) for classifying terrains on Vesta by using the Dawn Framing Camera (FC) multi-color bands. The Automated Spectral System (ASS) utilizes pseudo-Band I minima to estimate the mean pyroxene chemistry of diogenites and basaltic eucrites. The mean pyroxene chemistries of cumulate eucrites and howardites overlap each other on the pyroxene quadrilateral and hence are harder to distinguish. We expect our ASS to carry the bulk of the terrain classification and mineralogy workload utilizing these equations and complement the work of DawnKey (Le Corre et al., 2011, DPS/EPSC 2011). The system will also provide surface mineral chemistry layers that can be used for mapping Vesta's surface.

  3. Aberrant striatal functional connectivity in children with autism.

    PubMed

    Di Martino, Adriana; Kelly, Clare; Grzadzinski, Rebecca; Zuo, Xi-Nian; Mennes, Maarten; Mairena, Maria Angeles; Lord, Catherine; Castellanos, F Xavier; Milham, Michael P

    2011-05-01

    Models of autism spectrum disorders (ASD) as neural disconnection syndromes have been predominantly supported by examinations of abnormalities in corticocortical networks in adults with autism. A broader body of research implicates subcortical structures, particularly the striatum, in the physiopathology of autism. Resting state functional magnetic resonance imaging has revealed detailed maps of striatal circuitry in healthy and psychiatric populations and vividly captured maturational changes in striatal circuitry during typical development. Using resting state functional magnetic resonance imaging, we examined striatal functional connectivity (FC) in 20 children with ASD and 20 typically developing children between the ages of 7.6 and 13.5 years. Whole-brain voxelwise statistical maps quantified within-group striatal FC and between-group differences for three caudate and three putamen seeds for each hemisphere. Children with ASD mostly exhibited prominent patterns of ectopic striatal FC (i.e., functional connectivity present in ASD but not in typically developing children), with increased functional connectivity between nearly all striatal subregions and heteromodal associative and limbic cortex previously implicated in the physiopathology of ASD (e.g., insular and right superior temporal gyrus). Additionally, we found striatal functional hyperconnectivity with the pons, thus expanding the scope of functional alterations implicated in ASD. Secondary analyses revealed ASD-related hyperconnectivity between the pons and insula cortex. Examination of FC of striatal networks in children with ASD revealed abnormalities in circuits involving early developing areas, such as the brainstem and insula, with a pattern of increased FC in ectopic circuits that likely reflects developmental derangement rather than immaturity of functional circuits. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  4. An improved parallel fuzzy connected image segmentation method based on CUDA.

    PubMed

    Wang, Liansheng; Li, Dong; Huang, Shaohui

    2016-05-12

    The fuzzy connectedness method (FC) is an effective method for extracting fuzzy objects from medical images. However, when FC is applied to large medical image datasets, its running time becomes very long. Therefore, a parallel CUDA version of FC (CUDA-kFOE) was proposed by Ying et al. to accelerate the original FC. Unfortunately, CUDA-kFOE does not consider the edges between GPU blocks, which causes miscalculation of edge points. In this paper, an improved algorithm is proposed that adds a correction step for the edge points, which greatly enhances calculation accuracy. The improved method proceeds iteratively: in the first iteration, the affinity computation strategy is changed and a look-up table is employed to reduce memory use; in the second iteration, voxels miscalculated because of asynchronism are updated again. Three hepatic vascular CT sequences of different sizes were used in the experiments, each with three different seeds, and an NVIDIA Tesla C2075 was used to evaluate the improved method on these data sets. Experimental results show that the improved algorithm achieves faster segmentation than the CPU version and higher accuracy than CUDA-kFOE. The calculation results were consistent with the CPU version, which demonstrates that the method corrects the edge point calculation error of the original CUDA-kFOE. The proposed method has a comparable time cost and fewer errors than the original CUDA-kFOE, as demonstrated in the experimental results. In the future, we will focus on automatic acquisition methods and automatic processing.
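
    For readers unfamiliar with fuzzy connectedness, a compact CPU reference helps show what CUDA-kFOE parallelizes: every voxel is assigned the strength of its best path from the seed, where a path is only as strong as its weakest affinity. The 2-D sketch below is not the authors' algorithm; the Gaussian intensity-difference affinity and the 4-neighbourhood are illustrative assumptions.

```python
import heapq
import numpy as np

def fuzzy_connectedness(image, seed, sigma=0.1):
    """Max-min fuzzy connectedness map from a single seed (2-D, 4-neighbourhood)."""
    img = image.astype(float)
    conn = np.zeros(img.shape)
    conn[seed] = 1.0
    heap = [(-1.0, seed)]                               # max-heap via negated strengths
    while heap:
        neg_strength, (r, c) = heapq.heappop(heap)
        strength = -neg_strength
        if strength < conn[r, c]:
            continue                                    # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < img.shape[0] and 0 <= nc < img.shape[1]:
                # Affinity of adjacent pixels: close intensities give affinity near 1.
                affinity = np.exp(-((img[r, c] - img[nr, nc]) ** 2) / (2 * sigma ** 2))
                candidate = min(strength, affinity)
                if candidate > conn[nr, nc]:
                    conn[nr, nc] = candidate
                    heapq.heappush(heap, (-candidate, (nr, nc)))
    return conn
```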

  5. A Comparative Study of Microscopic Images Captured by a Box Type Digital Camera Versus a Standard Microscopic Photography Camera Unit

    PubMed Central

    Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai

    2014-01-01

    Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically-advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS-Microscopic Image Projection System) is costly and not available in resource poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We got comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350

  6. Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System

    NASA Astrophysics Data System (ADS)

    Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki

    In this paper, we present an automatic vision-based traffic sign recognition system, which can detect and classify traffic signs at long distance under different lighting conditions. To realize this purpose, the traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant of a wide angle camera. The telephoto camera can capture a high accuracy image for an object of interest in the view field of the wide angle camera. The image from the telephoto camera provides enough information for recognition when the traffic sign appears at low resolution in the wide angle camera image. In the proposed system, the traffic sign detection and classification are processed separately for different images from the wide angle camera and telephoto camera. In addition, in order to detect traffic signs against complex backgrounds in different lighting conditions, we propose a type of color transformation which is invariant to lighting changes. This color transformation is conducted to highlight the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide angle camera. After detection, the system actively captures a high accuracy image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide angle camera. Moreover, in classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high accuracy image from the telephoto camera. Finally, based on the proposed system, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution in different lighting conditions.
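
    The paper's own colour transformation is not reproduced here; as a stand-in, the sketch below shows the common normalized-chromaticity trick for reducing sensitivity to illumination before a detector is applied, with a purely illustrative red-emphasis map. The thresholds and function names are assumptions, not values from the paper.

```python
import numpy as np

def normalized_chromaticity(rgb):
    """Illumination-insensitive r and g chromaticities of an RGB image (H x W x 3)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=2, keepdims=True)
    total = np.where(total > 0, total, 1.0)      # avoid division by zero in dark pixels
    chroma = rgb / total                         # r + g + b = 1, so overall brightness cancels
    return chroma[..., 0], chroma[..., 1]

def red_sign_emphasis(rgb, r_min=0.45, g_max=0.35):
    """Binary map highlighting strongly red pixels, e.g. the borders of prohibition signs."""
    r, g = normalized_chromaticity(rgb)
    return (r > r_min) & (g < g_max)
```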

  7. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality here, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, including its optical system, image sensor, and electronic system, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF is stable and invariant to changes in ground targets, atmosphere, and environment on orbit or on the ground, because it depends only on the camera itself; it is extracted using a pixel optical focal-plane (PFP). The PFP produces multiple spatial frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging degradation imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5 times, the edge intensity 3.3 times, and the MTF value 1.56 times compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
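
    A frequency-domain regularized-inverse filter of the general kind mentioned above can be sketched as follows. The measured IMTF is assumed to be available as a 2-D array sampled on the image's (unshifted) FFT grid, and the constant regularization weight is an illustrative simplification rather than the paper's constrained least-squares formulation.

```python
import numpy as np

def mtf_compensate(image, mtf, reg=0.01):
    """Compensate an image for a known modulation transfer function.

    image : 2-D grayscale array.
    mtf   : 2-D array of the system MTF on the same (unshifted) FFT grid as the image.
    reg   : regularization weight that limits noise amplification where the MTF is small.
    """
    spectrum = np.fft.fft2(image)
    # Regularized (Wiener-style) inverse: H* / (|H|^2 + reg)
    restoration = np.conj(mtf) / (np.abs(mtf) ** 2 + reg)
    restored = np.fft.ifft2(spectrum * restoration)
    return np.real(restored)
```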

  8. Rapid assessment of forest canopy and light regime using smartphone hemispherical photography.

    PubMed

    Bianchi, Simone; Cahalan, Christine; Hale, Sophie; Gibbons, James Michael

    2017-12-01

    Hemispherical photography (HP), implemented with cameras equipped with "fisheye" lenses, is a widely used method for describing forest canopies and light regimes. A promising technological advance is the availability of low-cost fisheye lenses for smartphone cameras. However, smartphone camera sensors cannot record a full hemisphere. We investigate whether smartphone HP is a cheaper and faster but still adequate operational alternative to traditional cameras for describing forest canopies and light regimes. We collected hemispherical pictures with both smartphone and traditional cameras at 223 forest sample points, across different overstory species and canopy densities. The smartphone image acquisition followed a faster and simpler protocol than that for the traditional camera. We automatically thresholded all images. We processed the traditional camera images for Canopy Openness (CO) and Site Factor estimation. For smartphone images, we took two pictures with different orientations per point and used two processing protocols: (i) we estimated and averaged total canopy gap from the two single pictures, and (ii) merging the two pictures together, we formed images closer to full hemispheres and estimated CO and Site Factors from them. We compared the same parameters obtained from the different cameras and fitted generalized linear mixed models (GLMMs) between them. Total canopy gap estimated with the first processing protocol for smartphone pictures was on average significantly higher than CO estimated from traditional camera images, although with a consistent bias. Canopy Openness and Site Factors estimated from the merged smartphone pictures of the second processing protocol were on average significantly higher than those from traditional camera images, although with relatively small absolute differences and scatter. Smartphone HP is an acceptable alternative to HP using traditional cameras, providing similar results with a faster and cheaper methodology. Smartphone outputs can be used directly as they are for ecological studies, or converted with specific models for a better comparison to traditional cameras.
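
    Once a hemispherical image has been thresholded into sky and canopy pixels, canopy openness is essentially a gap fraction over the circular image area. The minimal, unweighted version below is only a sketch; dedicated HP software applies zenith-angle weighting and ring-wise analysis that are omitted here, and the parameter names are assumptions.

```python
import numpy as np

def canopy_openness(binary_sky, center=None, radius=None):
    """Fraction of sky pixels inside the circular hemispherical image area.

    binary_sky : 2-D boolean array, True where a pixel was classified as sky.
    """
    h, w = binary_sky.shape
    cy, cx = center if center is not None else (h / 2.0, w / 2.0)
    r = radius if radius is not None else min(h, w) / 2.0
    yy, xx = np.mgrid[0:h, 0:w]
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= r ** 2   # mask of the fisheye image circle
    return binary_sky[inside].mean()
```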

  9. Frontal Hyperconnectivity Related to Discounting and Reversal Learning in Cocaine Subjects

    PubMed Central

    Camchong, Jazmin; MacDonald, Angus W; Nelson, Brent; Bell, Christopher; Mueller, Bryon A; Specker, Sheila; Lim, Kelvin O

    2011-01-01

    BACKGROUND Functional neuroimaging studies suggest that chronic cocaine use is associated with frontal lobe abnormalities. Functional connectivity (FC) alterations of cocaine dependent individuals (CD), however, are not yet clear. This is the first study to our knowledge that examines resting FC of anterior cingulate cortex (ACC) in CD. Because ACC is known to integrate inputs from different brain regions to regulate behavior, we hypothesize that CD will have connectivity abnormalities in ACC networks. In addition, we hypothesized that abnormalities would be associated with poor performance in delayed discounting and reversal learning tasks. METHODS Resting functional magnetic resonance imaging data were collected to look for FC differences between twenty-seven cocaine dependent individuals (CD) (5 females, age: M=39.73, SD=6.14) and twenty-four controls (5 females, age: M=39.76, SD = 7.09). Participants were assessed with delayed discounting and reversal learning tasks. Using seed-based FC measures, we examined FC in CD and controls within five ACC connectivity networks with seeds in subgenual, caudal, dorsal, rostral, and perigenual ACC. RESULTS CD showed increased FC within the perigenual ACC network in left middle frontal gyrus, ACC and middle temporal gyrus when compared to controls. FC abnormalities were significantly positively correlated with task performance in delayed discounting and reversal learning tasks in CD. CONCLUSIONS The present study shows that participants with chronic cocaine-dependency have hyperconnectivity within an ACC network known to be involved in social processing and mentalizing. In addition, FC abnormalities found in CD were associated with difficulties with delay rewards and slower adaptive learning. PMID:21371689

  10. Local functional connectivity alterations in schizophrenia, bipolar disorder, and major depressive disorder.

    PubMed

    Wei, Yange; Chang, Miao; Womer, Fay Y; Zhou, Qian; Yin, Zhiyang; Wei, Shengnan; Zhou, Yifang; Jiang, Xiaowei; Yao, Xudong; Duan, Jia; Xu, Ke; Zuo, Xi-Nian; Tang, Yanqing; Wang, Fei

    2018-08-15

    Local functional connectivity (FC) indicates local or short-distance functional interactions and may serve as a neuroimaging marker to investigate the human brain connectome. Local FC alterations suggest a disrupted balance in the local functionality of the whole brain network and are increasingly implicated in schizophrenia (SZ), bipolar disorder (BD), and major depressive disorder (MDD). We aim to examine the similarities and differences in the local FC across SZ, BD, and MDD. In total, 537 participants (SZ, 126; BD, 97; MDD, 126; and healthy controls, 188) completed resting-state functional magnetic resonance imaging at a single site. The local FC at resting state was calculated and compared across SZ, BD, and MDD. The local FC increased across SZ, BD, and MDD within the bilateral orbital frontal cortex (OFC) and additional region in the left OFC extending to putamen and decreased in the primary visual, auditory, and motor cortices, right supplemental motor area, and bilateral thalami. There was a gradient in the extent of alterations such that SZ > BD > MDD. This cross-sectional study cannot consider medications and other clinical variables. These findings indicate a disrupted balance between network integration and segregation in SZ, BD, and MDD, including over-integration via increased local FC in the OFC and diminished segregation of neural processing with the weakening of the local FC in the primary sensory cortices and thalamus. The shared local FC abnormalities across SZ, BD, and MDD may shed new light on the potential biological mechanisms underlying these disorders. Copyright © 2018 Elsevier B.V. All rights reserved.

  11. Video auto stitching in multicamera surveillance system

    NASA Astrophysics Data System (ADS)

    He, Bin; Zhao, Gang; Liu, Qifang; Li, Yangyang

    2012-01-01

    This paper concerns the problem of automatic video stitching in a multi-camera surveillance system. Previous approaches have used multiple calibrated cameras for video mosaicking in large-scale monitoring applications. In this work, we formulate video stitching as a multi-image registration and blending problem in which only a few selected master cameras need to be calibrated. SURF is used to find matched pairs of image key points from different cameras, and the camera pose is then estimated and refined. A homography matrix is employed to calculate overlapping pixels, and finally a boundary resampling algorithm is applied to blend the images. Simulation results demonstrate the efficiency of our method.
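
    A condensed version of this pipeline, written with OpenCV, pairs keypoint matching with homography estimation and a simple overlay blend. SURF is patented and only ships in opencv-contrib, so ORB is used here as a freely available substitute, and the blending is far cruder than the boundary resampling described in the paper; treat the whole function as an illustrative sketch.

```python
import cv2
import numpy as np

def stitch_pair(img_ref, img_new, max_matches=200):
    """Warp img_new into the frame of img_ref using feature matches and a homography."""
    to_gray = lambda im: cv2.cvtColor(im, cv2.COLOR_BGR2GRAY) if im.ndim == 3 else im
    orb = cv2.ORB_create(4000)
    kp1, des1 = orb.detectAndCompute(to_gray(img_ref), None)
    kp2, des2 = orb.detectAndCompute(to_gray(img_new), None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des2, des1), key=lambda m: m.distance)[:max_matches]

    src = np.float32([kp2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    h, w = img_ref.shape[:2]
    warped = cv2.warpPerspective(img_new, H, (w, h))
    # Naive blend: keep reference pixels where present, fill the rest from the warped image.
    return np.where(img_ref > 0, img_ref, warped)
```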

  13. Video monitoring in the Gadria debris flow catchment: preliminary results of large scale particle image velocimetry (LSPIV)

    NASA Astrophysics Data System (ADS)

    Theule, Joshua; Crema, Stefano; Comiti, Francesco; Cavalli, Marco; Marchi, Lorenzo

    2015-04-01

    Large scale particle image velocimetry (LSPIV) is a technique mostly used in rivers to measure two-dimensional velocities from high resolution images at high frame rates. This technique still needs to be thoroughly explored in the field of debris flow studies. The Gadria debris flow monitoring catchment in Val Venosta (Italian Alps) has been equipped with four MOBOTIX M12 video cameras. Two cameras are located in a sediment trap close to the alluvial fan apex, one looking upstream and the other looking down and more perpendicular to the flow. The third camera is in the next reach upstream from the sediment trap, at a closer proximity to the flow. These three cameras are connected to a field shelter equipped with a power supply and a server collecting all the monitoring data. The fourth camera is located in an active gully and is activated by a rain gauge when there is one minute of rainfall. Before LSPIV can be used, the highly distorted images need to be corrected and accurate reference points need to be established. We decided to use IMGRAFT (an open-source image georectification toolbox), which corrects distorted images using reference points and the camera location, and finally rectifies the batch of images onto a DEM grid (or the DEM grid onto the image coordinates). With the orthorectified images, we used the freeware Fudaa-LSPIV (developed by EDF, IRSTEA, and the DeltaCAD Company) to generate the LSPIV calculations of the flow events. Calculated velocities can easily be checked manually because the images are already orthorectified. During the monitoring program (since 2011) we recorded three debris flow events at the sediment trap area, each with very different surge dynamics. The camera in the gully was in operation in 2014 and recorded granular flows and rockfalls, for which particle tracking may be more appropriate for velocity measurements. The four cameras allow us to explore the limitations of camera distance, angle, frame rate, and image quality.
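
    At its core, LSPIV estimates surface displacement by cross-correlating small interrogation windows between successive orthorectified frames; the correlation peak gives the pixel shift, which the frame interval and ground sampling distance convert into a velocity. The bare-bones FFT sketch below covers only that single step; window placement, sub-pixel peak fitting and outlier filtering as performed by Fudaa-LSPIV are omitted, and all names are illustrative.

```python
import numpy as np

def window_displacement(win_a, win_b):
    """Integer-pixel displacement of window win_b relative to win_a via FFT cross-correlation."""
    a = win_a - win_a.mean()
    b = win_b - win_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    corr = np.fft.fftshift(corr)                      # put zero displacement at the window centre
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    dy, dx = np.array(peak) - np.array(corr.shape) // 2
    return dy, dx

def to_velocity(dy, dx, ground_sampling_distance, frame_interval):
    """Convert a pixel displacement into a surface speed in m/s."""
    return np.hypot(dy, dx) * ground_sampling_distance / frame_interval
```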

  14. Engineering design criteria for an image intensifier/image converter camera

    NASA Technical Reports Server (NTRS)

    Sharpsteen, J. T.; Lund, D. L.; Stoap, L. J.; Solheim, C. D.

    1976-01-01

    The design, display, and evaluation of an image intensifier/image converter camera which can be utilized to meet various requirements of space shuttle experiments are described. An image intensifier tube was utilized in combination with two brassboards as a power supply and used for evaluation of night photography in the field. Pictures were obtained showing field details which would have been indistinguishable to the naked eye or to an ordinary camera.

  15. Ultra-fast framing camera tube

    DOEpatents

    Kalibjian, Ralph

    1981-01-01

    An electronic framing camera tube features focal plane image dissection and synchronized restoration of the dissected electron line images to form two-dimensional framed images. Ultra-fast framing is performed by first streaking a two-dimensional electron image across a narrow slit, thereby dissecting the two-dimensional electron image into sequential electron line images. The dissected electron line images are then restored into a framed image by a restorer deflector operated synchronously with the dissector deflector. The number of framed images on the tube's viewing screen is equal to the number of dissecting slits in the tube. The distinguishing features of this ultra-fast framing camera tube are the focal plane dissecting slits, and the synchronously-operated restorer deflector which restores the dissected electron line images into a two-dimensional framed image. The framing camera tube can produce image frames having high spatial resolution of optical events in the sub-100 picosecond range.

  16. Spontaneous brain activity predicts learning ability of foreign sounds.

    PubMed

    Ventura-Campos, Noelia; Sanjuán, Ana; González, Julio; Palomar-García, María-Ángeles; Rodríguez-Pujadas, Aina; Sebastián-Gallés, Núria; Deco, Gustavo; Ávila, César

    2013-05-29

    Can learning capacity of the human brain be predicted from initial spontaneous functional connectivity (FC) between brain areas involved in a task? We combined task-related functional magnetic resonance imaging (fMRI) and resting-state fMRI (rs-fMRI) before and after training with a Hindi dental-retroflex nonnative contrast. Previous fMRI results were replicated, demonstrating that this learning recruited the left insula/frontal operculum and the left superior parietal lobe, among other areas of the brain. Crucially, resting-state FC (rs-FC) between these two areas at pretraining predicted individual differences in learning outcomes after distributed (Experiment 1) and intensive training (Experiment 2). Furthermore, this rs-FC was reduced at posttraining, a change that may also account for learning. Finally, resting-state network analyses showed that the mechanism underlying this reduction of rs-FC was mainly a transfer in intrinsic activity of the left frontal operculum/anterior insula from the left frontoparietal network to the salience network. Thus, rs-FC may contribute to predict learning ability and to understand how learning modifies the functioning of the brain. The discovery of this correspondence between initial spontaneous brain activity in task-related areas and posttraining performance opens new avenues to find predictors of learning capacities in the brain using task-related fMRI and rs-fMRI combined.

  17. Novel polyubiquitin imaging system, PolyUb-FC, reveals that K33-linked polyubiquitin is recruited by SQSTM1/p62.

    PubMed

    Nibe, Yoichi; Oshima, Shigeru; Kobayashi, Masanori; Maeyashiki, Chiaki; Matsuzawa, Yu; Otsubo, Kana; Matsuda, Hiroki; Aonuma, Emi; Nemoto, Yasuhiro; Nagaishi, Takashi; Okamoto, Ryuichi; Tsuchiya, Kiichiro; Nakamura, Tetsuya; Nakada, Shinichiro; Watanabe, Mamoru

    2018-01-01

    Ubiquitin chains are formed with 8 structurally and functionally distinct polymers. However, the functions of each polyubiquitin remain poorly understood. We developed a polyubiquitin-mediated fluorescence complementation (PolyUb-FC) assay using Kusabira Green (KG) as a split fluorescent protein. The PolyUb-FC assay has the advantage that monoubiquitination is nonfluorescent and chain-specific polyubiquitination can be directly visualized in living cells without using antibodies. We applied the PolyUb-FC assay to examine K33-linked polyubiquitin. We demonstrated that SQSTM1/p62 puncta colocalized with K33-linked polyubiquitin and this interaction was modulated by the ZRANB1/TRABID-K29 and -K33 linkage-specific deubiquitinase (DUB). We further showed that the colocalization of K33-linked polyubiquitin and MAP1LC3/LC3 (microtubule associated protein 1 light chain 3) puncta was impaired by SQSTM1/p62 deficiency. Taken together, these findings provide novel insights into how atypical polyubiquitin is recruited by SQSTM1/p62. Finally, we developed an inducible-PolyUb-FC system for visualizing chain-specific polyubiquitin. The PolyUb-FC will be a useful tool for analyzing the dynamics of atypical polyubiquitin chain generation.

  18. Nociceptive neuronal Fc-gamma receptor I is involved in IgG immune complex induced pain in the rat.

    PubMed

    Jiang, Haowu; Shen, Xinhua; Chen, Zhiyong; Liu, Fan; Wang, Tao; Xie, Yikuan; Ma, Chao

    2017-05-01

    Antigen-specific immune diseases such as rheumatoid arthritis are often accompanied by pain and hyperalgesia. Our previous studies have demonstrated that Fc-gamma-receptor type I (FcγRI) is expressed in a subpopulation of rat dorsal root ganglion (DRG) neurons and can be directly activated by IgG immune complex (IgG-IC). In this study we investigated whether neuronal FcγRI contributes to antigen-specific pain in the naïve and rheumatoid arthritis model rats. In vitro calcium imaging and whole-cell patch clamp recordings in dissociated DRG neurons revealed that only the small-, but not medium- or large-sized DRG neurons responded to IgG-IC. Accordingly, in vivo electrophysiological recordings showed that intradermal injection of IgG-IC into the peripheral receptive field could sensitize only the C- (but not A-) type sensory neurons and evoke action potential discharges. Pain-related behavioral tests showed that intradermal injection of IgG-IC dose-dependently produced mechanical and thermal hyperalgesia in the hindpaw of rats. These behavioral effects could be alleviated by localized administration of non-specific IgG or an FcγRI antibody, but not by mast cell stabilizer or histamine antagonist. In a rat model of antigen-induced arthritis (AIA) produced by methylated bovine serum albumin, FcγRI were found upregulated exclusively in the small-sized DRG neurons. In vitro calcium imaging revealed that significantly more small-sized DRG neurons responded to IgG-IC in the AIA rats, although there was no significant difference between the AIA and control rats in the magnitude of calcium changes in the DRG neurons. Moreover, in vivo electrophysiological recordings showed that C-nociceptive neurons in the AIA rats exhibited a greater incidence of action potential discharges and stronger responses to mechanical stimuli after IgG-IC was injected to the receptive fields. These results suggest that FcγRI expressed in the peripheral nociceptors might be directly activated by IgG-IC and contribute to antigen-specific pain in pathological conditions. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. a Spatio-Spectral Camera for High Resolution Hyperspectral Imaging

    NASA Astrophysics Data System (ADS)

    Livens, S.; Pauly, K.; Baeck, P.; Blommaert, J.; Nuyts, D.; Zender, J.; Delauré, B.

    2017-08-01

    Imaging with a conventional frame camera from a moving remotely piloted aircraft system (RPAS) is by design very inefficient. Less than 1 % of the flying time is used for collecting light. This unused potential can be utilized by an innovative imaging concept, the spatio-spectral camera. The core of the camera is a frame sensor with a large number of hyperspectral filters arranged on the sensor in stepwise lines. It combines the advantages of frame cameras with those of pushbroom cameras. By acquiring images in rapid succession, such a camera can collect detailed hyperspectral information, while retaining the high spatial resolution offered by the sensor. We have developed two versions of a spatio-spectral camera and used them in a variety of conditions. In this paper, we present a summary of three missions with the in-house developed COSI prototype camera (600-900 nm) in the domains of precision agriculture (fungus infection monitoring in experimental wheat plots), horticulture (crop status monitoring to evaluate irrigation management in strawberry fields) and geology (meteorite detection on a grassland field). Additionally, we describe the characteristics of the 2nd generation, commercially available ButterflEYE camera offering extended spectral range (475-925 nm), and we discuss future work.

  20. Hydrogen peroxide plasma sterilization of a waterproof, high-definition video camera case for intraoperative imaging in veterinary surgery.

    PubMed

    Adin, Christopher A; Royal, Kenneth D; Moore, Brandon; Jacob, Megan

    2018-06-13

    To evaluate the safety and usability of a wearable, waterproof high-definition camera/case for acquisition of surgical images by sterile personnel. An in vitro study to test the efficacy of biodecontamination of camera cases. Usability for intraoperative image acquisition was assessed in clinical procedures. Two waterproof GoPro Hero4 Silver camera cases were inoculated by immersion in media containing Staphylococcus pseudointermedius or Escherichia coli at ≥5.50E+07 colony forming units/mL. Cases were biodecontaminated by manual washing and hydrogen peroxide plasma sterilization. Cultures were obtained by swab and by immersion in enrichment broth before and after each contamination/decontamination cycle (n = 4). The cameras were then applied by a surgeon in clinical procedures by using either a headband or handheld mode and were assessed for usability according to 5 user characteristics. Cultures of all poststerilization swabs were negative. One of 8 cultures was positive in enrichment broth, consistent with a low level of contamination in 1 sample. Usability of the camera was considered poor in headband mode, with limited battery life, inability to control camera functions, and lack of zoom function affecting image quality. Handheld operation of the camera by the primary surgeon improved usability, allowing close-up still and video intraoperative image acquisition. Vaporized hydrogen peroxide sterilization of this camera case was considered effective for biodecontamination. Handheld operation improved usability for intraoperative image acquisition. Vaporized hydrogen peroxide sterilization and thorough manual washing of a waterproof camera may provide cost effective intraoperative image acquisition for documentation purposes. © 2018 The American College of Veterinary Surgeons.

  1. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  2. An Inverse U-Shaped Curve of Resting-State Networks in Individuals at High Risk of Alzheimer's Disease.

    PubMed

    Ye, Qing; Chen, Haifeng; Su, Fan; Shu, Hao; Gong, Liang; Xie, Chunming; Zhou, Hong; Bai, Feng

    Higher functional connectivity (FC) in resting-state networks has been shown in individuals at risk of Alzheimer's disease (AD) by many studies. However, the longitudinal trajectories of the FC remain unknown. The present 35-month follow-up study aimed to explore longitudinal changes in higher FC in multiple resting-state networks in subjects with the apolipoprotein E ε4 allele (ApoE4) and/or amnestic mild cognitive impairment (aMCI). Fifty-one subjects with aMCI and 64 cognitively normal (CN) subjects underwent neuropsychological tests and resting-state functional magnetic resonance imaging (fMRI) scans twice from April 2011 to June 2015. Subjects were divided into 4 groups according to diagnosis and ApoE4 status. The CN non-ApoE4 group served as a control group, and other groups served as AD risk groups. The cross-sectional and longitudinal patterns of multiple resting-state networks, including default mode network, hippocampus network, executive control network, and salience network, were explored by comparing FC data between groups and between time points, respectively. At baseline, compared with the control group, the AD risk groups showed higher FC with 8 regions in multiple networks. At follow-up, 6 of the regions displayed longitudinally decreased FC in AD risk groups. In contrast, the FC with all of these regions was maintained in the control group. Notably, among the 3 risk groups, most of the higher FC at baseline (5 of the 8 regions) and longitudinally decreased FC at follow-up (4 of the 6 regions) were shown in the aMCI ApoE4 group. Higher resting-state FC is followed by a decline in subjects at AD risk, and this inverse U-shaped trajectory is more notable in subjects with higher risk. © Copyright 2018 Physicians Postgraduate Press, Inc.

  3. Targeting Mast Cells and Basophils with Anti-FcεRIα Fab-Conjugated Celastrol-Loaded Micelles Suppresses Allergic Inflammation.

    PubMed

    Peng, Xia; Wang, Juan; Li, Xianyang; Lin, Lihui; Xie, Guogang; Cui, Zelin; Li, Jia; Wang, Yuping; Li, Li

    2015-12-01

    Mast cells and basophils are effector cells in the pathophysiology of allergic diseases. Targeted elimination of these cells may be a promising strategy for the treatment of allergic disorders. Our present study aims at targeted delivery of anti-FcεRIα Fab-conjugated celastrol-loaded micelles toward FcεRIα receptors expressed on mast cells and basophils to achieve an enhanced anti-allergic effect. To achieve this aim, we prepared celastrol-loaded (PEO-block-PPO-block-PEO, Pluronic) polymeric nanomicelles using the thin-film hydration method. The anti-FcεRIα Fab fragment was then conjugated to carboxyl groups on the drug-loaded micelles via an EDC amidation reaction. The anti-FcεRIα Fab-conjugated celastrol-loaded micelles showed a uniform particle size (93.43 ± 12.93 nm) with a high loading percentage (21.2 ± 1.5% w/w). The micelles appeared oval and rod-like in images. The anti-FcεRIα Fab-conjugated micelles demonstrated enhanced cellular uptake and cytotoxicity toward target KU812 cells compared with non-conjugated micelles in vitro. Furthermore, diffusion of the drug into the cells allowed an efficient induction of cell apoptosis. In a mouse model of allergic asthma, treatment with anti-FcεRIα Fab-conjugated micelles increased lung accumulation of micelles, and significantly reduced OVA-sIgE, histamine and Th2 cytokine (IL-4, IL-5, TNF-α) levels, eosinophil infiltration and mucus production. In addition, in a mouse model of passive cutaneous anaphylaxis, anti-FcεRIα Fab-conjugated celastrol-loaded micelle treatment significantly decreased Evans blue extravasation in the ear. These results indicate that anti-FcεRIα Fab-conjugated celastrol-loaded micelles can target and selectively kill mast cells and basophils which express FcεRIα, and may be efficient reagents for the treatment of allergic disorders and mast cell related diseases.

  4. Sky camera geometric calibration using solar observations

    DOE PAGES

    Urquhart, Bryan; Kurtz, Ben; Kleissl, Jan

    2016-09-05

    A camera model and associated automated calibration procedure for stationary daytime sky imaging cameras is presented. The specific modeling and calibration needs are motivated by remotely deployed cameras used to forecast solar power production where cameras point skyward and use 180° fisheye lenses. Sun position in the sky and on the image plane provides a simple and automated approach to calibration; special equipment or calibration patterns are not required. Sun position in the sky is modeled using a solar position algorithm (requiring latitude, longitude, altitude and time as inputs). Sun position on the image plane is detected using a simple image processing algorithm. The performance evaluation focuses on the calibration of a camera employing a fisheye lens with an equisolid angle projection, but the camera model is general enough to treat most fixed focal length, central, dioptric camera systems with a photo objective lens. Calibration errors scale with the noise level of the sun position measurement in the image plane, but the calibration is robust across a large range of noise in the sun position. In conclusion, calibration performance on clear days ranged from 0.94 to 1.24 pixels root mean square error.
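
    To make the idea concrete, the sketch below fits the simplest parameters of an equisolid-angle fisheye model, the focal length, principal point and an azimuth offset, from paired sun observations: zenith/azimuth angles from a solar position algorithm and the detected pixel coordinates. The full camera model in the paper has more degrees of freedom (lens distortion, full rotation), so this least-squares fit is only a stripped-down illustration; angles are in radians and all names are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, zenith, azimuth, u, v):
    """Reprojection residuals of an equisolid-angle fisheye model r = 2 f sin(theta / 2)."""
    f, u0, v0, az0 = params                    # focal length [px], principal point, azimuth offset
    r = 2.0 * f * np.sin(zenith / 2.0)
    u_pred = u0 + r * np.sin(azimuth - az0)
    v_pred = v0 - r * np.cos(azimuth - az0)
    return np.concatenate([u_pred - u, v_pred - v])

def calibrate(zenith, azimuth, u, v, image_shape):
    """Fit (f, u0, v0, az0) from arrays of sun angles and detected sun pixel positions."""
    h, w = image_shape
    x0 = np.array([w / 4.0, w / 2.0, h / 2.0, 0.0])   # rough initial guess
    fit = least_squares(residuals, x0, args=(zenith, azimuth, u, v))
    return fit.x
```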

  5. Alternative images for perpendicular parking : a usability test of a multi-camera parking assistance system.

    DOT National Transportation Integrated Search

    2004-10-01

    The parking assistance system evaluated consisted of four outward facing cameras whose images could be presented on a monitor on the center console. The images presented varied in the location of the virtual eye point of the camera (the height above ...

  6. Camera artifacts in IUE spectra

    NASA Technical Reports Server (NTRS)

    Bruegman, O. W.; Crenshaw, D. M.

    1994-01-01

    This study of emission line mimicking features in the IUE cameras has produced an atlas of artifacts in high-dispersion images with an accompanying table of prominent artifacts, a table of prominent artifacts in the raw images, and a medium image of the sky background for each IUE camera.

  7. A low-cost dual-camera imaging system for aerial applicators

    USDA-ARS?s Scientific Manuscript database

    Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...

  8. Left Panorama of Spirit's Landing Site

    NASA Technical Reports Server (NTRS)

    2004-01-01

    Left Panorama of Spirit's Landing Site

    This is a version of the first 3-D stereo image from the rover's navigation camera, showing only the view from the left stereo camera onboard the Mars Exploration Rover Spirit. The left and right camera images are combined to produce a 3-D image.

  9. Generating Stereoscopic Television Images With One Camera

    NASA Technical Reports Server (NTRS)

    Coan, Paul P.

    1996-01-01

    Straightforward technique for generating stereoscopic television images involves use of single television camera translated laterally between left- and right-eye positions. Camera acquires one of images (left- or right-eye image), and video signal from image delayed while camera translated to position where it acquires other image. Length of delay chosen so both images displayed simultaneously or as nearly simultaneously as necessary to obtain stereoscopic effect. Technique amenable to zooming in on small areas within broad scenes. Potential applications include three-dimensional viewing of geological features and meteorological events from spacecraft and aircraft, inspection of workpieces moving along conveyor belts, and aiding ground and water search-and-rescue operations. Also used to generate and display imagery for public education and general information, and possibly for medical purposes.

  10. UCXp camera imaging principle and key technologies of data post-processing

    NASA Astrophysics Data System (ADS)

    Yuan, Fangyan; Li, Guoqing; Zuo, Zhengli; Liu, Jianmin; Wu, Liang; Yu, Xiaoping; Zhao, Haitao

    2014-03-01

    The large format digital aerial camera product UCXp was introduced into the Chinese market in 2008; its image consists of 17310 columns and 11310 rows with a pixel size of 6 µm. The UCXp camera has many advantages compared with cameras of the same generation, with multiple lenses exposed almost at the same time and no oblique lens. The camera has a complex imaging process, whose principle is detailed in this paper. In addition, the UCXp image post-processing method, including data pre-processing and orthophoto production, is emphasized in this article. Based on data from the new Beichuan County, this paper describes the data processing and its effects.

  11. A mobile device-based imaging spectrometer for environmental monitoring by attaching a lightweight small module to a commercial digital camera.

    PubMed

    Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing

    2017-11-15

    Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, as well as commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. Based on the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.

  12. Single-snapshot 2D color measurement by plenoptic imaging system

    NASA Astrophysics Data System (ADS)

    Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana

    2014-03-01

    Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision of this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. Optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of display and show that it achieves color accuracy of ΔE<0.01.

  13. Hippocampus-precuneus functional connectivity as an early sign of Alzheimer's disease: a preliminary study using structural and functional magnetic resonance imaging data.

    PubMed

    Kim, Junghoe; Kim, Yong-Hwan; Lee, Jong-Hwan

    2013-02-07

    Alzheimer's disease (AD) is characterized by structural atrophy in the hippocampus (HP) and aberrant patterns of functional connectivity (FC) between the hippocampus and the rest of the brain. However, the relationship between cortical atrophy levels and the corresponding degrees of aberrant FC patterns has not been systematically examined. In this study, we investigated whether there was an explicit link between structural abnormalities and the corresponding functional aberrations associated with AD using structural and functional magnetic resonance imaging (fMRI) data. To this end, brain regions with cortical atrophy associated with AD were identified in the HP in the left (L) and right (R) hemispheres using structural MRI data from volume analyses (p<0.03 for L-HP; p<0.04 for R-HP) and voxel-based morphometry analyses (p<4×10(-4) for L-HP; p<2×10(-3) for R-HP). Aberrantly reduced FC levels between the HP (with atrophy) and the precuneus were also consistently observed in fMRI data from AD brains compared with HC brains, analyzed using Pearson's correlation coefficients (p<3×10(-4) for L-HP; and p<8×10(-5) for R-HP). In addition, the substantial negative FC levels observed in HC brains between the precuneus and the postcentral gyrus (PoCG), a region without structural atrophy, were also significantly diminished in the AD brains (p<5×10(-5) for L-PoCG; and p<6×10(-5) for R-PoCG). The effect sizes of these aberrant FC levels associated with AD were greater than those of the cortical atrophy levels when compared using normalized Z scores and Cohen's d measures, which indicates that an aberrant FC level may precede cortical atrophy. Copyright © 2012 Elsevier B.V. All rights reserved.

  14. Effect of PICALM rs3851179 polymorphism on the default mode network function in mild cognitive impairment.

    PubMed

    Sun, Ding-Ming; Chen, Hai-Feng; Zuo, Qi-Long; Su, Fan; Bai, Feng; Liu, Chun-Feng

    2017-07-28

    Alterations in default mode network (DMN) functional connectivity (FC) might accompany the dysfunction of Alzheimer's disease (AD). Indeed, episodic memory impairment is a hallmark of AD, and mild cognitive impairment (MCI) has been associated with a high risk for AD. Phosphatidylinositol-binding clathrin assembly protein (PICALM) (rs3851179) has been associated with AD; in particular, the A allele may serve a protective role, while the G allele serves as a strong genetic risk factor. Therefore, the identification of genetic polymorphisms associated with the DMN is required in MCI subjects. In all, 32 MCI subjects and 32 healthy controls (HCs) underwent resting-state functional magnetic resonance imaging (rs-fMRI) and a genetic imaging approach. Subjects were divided into four groups according to the diagnosis (i.e., MCI and HCs) and the PICALM rs3851179 polymorphism (i.e., AA/AG genotype and GG genotype). The differences in FC within the DMN between the four subgroups were explored. Furthermore, we examined the relationship between our neuroimaging measures and cognitive performance. The regions associated with the genotype-by-disease interaction were in the left middle temporal gyrus (LMTG) and left middle frontal gyrus (LMFG). These changes in LMFG FC were generally manifested as an "inverse U-shaped curve", while a "U-shaped curve" was associated with the LMTG FC across these four subgroups (all P<0.05). Furthermore, higher FC within the LMFG was related to better episodic memory performance (i.e., AVLT 20min DR, rho=0.72, P=0.044) for the MCI subgroups with the GG genotype. The PICALM rs3851179 polymorphism significantly affects the DMN in MCI. The LMFG and LMTG may be associated with opposite patterns. However, the altered LMFG FC in MCI patients with the GG genotype was more sensitive to episodic memory impairment, which is more likely to lead to a high risk of AD. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Camera-Model Identification Using Markovian Transition Probability Matrix

    NASA Astrophysics Data System (ADS)

    Xu, Guanshuo; Gao, Shang; Shi, Yun Qing; Hu, Ruimin; Su, Wei

    Detecting the (brands and) models of digital cameras from given digital images has become a popular research topic in the field of digital forensics. As most images are JPEG compressed before they are output from cameras, we propose to use an effective image statistical model to characterize the difference JPEG 2-D arrays of the Y and Cb components of JPEG images taken by various camera models. Specifically, the transition probability matrices derived from four directional Markov processes applied to the image difference JPEG 2-D arrays are used to identify statistical differences caused by the image formation pipelines inside different camera models. All elements of the transition probability matrices, after a thresholding technique, are used directly as features for classification. Multi-class support vector machines (SVM) are used as the classification tool. The effectiveness of our proposed statistical model is demonstrated by large-scale experimental results.
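
    To make the feature construction above concrete, the following minimal sketch (not the authors' exact pipeline) builds a first-order transition probability matrix from a single horizontal difference 2-D array after thresholding; the threshold T, the use of a generic integer array in place of the JPEG coefficient arrays, and the single direction are simplifying assumptions.

```python
import numpy as np

def transition_prob_matrix(arr2d, T=3):
    """Markov transition probabilities of a thresholded horizontal-difference array.

    arr2d stands in for one JPEG 2-D array (e.g. the Y component); the paper
    uses four directions and both Y and Cb, concatenating all matrix elements
    as SVM features.
    """
    d = arr2d[:, :-1].astype(int) - arr2d[:, 1:].astype(int)    # horizontal difference array
    d = np.clip(d, -T, T)                                       # threshold to [-T, T]
    cur, nxt = d[:, :-1].ravel() + T, d[:, 1:].ravel() + T      # shift values to 0..2T
    n = 2 * T + 1
    counts = np.zeros((n, n))
    np.add.at(counts, (cur, nxt), 1)                            # co-occurrence counts
    counts /= np.maximum(counts.sum(axis=1, keepdims=True), 1)  # rows sum to 1
    return counts                                               # (2T+1)^2 features per direction
```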

  16. Comparison of Sheath Power Transmission Factor for Neutral Beam Injection and Electron Cyclotron Heated Discharges in DIII-D

    NASA Astrophysics Data System (ADS)

    Donovan, D. C.; Buchenauer, D. A.; Watkins, J. G.; Leonard, A. W.; Lasnier, C. J.; Stangeby, P. C.

    2011-10-01

    The sheath power transmission factor (SPTF) is examined in DIII-D with a new IR camera, a more thermally robust Langmuir probe array, fast thermocouples, and a unique probe configuration on the Divertor Materials Evaluation System (DiMES). Past data collected from the fixed Langmuir Probes and Infrared Camera on DIII-D have indicated a SPTF near 1 at the strike point. Theory indicates that the SPTF should be approximately 7 and cannot be less than 5. SPTF values are calculated using independent measurements from the IR camera and fast thermocouples. Experiments have been performed with varying levels of electron cyclotron heating and neutral beam power. The ECH power does not involve fast ions, so the SPTF can be calculated and compared to previous experiments to determine the extent to which fast ions may be influencing the SPTF measurements, and potentially offer insight into the disagreement with the theory. Work supported in part by US DOE under DE-AC04-94AL85000, DE-FC02-04ER54698, and DE-AC52-07NA27344.

  17. Automatic Calibration of an Airborne Imaging System to an Inertial Navigation Unit

    NASA Technical Reports Server (NTRS)

    Ansar, Adnan I.; Clouse, Daniel S.; McHenry, Michael C.; Zarzhitsky, Dimitri V.; Pagdett, Curtis W.

    2013-01-01

    This software automatically calibrates a camera or an imaging array to an inertial navigation system (INS) that is rigidly mounted to the array or imager. In effect, it recovers the coordinate frame transformation between the reference frame of the imager and the reference frame of the INS. This innovation can automatically derive the camera-to-INS alignment using image data only. The assumption is that the camera fixates on an area while the aircraft flies on orbit. The system then, fully automatically, solves for the camera orientation in the INS frame. No manual intervention or ground tie point data is required.

  18. Plenoptic Image Motion Deblurring.

    PubMed

    Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo

    2018-04-01

    We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state-of-the-art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
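
    As a rough illustration of the regularized energy-minimization machinery mentioned above, here is a minimal non-blind inner step for an ordinary 2-D image with a fixed, known blur kernel (gradient descent on a quadratic data term plus a smoothness penalty). It deliberately ignores the blind and plenoptic aspects of the paper; the kernel, step size, and regularization weight are placeholder assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

def deblur_fixed_kernel(y, k, lam=1e-2, step=0.5, iters=200):
    """Gradient descent on E(x) = 0.5*||k*x - y||^2 + 0.5*lam*||grad x||^2.

    y: blurry grayscale image; k: known, normalized blur kernel.
    This is only the non-blind inner step; a blind scheme would alternate
    it with an update of the kernel (and, in the paper, of camera motion).
    """
    x = y.astype(np.float64)
    k_flip = k[::-1, ::-1]
    for _ in range(iters):
        r = fftconvolve(x, k, mode="same") - y            # data residual
        g_data = fftconvolve(r, k_flip, mode="same")      # adjoint of the blur
        lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0)
               + np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4.0 * x)
        x -= step * (g_data - lam * lap)                  # -lam*lap is the smoothness gradient
    return x
```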

  19. Dispositional use of emotion regulation strategies and resting-state cortico-limbic functional connectivity.

    PubMed

    Picó-Pérez, Maria; Alonso, Pino; Contreras-Rodríguez, Oren; Martínez-Zalacaín, Ignacio; López-Solà, Clara; Jiménez-Murcia, Susana; Verdejo-García, Antonio; Menchón, José M; Soriano-Mas, Carles

    2017-09-02

    Neuroimaging functional connectivity (FC) analyses have shown that the negative coupling between the amygdala and cortical regions is linked to better emotion regulation in experimental settings. Nevertheless, no studies have examined the association between resting-state cortico-amygdalar FC and the dispositional use of emotion regulation strategies. We aim to assess the relationship between the resting-state FC patterns of two different amygdala territories, with different functions in the emotion response process, and trait-like measures of cognitive reappraisal and expressive suppression. Forty-eight healthy controls completed the Emotion Regulation Questionnaire (ERQ) and underwent a resting-state functional magnetic resonance imaging acquisition. FC maps of basolateral and centromedial amygdala (BLA/CMA) with different cortical areas were estimated with a seed-based approach, and were then correlated with reappraisal and suppression scores from the ERQ. FC between left BLA and left insula and right BLA and the supplementary motor area (SMA) correlated inversely with reappraisal scores. Conversely, FC between left BLA and the dorsal anterior cingulate cortex correlated directly with suppression scores. Finally, FC between left CMA and the SMA was inversely correlated with suppression. Top-down regulation from the SMA seems to account for the dispositional use of both reappraisal and suppression depending on the specific amygdala nucleus being modulated. In addition, modulation of amygdala activity from cingulate and insular cortices seems also to account for the habitual use of the different emotion regulation strategies.
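
    A seed-based resting-state FC map of the kind described above is, at its core, a voxel-wise Pearson correlation with the seed time course. The sketch below assumes pre-processed time series arranged as a 2-D array (time × voxels/parcels); the array names and shapes are illustrative, not taken from the study.

```python
import numpy as np

def seed_fc_map(ts, seed_idx):
    """Pearson correlation of one seed time course with every other column.

    ts: array of shape (n_timepoints, n_voxels) of pre-processed BOLD signals;
    seed_idx: column index of the seed (e.g. an amygdala subregion average).
    """
    ts_c = ts - ts.mean(axis=0)                      # remove each column's mean
    seed_c = ts_c[:, seed_idx]
    num = ts_c.T @ seed_c
    den = np.sqrt((ts_c ** 2).sum(axis=0) * (seed_c ** 2).sum())
    return num / den                                 # one correlation value per voxel/parcel
```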

  20. Hubs of Anticorrelation in High-Resolution Resting-State Functional Connectivity Network Architecture.

    PubMed

    Gopinath, Kaundinya; Krishnamurthy, Venkatagiri; Cabanban, Romeo; Crosson, Bruce A

    2015-06-01

    A major focus of brain research recently has been to map the resting-state functional connectivity (rsFC) network architecture of the normal brain and pathology through functional magnetic resonance imaging. However, the phenomenon of anticorrelations in resting-state signals between different brain regions has not been adequately examined. The preponderance of studies on resting-state fMRI (rsFMRI) have either ignored anticorrelations in rsFC networks or adopted methods in data analysis, which have rendered anticorrelations in rsFC networks uninterpretable. The few studies that have examined anticorrelations in rsFC networks using conventional methods have found anticorrelations to be weak in strength and not very reproducible across subjects. Anticorrelations in rsFC network architecture could reflect mechanisms that subserve a number of important brain processes. In this preliminary study, we examined the properties of anticorrelated rsFC networks by systematically focusing on negative cross-correlation coefficients (CCs) among rsFMRI voxel time series across the brain with graph theory-based network analysis. A number of methods were implemented to enhance the neuronal specificity of resting-state functional connections that yield negative CCs, although at the cost of decreased sensitivity. Hubs of anticorrelation were seen in a number of cortical and subcortical brain regions. Examination of the anticorrelation maps of these hubs indicated that negative CCs in rsFC network architecture highlight a number of regulatory interactions between brain networks and regions, including reciprocal modulations, suppression, inhibition, and neurofeedback.

  1. Abnormal Resting-State Functional Connectivity of the Anterior Cingulate Cortex in Unilateral Chronic Tinnitus Patients

    PubMed Central

    Chen, Yu-Chen; Liu, Shenghua; Lv, Han; Bo, Fan; Feng, Yuan; Chen, Huiyou; Xu, Jin-Jing; Yin, Xindao; Wang, Shukui; Gu, Jian-Ping

    2018-01-01

    Purpose: The anterior cingulate cortex (ACC) has been suggested to be involved in chronic subjective tinnitus. Tinnitus may arise from aberrant functional coupling between the ACC and cerebral cortex. To explore this hypothesis, we used resting-state functional magnetic resonance imaging (fMRI) to illuminate the functional connectivity (FC) network of the ACC subregions in chronic tinnitus patients. Methods: Resting-state fMRI scans were obtained from 31 chronic right-sided tinnitus patients and 40 healthy controls (age, sex, and education well-matched) in this study. Rostral ACC and dorsal ACC were selected as seed regions to investigate the intrinsic FC with the whole brain. The resulting FC patterns were correlated with clinical tinnitus characteristics including the tinnitus duration and tinnitus distress. Results: Compared with healthy controls, chronic tinnitus patients showed disrupted FC patterns of ACC within several brain networks, including the auditory cortex, prefrontal cortex, visual cortex, and default mode network (DMN). The Tinnitus Handicap Questionnaires (THQ) scores showed positive correlations with increased FC between the rostral ACC and left precuneus (r = 0.507, p = 0.008) as well as the dorsal ACC and right inferior parietal lobe (r = 0.447, p = 0.022). Conclusions: Chronic tinnitus patients have abnormal FC networks originating from ACC to other selected brain regions that are associated with specific tinnitus characteristics. Resting-state ACC-cortical FC disturbances may play an important role in neuropathological features underlying chronic tinnitus. PMID:29410609

  2. The Small Bodies Imager Browser --- finding asteroid and comet images without pain

    NASA Astrophysics Data System (ADS)

    Palmer, E.; Sykes, M.; Davis, D.; Neese, C.

    2014-07-01

    To facilitate accessing and downloading spatially resolved imagery of asteroids and comets in the NASA Planetary Data System (PDS), we have created the Small Bodies Image Browser. It is an HTML5 webpage that runs inside a standard web browser, requiring no installation (http://sbn.psi.edu/sbib/). The volume of data returned by spacecraft missions has grown substantially over the last decade. While this wealth of data provides scientists with ample support for research, it has greatly increased the difficulty of managing, accessing and processing these data. Further, the complexity necessary for a long-term archive results in an architecture that is efficient for computers, but not user friendly. The Small Bodies Image Browser (SBIB) is tied into the PDS archive of the Small Bodies Asteroid Subnode hosted at the Planetary Science Institute [1]. Currently, the tool contains the entire repository of the Dawn mission's encounter with Vesta [2], and we will be adding other datasets in the future. For Vesta, this includes both the level 1A and 1B images for the Framing Camera (FC) and the level 1B spectral cubes from the Visual and Infrared (VIR) spectrometer, providing over 30,000 individual images. A key strength of the tool is providing quick and easy access to these data. The tool allows for searches based on clicking on a map or typing in coordinates. The SBIB can show an entire mission phase (such as cycle 7 of the Low Altitude Mapping Orbit) and the associated footprints, as well as search by image name. It can focus the search by mission phase, resolution or instrument. Imagery archived in the PDS is generally provided by missions in a single or narrow range of formats. To enhance the value and usability of these data to researchers, SBIB makes them available in their original formats as well as PNG, JPEG and ArcGIS-compatible ISIS cubes [3]. Additionally, we provide header files for the VIR cubes so they can be read into ENVI without additional processing. Finally, we also provide both camera-based and map-projected products with geometric data embedded for use within ArcGIS and ISIS. We use the Gaskell shape model for terrain projections [4]. There are several other outstanding data analysis tools that have access to asteroid and comet data: JAsteroid (a derivative of JMARS [5]) and the Applied Physics Laboratory's Small Body Mapping Tool [6]. The SBIB has specifically focused on providing data in the easiest manner possible rather than trying to be an analytical tool.

  3. Dense depth maps from correspondences derived from perceived motion

    NASA Astrophysics Data System (ADS)

    Kirby, Richard; Whitaker, Ross

    2017-01-01

    Many computer vision applications require finding corresponding points between images and using the corresponding points to estimate disparity. Today's correspondence finding algorithms primarily use image features or pixel intensities common between image pairs. Some 3-D computer vision applications, however, do not produce the desired results using correspondences derived from image features or pixel intensities. Two examples are the multimodal camera rig and the center region of a coaxial camera rig. We present an image correspondence finding technique that aligns pairs of image sequences using optical flow fields. The optical flow fields provide information about the structure and motion of the scene, which are not available in still images but can be used in image alignment. We apply the technique to a dual focal length stereo camera rig consisting of a visible light-infrared camera pair and to a coaxial camera rig. We test our method on real image sequences and compare our results with the state-of-the-art multimodal and structure from motion (SfM) algorithms. Our method produces more accurate depth and scene velocity reconstruction estimates than the state-of-the-art multimodal and SfM algorithms.
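
    The paper's correspondences come from optical flow fields rather than still-image features. As a hedged illustration of that idea (not the authors' specific flow algorithm), the snippet below computes dense Farneback flow between two frames with OpenCV and turns it into per-pixel correspondences; parameter values are placeholders.

```python
import cv2
import numpy as np

def dense_correspondences(frame0, frame1):
    """Dense per-pixel correspondences from Farneback optical flow.

    frame0, frame1: consecutive 8-bit grayscale frames (numpy arrays).
    A pixel (x, y) in frame0 maps to (x + flow[y, x, 0], y + flow[y, x, 1]) in frame1.
    """
    flow = cv2.calcOpticalFlowFarneback(frame0, frame1, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = frame0.shape
    xs, ys = np.meshgrid(np.arange(w), np.arange(h))
    return np.stack([xs + flow[..., 0], ys + flow[..., 1]], axis=-1)  # (h, w, 2) target coords
```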

  4. Relating transverse ray error and light fields in plenoptic camera images

    NASA Astrophysics Data System (ADS)

    Schwiegerling, Jim; Tyo, J. Scott

    2013-09-01

    Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
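
    The 2D-spatial/2D-angular decomposition described above can be made concrete with a toy decoding step: under the idealized assumption of a perfectly aligned lenslet array with an integer number p of pixels per lenslet, the raw sensor image rearranges into a 4D light field L(s, t, u, v). The function below is such a sketch; it ignores vignetting, hexagonal packing, and calibration.

```python
import numpy as np

def decode_light_field(raw, p):
    """Rearrange an idealized plenoptic sensor image into a 4D light field.

    raw: sensor image with a p x p block of pixels under each lenslet.
    Returns L[s, t, u, v], where (s, t) indexes the lenslet (spatial sample)
    and (u, v) the pixel under that lenslet (angular / pupil sample).
    """
    S, T = raw.shape[0] // p, raw.shape[1] // p
    return raw[:S * p, :T * p].reshape(S, p, T, p).swapaxes(1, 2)
```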

  5. Comparison of the effectiveness of three retinal camera technologies for malarial retinopathy detection in Malawi

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.

    2016-03-01

    The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malarial retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. The drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. The consequence was that vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images that would afford the best possible opportunity for reading by a remotely located specialist.
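
    For reference, the sensitivity and specificity figures quoted above follow directly from confusion counts; the counts in the example below are made up purely to illustrate the arithmetic and are not the study's data.

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical counts only (not the study's data): no missed MR cases,
# four false positives among 30 retinopathy-negative readings.
sens, spec = sensitivity_specificity(tp=20, fn=0, tn=26, fp=4)   # -> 1.00, ~0.87
```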

  6. Advanced imaging system

    NASA Technical Reports Server (NTRS)

    1992-01-01

    This document describes the Advanced Imaging System CCD based camera. The AIS1 camera system was developed at Photometric Ltd. in Tucson, Arizona as part of a Phase 2 SBIR contract No. NAS5-30171 from the NASA/Goddard Space Flight Center in Greenbelt, Maryland. The camera project was undertaken as a part of the Space Telescope Imaging Spectrograph (STIS) project. This document is intended to serve as a complete manual for the use and maintenance of the camera system. All the different parts of the camera hardware and software are discussed and complete schematics and source code listings are provided.

  7. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the beacon photogrammetry location technique, a Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems, and camera coordinate systems were established respectively. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target coordinate systems and the camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states, the position measurement accuracy can meet the requirements of the project.
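
    The core of the step described above, recovering the rotation matrix and translation vector from beacon images under a pinhole model, corresponds to a standard PnP solve. The following Python/OpenCV sketch (the paper itself uses MATLAB) assumes a hypothetical coplanar beacon layout, invented pixel measurements, and assumed camera intrinsics.

```python
import cv2
import numpy as np

# Hypothetical beacon layout (metres) in the target (helicopter) frame; the four
# beacons are taken as coplanar here so the default iterative PnP solver applies.
obj_pts = np.array([[-1.0,  0.5, 0.0],
                    [ 1.0,  0.5, 0.0],
                    [ 1.0, -0.5, 0.0],
                    [-1.0, -0.5, 0.0]], dtype=np.float32)
img_pts = np.array([[412.3, 288.9],
                    [655.1, 291.2],
                    [649.7, 431.0],
                    [418.8, 428.4]], dtype=np.float32)   # detected beacon centroids (pixels, illustrative)
K = np.array([[1200.0, 0.0, 640.0],
              [0.0, 1200.0, 480.0],
              [0.0, 0.0, 1.0]])                          # assumed intrinsics
dist = np.zeros(5)                                       # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)   # rotation matrix and translation of the target w.r.t. the camera
```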

  8. Ranging Apparatus and Method Implementing Stereo Vision System

    NASA Technical Reports Server (NTRS)

    Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
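
    The stereometric ranging analysis mentioned above reduces, for a rectified pinhole pair, to the familiar relation Z = fB/d. A minimal sketch, with illustrative focal length, baseline, and disparity values rather than the system's actual parameters:

```python
def range_from_disparity(disparity_px, focal_px, baseline_m):
    """Stereometric range for a rectified pinhole pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

# Illustrative numbers only: a 12-pixel horizontal disparity with an 800-pixel
# focal length and a 10 cm camera baseline gives a range of about 6.7 m.
z = range_from_disparity(12.0, 800.0, 0.10)
```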

  9. A 3D photographic capsule endoscope system with full field of view

    NASA Astrophysics Data System (ADS)

    Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng

    2013-09-01

    The current capsule endoscope uses one camera to capture images of the intestinal surface. It can observe an abnormal point, but cannot provide detailed information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so the two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time. The advantage is an increase of the viewing range up to 2.99 times with respect to the two-camera system. Together with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.

  10. The effect of APOE ε4 allele on cholinesterase inhibitors in patients with Alzheimer disease: evaluation of the feasibility of resting state functional connectivity magnetic resonance imaging.

    PubMed

    Wang, Liang; Day, Jonathan; Roe, Catherine M; Brier, Matthew R; Thomas, Jewell B; Benzinger, Tammie L; Morris, John C; Ances, Beau M

    2014-01-01

    This work aims to determine whether apolipoprotein E (APOE) genotype modulates the effect of cholinesterase inhibitor (ChEI) treatment on resting state functional connectivity magnetic resonance imaging (rs-fcMRI) in patients with Alzheimer disease (AD). We retrospectively studied very mild and mild AD participants who were treated (N=25) or untreated (N=19) with ChEIs with respect to rs-fcMRI measures of 5 resting-state networks (RSNs): default mode, dorsal attention (DAN), control (CON), salience (SAL), and sensory motor. For each network, a composite score was computed as the mean of Pearson correlations between pairwise time courses extracted from areas comprising this network. The composite scores were analyzed as a function of ChEI treatment and APOE ε4 allele. Across all participants, significant interactions between ChEI treatment and APOE ε4 allele were observed for all 5 RSNs. Within APOE ε4 carriers, significantly greater composite scores were observed in the DAN, CON, and SAL for treated compared with untreated participants. Within APOE ε4 noncarriers, treated and untreated participants did not have significantly different composite scores for all RSNs. These data suggest that APOE genotype affects the response to ChEI using rs-fcMRI. Rs-fcMRI may be useful for assessing the therapeutic effect of medications in AD clinical trials.
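
    The network composite score described above, the mean of pairwise Pearson correlations between region time courses, can be written in a few lines; the array layout below is an assumption for illustration.

```python
import numpy as np

def network_composite_score(ts):
    """Mean of pairwise Pearson correlations between region time courses.

    ts: array of shape (n_timepoints, n_regions) holding the time courses
    of the areas comprising one resting-state network.
    """
    r = np.corrcoef(ts.T)                  # n_regions x n_regions correlation matrix
    iu = np.triu_indices_from(r, k=1)      # each region pair counted once
    return r[iu].mean()
```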

  11. A detailed comparison of single-camera light-field PIV and tomographic PIV

    NASA Astrophysics Data System (ADS)

    Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.

    2018-03-01

    This paper presents a comprehensive comparison between single-camera light-field particle image velocimetry (LF-PIV) and multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, which extensively examine the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that the single LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. Experiments confirm that given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with those of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.

  12. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High dynamic range imaging is an important technology for photoelectric information acquisition, providing higher dynamic range and more image detail, and it can better reflect the real environment, light, and color information. Currently, methods that synthesize high dynamic range images from sequences of differently exposed images cannot adapt to dynamic scenes; they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high dynamic range image acquisition method based on a multiplex camera system was proposed. Firstly, differently exposed image sequences were captured with the camera array; a derivative optical flow method based on color gradients was used to obtain the deviation between images, and the images were aligned. Then, a high dynamic range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and was applied to generate a high dynamic range image. The experiments show that the proposed method can effectively obtain high dynamic range images of dynamic scenes, and achieves good results.
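
    As a hedged sketch of the fusion idea (not the paper's exact weighting function, which also incorporates the inter-image deviation), the snippet below merges aligned exposures into a relative radiance map using an assumed, pre-calibrated inverse camera response function and a simple hat-shaped weight.

```python
import numpy as np

def fuse_exposures(images, exposure_times, inv_crf, eps=1e-6):
    """Weighted merge of aligned exposures into a relative radiance map.

    images: list of aligned 8-bit grayscale frames; inv_crf: 256-entry lookup
    table mapping pixel value to relative exposure (assumed known/calibrated).
    A hat-shaped weight down-weights under- and over-exposed pixels.
    """
    num = np.zeros(images[0].shape, dtype=np.float64)
    den = np.zeros_like(num)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(img.astype(np.float64) / 255.0 - 0.5) * 2.0   # hat weight in [0, 1]
        radiance = inv_crf[img] / t                                     # per-frame radiance estimate
        num += w * radiance
        den += w
    return num / (den + eps)
```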

  13. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor.

    PubMed

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-03-23

    Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works.

  14. Convolutional Neural Network-Based Shadow Detection in Images Using Visible Light Camera Sensor

    PubMed Central

    Kim, Dong Seop; Arsalan, Muhammad; Park, Kang Ryoung

    2018-01-01

    Recent developments in intelligence surveillance camera systems have enabled more research on the detection, tracking, and recognition of humans. Such systems typically use visible light cameras and images, in which shadows make it difficult to detect and recognize the exact human area. Near-infrared (NIR) light cameras and thermal cameras are used to mitigate this problem. However, such instruments require a separate NIR illuminator, or are prohibitively expensive. Existing research on shadow detection in images captured by visible light cameras has utilized object and shadow color features for detection. Unfortunately, various environmental factors such as illumination change and brightness of background cause detection to be a difficult task. To overcome this problem, we propose a convolutional neural network-based shadow detection method. Experimental results with a database built from various outdoor surveillance camera environments, and from the context-aware vision using image-based active recognition (CAVIAR) open database, show that our method outperforms previous works. PMID:29570690

  15. SU-D-BRC-07: System Design for a 3D Volumetric Scintillation Detector Using SCMOS Cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Darne, C; Robertson, D; Alsanea, F

    2016-06-15

    Purpose: The purpose of this project is to build a volumetric scintillation detector for quantitative imaging of 3D dose distributions of proton beams accurately in near real-time. Methods: The liquid scintillator (LS) detector consists of a transparent acrylic tank (20×20×20 cm³) filled with a liquid scintillator that generates scintillation light when irradiated with protons. To track rapid spatial and dose variations in spot-scanning proton beams we used three scientific-complementary metal-oxide semiconductor (sCMOS) imagers (2560×2160 pixels). The cameras collect the optical signal from three orthogonal projections. To reduce the system footprint, two mirrors oriented at 45° to the tank surfaces redirect scintillation light to the cameras capturing the top and right views. Selection of fixed focal length objective lenses for these cameras was based on their ability to provide a large depth of field (DoF) and the required field of view (FoV). Multiple cross-hairs imprinted on the tank surfaces allow for image corrections arising from camera perspective and refraction. Results: We determined that by setting the sCMOS to 16-bit dynamic range, truncating its FoV (1100×1100 pixels) to image the entire volume of the LS detector, and using a 5.6 ms integration time, the imaging rate can be ramped up to 88 frames per second (fps). A 20 mm focal length lens provides a 20 cm imaging DoF and 0.24 mm/pixel resolution. A master-slave camera configuration enables the slaves to initiate image acquisition instantly (within 2 µs) after receiving a trigger signal. A computer with 128 GB RAM was used for spooling images from the cameras and can sustain a maximum recording time of 2 min per camera at 75 fps. Conclusion: The three sCMOS cameras are capable of high-speed imaging. They can therefore be used for quick, high-resolution, and precise mapping of dose distributions from scanned spot proton beams in three dimensions.

  16. Remote camera observations of lava dome growth at Mount St. Helens, Washington, October 2004 to February 2006: Chapter 11 in A volcano rekindled: the renewed eruption of Mount St. Helens, 2004-2006

    USGS Publications Warehouse

    Poland, Michael P.; Dzurisin, Daniel; LaHusen, Richard G.; Major, John J.; Lapcewich, Dennis; Endo, Elliot T.; Gooding, Daniel J.; Schilling, Steve P.; Janda, Christine G.; Sherrod, David R.; Scott, William E.; Stauffer, Peter H.

    2008-01-01

    Images from a Web-based camera (Webcam) located 8 km north of Mount St. Helens and a network of remote, telemetered digital cameras were used to observe eruptive activity at the volcano between October 2004 and February 2006. The cameras offered the advantages of low cost, low power, flexibility in deployment, and high spatial and temporal resolution. Images obtained from the cameras provided important insights into several aspects of dome extrusion, including rockfalls, lava extrusion rates, and explosive activity. Images from the remote, telemetered digital cameras were assembled into time-lapse animations of dome extrusion that supported monitoring, research, and outreach efforts. The wide-ranging utility of remote camera imagery should motivate additional work, especially to develop the three-dimensional quantitative capabilities of terrestrial camera networks.

  17. Overview of Digital Forensics Algorithms in Dslr Cameras

    NASA Astrophysics Data System (ADS)

    Aminova, E.; Trapeznikov, I.; Priorov, A.

    2017-05-01

    The widespread use of mobile technologies and the improvement of digital photographic devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera and for improving image formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the in-camera imaging process. This study focuses on the problem of determining unique features of DSLR cameras based on optical subsystem artifacts and sensor noise.

  18. Pulsed-neutron imaging by a high-speed camera and center-of-gravity processing

    NASA Astrophysics Data System (ADS)

    Mochiki, K.; Uragaki, T.; Koide, J.; Kushima, Y.; Kawarabayashi, J.; Taketani, A.; Otake, Y.; Matsumoto, Y.; Su, Y.; Hiroi, K.; Shinohara, T.; Kai, T.

    2018-01-01

    Pulsed-neutron imaging is an attractive technique in the research field of energy-resolved neutron radiography; RANS (RIKEN) and RADEN (J-PARC/JAEA) are small and large accelerator-driven pulsed-neutron facilities for such imaging, respectively. To overcome the insufficient spatial resolution of counting-type imaging detectors such as the μNID, nGEM, and pixelated detectors, camera detectors combined with a neutron color image intensifier were investigated. At RANS, a center-of-gravity technique was applied to the spot images obtained by a CCD camera, and the technique was confirmed to be effective for improving spatial resolution. At RADEN, a high-frame-rate CMOS camera was used together with a super-resolution technique, and the spatial resolution was further improved.
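
    The center-of-gravity processing amounts to computing an intensity-weighted centroid for each detected scintillation spot, giving sub-pixel event positions that can be re-histogrammed into a finer image. A minimal sketch, assuming a simple global threshold for spot segmentation:

```python
import numpy as np
from scipy import ndimage

def spot_centroids(frame, threshold):
    """Centre-of-gravity of each bright spot in a single camera frame.

    Thresholding isolates candidate spots; intensity-weighted centroids give
    sub-pixel positions (row, col) for each detected spot.
    """
    mask = frame > threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(frame, labels, np.arange(1, n + 1))
```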

  19. Automatic Calibration of Stereo-Cameras Using Ordinary Chess-Board Patterns

    NASA Astrophysics Data System (ADS)

    Prokos, A.; Kalisperakis, I.; Petsa, E.; Karras, G.

    2012-07-01

    Automation of camera calibration is facilitated by recording coded 2D patterns. Our toolbox for automatic camera calibration using images of simple chess-board patterns is freely available on the Internet. But it is unsuitable for stereo-cameras whose calibration implies recovering camera geometry and their true-to-scale relative orientation. In contrast to all reported methods requiring additional specific coding to establish an object space coordinate system, a toolbox for automatic stereo-camera calibration relying on ordinary chess-board patterns is presented here. First, the camera calibration algorithm is applied to all image pairs of the pattern to extract nodes of known spacing, order them in rows and columns, and estimate two independent camera parameter sets. The actual node correspondences on stereo-pairs remain unknown. Image pairs of a textured 3D scene are exploited for finding the fundamental matrix of the stereo-camera by applying RANSAC to point matches established with the SIFT algorithm. A node is then selected near the centre of the left image; its match on the right image is assumed as the node closest to the corresponding epipolar line. This yields matches for all nodes (since these have already been ordered), which should also satisfy the 2D epipolar geometry. Measures for avoiding mismatching are taken. With automatically estimated initial orientation values, a bundle adjustment is performed constraining all pairs on a common (scaled) relative orientation. Ambiguities regarding the actual exterior orientations of the stereo-camera with respect to the pattern are irrelevant. Results from this automatic method show typical precisions not above 1/4 pixels for 640×480 web cameras.
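
    A rough outline of the two stages described above, per-camera calibration from chess-board node extraction followed by RANSAC estimation of the fundamental matrix from SIFT matches on a textured scene, is sketched below with OpenCV. File names, the pattern size, and the number of views are placeholder assumptions, and an OpenCV build with SIFT support is assumed.

```python
import cv2
import numpy as np

# 1) Per-camera calibration from chess-board images (node extraction and ordering).
pattern = (9, 6)                                   # inner-corner grid, an assumed pattern size
objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

obj_pts, img_pts = [], []
for fname in ["left_01.png", "left_02.png"]:       # hypothetical views; more are needed in practice
    gray = cv2.imread(fname, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern)
    if found:
        obj_pts.append(objp)
        img_pts.append(corners)
ret, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, gray.shape[::-1], None, None)

# 2) Fundamental matrix of the stereo pair from SIFT matches on a textured 3D scene + RANSAC.
sift = cv2.SIFT_create()
imgL = cv2.imread("scene_left.png", cv2.IMREAD_GRAYSCALE)
imgR = cv2.imread("scene_right.png", cv2.IMREAD_GRAYSCALE)
kpL, desL = sift.detectAndCompute(imgL, None)
kpR, desR = sift.detectAndCompute(imgR, None)
matches = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True).match(desL, desR)
ptsL = np.float32([kpL[m.queryIdx].pt for m in matches])
ptsR = np.float32([kpR[m.trainIdx].pt for m in matches])
F, inliers = cv2.findFundamentalMat(ptsL, ptsR, cv2.FM_RANSAC)
```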

  20. Frontal hyperconnectivity related to discounting and reversal learning in cocaine subjects.

    PubMed

    Camchong, Jazmin; MacDonald, Angus W; Nelson, Brent; Bell, Christopher; Mueller, Bryon A; Specker, Sheila; Lim, Kelvin O

    2011-06-01

    Functional neuroimaging studies suggest that chronic cocaine use is associated with frontal lobe abnormalities. Functional connectivity (FC) alterations of cocaine-dependent individuals (CD), however, are not yet clear. This is the first study to our knowledge that examines resting FC of anterior cingulate cortex (ACC) in CD. Because ACC is known to integrate inputs from different brain regions to regulate behavior, we hypothesized that CD will have connectivity abnormalities in ACC networks. In addition, we hypothesized that abnormalities would be associated with poor performance in delayed discounting and reversal learning tasks. Resting functional magnetic resonance imaging data were collected to look for FC differences between 27 CD (5 women, age: M = 39.73, SD = 6.14 years) and 24 control subjects (5 women, age: M = 39.76, SD = 7.09 years). Participants were assessed with delayed discounting and reversal learning tasks. With seed-based FC measures, we examined FC in CD and control subjects within five ACC connectivity networks with seeds in subgenual, caudal, dorsal, rostral, and perigenual ACC. The CD showed increased FC within the perigenual ACC network in left middle frontal gyrus, ACC, and middle temporal gyrus when compared with control subjects. The FC abnormalities were significantly positively correlated with task performance in delayed discounting and reversal learning tasks in CD. The present study shows that participants with chronic cocaine-dependency have hyperconnectivity within an ACC network known to be involved in social processing and "mentalizing." In addition, FC abnormalities found in CD were associated with difficulties with delay rewards and slower adaptive learning. Copyright © 2011 Society of Biological Psychiatry. Published by Elsevier Inc. All rights reserved.

  1. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  2. Image quality assessment for selfies with and without super resolution

    NASA Astrophysics Data System (ADS)

    Kubota, Aya; Gohshi, Seiichi

    2018-04-01

    With the advent of cellphone cameras, in particular on smartphones, many people now take photos of themselves, alone and with others in the frame; such photos are popularly known as "selfies". Most smartphones are equipped with two cameras: the front-facing and rear cameras. The camera located on the back of the smartphone is referred to as the "out-camera," whereas the one located on the front of the smartphone is called the "in-camera." In-cameras are mainly used for selfies. Some smartphones feature high-resolution cameras. However, the original image quality cannot be obtained because smartphone cameras often have low-performance lenses. Super resolution (SR) is one of the recent technological advancements that has increased image resolution. We developed a new SR technology that can be processed on smartphones. Smartphones with the new SR technology are currently available in the market and have already registered sales. However, the effective use of the new SR technology has not yet been verified. Comparing the image quality with and without SR on a smartphone display is necessary to confirm the usefulness of this new technology. Methods based on objective and subjective assessments are required to quantitatively measure image quality. It is known that typical objective assessment values, such as the Peak Signal to Noise Ratio (PSNR), do not correspond well with how we perceive image and video quality. When digital broadcasting started, the standard was determined using subjective assessment. Although subjective assessment usually comes at a high cost because of personnel expenses for observers, the results are highly reproducible when the assessments are conducted under the right conditions and with proper statistical analysis. In this study, the subjective assessment results for selfie images are reported.
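
    Since PSNR is the objective metric referred to above, a minimal definition is included here for reference; the peak value of 255 assumes 8-bit images.

```python
import numpy as np

def psnr(reference, test, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images of equal size."""
    mse = np.mean((reference.astype(np.float64) - test.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```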

  3. An efficient multiple exposure image fusion in JPEG domain

    NASA Astrophysics Data System (ADS)

    Hebbalaguppe, Ramya; Kakarala, Ramakrishna

    2012-01-01

    In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposure images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image processing engine. The artifact removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience that is available for JPEG.
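
    To illustrate the sigmoidal boosting and saturation-handling steps in the simplest possible terms (operating on normalized pixel values rather than in the JPEG domain, and with made-up gain and threshold parameters), a sketch might look like this:

```python
import numpy as np

def sigmoid_boost(img, gain=8.0, midpoint=0.35):
    """Sigmoidal tone boost for a short-exposure frame (pixel values in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-gain * (img - midpoint)))

def fuse_pair(short_exp, long_exp, sat_thresh=0.95):
    """Take detail from the boosted short exposure wherever the long one saturates."""
    boosted = sigmoid_boost(short_exp)
    saturated = long_exp >= sat_thresh
    return np.where(saturated, boosted, long_exp)
```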

  4. Thermal Effects on Camera Focal Length in Messenger Star Calibration and Orbital Imaging

    NASA Astrophysics Data System (ADS)

    Burmeister, S.; Elgner, S.; Preusker, F.; Stark, A.; Oberst, J.

    2018-04-01

    We analyse images taken by the MErcury Surface, Space ENvironment, GEochemistry, and Ranging (MESSENGER) spacecraft for the camera's thermal response in the harsh thermal environment near Mercury. Specifically, we study thermally induced variations in the focal length of the Mercury Dual Imaging System (MDIS). Within the several hundreds of images of star fields, the Wide Angle Camera (WAC) typically captures up to 250 stars in one frame of the panchromatic channel. We measure star positions and relate these to the known star coordinates taken from the Tycho-2 catalogue. We solve for camera pointing, the focal length parameter, and two non-symmetrical distortion parameters for each image. Using data from the temperature sensors on the camera focal plane we model a linear focal length function of the form f(T) = A0 + A1 T. Next, we use images from MESSENGER's orbital mapping mission. We deal with large image blocks, typically used for the production of high-resolution digital terrain models (DTMs). We analyzed images from the combined quadrangles H03 and H07, a selected region covered by approx. 10,600 images, in which we identified about 83,900 tiepoints. Using bundle block adjustments, we solved for the unknown coordinates of the control points, the pointing of the camera, as well as the camera's focal length. We then fit the above linear function with respect to the focal plane temperature. As a result, we find a complex response of the camera to the thermal conditions of the spacecraft. To first order, we see a linear increase of approx. 0.0107 mm per degree temperature for the Narrow-Angle Camera (NAC). This is in agreement with the observed thermal response seen in images of the panchromatic channel of the WAC. Unfortunately, further comparisons of results from the two methods, both of which use different portions of the available image data, are limited. If left uncorrected, these effects may pose significant difficulties in the photogrammetric analysis; specifically, they may be responsible for erroneous long-wavelength trends in topographic models.
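
    The linear thermal model f(T) = A0 + A1 T is an ordinary least-squares line fit; the sketch below uses invented per-image temperature and focal-length values purely to show the fitting step, not the mission's actual data.

```python
import numpy as np

# Hypothetical per-image estimates: focal-plane temperature (deg C) and the
# focal length solved from star fields or bundle adjustment (mm).
temps = np.array([-5.0, 3.2, 11.5, 18.9, 27.4])
focals = np.array([549.95, 550.04, 550.13, 550.21, 550.30])

A1, A0 = np.polyfit(temps, focals, 1)     # fit f(T) = A0 + A1 * T
print(f"f(T) = {A0:.3f} + {A1:.5f} * T  (mm); the paper reports ~0.0107 mm/deg for the NAC")
```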

  5. Abnormal functional connectivity and cortical integrity influence dominant hand motor disability in multiple sclerosis: a multimodal analysis.

    PubMed

    Zhong, Jidan; Nantes, Julia C; Holmes, Scott A; Gallant, Serge; Narayanan, Sridar; Koski, Lisa

    2016-12-01

    Functional reorganization and structural damage occur in the brains of people with multiple sclerosis (MS) throughout the disease course. However, the relationship between resting-state functional connectivity (FC) reorganization in the sensorimotor network and motor disability in MS is not well understood. This study used resting-state fMRI, T1-weighted, T2-weighted, and magnetization transfer (MT) imaging to investigate the relationship between abnormal FC in the sensorimotor network and upper limb motor disability in people with MS, as well as the impact of disease-related structural abnormalities within this network. Specifically, the differences in FC of the left hemisphere hand motor region between MS participants with preserved (n = 17) and impaired (n = 26) right hand function, compared with healthy controls (n = 20), were investigated. Differences in brain atrophy and MT ratio measured at the global and regional levels were also investigated between the three groups. Motor preserved MS participants had stronger FC in structurally intact visual information processing regions relative to motor impaired MS participants. Motor impaired MS participants showed weaker FC in the sensorimotor and somatosensory association cortices and more severe structural damage throughout the brain compared with the other groups. Logistic regression analysis showed that regional MTR predicted motor disability beyond the impact of global atrophy whereas regional grey matter volume did not. More importantly, as the first multimodal analysis combining resting-state fMRI, T1-weighted, T2-weighted and MTR images in MS, we demonstrate how a combination of structural and functional changes may contribute to motor impairment or preservation in MS. Hum Brain Mapp 37:4262-4275, 2016. © 2016 Wiley Periodicals, Inc.

  6. Association of ventral striatum monoamine oxidase-A binding and functional connectivity in antisocial personality disorder with high impulsivity: A positron emission tomography and functional magnetic resonance imaging study.

    PubMed

    Kolla, Nathan J; Dunlop, Katharine; Downar, Jonathan; Links, Paul; Bagby, R Michael; Wilson, Alan A; Houle, Sylvain; Rasquinha, Fawn; Simpson, Alexander I; Meyer, Jeffrey H

    2016-04-01

    Impulsivity is a core feature of antisocial personality disorder (ASPD) associated with abnormal brain function and neurochemical alterations. The ventral striatum (VS) is a key region of the neural circuitry mediating impulsive behavior, and low monoamine oxidase-A (MAO-A) level in the VS has shown a specific relationship to the impulsivity of ASPD. Because it is currently unknown whether phenotypic MAO-A markers can influence brain function in ASPD, we investigated VS MAO-A level and the functional connectivity (FC) of two seed regions, superior and inferior VS (VSs, VSi). Nineteen impulsive ASPD males underwent [¹¹C]harmine positron emission tomography scanning to measure VS MAO-A VT, an index of MAO-A density, and resting-state functional magnetic resonance imaging that assessed the FC of bilateral seed regions in the VSi and VSs. Subjects also completed self-report impulsivity measures. Results revealed functional coupling of the VSs with bilateral dorsomedial prefrontal cortex (DMPFC) that was correlated with VS MAO-A VT (r=0.47, p=0.04), and functional coupling of the VSi with right hippocampus that was anti-correlated with VS MAO-A VT (r=-0.55, p=0.01). Additionally, VSs-DMPFC FC was negatively correlated with NEO Personality Inventory-Revised impulsivity (r=-0.49, p=0.03), as was VSi-hippocampus FC with Barratt Impulsiveness Scale-11 motor impulsiveness (r=-0.50, p=0.03). These preliminary results highlight an association of VS MAO-A level with the FC of striatal regions linked to impulsive behavior in ASPD and suggest that phenotype-based brain markers of ASPD have relevance to understanding brain function. Copyright © 2016 Elsevier B.V. and ECNP. All rights reserved.

  7. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

    Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point-photo-detector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White light imaging of three different brightness simultaneously viewed targets, that is not possible by the CMOS sensor, is achieved by the CAOS-CMOS camera demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.

  8. Color reproduction software for a digital still camera

    NASA Astrophysics Data System (ADS)

    Lee, Bong S.; Park, Du-Sik; Nam, Byung D.

    1998-04-01

    We have developed color reproduction software for a digital still camera. The image taken by the camera was colorimetrically reproduced on the monitor after characterizing the camera and the monitor and performing color matching between the two devices. The reproduction was performed at three levels: level processing, gamma correction, and color transformation. The image contrast was increased by the level processing, which adjusts the levels of the dark and bright portions of the image. The relationship between the level-processed digital values and the measured luminance values of test gray samples was calculated, and the gamma of the camera was obtained. A method for estimating the unknown monitor gamma was also proposed. As a result, the level-processed values were adjusted by a look-up table created from the camera and monitor gamma corrections. For the camera's color transformation, a 3-by-3 or 3-by-4 matrix was used, calculated by regression between the gamma-corrected values and the measured tristimulus values of the test color samples. The various reproduced images, generated according to four illuminants for the camera and three color temperatures for the monitor, were displayed in a dialogue box implemented in our software. A user can easily choose the best reproduced image by comparing them.
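
    The regression that yields the 3-by-3 color transformation matrix is a linear least-squares fit between gamma-corrected camera values and measured tristimulus values; a minimal sketch (array shapes assumed, no constraints or white-point preservation applied) follows.

```python
import numpy as np

def fit_color_matrix(camera_rgb, target_xyz):
    """Least-squares 3x3 colour transformation from gamma-corrected camera RGB
    to measured tristimulus values, using N test colour samples.

    camera_rgb, target_xyz: arrays of shape (N, 3).
    Returns M such that xyz ≈ M @ rgb for a single colour vector.
    """
    M, _, _, _ = np.linalg.lstsq(camera_rgb, target_xyz, rcond=None)
    return M.T
```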

  9. Blur spot limitations in distal endoscope sensors

    NASA Astrophysics Data System (ADS)

    Yaron, Avi; Shechterman, Mark; Horesh, Nadav

    2006-02-01

    In years past, the picture quality of electronic video systems was limited by the image sensor. In the present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates the blur phenomena, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual chip stereoscopic camera with low to medium resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single chip stereo sensors is improvement of tolerance to electronic signal noise.

  10. Mapping cell-specific functional connections in the mouse brain using ChR2-evoked hemodynamics (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Bauer, Adam Q.; Kraft, Andrew; Baxter, Grant A.; Bruchas, Michael; Lee, Jin-Moo; Culver, Joseph P.

    2017-02-01

    Functional magnetic resonance imaging (fMRI) has transformed our understanding of the brain's functional organization. However, mapping subunits of a functional network using hemoglobin alone presents several disadvantages. Evoked and spontaneous hemodynamic fluctuations reflect ensemble activity from several populations of neurons making it difficult to discern excitatory vs inhibitory network activity. Still, blood-based methods of brain mapping remain powerful because hemoglobin provides endogenous contrast in all mammalian brains. To add greater specificity to hemoglobin assays, we integrated optical intrinsic signal(OIS) imaging with optogenetic stimulation to create an Opto-OIS mapping tool that combines the cell-specificity of optogenetics with label-free, hemoglobin imaging. Before mapping, titrated photostimuli determined which stimulus parameters elicited linear hemodynamic responses in the cortex. Optimized stimuli were then scanned over the left hemisphere to create a set of optogenetically-defined effective connectivity (Opto-EC) maps. For many sites investigated, Opto-EC maps exhibited higher spatial specificity than those determined using spontaneous hemodynamic fluctuations. For example, resting-state functional connectivity (RS-FC) patterns exhibited widespread ipsilateral connectivity while Opto-EC maps contained distinct short- and long-range constellations of ipsilateral connectivity. Further, RS-FC maps were usually symmetric about midline while Opto-EC maps displayed more heterogeneous contralateral homotopic connectivity. Both Opto-EC and RS-FC patterns were compared to mouse connectivity data from the Allen Institute. Unlike RS-FC maps, Thy1-based maps collected in awake, behaving mice closely recapitulated the connectivity structure derived using ex vivo anatomical tracer methods. Opto-OIS mapping could be a powerful tool for understanding cellular and molecular contributions to network dynamics and processing in the mouse brain.

  11. Automatic Orientation of Large Blocks of Oblique Images

    NASA Astrophysics Data System (ADS)

    Rupnik, E.; Nex, F.; Remondino, F.

    2013-05-01

    Nowadays, multi-camera platforms combining nadir and oblique cameras are experiencing a revival. Due to their advantages such as ease of interpretation, completeness through mitigation of occluding areas, as well as system accessibility, they have found their place in numerous civil applications. However, automatic post-processing of such imagery still remains a topic of research. Configuration of cameras poses a challenge to the traditional photogrammetric pipeline used in commercial software, and manual measurements are inevitable. For large image blocks it is certainly an impediment. In the theoretical part of the work we review three common least-squares adjustment methods and recap possible approaches to multi-camera system orientation. In the practical part we present an approach that successfully oriented a block of 550 images acquired with an imaging system composed of 5 cameras (Canon Eos 1D Mark III) with different focal lengths. Oblique cameras are rotated in the four looking directions (forward, backward, left and right) by 45° with respect to the nadir camera. The workflow relies only upon open-source software: a developed tool to analyse image connectivity and Apero to orient the image block. The benefits of the connectivity tool are twofold: in terms of computational time and success of Bundle Block Adjustment. It exploits the georeferenced information provided by the Applanix system in constraining feature point extraction to relevant images only, and guides the concatenation of images during the relative orientation. Ultimately, an absolute transformation is performed, resulting in mean re-projection residuals equal to 0.6 pix.
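
    The quality figure reported above (mean re-projection residuals of 0.6 pix) is simply the average image-space distance between observed tie-point measurements and the positions predicted by re-projecting the adjusted 3D points. A minimal sketch of that bookkeeping, with hypothetical observed and re-projected pixel coordinates, is shown below; it is not the adjustment software used in the paper.

      import numpy as np

      # Hypothetical arrays of image measurements, shape (N, 2), in pixels.
      observed = np.array([[512.3, 401.8], [1023.9, 77.2], [250.0, 640.5]])
      reprojected = np.array([[512.9, 401.2], [1023.4, 77.9], [250.6, 641.0]])

      # Per-point Euclidean residual and its mean over all measurements.
      residuals = np.linalg.norm(observed - reprojected, axis=1)
      print("mean re-projection residual [pix]:", residuals.mean())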

  12. Error modeling and analysis of star cameras for a class of 1U spacecraft

    NASA Astrophysics Data System (ADS)

    Fowler, David M.

    As spacecraft today become increasingly smaller, the demand for smaller components and sensors rises as well. The smartphone, a cutting-edge consumer technology, has impressive collections of both sensors and processing capabilities and may have the potential to fill this demand in the spacecraft market. If the technologies of a smartphone could be used in space, the cost of building miniature satellites would drop significantly and give a boost to the aerospace and scientific communities. Concentrating on the problem of spacecraft orientation, this study sets out to determine the capabilities of a smartphone camera when acting as a star camera. Orientations determined from star images taken from a smartphone camera are compared to those of higher quality cameras in order to determine the associated accuracies. The results of the study reveal the abilities of low-cost off-the-shelf imagers in space and give a starting point for future research in the field. The study began with a complete geometric calibration of each analyzed imager such that all comparisons start from the same base. After the cameras were calibrated, image processing techniques were introduced to correct for atmospheric, lens, and image sensor effects. Orientations for each test image are calculated through methods of identifying the stars exposed on each image. Analyses of these orientations allow the overall errors of each camera to be defined and provide insight into the abilities of low-cost imagers.

  13. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  14. Center for Coastline Security Technology, Year 3

    DTIC Science & Technology

    2008-05-01

    Excerpt (section and figure headings): polarization control for 3D imaging with the Sony SRX-R105 digital cinema projectors; HDMAX camera and Sony SRX-R105 projector configuration for 3D; figures showing the HDMAX camera pair, the Sony SRX-R105 digital cinema projector, and the effect of camera rotation on the projected overlay image. The report describes a system that combines a pair of FAU's HD-MAX video cameras with a pair of Sony SRX-R105 digital cinema projectors for stereo imaging and projection.

  15. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and the development of the retina-like sensor. Image coordinate transformation and sub-pixel interpolation need to be implemented to handle the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on Visual Studio 2010. Experimental results show that the system realizes acquisition and display for both cameras.
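
    One common way to display data from a retina-like (log-polar style) pixel distribution alongside a rectangular sensor is to resample it onto a Cartesian grid with sub-pixel (bilinear) interpolation. The sketch below illustrates that idea under the assumption of a simple log-polar sampling model with rings and angular sectors; the actual pixel layout of the camera in this paper may differ.

      import numpy as np

      def bilinear_sample(img, x, y):
          """Sample an image at a non-integer (x, y) position with bilinear interpolation."""
          x0, y0 = int(np.floor(x)), int(np.floor(y))
          x1, y1 = min(x0 + 1, img.shape[1] - 1), min(y0 + 1, img.shape[0] - 1)
          fx, fy = x - x0, y - y0
          return ((1 - fx) * (1 - fy) * img[y0, x0] + fx * (1 - fy) * img[y0, x1]
                  + (1 - fx) * fy * img[y1, x0] + fx * fy * img[y1, x1])

      def logpolar_to_cartesian(src, out_size=256, r_max=None):
          """Resample an assumed (rings x sectors) log-polar image onto a Cartesian grid."""
          rings, sectors = src.shape
          if r_max is None:
              r_max = out_size / 2.0
          dst = np.zeros((out_size, out_size), dtype=src.dtype)
          cx = cy = out_size / 2.0
          for v in range(out_size):
              for u in range(out_size):
                  dx, dy = u - cx, v - cy
                  r = np.hypot(dx, dy)
                  if r < 1.0 or r > r_max:
                      continue  # outside the annulus covered by the assumed layout
                  ring = np.log(r) / np.log(r_max) * (rings - 1)                      # log-radial index
                  sector = (np.arctan2(dy, dx) % (2 * np.pi)) / (2 * np.pi) * (sectors - 1)  # angular index
                  dst[v, u] = bilinear_sample(src, sector, ring)
          return dst

      # Hypothetical retina-like capture: 64 rings x 128 angular sectors.
      retina_img = np.random.rand(64, 128)
      cartesian = logpolar_to_cartesian(retina_img, out_size=128)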

  16. The Effect of Camera Angle and Image Size on Source Credibility and Interpersonal Attraction.

    ERIC Educational Resources Information Center

    McCain, Thomas A.; Wakshlag, Jacob J.

    The purpose of this study was to examine the effects of two nonverbal visual variables (camera angle and image size) on variables developed in a nonmediated context (source credibility and interpersonal attraction). Camera angle and image size were manipulated in eight video taped television newscasts which were subsequently presented to eight…

  17. GPU-based relative fuzzy connectedness image segmentation.

    PubMed

    Zhuge, Ying; Ciesielski, Krzysztof C; Udupa, Jayaram K; Miller, Robert W

    2013-01-01

    Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA's Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology.

  18. GPU-based relative fuzzy connectedness image segmentation

    PubMed Central

    Zhuge, Ying; Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Miller, Robert W.

    2013-01-01

    Purpose: Recently, clinical radiological research and practice are becoming increasingly quantitative. Further, images continue to increase in size and volume. For quantitative radiology to become practical, it is crucial that image segmentation algorithms and their implementations are rapid and yield practical run time on very large data sets. The purpose of this paper is to present a parallel version of an algorithm that belongs to the family of fuzzy connectedness (FC) algorithms, to achieve an interactive speed for segmenting large medical image data sets. Methods: The most common FC segmentations, optimizing an ℓ∞-based energy, are known as relative fuzzy connectedness (RFC) and iterative relative fuzzy connectedness (IRFC). Both RFC and IRFC objects (of which IRFC contains RFC) can be found via linear time algorithms, linear with respect to the image size. The new algorithm, P-ORFC (for parallel optimal RFC), which is implemented by using NVIDIA’s Compute Unified Device Architecture (CUDA) platform, considerably improves the computational speed of the above mentioned CPU based IRFC algorithm. Results: Experiments based on four data sets of small, medium, large, and super data size, achieved speedup factors of 32.8×, 22.9×, 20.9×, and 17.5×, correspondingly, on the NVIDIA Tesla C1060 platform. Although the output of P-ORFC need not precisely match that of IRFC output, it is very close to it and, as the authors prove, always lies between the RFC and IRFC objects. Conclusions: A parallel version of a top-of-the-line algorithm in the family of FC has been developed on the NVIDIA GPUs. An interactive speed of segmentation has been achieved, even for the largest medical image data set. Such GPU implementations may play a crucial role in automatic anatomy recognition in clinical radiology. PMID:23298094
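
    At the core of all FC-family algorithms (RFC, IRFC, and the parallel P-ORFC above) is a pairwise fuzzy affinity between neighboring voxels, from which path strengths are combined with min/max operations under the ℓ∞-style energy. The snippet below sketches one commonly used homogeneity-plus-object affinity form with assumed Gaussian parameters; it is a generic illustration, not the authors' CUDA implementation.

      import numpy as np

      def affinity(intensity_c, intensity_d, mean_obj, sigma_obj, sigma_homog):
          """Fuzzy affinity between two adjacent voxels c and d (value in [0, 1])."""
          # Homogeneity term: penalizes large intensity differences between neighbors.
          psi = np.exp(-((intensity_c - intensity_d) ** 2) / (2.0 * sigma_homog ** 2))
          # Object-feature term: how well the pair matches the expected object intensity.
          avg = 0.5 * (intensity_c + intensity_d)
          phi = np.exp(-((avg - mean_obj) ** 2) / (2.0 * sigma_obj ** 2))
          return min(psi, phi)

      # Example with assumed parameters (mean object intensity 120, sigmas 15 and 10).
      print(affinity(118.0, 123.0, mean_obj=120.0, sigma_obj=15.0, sigma_homog=10.0))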

  19. Development of a piecewise linear omnidirectional 3D image registration method

    NASA Astrophysics Data System (ADS)

    Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo

    2016-12-01

    This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image and then stitches each segment geometrically by considering the inclination of the segment in the 3D space. Depending on the intended use of image registration, the proposed method can be used to improve image registration accuracy or reduce the computation time in image registration because the trade-off between the computation time and image registration accuracy can be controlled for. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration processes to reduce image distortion by camera lenses. The proposed method depends on a linear transformation process for omnidirectional image registration, and therefore it can enhance the effectiveness of the geometry recognition process, increase image registration accuracy by increasing the number of cameras or feature points of each image, increase the image registration speed by reducing the number of cameras or feature points of each image, and provide simultaneous information on shapes and colors of captured objects.
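
    The piecewise linear idea described above, segmenting the image at feature points and warping each segment with its own linear transform, is closely related to piecewise affine warping over a triangulation of matched feature points. A minimal sketch using scikit-image's PiecewiseAffineTransform is shown below; the library call is standard, but the feature points here are hypothetical and the paper's own segmentation and 3D inclination handling are not reproduced.

      import numpy as np
      from skimage import data, transform

      image = data.camera()  # any grayscale test image stands in for one camera view

      # Hypothetical matched feature points: src lives in the output (stitched) frame,
      # dst gives the corresponding pixel locations in the input image.
      src = np.array([[0, 0], [0, 511], [511, 0], [511, 511], [256, 256]], dtype=float)
      dst = src + np.array([[5, 3], [-4, 2], [3, -6], [-2, -3], [0, 8]], dtype=float)

      # Each triangle of the point mesh gets its own affine (piecewise linear) mapping.
      tform = transform.PiecewiseAffineTransform()
      tform.estimate(src, dst)

      # warp() uses tform as the inverse map from output coordinates to input coordinates.
      registered = transform.warp(image, tform)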

  20. SpectraCAM SPM: a camera system with high dynamic range for scientific and medical applications

    NASA Astrophysics Data System (ADS)

    Bhaskaran, S.; Baiko, D.; Lungu, G.; Pilon, M.; VanGorden, S.

    2005-08-01

    A scientific camera system having high dynamic range designed and manufactured by Thermo Electron for scientific and medical applications is presented. The newly developed CID820 image sensor with preamplifier-per-pixel technology is employed in this camera system. The 4 Mega-pixel imaging sensor has a raw dynamic range of 82 dB. Each high-transparent pixel is based on a preamplifier-per-pixel architecture and contains two photogates for non-destructive readout of the photon-generated charge (NDRO). Readout is achieved via parallel row processing with on-chip correlated double sampling (CDS). The imager is capable of true random pixel access with a maximum operating speed of 4 MHz. The camera controller consists of a custom camera signal processor (CSP) with an integrated 16-bit A/D converter and a PowerPC-based CPU running a Linux embedded operating system. The imager is cooled to -40 °C via a three-stage cooler to minimize dark current. The camera housing is sealed and is designed to maintain the CID820 imager in the evacuated chamber for at least 5 years. Thermo Electron has also developed custom software and firmware to drive the SpectraCAM SPM camera. Included in this firmware package is the new Extreme DR™ algorithm that is designed to extend the effective dynamic range of the camera by several orders of magnitude, up to a 32-bit dynamic range. The RACID Exposure graphical user interface image analysis software runs on a standard PC that is connected to the camera via Gigabit Ethernet.

  1. A combined microphone and camera calibration technique with application to acoustic imaging.

    PubMed

    Legg, Mathew; Bradley, Stuart

    2013-10-01

    We present a calibration technique for an acoustic imaging microphone array, combined with a digital camera. Computer vision and acoustic time of arrival data are used to obtain microphone coordinates in the camera reference frame. Our new method allows acoustic maps to be plotted onto the camera images without the need for additional camera alignment or calibration. Microphones and cameras may be placed in an ad-hoc arrangement and, after calibration, the coordinates of the microphones are known in the reference frame of a camera in the array. No prior knowledge of microphone positions, inter-microphone spacings, or air temperature is required. This technique is applied to a spherical microphone array and a mean difference of 3 mm was obtained between the coordinates obtained with this calibration technique and those measured using a precision mechanical method.
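
    The calibration principle described above, recovering microphone coordinates from acoustic time-of-arrival (TOA) data, reduces in its simplest form to a nonlinear least-squares problem: each known source position and measured arrival time constrains the unknown microphone position through the speed of sound. The sketch below illustrates that core step with hypothetical source positions and noise-free arrival times; the authors' method additionally ties the result into the camera reference frame via computer vision, which is not reproduced here.

      import numpy as np
      from scipy.optimize import least_squares

      C = 343.0  # assumed speed of sound [m/s]

      # Hypothetical known sound-source positions [m] and the true microphone position.
      sources = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.2], [0.0, 1.2, 0.1], [0.8, 0.9, 1.0]])
      mic_true = np.array([0.4, 0.3, 0.5])

      # Simulated times of arrival for a pulse emitted at t = 0 from each source.
      toa = np.linalg.norm(sources - mic_true, axis=1) / C

      def residuals(p):
          # Difference between measured TOA and the TOA predicted for candidate position p.
          return np.linalg.norm(sources - p, axis=1) / C - toa

      sol = least_squares(residuals, x0=np.zeros(3))
      print("estimated microphone position [m]:", sol.x)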

  2. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  3. The contributions of resting state and task-based functional connectivity studies to our understanding of adolescent brain network maturation.

    PubMed

    Stevens, Michael C

    2016-11-01

    This review summarizes functional magnetic resonance imaging (fMRI) research done over the past decade that examined changes in the function and organization of brain networks across human adolescence. Its over-arching goal is to highlight how both resting state functional connectivity (rs-fcMRI) and task-based functional connectivity (t-fcMRI) have jointly contributed - albeit in different ways - to our understanding of the scope and types of network organization changes that occur from puberty until young adulthood. These two approaches generally have tested different types of hypotheses using different analysis techniques. This has hampered the convergence of findings. Although much has been learned about system-wide changes to adolescents' neural network organization, if both rs-fcMRI and t-fcMRI approaches draw upon each other's methodology and ask broader questions, it will produce a more detailed connectome-informed theory of adolescent neurodevelopment to guide physiological, clinical, and other lines of research. Copyright © 2016 Elsevier Ltd. All rights reserved.

  4. Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.

    PubMed

    Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K

    2010-09-01

    We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.

  5. Handheld hyperspectral imager system for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-08-01

    A small, hand-held, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet connection, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  6. Hand-held hyperspectral imager for chemical/biological and environmental applications

    NASA Astrophysics Data System (ADS)

    Hinnrichs, Michele; Piatek, Bob

    2004-03-01

    A small, hand-held, battery-operated imaging infrared spectrometer, Sherlock, has been developed by Pacific Advanced Technology and was field tested in early 2003. The Sherlock spectral imaging camera has been designed for remote gas leak detection; however, the architecture of the camera is versatile enough that it can be applied to numerous other applications such as homeland security, chemical/biological agent detection, medical and pharmaceutical applications, as well as standard research and development. This paper describes the Sherlock camera and its theory of operation, shows current applications, and touches on potential future applications for the camera. The Sherlock has an embedded PowerPC and performs real-time image processing functions in an embedded FPGA. The camera has a built-in LCD display as well as output to a standard monitor or NTSC display. It has several I/O ports (Ethernet, FireWire, RS232) and thus can be easily controlled from a remote location. In addition, software upgrades can be performed over the Ethernet connection, eliminating the need to send the camera back to the factory for a retrofit. Using the USB port, a mouse and keyboard can be connected, and the camera can be used in a laboratory environment as a stand-alone imaging spectrometer.

  7. Image dynamic range test and evaluation of Gaofen-2 dual cameras

    NASA Astrophysics Data System (ADS)

    Zhang, Zhenhua; Gan, Fuping; Wei, Dandan

    2015-12-01

    In order to fully understand the dynamic range of Gaofen-2 satellite data and support data processing, application and the development of the next satellites, in this article we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. The maximum, minimum, average and standard deviation of each longitudinal overlap of PMS1 and PMS2 were then calculated to evaluate the dynamic-range consistency within each camera, and the same four statistics of each latitudinal overlap of PMS1 and PMS2 were calculated to evaluate the dynamic-range consistency between PMS1 and PMS2. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges of images from a single camera are in close agreement, with only small differences, as are those of the dual cameras; the dynamic-range consistency between images from a single camera is better than that between the dual cameras.
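
    The evaluation described above relies on four basic statistics (maximum, minimum, average, standard deviation) of the DN values in each image and each overlap region. A minimal, generic sketch of how such statistics might be computed for an image array is given below; it assumes a plain NumPy array rather than the actual Gaofen-2 data format.

      import numpy as np

      def dn_statistics(dn):
          """Return (max, min, mean, std) of the DN values in an image array."""
          dn = np.asarray(dn, dtype=np.float64)
          return dn.max(), dn.min(), dn.mean(), dn.std()

      # Hypothetical 10-bit image patch standing in for a PMS1/PMS2 overlap region.
      patch = np.random.randint(0, 1024, size=(512, 512))
      print(dn_statistics(patch))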

  8. Disrupted functional connectivity of the periaqueductal gray in chronic low back pain

    PubMed Central

    Yu, Rongjun; Gollub, Randy L.; Spaeth, Rosa; Napadow, Vitaly; Wasan, Ajay; Kong, Jian

    2014-01-01

    Chronic low back pain is a common neurological disorder. The periaqueductal gray (PAG) plays a key role in the descending modulation of pain. In this study, we investigated brain resting state PAG functional connectivity (FC) differences between patients with chronic low back pain (cLBP) in low pain or high pain condition and matched healthy controls (HCs). PAG seed based functional connectivity (FC) analysis of the functional MR imaging data was performed to investigate the difference among the connectivity maps in the cLBP in the low or high pain condition and HC groups as well as within the cLBP at differing endogenous back pain intensities. Results showed that FC between the PAG and the ventral medial prefrontal cortex (vmPFC)/rostral anterior cingulate cortex (rACC) increased in cLBP patients compared to matched controls. In addition, we also found significant negative correlations between pain ratings and PAG–vmPFC/rACC FC in cLBP patients after pain-inducing maneuver. The duration of cLBP was negatively correlated with PAG–insula and PAG–amygdala FC before pain-inducing maneuver in the patient group. These findings are in line with the impairments of the descending pain modulation reported in patients with cLBP. Our results provide evidence showing that cLBP patients have abnormal FC in PAG centered pain modulation network during rest. PMID:25379421

  9. The incomplete anti-Rh antibody agglutination mechanism of trypsinized ORh+ red cells.

    PubMed Central

    Margni, R A; Leoni, J; Bazzurro, M

    1977-01-01

    The capacity for binding to trypsinized and non-trypsinized ORh+ red cells, of the IgG incomplete anti-Rh antibody and its F(ab')2 and Fc fragments has been investigated. An analysis has also been made of the capacity of non-specific human IgG, aggregated non-specific human IgG, human IgM (19S) and IgM (7S), and of fragments Fcgamma, Fcmu and Fc5mu to inhibit the agglutination of trypsinized ORh+ red cells by the IgG incomplete anti-Rh antibody. The results obtained indicate that these antibodies behave in a similar manner to that of nonprecipitating antibodies, and that the agglutination of trypsinized red cells seems to be a mixed reaction due to the interaction of an Fab fragment with its Rh antigenic determinant present in the surface of a red cell and the Fc of the same molecule with a receptor for Fc present in adjacent red cells. The trypsin treatment apparently results in the liberation of occult Fc receptors. It has also been demonstrated that in the agglutination of ORh+ red cells by IgG incomplete anti-Rh antibody in the presence of albumin, interaction must occur in some manner between the albumin and the Fc fragment since the F(ab')2 fragment does not give rise to agglutination under such conditions. PMID:415968

  10. OPG-Fc but Not Zoledronic Acid Discontinuation Reverses Osteonecrosis of the Jaws (ONJ) in Mice

    PubMed Central

    de Molon, Rafael Scaf; Shimamoto, Hiroaki; Bezouglaia, Olga; Pirih, Flavia Q; Dry, Sarah M; Kostenuik, Paul; Boyce, Rogely W; Dwyer, Denise; Aghaloo, Tara L; Tetradis, Sotirios

    2016-01-01

    Osteonecrosis of the jaws (ONJ) is a significant complication of antiresorptive medications, such as bisphosphonates and denosumab. Antiresorptive discontinuation to promote healing of ONJ lesions remains highly controversial and understudied. Here, we investigated whether antiresorptive discontinuation alters ONJ features in mice, employing the potent bisphosphonate zoledronic acid (ZA) or the receptor activator of NF-κB ligand (RANKL) inhibitor OPG-Fc, utilizing previously published ONJ animal models. Mice were treated with vehicle (veh), ZA, or OPG-Fc for 11 weeks to induce ONJ, and antiresorptives were discontinued for 6 or 10 weeks. Maxillae and mandibles were examined by µCT imaging and histologically. ONJ features in ZA and OPG-Fc groups included periosteal bone deposition, empty osteocyte lacunae, osteonecrotic areas, and bone exposure, each of which substantially resolved 10 weeks after discontinuing OPG-Fc but not ZA. Full recovery of tartrate-resistant acid phosphatase-positive (TRAP+) osteoclast numbers occurred after discontinuing OPG-Fc but not ZA. Our data provide the first experimental evidence demonstrating that discontinuation of a RANKL inhibitor, but not a bisphosphonate, reverses features of osteonecrosis in mice. It remains unclear whether antiresorptive discontinuation increases the risk of skeletal-related events in patients with bone metastases or fracture risk in osteoporosis patients, but these preclinical data may nonetheless help to inform discussions on the rationale for a “drug holiday” in managing the ONJ patient. PMID:25727550

  11. Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.

    PubMed

    Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua

    2017-05-01

    In this paper, we overcome the limited dynamic range of the conventional digital camera, and propose a method of realizing high dynamic range imaging (HDRI) from a novel programmable imaging system called a digital micromirror device (DMD) camera. The unique feature of the proposed new method is that the spatial and temporal information of incident light in our DMD camera can be flexibly modulated, and it enables the camera pixels always to have reasonable exposure intensity by DMD pixel-level modulation. More importantly, it allows different light intensity control algorithms used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm to effectively modulate the different light intensity to recover high dynamic range images. Via experiments, we demonstrate the effectiveness of our method and implement the HDRI on different objects.
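
    The central idea of per-pixel coded exposure is that, once each pixel's effective exposure (set by the DMD modulation) is known, relative scene radiance can be recovered by normalizing the measured value by that exposure, so bright pixels can be given short exposures and dark pixels long ones without saturating. The sketch below illustrates that normalization step with an assumed exposure map; it is not the authors' adaptive light intensity control algorithm.

      import numpy as np

      def recover_radiance(measured, exposure_map, saturation=255):
          """Estimate relative scene radiance from a per-pixel coded-exposure capture.

          measured:     sensor image (same shape as exposure_map)
          exposure_map: per-pixel effective exposure in [0, 1] set by the DMD pattern
          """
          measured = measured.astype(np.float64)
          valid = (measured < saturation) & (exposure_map > 0)
          radiance = np.zeros_like(measured)
          radiance[valid] = measured[valid] / exposure_map[valid]
          return radiance, valid  # 'valid' marks pixels usable for HDR fusion

      # Hypothetical example: half the pixels exposed at 100%, half at 10%.
      exposure = np.where(np.arange(16).reshape(4, 4) % 2 == 0, 1.0, 0.1)
      capture = np.clip(np.random.rand(4, 4) * 500 * exposure, 0, 255)
      hdr, mask = recover_radiance(capture, exposure)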

  12. The Limited Duty/Chief Warrant Officer Professional Guidebook

    DTIC Science & Technology

    1985-01-01

    …subsurface imaging. They plan and manage the operation of imaging commands and activities, combat camera groups and aerial reconnaissance imaging… picture and video systems used in aerial, surface and subsurface imaging. They supervise the operation of imaging commands and activities, combat camera…

  13. Test Image of Earth Rocks by Mars Camera Stereo

    NASA Image and Video Library

    2010-11-16

    This stereo view of terrestrial rocks combines two images taken by a testing twin of the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars Science Laboratory. 3D glasses are necessary to view this image.

  14. High-frame rate multiport CCD imager and camera

    NASA Astrophysics Data System (ADS)

    Levine, Peter A.; Patterson, David R.; Esposito, Benjamin J.; Tower, John R.; Lawler, William B.

    1993-01-01

    A high frame rate visible CCD camera capable of operation up to 200 frames per second is described. The camera produces a 256 X 256 pixel image by using one quadrant of a 512 X 512 16-port, back illuminated CCD imager. Four contiguous outputs are digitally reformatted into a correct, 256 X 256 image. This paper details the architecture and timing used for the CCD drive circuits, analog processing, and the digital reformatter.

  15. Impact of 36 h of total sleep deprivation on resting-state dynamic functional connectivity.

    PubMed

    Xu, Huaze; Shen, Hui; Wang, Lubin; Zhong, Qi; Lei, Yu; Yang, Liu; Zeng, Ling-Li; Zhou, Zongtan; Hu, Dewen; Yang, Zheng

    2018-06-01

    Resting-state functional magnetic resonance imaging (fMRI) studies using static functional connectivity (sFC) measures have shown that the brain function is severely disrupted after long-term sleep deprivation (SD). However, increasing evidence has suggested that resting-state functional connectivity (FC) is dynamic and exhibits spontaneous fluctuation on a smaller timescale. The process by which long-term SD can influence dynamic functional connectivity (dFC) remains unclear. In this study, 37 healthy subjects participated in the SD experiment, and they were scanned both during rested wakefulness (RW) and after 36 h of SD. A sliding-window based approach and a spectral clustering algorithm were used to evaluate the effects of SD on dFC based on the 26 qualified subjects' data. The outcomes showed that time-averaging FC across specific regions as well as temporal properties of the FC states, such as the dwell time and transition probability, was strongly influenced after SD in contrast to the RW condition. Based on the occurrences of FC states, we further identified some RW-dominant states characterized by anti-correlation between the default mode network (DMN) and other cortices, and some SD-dominant states marked by significantly decreased thalamocortical connectivity. In particular, the temporal features of these FC states were negatively correlated with the correlation coefficients between the DMN and dorsal attention network (dATN) and demonstrated high potential in classification of sleep state (with 10-fold cross-validation accuracy of 88.6% for dwell time and 88.1% for transition probability). Collectively, our results suggested that the temporal properties of the FC states greatly account for changes in the resting-state brain networks following SD, which provides new insights into the impact of SD on the resting-state functional organization in the human brain. Copyright © 2017. Published by Elsevier B.V.
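
    As a rough illustration of the sliding-window analysis described above, the sketch below computes windowed correlation matrices from region-averaged time series, clusters them into FC "states", and derives a simple dwell-time measure. The window length, step, number of states, and the use of k-means (rather than the spectral clustering used in the study) are all simplifying assumptions.

      import itertools
      import numpy as np
      from sklearn.cluster import KMeans

      def sliding_window_fc(ts, win_len=30, step=2):
          """ts: (timepoints, regions). Returns one vectorized FC matrix per window."""
          n_t, n_r = ts.shape
          iu = np.triu_indices(n_r, k=1)
          mats = []
          for start in range(0, n_t - win_len + 1, step):
              c = np.corrcoef(ts[start:start + win_len].T)
              mats.append(c[iu])  # keep the upper triangle only
          return np.array(mats)

      # Hypothetical data: 300 timepoints, 20 regions.
      ts = np.random.randn(300, 20)
      windows = sliding_window_fc(ts)

      # Cluster windows into discrete FC states and compute mean dwell time per run of states.
      labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(windows)
      runs = [len(list(g)) for _, g in itertools.groupby(labels)]
      print("mean dwell time (in windows):", np.mean(runs))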

  16. The advantages of using a Lucky Imaging camera for observations of microlensing events

    NASA Astrophysics Data System (ADS)

    Sajadian, Sedighe; Rahvar, Sohrab; Dominik, Martin; Hundertmark, Markus

    2016-05-01

    In this work, we study the advantages of using a Lucky Imaging camera for the observations of potential planetary microlensing events. Our aim is to reduce the blending effect and enhance exoplanet signals in binary lensing systems composed of an exoplanet and the corresponding parent star. We simulate planetary microlensing light curves based on present microlensing surveys and follow-up telescopes where one of them is equipped with a Lucky Imaging camera. This camera is used at the Danish 1.54-m follow-up telescope. Using a specific observational strategy, for an Earth-mass planet in the resonance regime, where the detection probability in crowded fields is smaller, Lucky Imaging observations improve the detection efficiency which reaches 2 per cent. Given the difficulty of detecting the signal of an Earth-mass planet in crowded-field imaging even in the resonance regime with conventional cameras, we show that Lucky Imaging can substantially improve the detection efficiency.

  17. Suppressing the image smear of the vibration modulation transfer function for remote-sensing optical cameras.

    PubMed

    Li, Jin; Liu, Zilong; Liu, Si

    2017-02-20

    In the on-board photographing process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid the degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM can transform platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy is sufficient to suppress image motion below 125 Hz, the vibration frequency of satellite platforms. The camera optical system has a higher MTF after vibration suppression with the M2052 material than before.
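
    The paper models how platform vibration reduces the total MTF. The authors' exact VMTF expression is not reproduced here; as a reference, the standard textbook limiting cases are often written as below, where a is the linear smear length, A is the amplitude of a sinusoidal vibration that is fast compared with the exposure time, and f is spatial frequency. These forms are assumptions for illustration, not the paper's model.

      \mathrm{MTF}_{\mathrm{linear}}(f) = \frac{\sin(\pi a f)}{\pi a f}, \qquad
      \mathrm{MTF}_{\mathrm{sinusoidal}}(f) = J_0(2\pi A f),
      \mathrm{MTF}_{\mathrm{total}}(f) = \mathrm{MTF}_{\mathrm{optics}}(f)\,
      \mathrm{MTF}_{\mathrm{detector}}(f)\,\mathrm{MTF}_{\mathrm{vibration}}(f).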

  18. Digital fundus image grading with the non-mydriatic Visucam(PRO NM) versus the FF450(plus) camera in diabetic retinopathy.

    PubMed

    Neubauer, Aljoscha S; Rothschuh, Antje; Ulbig, Michael W; Blum, Marcus

    2008-03-01

    Grading diabetic retinopathy in clinical trials is frequently based on 7-field stereo photography of the fundus in diagnostic mydriasis. In terms of image quality, the FF450(plus) camera (Carl Zeiss Meditec AG, Jena, Germany) defines a high-quality reference. The aim of the study was to investigate if the fully digital fundus camera Visucam(PRO NM) could serve as an alternative in clinical trials requiring 7-field stereo photography. A total of 128 eyes of diabetes patients were enrolled in the randomized, controlled, prospective trial. Seven-field stereo photography was performed with the Visucam(PRO NM) and the FF450(plus) camera, in random order, both in diagnostic mydriasis. The resulting 256 image sets from the two camera systems were graded for retinopathy levels and image quality (on a scale of 1-5); both were anonymized and blinded to the image source. On FF450(plus) stereoscopic imaging, 20% of the patients had no or mild diabetic retinopathy (ETDRS level < or = 20) and 29% had no macular oedema. No patient had to be excluded as a result of image quality. Retinopathy level did not influence the quality of grading or of images. Excellent overall correspondence was obtained between the two fundus cameras regarding retinopathy levels (kappa 0.87) and macular oedema (kappa 0.80). In diagnostic mydriasis the image quality of the Visucam was graded slightly as better than that of the FF450(plus) (2.20 versus 2.41; p < 0.001), especially for pupils < 7 mm in mydriasis. The non-mydriatic Visucam(PRO NM) offers good image quality and is suitable as a more cost-efficient and easy-to-operate camera for applications and clinical trials requiring 7-field stereo photography.

  19. A projector calibration method for monocular structured light system based on digital image correlation

    NASA Astrophysics Data System (ADS)

    Feng, Zhixin

    2018-02-01

    Projector calibration is crucial for a camera-projector three-dimensional (3-D) structured light measurement system, which has one camera and one projector. In this paper, a novel projector calibration method is proposed based on digital image correlation. In the method, the projector is viewed as an inverse camera, and a plane calibration board with feature points is used to calibrate the projector. During the calibration process, a random speckle pattern is projected onto the calibration board at different orientations to establish the correspondences between projector images and camera images. Thereby, a dataset for projector calibration is generated. The projector can then be calibrated using a well-established camera calibration algorithm. The experimental results confirm that the proposed method is accurate and reliable for projector calibration.
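
    Once the speckle-based digital image correlation has established, for each board pose, which projector pixel corresponds to each physical feature point on the calibration board, the projector can be calibrated exactly like a camera. The sketch below shows that final step with OpenCV's standard calibration call, using synthetic stand-in correspondences generated from an assumed pinhole model; building the real correspondences via digital image correlation is the substantive part of the method and is not reproduced here.

      import numpy as np
      import cv2

      # Synthetic stand-in data: a 9x6 grid of board points seen in several poses, with the
      # corresponding projector-pixel coordinates generated from an assumed pinhole model.
      grid = np.array([[x, y, 0.0] for y in range(6) for x in range(9)], np.float32) * 20.0
      K_true = np.array([[1500.0, 0.0, 640.0], [0.0, 1500.0, 400.0], [0.0, 0.0, 1.0]])

      object_points, projector_points = [], []
      rng = np.random.default_rng(0)
      for i in range(8):
          rvec = rng.uniform(-0.3, 0.3, 3)                 # small board rotations per pose
          tvec = np.array([-80.0 + 5 * i, -50.0, 600.0 + 20 * i])
          img_pts, _ = cv2.projectPoints(grid, rvec, tvec, K_true, np.zeros(5))
          object_points.append(grid)
          projector_points.append(img_pts.astype(np.float32))

      # Treat the projector as an inverse camera and run a standard camera calibration.
      rms, K_proj, dist_proj, rvecs, tvecs = cv2.calibrateCamera(
          object_points, projector_points, (1280, 800), None, None)
      print("projector re-projection RMS error:", rms)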

  20. Volunteers Help Decide Where to Point Mars Camera

    NASA Image and Video Library

    2015-07-22

    This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823

  1. Photometric Calibration and Image Stitching for a Large Field of View Multi-Camera System

    PubMed Central

    Lu, Yu; Wang, Keyi; Fan, Gongshu

    2016-01-01

    A new compact large field of view (FOV) multi-camera system is introduced. The camera is based on seven tiny complementary metal-oxide-semiconductor sensor modules covering over 160° × 160° FOV. Although image stitching has been studied extensively, sensor and lens differences have not been considered in previous multi-camera devices. In this study, we have calibrated the photometric characteristics of the multi-camera device. Lenses were not mounted on the sensor in the process of radiometric response calibration to eliminate the influence of the focusing effect of uniform light from an integrating sphere. The linearity range of the radiometric response, the non-linearity response characteristics, the sensitivity, and the dark current of the camera response function are presented. The R, G, and B channels have different responses for the same illuminance. Vignetting artifact patterns have been tested. The actual luminance of the object is retrieved from the sensor calibration results and is used to blend images so that panoramas reflect the scene luminance more faithfully. This compensates for the limitation of stitching approaches that achieve realistic-looking images only through smoothing. The dynamic range limitation of a single sensor can be resolved by using multiple cameras that cover a large field of view instead of a single image sensor with a wide-angle lens. The dynamic range is expanded 48-fold in this system. We can obtain seven images in one shot with this multi-camera system, at 13 frames per second.
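
    Vignetting patterns of the kind measured above are commonly compensated by flat-field correction: each captured frame is divided by a normalized reference image of a uniformly lit scene (after dark-frame subtraction) so that retrieved luminance is comparable across sensor modules before blending. The sketch below shows that standard correction with hypothetical arrays; the actual calibration pipeline in the paper also handles the nonlinear response range and per-channel sensitivities, which are not reproduced here.

      import numpy as np

      def flat_field_correct(frame, flat, dark):
          """Standard flat-field correction: (frame - dark) / normalized(flat - dark)."""
          frame = frame.astype(np.float64)
          gain = (flat - dark).astype(np.float64)
          gain /= gain.mean()               # normalize so the mean gain is 1
          gain = np.clip(gain, 1e-6, None)  # avoid division by zero in dead pixels
          return (frame - dark) / gain

      # Hypothetical data for one sensor module: dark frame, flat field with radial falloff, scene frame.
      dark = np.full((480, 640), 4.0)
      flat = 200.0 * np.fromfunction(
          lambda y, x: 1.0 - 0.4 * (((x - 320) / 320) ** 2 + ((y - 240) / 240) ** 2),
          (480, 640)) + dark
      frame = np.random.rand(480, 640) * 150 + dark
      corrected = flat_field_correct(frame, flat, dark)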

  2. Multispectral image dissector camera flight test

    NASA Technical Reports Server (NTRS)

    Johnson, B. L.

    1973-01-01

    It was demonstrated that the multispectral image dissector camera is able to provide composite pictures of the earth surface from high altitude overflights. An electronic deflection feature was used to inject the gyro error signal into the camera for correction of aircraft motion.

  3. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted the development and lead to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability of generating 3D information from single camera imagery depicts a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant, if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors in the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for the application fields of real-time robotics like autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  4. Evaluation of Suppression of Hydroprocessed Renewable Jet (HRJ) Fuel Fires with Aqueous Film Forming Foam (AFFF)

    DTIC Science & Technology

    2011-07-01

    Excerpt: cameras were installed around the test pan and an underwater GoPro® video camera recorded the fire from below the layer of fuel… A GoPro® video camera with a wide-angle lens recorded the tests… the camera and the GoPro® video camera were not used for fire suppression experiments… two ¼-in thick stainless steel test pans were…

  5. Supramolecular latching system based on ultrastable synthetic binding pairs as versatile tools for protein imaging.

    PubMed

    Kim, Kyung Lock; Sung, Gihyun; Sim, Jaehwan; Murray, James; Li, Meng; Lee, Ara; Shrinidhi, Annadka; Park, Kyeng Min; Kim, Kimoon

    2018-04-27

    Here we report ultrastable synthetic binding pairs between cucurbit[7]uril (CB[7]) and adamantyl- (AdA) or ferrocenyl-ammonium (FcA) as a supramolecular latching system for protein imaging, overcoming the limitations of protein-based binding pairs. Cyanine 3-conjugated CB[7] (Cy3-CB[7]) can visualize AdA- or FcA-labeled proteins to provide clear fluorescence images for accurate and precise analysis of proteins. Furthermore, controllability of the system is demonstrated by treating with a stronger competitor guest. At low temperature, this allows us to selectively detach Cy3-CB[7] from guest-labeled proteins on the cell surface, while leaving Cy3-CB[7] latched to the cytosolic proteins for spatially conditional visualization of target proteins. This work represents a non-protein-based bioimaging tool which has inherent advantages over the widely used protein-based techniques, thereby demonstrating the great potential of this synthetic system.

  6. Imagers for digital still photography

    NASA Astrophysics Data System (ADS)

    Bosiers, Jan; Dillen, Bart; Draijer, Cees; Manoury, Erik-Jan; Meessen, Louis; Peters, Inge

    2006-04-01

    This paper gives an overview of the requirements for, and current state-of-the-art of, CCD and CMOS imagers for use in digital still photography. Four market segments will be reviewed: mobile imaging, consumer "point-and-shoot cameras", consumer digital SLR cameras and high-end professional camera systems. The paper will also present some challenges and innovations with respect to packaging, testing, and system integration.

  7. Altered default network resting state functional connectivity in patients with a first episode of psychosis.

    PubMed

    Alonso-Solís, Anna; Corripio, Iluminada; de Castro-Manglano, Pilar; Duran-Sindreu, Santiago; Garcia-Garcia, Manuel; Proal, Erika; Nuñez-Marín, Fidel; Soutullo, Cesar; Alvarez, Enric; Gómez-Ansón, Beatriz; Kelly, Clare; Castellanos, F Xavier

    2012-08-01

    Default network (DN) abnormalities have been identified in patients with chronic schizophrenia using "resting state" functional magnetic resonance imaging (R-fMRI). Here, we examined the integrity of the DN in patients experiencing their first episode of psychosis (FEP) compared with sex- and age-matched healthy controls. We collected R-fMRI data from 19 FEP patients (mean age 24.9 ± 4.8 yrs, 14 males) and 19 healthy controls (26.1 ± 4.8 yrs, 14 males) at 3T. Following standard preprocessing, we examined the functional connectivity (FC) of two DN subsystems and the two DN hubs (P<0.0045, corrected). Patients with FEP exhibited abnormal FC that appeared largely restricted to the dorsomedial prefrontal cortex (dMPFC) DN subsystem. Relative to controls, FEP patients exhibited weaker positive FC between dMPFC and posterior cingulate cortex (PCC) and precuneus, extending laterally through the parietal lobe to the posterior angular gyrus. Patients with FEP exhibited weaker negative FC between the lateral temporal cortex and the intracalcarine cortex, bilaterally. The PCC and temporo-parietal junction also exhibited weaker negative FC with the right fusiform gyrus extending to the lingual gyrus and lateral occipital cortex, in FEP patients, compared to controls. By contrast, patients with FEP showed stronger negative FC between the temporal pole and medial motor cortex, anterior precuneus and posterior mid-cingulate cortex. Abnormalities in the dMPFC DN subsystem in patients with a FEP suggest that FC patterns are altered even in the early stages of psychosis. Copyright © 2012 Elsevier B.V. All rights reserved.

  8. Dynamic functional connectivity of the default mode network tracks daydreaming.

    PubMed

    Kucyi, Aaron; Davis, Karen D

    2014-10-15

    Humans spend much of their time engaged in stimulus-independent thoughts, colloquially known as "daydreaming" or "mind-wandering." A fundamental question concerns how awake, spontaneous brain activity represents the ongoing cognition of daydreaming versus unconscious processes characterized as "intrinsic." Since daydreaming involves brief cognitive events that spontaneously fluctuate, we tested the hypothesis that the dynamics of brain network functional connectivity (FC) are linked with daydreaming. We determined the general tendency to daydream in healthy adults based on a daydreaming frequency scale (DDF). Subjects then underwent both resting state functional magnetic resonance imaging (rs-fMRI) and fMRI during sensory stimulation with intermittent thought probes to determine the occurrences of mind-wandering events. Brain regions within the default mode network (DMN), purported to be involved in daydreaming, were assessed for 1) static FC across the entire fMRI scans, and 2) dynamic FC based on FC variability (FCV) across 30s progressively sliding windows of 2s increments within each scan. We found that during both resting and sensory stimulation states, individual differences in DDF were negatively correlated with static FC between the posterior cingulate cortex and a ventral DMN subsystem involved in future-oriented thought. Dynamic FC analysis revealed that DDF was positively correlated with FCV within the same DMN subsystem in the resting state but not during stimulation. However, dynamic but not static FC, in this subsystem, was positively correlated with an individual's degree of self-reported mind-wandering during sensory stimulation. These findings identify temporal aspects of spontaneous DMN activity that reflect conscious and unconscious processes. Copyright © 2014 Elsevier Inc. All rights reserved.

  9. Altered functional connectivity of amygdala underlying the neuromechanism of migraine pathogenesis.

    PubMed

    Chen, Zhiye; Chen, Xiaoyan; Liu, Mengqi; Dong, Zhao; Ma, Lin; Yu, Shengyuan

    2017-12-01

    The amygdala is a large grey matter complex in the limbic system, and it may contribute to the neurolimbic pain network in migraine. However, the detailed neuromechanism remains to be elucidated. The objective of this study is to investigate amygdala structural and functional changes in migraine and to elucidate the neurolimbic pain-modulating mechanism in migraine pathogenesis. Conventional MRI, 3D structural images and resting-state functional MRI were performed in 18 normal controls (NC), 18 patients with episodic migraine (EM), and 16 patients with chronic migraine (CM). The amygdala volume was measured using FreeSurfer software and the functional connectivity (FC) of the bilateral amygdala was computed over the whole brain. Analysis of covariance was performed on the individual FC maps among groups. Increased FC of the left amygdala was observed in EM compared with NC, and decreased FC of the right amygdala was revealed in CM compared with NC. Increased FC of the bilateral amygdala was observed in CM compared with EM. The correlation analysis showed a negative correlation between the sleep quality score (0, normal; 1, mild sleep disturbance; 2, moderate sleep disturbance; 3, serious sleep disturbance) and the increased FC strength of the left amygdala in EM compared with NC, and a positive correlation between the sleep quality score and the increased FC strength of the left amygdala in CM compared with EM; other clinical variables showed no significant correlation with the altered FC of the amygdala. The altered functional connectivity of the amygdala demonstrates that the neurolimbic pain network contributes to EM pathogenesis and CM chronicization.

  10. Topological Reorganization of the Default Mode Network in Severe Male Obstructive Sleep Apnea

    PubMed Central

    Chen, Liting; Fan, Xiaole; Li, Haijun; Ye, Chenglong; Yu, Honghui; Gong, Honghan; Zeng, Xianjun; Peng, Dechang; Yan, Liping

    2018-01-01

    Impaired spontaneous regional activity and altered topology of the brain network have been observed in obstructive sleep apnea (OSA). However, the mechanisms of disrupted functional connectivity (FC) and topological reorganization of the default mode network (DMN) in patients with OSA remain largely unknown. We explored whether the FC is altered within the DMN and examined topological changes occur in the DMN in patients with OSA using a graph theory analysis of resting-state functional magnetic resonance imaging data and evaluated the relationship between neuroimaging measures and clinical variables. Resting-state data were obtained from 46 male patients with untreated severe OSA and 46 male good sleepers (GSs). We specifically selected 20 DMN subregions to construct the DMN architecture. The disrupted FC and topological properties of the DMN in patients with OSA were characterized using graph theory. The OSA group showed significantly decreased FC of the anterior–posterior DMN and within the posterior DMN, and also showed increased FC within the DMN. The DMN exhibited small-world topology in both OSA and GS groups. Compared to GSs, patients with OSA showed a decreased clustering coefficient (Cp) and local efficiency, and decreased nodal centralities in the left posterior cingulate cortex and dorsal medial prefrontal cortex, and increased nodal centralities in the ventral medial prefrontal cortex and the right parahippocampal cortex. Finally, the abnormal DMN FC was significantly related to Cp, path length, global efficiency, and Montreal cognitive assessment score. OSA showed disrupted FC within the DMN, which may have contributed to the observed topological reorganization. These findings may provide further evidence of cognitive deficits in patients with OSA.
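
    The graph-theoretical measures reported above (clustering coefficient, characteristic path length, global and local efficiency, nodal centrality) can be computed from a thresholded FC matrix treated as a graph. The sketch below is a generic illustration using NetworkX with a random correlation matrix and an arbitrary threshold; it does not reproduce the specific 20-node DMN parcellation or thresholding strategy of the study.

      import numpy as np
      import networkx as nx

      # Hypothetical FC matrix for 20 DMN subregions, built from random time series.
      rng = np.random.default_rng(0)
      ts = rng.normal(size=(200, 20))
      fc = np.corrcoef(ts.T)

      # Build an undirected binary graph by thresholding the absolute correlations.
      adj = (np.abs(fc) > 0.2).astype(int)
      np.fill_diagonal(adj, 0)
      G = nx.from_numpy_array(adj)

      print("clustering coefficient:", nx.average_clustering(G))
      print("global efficiency:", nx.global_efficiency(G))
      print("local efficiency:", nx.local_efficiency(G))
      print("degree centrality:", nx.degree_centrality(G))
      if nx.is_connected(G):
          print("characteristic path length:", nx.average_shortest_path_length(G))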

  11. Resting-State Brain Activity in Adult Males Who Stutter

    PubMed Central

    Zhu, Chaozhe; Wang, Liang; Yan, Qian; Lin, Chunlan; Yu, Chunshui

    2012-01-01

    Although developmental stuttering has been extensively studied with structural and task-based functional magnetic resonance imaging (fMRI), few studies have focused on resting-state brain activity in this disorder. We investigated resting-state brain activity of stuttering subjects by analyzing the amplitude of low-frequency fluctuation (ALFF), region of interest (ROI)-based functional connectivity (FC) and independent component analysis (ICA)-based FC. Forty-four adult males with developmental stuttering and 46 age-matched fluent male controls were scanned using resting-state fMRI. ALFF, ROI-based FCs and ICA-based FCs were compared between male stuttering subjects and fluent controls in a voxel-wise manner. Compared with fluent controls, stuttering subjects showed increased ALFF in left brain areas related to speech motor and auditory functions and bilateral prefrontal cortices related to cognitive control. However, stuttering subjects showed decreased ALFF in the left posterior language reception area and bilateral non-speech motor areas. ROI-based FC analysis revealed decreased FC between the posterior language area involved in the perception and decoding of sensory information and anterior brain area involved in the initiation of speech motor function, as well as increased FC within anterior or posterior speech- and language-associated areas and between the prefrontal areas and default-mode network (DMN) in stuttering subjects. ICA showed that stuttering subjects had decreased FC in the DMN and increased FC in the sensorimotor network. Our findings support the concept that stuttering subjects have deficits in multiple functional systems (motor, language, auditory and DMN) and in the connections between them. PMID:22276215
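
    ALFF, as used above, is typically computed per voxel (or region) as the mean amplitude of the signal's spectrum within the low-frequency band, commonly 0.01-0.08 Hz. The short sketch below illustrates that computation for a single time series with an assumed repetition time; preprocessing steps such as detrending and nuisance regression are omitted.

      import numpy as np

      def alff(ts, tr, band=(0.01, 0.08)):
          """Amplitude of low-frequency fluctuation for one time series.

          ts: 1-D BOLD time series; tr: repetition time in seconds.
          """
          ts = ts - ts.mean()
          amp = np.abs(np.fft.rfft(ts))        # single-sided amplitude spectrum
          freqs = np.fft.rfftfreq(len(ts), d=tr)
          in_band = (freqs >= band[0]) & (freqs <= band[1])
          return amp[in_band].mean()

      # Hypothetical example: 240 volumes at TR = 2 s.
      series = np.random.randn(240)
      print("ALFF:", alff(series, tr=2.0))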

  12. Altered Default Network Resting State Functional Connectivity in Patients with a First Episode of Psychosis

    PubMed Central

    Alonso-Solís, Anna; Corripio, Iluminada; de Castro-Manglano, Pilar; Duran-Sindreu, Santiago; Garcia-Garcia, Manuel; Proal, Erika; Nuñez-Marín, Fidel; Soutullo, Cesar; Alvarez, Enric; Gómez-Ansón, Beatriz; Kelly, Clare; Castellanos, F. Xavier

    2012-01-01

    Background Default network (DN) abnormalities have been identified in patients with chronic schizophrenia using “resting state” functional magnetic resonance imaging (R-fMRI). Here, we examined the integrity of the DN in patients experiencing their first episode of psychosis (FEP) compared with sex- and age-matched healthy controls. Methods We collected R-fMRI data from 19 FEP patients (mean age 24.9±4.8 yrs, 14 males) and 19 healthy controls (26.1±4.8 yrs, 14 males) at 3 Tesla. Following standard preprocessing, we examined the functional connectivity (FC) of two DN subsystems and the two DN hubs (P<0.0045, corrected). Results Patients with FEP exhibited abnormal FC that appeared largely restricted to the dorsomedial prefrontal cortex (dMPFC) DN subsystem. Relative to controls, FEP patients exhibited weaker positive FC between dMPFC and posterior cingulate cortex (PCC) and precuneus, extending laterally through the parietal lobe to the posterior angular gyrus. Patients with FEP exhibited weaker negative FC between the lateral temporal cortex and the intracalcarine cortex, bilaterally. The PCC and temporo-parietal junction also exhibited weaker negative FC with the right fusiform gyrus extending to the lingual gyrus and lateral occipital cortex, in FEP patients, compared to controls. By contrast, patients with FEP showed stronger negative FC between the temporal pole and medial motor cortex, anterior precuneus and posterior mid-cingulate cortex. Conclusions Abnormalities in the dMPFC DN subsystem in patients with a FEP suggest that FC patterns are altered even in the early stages of psychosis. PMID:22633527

  13. Mapping the Apollo 17 landing site area based on Lunar Reconnaissance Orbiter Camera images and Apollo surface photography

    NASA Astrophysics Data System (ADS)

    Haase, I.; Oberst, J.; Scholten, F.; Wählisch, M.; Gläser, P.; Karachevtseva, I.; Robinson, M. S.

    2012-05-01

    Newly acquired high resolution Lunar Reconnaissance Orbiter Camera (LROC) images allow accurate determination of the coordinates of Apollo hardware, sampling stations, and photographic viewpoints. In particular, the positions from where the Apollo 17 astronauts recorded panoramic image series, at the so-called “traverse stations”, were precisely determined for traverse path reconstruction. We analyzed observations made in Apollo surface photography as well as orthorectified orbital images (0.5 m/pixel) and Digital Terrain Models (DTMs) (1.5 m/pixel and 100 m/pixel) derived from LROC Narrow Angle Camera (NAC) and Wide Angle Camera (WAC) images. Key features captured in the Apollo panoramic sequences were identified in LROC NAC orthoimages. Angular directions of these features were measured in the panoramic images and fitted to the NAC orthoimage by applying least squares techniques. As a result, we obtained the surface panoramic camera positions to within 50 cm. At the same time, the camera orientations, North azimuth angles and distances to nearby features of interest were also determined. Here, initial results are shown for traverse station 1 (northwest of Steno Crater) as well as the Apollo Lunar Surface Experiment Package (ALSEP) area.

  14. Person re-identification over camera networks using multi-task distance metric learning.

    PubMed

    Ma, Lianyang; Yang, Xiaokang; Tao, Dacheng

    2014-08-01

    Person reidentification in a camera network is a valuable yet challenging problem to solve. Existing methods learn a common Mahalanobis distance metric using the data collected from different cameras and then exploit the learned metric for identifying people in the images. However, the cameras in a camera network have different settings and the recorded images are seriously affected by variability in illumination conditions, camera viewing angles, and background clutter. Using a common metric to conduct person reidentification tasks on different camera pairs overlooks the differences in camera settings; however, it is very time-consuming to label people manually in images from surveillance videos. For example, in most existing person reidentification data sets, only one image of a person is collected from each of only two cameras; therefore, directly learning a unique Mahalanobis distance metric for each camera pair is susceptible to over-fitting due to insufficiently labeled data. In this paper, we reformulate person reidentification in a camera network as a multitask distance metric learning problem. The proposed method designs multiple Mahalanobis distance metrics to cope with the complicated conditions that exist in typical camera networks. These Mahalanobis distance metrics are different but related, and are learned jointly with a regularization term that alleviates over-fitting. Furthermore, by extending this formulation, we present a novel multitask maximally collapsing metric learning (MtMCML) model for person reidentification in a camera network. Experimental results demonstrate that formulating person reidentification over camera networks as a multitask distance metric learning problem can improve performance, and our proposed MtMCML works substantially better than other current state-of-the-art person reidentification methods.
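
    As a rough illustration of the distance side of such metric learning, the sketch below ranks gallery candidates by Mahalanobis distance under a per-camera-pair matrix M; the joint, regularized learning of the related metrics (the MtMCML part) is not shown.

    ```python
    # Sketch: ranking gallery features by Mahalanobis distance under a learned,
    # positive semi-definite matrix `m` (assumed given for one camera pair).
    import numpy as np

    def mahalanobis(x: np.ndarray, y: np.ndarray, m: np.ndarray) -> float:
        d = x - y
        return float(np.sqrt(d @ m @ d))

    def rank_gallery(probe: np.ndarray, gallery: list, m: np.ndarray) -> np.ndarray:
        """Return gallery indices sorted from nearest to farthest from the probe."""
        dists = [mahalanobis(probe, g, m) for g in gallery]
        return np.argsort(dists)
    ```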

  15. Photography in Dermatologic Surgery: Selection of an Appropriate Camera Type for a Particular Clinical Application.

    PubMed

    Chen, Brian R; Poon, Emily; Alam, Murad

    2017-08-01

    Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.

  16. Depth measurements through controlled aberrations of projected patterns.

    PubMed

    Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim

    2012-03-12

    Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of the projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing tailored to the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present our information regarding construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
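
    A hedged sketch of the post-processing idea: with an astigmatic projector, horizontal and vertical pattern features defocus differently with depth, so a ratio of wavelet detail energies in a local patch can be calibrated against distance. The Haar transform and the specific ratio below are illustrative assumptions, not the authors' exact filter bank.

    ```python
    # Sketch: differential-focus cue from a single-level 2D wavelet transform.
    import numpy as np
    import pywt

    def focus_ratio(patch: np.ndarray) -> float:
        """Ratio of horizontal to vertical detail energy in an image patch."""
        _, (horiz, vert, _) = pywt.dwt2(patch.astype(float), "haar")
        return float(np.sum(horiz ** 2) / (np.sum(vert ** 2) + 1e-12))
    ```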

  17. An Example-Based Super-Resolution Algorithm for Selfie Images

    PubMed Central

    William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep

    2016-01-01

    A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are missed. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as an exemplar to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior that learns the correspondence between the LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing counterfeit fine details. PMID:27064500

  18. Accuracy Analysis for Automatic Orientation of a Tumbling Oblique Viewing Sensor System

    NASA Astrophysics Data System (ADS)

    Stebner, K.; Wieden, A.

    2014-03-01

    Dynamic camera systems with moving parts are difficult to handle in a photogrammetric workflow because it is not ensured that the dynamics are constant over the recording period. Even minimal changes of the camera's orientation greatly influence the projection of oblique images. In this publication these effects - originating from the kinematic chain of a dynamic camera system - are analysed and validated. A member of the Modular Airborne Camera System family - MACS-TumbleCam - consisting of a vertical-viewing and a tumbling oblique camera was used for this investigation. The focus is on dynamic geometric modeling and the stability of the kinematic chain. To validate the experimental findings, the determined parameters are applied to the exterior orientation of an actual aerial image acquisition campaign using MACS-TumbleCam. The quality of the parameters is sufficient for direct georeferencing of oblique image data from the orientation information of a synchronously captured vertical image dataset. Relative accuracy for the oblique data set ranges from 1.5 pixels when using all images of the image block to 0.3 pixels when using only adjacent images.

  19. Using the Standard Deviation of a Region of Interest in an Image to Estimate Camera to Emitter Distance

    PubMed Central

    Cano-García, Angel E.; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information. PMID:22778608

  20. Using the standard deviation of a region of interest in an image to estimate camera to emitter distance.

    PubMed

    Cano-García, Angel E; Lazaro, José Luis; Infante, Arturo; Fernández, Pedro; Pompa-Chacón, Yamilet; Espinoza, Felipe

    2012-01-01

    In this study, a camera to infrared diode (IRED) distance estimation problem was analyzed. The main objective was to define an alternative way to measure depth using only the information extracted from the pixel grey levels of the IRED image to estimate the distance between the camera and the IRED. In this paper, the standard deviation of the pixel grey level in the region of interest containing the IRED image is proposed as an empirical parameter to define a model for estimating camera to emitter distance. This model includes the camera exposure time, IRED radiant intensity and the distance between the camera and the IRED. An expression for the standard deviation model related to these magnitudes was also derived and calibrated using different images taken under different conditions. From this analysis, we determined the optimum parameters to ensure the best accuracy provided by this alternative. Once the model calibration had been carried out, a differential method to estimate the distance between the camera and the IRED was defined and applied, considering that the camera was aligned with the IRED. The results indicate that this method represents a useful alternative for determining the depth information.

  1. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
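
    For context, a standard direct linear transform (DLT) estimate of a homography from four or more point correspondences is sketched below; it is only a generic building block for screen-camera homographies, not the paper's noniterative multi-projector algorithm.

    ```python
    # Sketch: DLT homography estimation from point correspondences.
    import numpy as np

    def homography_dlt(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
        """src, dst: (N, 2) arrays of corresponding points, N >= 4."""
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
        h = vt[-1].reshape(3, 3)          # null-space vector gives the homography
        return h / h[2, 2]
    ```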

  2. Gate simulation of Compton Ar-Xe gamma-camera for radionuclide imaging in nuclear medicine

    NASA Astrophysics Data System (ADS)

    Dubov, L. Yu; Belyaev, V. N.; Berdnikova, A. K.; Bolozdynia, A. I.; Akmalova, Yu A.; Shtotsky, Yu V.

    2017-01-01

    Computer simulations of a cylindrical Compton Ar-Xe gamma camera are described in the current report. The detection efficiency of a cylindrical Ar-Xe Compton camera with an internal diameter of 40 cm is estimated as 1-3%, which is 10-100 times higher than that of a collimated Anger camera. It is shown that a cylindrical Compton camera can image a Tc-99m radiotracer distribution with a uniform spatial resolution of 20 mm throughout the whole field of view.

  3. A method and results of color calibration for the Chang'e-3 terrain camera and panoramic camera

    NASA Astrophysics Data System (ADS)

    Ren, Xin; Li, Chun-Lai; Liu, Jian-Jun; Wang, Fen-Fei; Yang, Jian-Feng; Liu, En-Hai; Xue, Bin; Zhao, Ru-Jin

    2014-12-01

    The terrain camera (TCAM) and panoramic camera (PCAM) are two of the major scientific payloads installed on the lander and rover of the Chang'e 3 mission, respectively. Both use a CMOS sensor covered by a Bayer color filter array to capture color images of the Moon's surface. The RGB values of the original images are specific to these two kinds of cameras, so there is an obvious color difference compared with human visual perception. This paper follows standards published by the International Commission on Illumination to establish a color correction model, designs the ground calibration experiment and obtains the color correction coefficient. The image quality has been significantly improved and there is no obvious color difference in the corrected images. Ground experimental results show that: (1) Compared with the uncorrected images, the average color difference of TCAM is 4.30, which has been reduced by 62.1%. (2) The average color differences of the left and right cameras in PCAM are 4.14 and 4.16, which have been reduced by 68.3% and 67.6%, respectively.
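
    A minimal sketch of one common form of such a correction: fit a 3 x 3 matrix mapping camera RGB to reference RGB over calibration patches by least squares, then apply it to images. The linear model here is an assumption and is simpler than the CIE-based procedure the paper describes.

    ```python
    # Sketch: least-squares color correction matrix from calibration patches.
    import numpy as np

    def fit_color_matrix(measured: np.ndarray, reference: np.ndarray) -> np.ndarray:
        """measured, reference: (N, 3) RGB values of N calibration patches."""
        x, _, _, _ = np.linalg.lstsq(measured, reference, rcond=None)
        return x                              # corrected_rgb = measured_rgb @ x

    def correct(rgb: np.ndarray, x: np.ndarray) -> np.ndarray:
        """Apply the fitted matrix to an (N, 3) or (H, W, 3) RGB array."""
        return rgb @ x
    ```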

  4. Semi-autonomous wheelchair system using stereoscopic cameras.

    PubMed

    Nguyen, Jordan S; Nguyen, Thanh H; Nguyen, Hung T

    2009-01-01

    This paper is concerned with the design and development of a semi-autonomous wheelchair system using stereoscopic cameras to assist hands-free control technologies for severely disabled people. The stereoscopic cameras capture an image from both the left and right cameras, which are then processed with a Sum of Absolute Differences (SAD) correlation algorithm to establish correspondence between image features in the different views of the scene. This is used to produce a stereo disparity image containing information about the depth of objects away from the camera in the image. A geometric projection algorithm is then used to generate a 3-Dimensional (3D) point map, placing pixels of the disparity image in 3D space. This is then converted to a 2-Dimensional (2D) depth map allowing objects in the scene to be viewed and a safe travel path for the wheelchair to be planned and followed based on the user's commands. This assistive technology utilising stereoscopic cameras has the purpose of automated obstacle detection, path planning and following, and collision avoidance during navigation. Experimental results obtained in an indoor environment displayed the effectiveness of this assistive technology.
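
    A minimal sketch of the SAD correspondence step follows, assuming rectified grayscale image pairs; the window size and disparity range are illustrative, not the wheelchair system's settings.

    ```python
    # Sketch: Sum of Absolute Differences (SAD) block matching for one pixel.
    import numpy as np

    def sad_disparity(left, right, y, x, window=5, max_disp=32):
        """Disparity at interior pixel (y, x) of the left image by minimizing SAD."""
        half = window // 2
        patch_l = left[y - half:y + half + 1, x - half:x + half + 1].astype(float)
        best_d, best_sad = 0, np.inf
        for d in range(max_disp):
            if x - d - half < 0:
                break                            # candidate window leaves the image
            patch_r = right[y - half:y + half + 1, x - d - half:x - d + half + 1].astype(float)
            sad = np.abs(patch_l - patch_r).sum()
            if sad < best_sad:
                best_d, best_sad = d, sad
        return best_d
    ```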

  5. Accuracy evaluation of optical distortion calibration by digital image correlation

    NASA Astrophysics Data System (ADS)

    Gao, Zeren; Zhang, Qingchuan; Su, Yong; Wu, Shangquan

    2017-11-01

    Due to its convenience of operation, the camera calibration algorithm, which is based on the plane template, is widely used in image measurement, computer vision and other fields. How to select a suitable distortion model is always a problem to be solved. Therefore, there is an urgent need for an experimental evaluation of the accuracy of camera distortion calibrations. This paper presents an experimental method for evaluating camera distortion calibration accuracy, which is easy to implement, has high precision, and is suitable for a variety of commonly used lens. First, we use the digital image correlation method to calculate the in-plane rigid body displacement field of an image displayed on a liquid crystal display before and after translation, as captured with a camera. Next, we use a calibration board to calibrate the camera to obtain calibration parameters which are used to correct calculation points of the image before and after deformation. The displacement field before and after correction is compared to analyze the distortion calibration results. Experiments were carried out to evaluate the performance of two commonly used industrial camera lenses for four commonly used distortion models.

  6. The Mast Cameras and Mars Descent Imager (MARDI) for the 2009 Mars Science Laboratory

    NASA Technical Reports Server (NTRS)

    Malin, M. C.; Bell, J. F.; Cameron, J.; Dietrich, W. E.; Edgett, K. S.; Hallet, B.; Herkenhoff, K. E.; Lemmon, M. T.; Parker, T. J.; Sullivan, R. J.

    2005-01-01

    Based on operational experience gained during the Mars Exploration Rover (MER) mission, we proposed and were selected to conduct two related imaging experiments: (1) an investigation of the geology and short-term atmospheric vertical wind profile local to the Mars Science Laboratory (MSL) landing site using descent imaging, and (2) a broadly-based scientific investigation of the MSL locale employing visible and very near infra-red imaging techniques from a pair of mast-mounted, high resolution cameras. Both instruments share a common electronics design, a design also employed for the MSL Mars Hand Lens Imager (MAHLI) [1]. The primary differences between the cameras are in the nature and number of mechanisms and specific optics tailored to each camera's requirements.

  7. Advances in Gamma-Ray Imaging with Intensified Quantum-Imaging Detectors

    NASA Astrophysics Data System (ADS)

    Han, Ling

    Nuclear medicine, an important branch of modern medical imaging, is an essential tool for both diagnosis and treatment of disease. As the fundamental element of nuclear medicine imaging, the gamma camera is able to detect gamma-ray photons emitted by radiotracers injected into a patient and form an image of the radiotracer distribution, reflecting biological functions of organs or tissues. Recently, an intensified CCD/CMOS-based quantum detector, called iQID, was developed in the Center for Gamma-Ray Imaging. Originally designed as a novel type of gamma camera, iQID demonstrated ultra-high spatial resolution (< 100 micron) and many other advantages over traditional gamma cameras. This work focuses on advancing this conceptually-proven gamma-ray imaging technology to make it ready for both preclinical and clinical applications. To start with, a Monte Carlo simulation of the key light-intensification device, i.e. the image intensifier, was developed, which revealed the dominating factor(s) that limit energy resolution performance of the iQID cameras. For preclinical imaging applications, a previously-developed iQID-based single-photon-emission computed-tomography (SPECT) system, called FastSPECT III, was fully advanced in terms of data acquisition software, system sensitivity and effective FOV by developing and adopting a new photon-counting algorithm, thicker columnar scintillation detectors, and system calibration method. Originally designed for mouse brain imaging, the system is now able to provide full-body mouse imaging with sub-350-micron spatial resolution. To further advance the iQID technology to include clinical imaging applications, a novel large-area iQID gamma camera, called LA-iQID, was developed from concept to prototype. Sub-mm system resolution in an effective FOV of 188 mm x 188 mm has been achieved. The camera architecture, system components, design and integration, data acquisition, camera calibration, and performance evaluation are presented in this work. Mounted on a castered counter-weighted clinical cart, the camera also features portable and mobile capabilities for easy handling and on-site applications at remote locations where hospital facilities are not available.

  8. Flat-panel detector, CCD cameras, and electron-beam-tube-based video for use in portal imaging

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Tang, Chuankun; Cheng, Chee-Way; Dallas, William J.

    1998-07-01

    This paper provides a comparison of some imaging parameters of four portal imaging systems at 6 MV: a flat-panel detector, two CCD cameras and an electron-beam-tube-based video camera. Measurements were made of signal and noise and consequently of signal-to-noise per pixel as a function of the exposure. All systems have a linear response with respect to exposure, and with the exception of the electron-beam-tube-based video camera, the noise is proportional to the square root of the exposure, indicating photon-noise limitation. The flat-panel detector has a signal-to-noise ratio which is higher than that observed with both CCD cameras or with the electron-beam-tube-based video camera. This is expected because most portal imaging systems using optical coupling with a lens exhibit severe quantum sinks. The measurements of signal and noise were complemented by images of a Las Vegas-type aluminum contrast-detail phantom located at the isocenter. These images were generated at an exposure of 1 MU. The flat-panel detector permits detection of aluminum holes of 1.2 mm diameter and 1.6 mm depth, indicating the best signal-to-noise ratio. The CCD cameras rank second and third in signal-to-noise ratio, permitting detection of aluminum holes of 1.2 mm diameter and 2.2 mm depth (CCD_1) and of 1.2 mm diameter and 3.2 mm depth (CCD_2) respectively, while the electron-beam-tube-based video camera permits detection of only a hole of 1.2 mm diameter and 4.6 mm depth. Rank-order filtering was applied to the raw images from the CCD-based systems in order to remove the direct hits. These are camera responses to scattered x-ray photons which interact directly with the CCD of the CCD camera and generate 'salt and pepper' type noise, which interferes severely with attempts to determine accurate estimates of the image noise. The paper also presents data on the metal phosphor's photon gain (the number of light photons per interacting x-ray photon).

  9. Volumetric particle image velocimetry with a single plenoptic camera

    NASA Astrophysics Data System (ADS)

    Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.

    2015-11-01

    A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
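
    The MART algorithm mentioned above updates voxel intensities multiplicatively so that their projections match the recorded pixel values; a generic sketch is given below, with a hypothetical dense weight matrix `w` (pixels x voxels) standing in for the plenoptic camera model.

    ```python
    # Sketch: generic MART iteration for tomographic intensity reconstruction.
    import numpy as np

    def mart(w: np.ndarray, pixels: np.ndarray, n_iter: int = 5, mu: float = 1.0) -> np.ndarray:
        voxels = np.ones(w.shape[1])                 # uniform initial guess
        for _ in range(n_iter):
            for j, p in enumerate(pixels):
                proj = w[j] @ voxels                 # forward-projected pixel value
                if proj > 0 and p > 0:
                    # multiplicative correction, weighted by each voxel's contribution
                    voxels *= (p / proj) ** (mu * w[j])
        return voxels
    ```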

  10. Performance evaluation and clinical applications of 3D plenoptic cameras

    NASA Astrophysics Data System (ADS)

    Decker, Ryan; Shademan, Azad; Opfermann, Justin; Leonard, Simon; Kim, Peter C. W.; Krieger, Axel

    2015-06-01

    The observation and 3D quantification of arbitrary scenes using optical imaging systems is challenging, but increasingly necessary in many fields. This paper provides a technical basis for the application of plenoptic cameras in medical and medical robotics applications, and rigorously evaluates camera integration and performance in the clinical setting. It discusses plenoptic camera calibration and setup, and assesses plenoptic imaging in a clinically relevant context and in the context of other quantitative imaging technologies. We report the methods used for camera calibration, and precision and accuracy results in an ideal and simulated surgical setting. Afterwards, we report performance during a surgical task. Test results showed the average precision of the plenoptic camera to be 0.90 mm, increasing to 1.37 mm for tissue across the calibrated FOV. The ideal accuracy was 1.14 mm. The camera showed submillimeter error during a simulated surgical task.

  11. Micro-Imagers for Spaceborne Cell-Growth Experiments

    NASA Technical Reports Server (NTRS)

    Behar, Alberto; Matthews, Janet; SaintAnge, Beverly; Tanabe, Helen

    2006-01-01

    A document discusses selected aspects of a continuing effort to develop five micro-imagers for both still and video monitoring of cell cultures to be grown aboard the International Space Station. The approach taken in this effort is to modify and augment pre-existing electronic micro-cameras. Each such camera includes an image-detector integrated-circuit chip, signal-conditioning and image-compression circuitry, and connections for receiving power from, and exchanging data with, external electronic equipment. Four white and four multicolor light-emitting diodes are to be added to each camera for illuminating the specimens to be monitored. The lens used in the original version of each camera is to be replaced with a shorter-focal-length, more-compact singlet lens to make it possible to fit the camera into the limited space allocated to it. Initially, the lenses in the five cameras are to have different focal lengths: the focal lengths are to be 1, 1.5, 2, 2.5, and 3 cm. Once one of the focal lengths is determined to be the most nearly optimum, the remaining four cameras are to be fitted with lenses of that focal length.

  12. High-performance camera module for fast quality inspection in industrial printing applications

    NASA Astrophysics Data System (ADS)

    Fürtler, Johannes; Bodenstorfer, Ernst; Mayer, Konrad J.; Brodersen, Jörg; Heiss, Dorothea; Penz, Harald; Eckel, Christian; Gravogl, Klaus; Nachtnebel, Herbert

    2007-02-01

    Today, printing products which must meet highest quality standards, e.g., banknotes, stamps, or vouchers, are automatically checked by optical inspection systems. Typically, the examination of fine details of the print or security features demands images taken from various perspectives, with different spectral sensitivity (visible, infrared, ultraviolet), and with high resolution. Consequently, the inspection system is equipped with several cameras and has to cope with an enormous data rate to be processed in real-time. Hence, it is desirable to move image processing tasks into the camera to reduce the amount of data which has to be transferred to the (central) image processing system. The idea is to transfer relevant information only, i.e., features of the image instead of the raw image data from the sensor. These features are then further processed. In this paper a color line-scan camera for line rates up to 100 kHz is presented. The camera is based on a commercial CMOS (complementary metal oxide semiconductor) area image sensor and a field programmable gate array (FPGA). It implements extraction of image features which are well suited to detect print flaws like blotches of ink, color smears, splashes, spots and scratches. The camera design and several image processing methods implemented on the FPGA are described, including flat field correction, compensation of geometric distortions, color transformation, as well as decimation and neighborhood operations.
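
    Flat field correction, one of the preprocessing steps listed above, is commonly implemented as dark-frame subtraction followed by normalization with a flat-field frame; a sketch of that standard form follows. The frame names are illustrative, and the paper's FPGA implementation may differ in detail.

    ```python
    # Sketch: standard flat-field correction of a raw sensor frame.
    import numpy as np

    def flat_field_correct(raw: np.ndarray, dark: np.ndarray, flat: np.ndarray) -> np.ndarray:
        gain = flat.astype(float) - dark             # per-pixel response to uniform light
        corrected = (raw.astype(float) - dark) * np.mean(gain) / np.maximum(gain, 1e-6)
        return np.clip(corrected, 0, None)           # avoid negative output values
    ```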

  13. Frontoparietal Structural Connectivity in Childhood Predicts Development of Functional Connectivity and Reasoning Ability: A Large-Scale Longitudinal Investigation.

    PubMed

    Wendelken, Carter; Ferrer, Emilio; Ghetti, Simona; Bailey, Stephen K; Cutting, Laurie; Bunge, Silvia A

    2017-08-30

    Prior research points to a positive concurrent relationship between reasoning ability and both frontoparietal structural connectivity (SC) as measured by diffusion tensor imaging (Tamnes et al., 2010) and frontoparietal functional connectivity (FC) as measured by fMRI (Cocchi et al., 2014). Further, recent research demonstrates a link between reasoning ability and FC of two brain regions in particular: rostrolateral prefrontal cortex (RLPFC) and the inferior parietal lobe (IPL) (Wendelken et al., 2016). Here, we sought to investigate the concurrent and dynamic, lead-lag relationships among frontoparietal SC, FC, and reasoning ability in humans. To this end, we combined three longitudinal developmental datasets with behavioral and neuroimaging data from 523 male and female participants between 6 and 22 years of age. Cross-sectionally, reasoning ability was most strongly related to FC between RLPFC and IPL in adolescents and adults, but to frontoparietal SC in children. Longitudinal analysis revealed that RLPFC-IPL SC, but not FC, was a positive predictor of future changes in reasoning ability. Moreover, we found that RLPFC-IPL SC at one time point positively predicted future changes in RLPFC-IPL FC, whereas, in contrast, FC did not predict future changes in SC. Our results demonstrate the importance of strong white matter connectivity between RLPFC and IPL during middle childhood for the subsequent development of both robust FC and good reasoning ability. SIGNIFICANCE STATEMENT The human capacity for reasoning develops substantially during childhood and has a profound impact on achievement in school and in cognitively challenging careers. Reasoning ability depends on communication between lateral prefrontal and parietal cortices. Therefore, to understand how this capacity develops, we examined the dynamic relationships over time among white matter tracts connecting frontoparietal cortices (i.e., structural connectivity, SC), coordinated frontoparietal activation (functional connectivity, FC), and reasoning ability in a large longitudinal sample of subjects 6-22 years of age. We found that greater frontoparietal SC in childhood predicts future increases in both FC and reasoning ability, demonstrating the importance of white matter development during childhood for subsequent brain and cognitive functioning. Copyright © 2017 the authors 0270-6474/17/378549-10$15.00/0.

  14. Frontoparietal Structural Connectivity in Childhood Predicts Development of Functional Connectivity and Reasoning Ability: A Large-Scale Longitudinal Investigation

    PubMed Central

    Ferrer, Emilio; Cutting, Laurie

    2017-01-01

    Prior research points to a positive concurrent relationship between reasoning ability and both frontoparietal structural connectivity (SC) as measured by diffusion tensor imaging (Tamnes et al., 2010) and frontoparietal functional connectivity (FC) as measured by fMRI (Cocchi et al., 2014). Further, recent research demonstrates a link between reasoning ability and FC of two brain regions in particular: rostrolateral prefrontal cortex (RLPFC) and the inferior parietal lobe (IPL) (Wendelken et al., 2016). Here, we sought to investigate the concurrent and dynamic, lead–lag relationships among frontoparietal SC, FC, and reasoning ability in humans. To this end, we combined three longitudinal developmental datasets with behavioral and neuroimaging data from 523 male and female participants between 6 and 22 years of age. Cross-sectionally, reasoning ability was most strongly related to FC between RLPFC and IPL in adolescents and adults, but to frontoparietal SC in children. Longitudinal analysis revealed that RLPFC–IPL SC, but not FC, was a positive predictor of future changes in reasoning ability. Moreover, we found that RLPFC–IPL SC at one time point positively predicted future changes in RLPFC–IPL FC, whereas, in contrast, FC did not predict future changes in SC. Our results demonstrate the importance of strong white matter connectivity between RLPFC and IPL during middle childhood for the subsequent development of both robust FC and good reasoning ability. SIGNIFICANCE STATEMENT The human capacity for reasoning develops substantially during childhood and has a profound impact on achievement in school and in cognitively challenging careers. Reasoning ability depends on communication between lateral prefrontal and parietal cortices. Therefore, to understand how this capacity develops, we examined the dynamic relationships over time among white matter tracts connecting frontoparietal cortices (i.e., structural connectivity, SC), coordinated frontoparietal activation (functional connectivity, FC), and reasoning ability in a large longitudinal sample of subjects 6–22 years of age. We found that greater frontoparietal SC in childhood predicts future increases in both FC and reasoning ability, demonstrating the importance of white matter development during childhood for subsequent brain and cognitive functioning. PMID:28821657

  15. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

    3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, some experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. In particular, two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which deals with the evaluation of the camera warm-up time period, the distance measurement error and the influence of the camera orientation with respect to the observed object on the distance measurements. The second concerns the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field made of high contrast targets.

  16. Real-time vehicle matching for multi-camera tunnel surveillance

    NASA Astrophysics Data System (ADS)

    Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried

    2011-03-01

    Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across the cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras which observe dozens of vehicles each, for a real-time performance computational efficiency is essential. In this paper, we propose a low complexity, yet highly accurate method for vehicle matching using vehicle signatures composed of Radon transform like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm, by the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
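
    In the spirit of the projection-profile signatures described above, the sketch below uses row and column sums of a vehicle image as a compact descriptor and a normalized correlation as the matching score; the paper's exact Radon-like projections and matching rule may differ, and images are assumed resized to a common size beforehand.

    ```python
    # Sketch: projection-profile signature and normalized-correlation matching.
    import numpy as np

    def signature(img: np.ndarray) -> np.ndarray:
        """Concatenated column and row sums of a grayscale vehicle image."""
        img = img.astype(float)
        return np.concatenate([img.sum(axis=0), img.sum(axis=1)])

    def match_score(sig_a: np.ndarray, sig_b: np.ndarray) -> float:
        """Normalized correlation between two signatures of equal length."""
        a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
        b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
        return float(np.dot(a, b) / a.size)
    ```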

  17. The use of consumer depth cameras for 3D surface imaging of people with obesity: A feasibility study.

    PubMed

    Wheat, J S; Clarkson, S; Flint, S W; Simpson, C; Broom, D R

    2018-05-21

    Three-dimensional (3D) surface imaging is a viable alternative to traditional body morphology measures, but the feasibility of using this technique with people with obesity has not been fully established. Therefore, the aim of this study was to investigate the validity, repeatability and acceptability of a consumer depth camera 3D surface imaging system in imaging people with obesity. The concurrent validity of the depth camera based system was investigated by comparing measures of mid-trunk volume to a gold standard. The repeatability and acceptability of the depth camera system was assessed in people with obesity at a clinic. There was evidence of a fixed systematic difference between the depth camera system and the gold standard but excellent correlation between volume estimates (r² = 0.997), with little evidence of proportional bias. The depth camera system was highly repeatable - low typical error (0.192 L), high intraclass correlation coefficient (>0.999) and low technical error of measurement (0.64%). Depth camera based 3D surface imaging was also acceptable to people with obesity. It is feasible (valid, repeatable and acceptable) to use a low cost, flexible 3D surface imaging system to monitor the body size and shape of people with obesity in a clinical setting. Copyright © 2018 Asia Oceania Association for the Study of Obesity. Published by Elsevier Ltd. All rights reserved.

  18. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test.

    PubMed

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-11-17

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces.

  19. iPhone 4s and iPhone 5s Imaging of the Eye.

    PubMed

    Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L

    2017-01-01

    To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using the RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.

  20. A time-resolved image sensor for tubeless streak cameras

    NASA Astrophysics Data System (ADS)

    Yasutomi, Keita; Han, SangMan; Seo, Min-Woong; Takasawa, Taishi; Kagawa, Keiichiro; Kawahito, Shoji

    2014-03-01

    This paper presents a time-resolved CMOS image sensor with draining-only modulation (DOM) pixels for tube-less streak cameras. Although the conventional streak camera has high time resolution, the device requires a high voltage and a bulky system due to its vacuum-tube structure. The proposed time-resolved imager with simple optics realizes a streak camera without any vacuum tube. The proposed image sensor has DOM pixels, a delay-based pulse generator, and readout circuitry. The delay-based pulse generator, in combination with an in-pixel logic, allows us to create and provide a short gating clock to the pixel array. A prototype time-resolved CMOS image sensor with the proposed pixel is designed and implemented using 0.11 µm CMOS image sensor technology. The image array has 30 (vertical) x 128 (memory length) pixels with a pixel pitch of 22.4 µm.

  1. Plenoptic Imager for Automated Surface Navigation

    NASA Technical Reports Server (NTRS)

    Zollar, Byron; Milder, Andrew; Mayo, Michael

    2010-01-01

    An electro-optical imaging device is capable of autonomously determining the range to objects in a scene without the use of active emitters or multiple apertures. The novel, automated, low-power imaging system is based on a plenoptic camera design that was constructed as a breadboard system. Nanohmics proved the feasibility of the concept by designing an optical system for a prototype plenoptic camera, developing simulated plenoptic images and range-calculation algorithms, constructing a breadboard prototype plenoptic camera, and processing images (including range calculations) from the prototype system. The breadboard demonstration included an optical subsystem comprised of a main aperture lens, a mechanical structure that holds an array of microlenses at the focal distance from the main lens, and a structure that mates a CMOS imaging sensor at the correct distance from the microlenses. The demonstrator also featured embedded electronics for camera readout, and a post-processor executing image-processing algorithms to provide ranging information.

  2. Multiplane and Spectrally-Resolved Single Molecule Localization Microscopy with Industrial Grade CMOS cameras.

    PubMed

    Babcock, Hazen P

    2018-01-29

    This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.

  3. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of ballistic missile defense systems, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors observing a ballistic target above the planet's atmosphere. It then simulates the infrared imaging of the ballistic target above the atmosphere from two aspects, the space-based sensor camera parameters and the target characteristics, and analyzes the imaging effects of camera line-of-sight jitter, camera system noise and different wavebands on the target.

  4. Correction And Use Of Jitter In Television Images

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B.; Fender, Derek H.; Fender, Antony R. H.

    1989-01-01

    The proposed system stabilizes a jittering television image and/or measures the jitter to extract information on the motions of objects in the image. In an alternative version, the system controls lateral motion of the camera to generate stereoscopic views for measuring distances to objects. In another version, the motion of the camera is controlled to keep an object in view. The heart of the system is a digital image-data processor called a "jitter-miser", which includes a frame buffer and logic circuits to correct for jitter in the image. Signals from motion sensors on the camera are sent to the logic circuits and processed into corrections for motion along and across the line of sight.

  5. Prediction of individual brain maturity using fMRI.

    PubMed

    Dosenbach, Nico U F; Nardos, Binyam; Cohen, Alexander L; Fair, Damien A; Power, Jonathan D; Church, Jessica A; Nelson, Steven M; Wig, Gagan S; Vogel, Alecia C; Lessov-Schlaggar, Christina N; Barnes, Kelly Anne; Dubis, Joseph W; Feczko, Eric; Coalson, Rebecca S; Pruett, John R; Barch, Deanna M; Petersen, Steven E; Schlaggar, Bradley L

    2010-09-10

    Group functional connectivity magnetic resonance imaging (fcMRI) studies have documented reliable changes in human functional brain maturity over development. Here we show that support vector machine-based multivariate pattern analysis extracts sufficient information from fcMRI data to make accurate predictions about individuals' brain maturity across development. The use of only 5 minutes of resting-state fcMRI data from 238 scans of typically developing volunteers (ages 7 to 30 years) allowed prediction of individual brain maturity as a functional connectivity maturation index. The resultant functional maturation curve accounted for 55% of the sample variance and followed a nonlinear asymptotic growth curve shape. The greatest relative contribution to predicting individual brain maturity was made by the weakening of short-range functional connections between the adult brain's major functional networks.
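
    A minimal sketch of this kind of multivariate prediction is given below, assuming vectorized functional connectivity features per subject and using a linear support vector regressor with cross-validation; variable names and model settings are illustrative, not the study's pipeline.

    ```python
    # Sketch: predicting age ("brain maturity") from functional connectivity features.
    import numpy as np
    from sklearn.svm import SVR
    from sklearn.model_selection import cross_val_predict

    def predict_brain_maturity(fc_features: np.ndarray, ages: np.ndarray):
        """fc_features: (n_subjects, n_connections); ages: (n_subjects,)."""
        model = SVR(kernel="linear", C=1.0)
        predicted = cross_val_predict(model, fc_features, ages, cv=10)
        explained = np.corrcoef(predicted, ages)[0, 1] ** 2   # variance explained (r squared)
        return predicted, explained
    ```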

  6. Relative Panoramic Camera Position Estimation for Image-Based Virtual Reality Networks in Indoor Environments

    NASA Astrophysics Data System (ADS)

    Nakagawa, M.; Akano, K.; Kobayashi, T.; Sekiguchi, Y.

    2017-09-01

    Image-based virtual reality (VR) is a virtual space generated with panoramic images projected onto a primitive model. In image-based VR, realistic VR scenes can be generated with lower rendering cost, and network data can be described as relationships among VR scenes. The camera network data are generated manually or by an automated procedure using camera position and rotation data. When panoramic images are acquired in indoor environments, network data should be generated without Global Navigation Satellite Systems (GNSS) positioning data. Thus, we focused on image-based VR generation using a panoramic camera in indoor environments. We propose a methodology to automate network data generation using panoramic images for an image-based VR space. We verified and evaluated our methodology through five experiments in indoor environments, including a corridor, elevator hall, room, and stairs. We confirmed that our methodology can automatically reconstruct network data using panoramic images for image-based VR in indoor environments without GNSS position data.

  7. Fluorescent image tracking velocimeter

    DOEpatents

    Shaffer, Franklin D.

    1994-01-01

    A multiple-exposure fluorescent image tracking velocimeter (FITV) detects and measures the motion (trajectory, direction and velocity) of small particles close to light scattering surfaces. The small particles may follow the motion of a carrier medium such as a liquid, gas or multi-phase mixture, allowing the motion of the carrier medium to be observed, measured and recorded. The main components of the FITV include: (1) fluorescent particles; (2) a pulsed fluorescent excitation laser source; (3) an imaging camera; and (4) an image analyzer. FITV uses fluorescing particles excited by visible laser light to enhance particle image detectability near light scattering surfaces. The excitation laser light is filtered out before reaching the imaging camera allowing the fluoresced wavelengths emitted by the particles to be detected and recorded by the camera. FITV employs multiple exposures of a single camera image by pulsing the excitation laser light for producing a series of images of each particle along its trajectory. The time-lapsed image may be used to determine trajectory and velocity and the exposures may be coded to derive directional information.

  8. HERCULES/MSI: a multispectral imager with geolocation for STS-70

    NASA Astrophysics Data System (ADS)

    Simi, Christopher G.; Kindsfather, Randy; Pickard, Henry; Howard, William, III; Norton, Mark C.; Dixon, Roberta

    1995-11-01

    A multispectral intensified CCD imager combined with a ring laser gyroscope based inertial measurement unit was flown on the Space Shuttle Discovery from July 13-22, 1995 (Space Transport System Flight No. 70, STS-70). The camera includes a six position filter wheel, a third generation image intensifier, and a CCD camera. The camera is integrated with a laser gyroscope system that determines the ground position of the imagery to an accuracy of better than three nautical miles. The camera has two modes of operation; a panchromatic mode for high-magnification imaging [ground sample distance (GSD) of 4 m], or a multispectral mode consisting of six different user-selectable spectral ranges at reduced magnification (12 m GSD). This paper discusses the system hardware and technical trade-offs involved with camera optimization, and presents imagery observed during the shuttle mission.

  9. Investigation into the use of photoanthropometry in facial image comparison.

    PubMed

    Moreton, Reuben; Morley, Johanna

    2011-10-10

    Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high resolution photographs, taken at different horizontal and vertical camera angles in a controlled environment. The results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. The results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
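
    Proportionality indices are typically ratios of inter-landmark distances to a reference distance, so that measurements from images at different scales can be compared; the sketch below assumes a hypothetical dictionary of 2D landmark coordinates and an interocular reference, and is not a forensic standard.

    ```python
    # Sketch: proportionality indices (PIs) from labelled facial landmarks.
    import numpy as np

    def proportionality_indices(landmarks: dict, reference=("left_eye", "right_eye")) -> dict:
        """landmarks: name -> (x, y). Returns pairwise distances divided by a reference distance."""
        ref = np.linalg.norm(np.subtract(landmarks[reference[0]], landmarks[reference[1]]))
        names = sorted(landmarks)
        return {
            f"{a}-{b}": float(np.linalg.norm(np.subtract(landmarks[a], landmarks[b])) / ref)
            for i, a in enumerate(names) for b in names[i + 1:]
        }
    ```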

  10. Joint estimation of high resolution images and depth maps from light field cameras

    NASA Astrophysics Data System (ADS)

    Ohashi, Kazuki; Takahashi, Keita; Fujii, Toshiaki

    2014-03-01

    Light field cameras are attracting much attention as tools for acquiring 3D information of a scene through a single camera. The main drawback of typical lenslet-based light field cameras is their limited resolution. This limitation comes from the structure, in which a microlens array is inserted between the sensor and the main lens. The microlens array projects the 4D light field onto a single 2D image sensor at the sacrifice of resolution; the angular resolution and the positional resolution trade off against each other under the fixed resolution of the image sensor. This fundamental trade-off remains after the raw light field image is converted to a set of sub-aperture images. The purpose of our study is to estimate a higher-resolution image from the low-resolution sub-aperture images using a framework of super-resolution reconstruction. In this reconstruction, the sub-aperture images should be registered as accurately as possible, and this registration is equivalent to depth estimation. Therefore, we propose a method in which super-resolution and depth refinement are performed alternately. Most of the process of our method is implemented by image-processing operations. We present several experimental results using a Lytro camera, in which we increased the resolution of a sub-aperture image threefold both horizontally and vertically. Our method produces clearer images compared to the original sub-aperture images and to the case without depth refinement.

  11. A novel super-resolution camera model

    NASA Astrophysics Data System (ADS)

    Shao, Xiaopeng; Wang, Yi; Xu, Jie; Wang, Lin; Liu, Fei; Luo, Qiuhua; Chen, Xiaodong; Bi, Xiangli

    2015-05-01

    Aiming to realize super-resolution (SR) reconstruction of single images and video, a super-resolution camera model is proposed to address the comparatively low resolution of images obtained by traditional cameras. To achieve this, a driving device such as piezoelectric ceramics is placed in the camera. By controlling the driving device, a set of consecutive low-resolution (LR) images with random sub-pixel displacements can be captured and stored in real time. Because the LR image sequence contains complementary redundant information and particular prior information, a super-resolution image can be restored faithfully and effectively. A sampling argument is used to derive the SR reconstruction principle and to analyze the theoretically achievable resolution improvement. A learning-based super-resolution algorithm is used to reconstruct single images, and a variational Bayesian algorithm is simulated to reconstruct the randomly displaced LR images; the latter models the unknown high-resolution image, motion parameters and unknown model parameters in one hierarchical Bayesian framework. Using sub-pixel registration, a super-resolution image of the scene is then reconstructed. Reconstruction results from 16 images show that this camera model can double the image resolution, yielding higher-resolution images at currently available hardware levels.

  12. Multiparametric imaging of brain hemodynamics and function using gas-inhalation MRI.

    PubMed

    Liu, Peiying; Welch, Babu G; Li, Yang; Gu, Hong; King, Darlene; Yang, Yihong; Pinho, Marco; Lu, Hanzhang

    2017-02-01

    Diagnosis and treatment monitoring of cerebrovascular diseases routinely require hemodynamic imaging of the brain. Current methods either only provide part of the desired information or require the injection of multiple exogenous agents. In this study, we developed a multiparametric imaging scheme for the imaging of brain hemodynamics and function using gas-inhalation MRI. The proposed technique uses a single MRI scan to provide simultaneous measurements of baseline venous cerebral blood volume (vCBV), cerebrovascular reactivity (CVR), bolus arrival time (BAT), and resting-state functional connectivity (fcMRI). This was achieved with a novel, concomitant O2 and CO2 gas inhalation paradigm, rapid MRI image acquisition with a 9.3-min BOLD sequence, and an advanced algorithm to extract multiple hemodynamic information from the same dataset. In healthy subjects, CVR and vCBV values were 0.23±0.03%/mmHg and 0.0056±0.0006%/mmHg, respectively, with a strong correlation (r=0.96 for CVR and r=0.91 for vCBV) with more conventional, separate acquisitions that take twice the scan time. In patients with Moyamoya syndrome, CVR in the stenosis-affected flow territories (typically anterior-cerebral-artery, ACA, and middle-cerebral-artery, MCA, territories) was significantly lower than that in posterior-cerebral-artery (PCA), which typically has minimal stenosis, flow territories (0.12±0.06%/mmHg vs. 0.21±0.05%/mmHg, p<0.001). BAT of the gas bolus was significantly longer (p=0.008) in ACA/MCA territories, compared to PCA, and the maps were consistent with the conventional contrast-enhanced CT perfusion method. FcMRI networks were robustly identified from the gas-inhalation MRI data after factoring out the influence of CO2 and O2 on the signal time course. The spatial correspondence between the gas-data-derived fcMRI maps and those using a separate, conventional fcMRI scan was excellent, showing a spatial correlation of 0.58±0.17 and 0.64±0.20 for default mode network and primary visual network, respectively. These findings suggest that advanced gas-inhalation MRI provides reliable measurements of multiple hemodynamic parameters within a clinically acceptable imaging time and is suitable for patient examinations. Copyright © 2016 Elsevier Inc. All rights reserved.

  13. Multiparametric imaging of brain hemodynamics and function using gas-inhalation MRI

    PubMed Central

    Liu, Peiying; Welch, Babu G.; Li, Yang; Gu, Hong; King, Darlene; Yang, Yihong; Pinho, Marco; Lu, Hanzhang

    2016-01-01

    Diagnosis and treatment monitoring of cerebrovascular diseases routinely require hemodynamic imaging of the brain. Current methods either only provide part of the desired information or require the injection of multiple exogenous agents. In this study, we developed a multiparametric imaging scheme for the imaging of brain hemodynamics and function using gas-inhalation MRI. The proposed technique uses a single MRI scan to provide simultaneous measurements of baseline venous cerebral blood volume (vCBV), cerebrovascular reactivity (CVR), bolus arrival time (BAT), and resting-state functional connectivity (fcMRI). This was achieved with a novel, concomitant O2 and CO2 gas inhalation paradigm, rapid MRI image acquisition with a 9.3 min BOLD sequence, and an advanced algorithm to extract multiple hemodynamic information from the same dataset. In healthy subjects, CVR and vCBV values were 0.23±0.03 %/mmHg and 0.0056±0.0006 %/mmHg, respectively, with a strong correlation (r=0.96 for CVR and r=0.91 for vCBV) with more conventional, separate acquisitions that take twice the scan time. In patients with Moyamoya syndrome, CVR in the stenosis-affected flow territories (typically anterior-cerebral-artery, ACA, and middle-cerebral-artery, MCA, territories) was significantly lower than that in posterior-cerebral-artery (PCA), which typically has minimal stenosis, flow territories (0.12±0.06 %/mmHg vs. 0.21±0.05 %/mmHg, p<0.001). BAT of the gas bolus was significantly longer (p=0.008) in ACA/MCA territories, compared to PCA, and the maps were consistent with the conventional contrast-enhanced CT perfusion method. FcMRI networks were robustly identified from the gas-inhalation MRI data after factoring out the influence of CO2 and O2 on the signal time course. The spatial correspondence between the gas-data-derived fcMRI maps and those using a separate, conventional fcMRI scan was excellent, showing a spatial correlation of 0.58±0.17 and 0.64±0.20 for default mode network and primary visual network, respectively. These findings suggest that advanced gas-inhalation MRI provides reliable measurements of multiple hemodynamic parameters within a clinically acceptable imaging time and is suitable for patient examinations. PMID:27693197

  14. Mitigation of Atmospheric Effects on Imaging Systems

    DTIC Science & Technology

    2004-03-31

    focal length. The imaging system had two cameras: an Electrim camera sensitive in the visible (0.6 µm) waveband and an Amber QWIP infrared camera...sensitive in the 9-micron region. The Amber QWIP infrared camera had 256x256 pixels, a pixel pitch of 38 µm, a focal length of 1.8 m, a FOV of 5.4 x 5.4 mr...each day. Unfortunately, signals from the different read ports of the Electrim camera picked up noise on their way to the digitizer, and this resulted

  15. Imaging Emission Spectra with Handheld and Cellphone Cameras

    NASA Astrophysics Data System (ADS)

    Sitar, David

    2012-12-01

    As point-and-shoot digital camera technology advances, it is becoming easier to image spectra in a laboratory setting on a shoestring budget and get immediate results. With this in mind, I wanted to test three cameras to see how their results would differ. Two undergraduate physics students and I used one handheld 7.1-megapixel (MP) autofocusing Canon point-and-shoot digital camera and two different cellphone cameras: one at 6.1 MP and the other at 5.1 MP.

  16. Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.

    ERIC Educational Resources Information Center

    Mills, David A.; Kelley, Kevin; Jones, Michael

    2001-01-01

    Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)

  17. Lincoln Penny on Mars in Camera Calibration Target

    NASA Image and Video Library

    2012-09-10

    The penny in this image is part of a camera calibration target on NASA's Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity's work on Mars, Sept. 9, 2012.

  18. Cheetah: A high frame rate, high resolution SWIR image camera

    NASA Astrophysics Data System (ADS)

    Neys, Joel; Bentell, Jonas; O'Grady, Matt; Vermeiren, Jan; Colin, Thierry; Hooylaerts, Peter; Grietens, Bob

    2008-10-01

    A high-resolution, high-frame-rate InGaAs-based image sensor and associated camera have been developed. The sensor and the camera are capable of recording and delivering more than 1700 full 640 x 512 pixel frames per second. The FPA utilizes a low-lag CTIA current integrator in each pixel, enabling integration times shorter than one microsecond. On-chip logic allows four different sub-windows to be read out simultaneously at even higher rates. The spectral sensitivity of the FPA is situated in the SWIR range [0.9-1.7 μm] and can be further extended into the visible and NIR range. The Cheetah camera has up to 16 GB of on-board memory to store the acquired images and transfer the data over a Gigabit Ethernet connection to the PC. The camera is also equipped with a full Camera Link™ interface to stream the data directly to a frame grabber or dedicated image-processing unit. The Cheetah camera is completely under software control.

  19. Design and fabrication of a CCD camera for use with relay optics in solar X-ray astronomy

    NASA Technical Reports Server (NTRS)

    1984-01-01

    Configured as a subsystem of a sounding rocket experiment, a camera system was designed to record and transmit an X-ray image focused on a charge-coupled device. The camera consists of an X-ray-sensitive detector and the electronics for processing and transmitting image data. The design and operation of the camera are described. Schematics are included.

  20. An image-tube camera for cometary spectrography

    NASA Astrophysics Data System (ADS)

    Mamadov, O.

    The paper discusses the mounting of an image tube camera. The cathode is of antimony, sodium, potassium, and cesium. The parts used for mounting are of acrylic plastic and a fabric-based laminate. A mounting design that does not include cooling is presented. The aperture ratio of the camera is 1:27. Also discussed is the way that the camera is joined to the spectrograph.

  1. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  2. Preliminary Geological Map of the Ac-H-8 Nawish Quadrangle of Ceres: An Integrated Mapping Study Using Dawn Spacecraft Data

    NASA Astrophysics Data System (ADS)

    Frigeri, A.; De Sanctis, M. C.; Carrorro, F. G.; Ammannito, E.; Williams, D. A.; Mest, S. C.; Buczkowski, D.; Preusker, F.; Jaumann, R.; Roatsch, T.; Scully, J. E. C.; Raymond, C. A.; Russell, C. T.

    2015-12-01

    Herein we present the geologic mapping of the Ac-H-8 Nawish Quadrangle of dwarf planet Ceres, produced on the basis of Dawn spacecraft data. The Ac-H-8 Nawish quadrangle is located between 22°S and 22°N and between 144°E and 216°E. At the north-east border, a polygonal, 75 km wide crater named Nawish gives its name to the whole quadrangle. An unnamed, partially degraded, 100 km diameter crater is evident in the lower central sector of the quadrangle. Bright materials have been mapped and are associated with craters. For example, bright materials occur in the central peak region of Nawish crater and in the ejecta of an unnamed crater located in the neighboring quadrangle Ac-H-9. The topography of the area, obtained from stereo processing of imagery, shows a highland in the middle of the quadrangle. Topography is lower at the northern and southern borders, with an altitude span of about 9500 meters. At the time of this writing, geologic mapping was performed on Framing Camera (FC) mosaics from the Approach (1.3 km/px) and Survey (415 m/px) orbits, including grayscale and color images and digital terrain models derived from stereo images. In Fall 2015, images from the High Altitude Mapping Orbit (140 m/px) will be used to refine the mapping, followed by Low Altitude Mapping Orbit (35 m/px) images in January 2016. Support of the Dawn Instrument, Operations, and Science Teams is acknowledged. This work is supported by grants from NASA and from the German and Italian Space Agencies.

  3. Bio-Inspired Sensing and Imaging of Polarization Information in Nature

    DTIC Science & Technology

    2008-05-04

    polarization imaging,” Appl. Opt. 36, 150–155 (1997). 5. L. B. Wolff, “Polarization camera for computer vision with a beam splitter,” J. Opt. Soc. Am. A...vision with a beam splitter,” J. Opt. Soc. Am. A 11, 2935–2945 (1994). 2. L. B. Wolff and A. G. Andreou, “Polarization camera sensors,” Image Vis. Comput...group we have been developing various man-made, non-invasive imaging methodologies, sensing schemes, camera systems, and visualization and display

  4. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited to these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The changes include replacement of electrical components with MIL-SPEC or industrial-grade components and various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressures equivalent to an altitude of 50,000 ft. In this paper, we report on the development of the camera and present results from the environmental testing.

  5. Quantitative evaluation of the accuracy and variance of individual pixels in a scientific CMOS (sCMOS) camera for computational imaging

    NASA Astrophysics Data System (ADS)

    Watanabe, Shigeo; Takahashi, Teruo; Bennett, Keith

    2017-02-01

    The"scientific" CMOS (sCMOS) camera architecture fundamentally differs from CCD and EMCCD cameras. In digital CCD and EMCCD cameras, conversion from charge to the digital output is generally through a single electronic chain, and the read noise and the conversion factor from photoelectrons to digital outputs are highly uniform for all pixels, although quantum efficiency may spatially vary. In CMOS cameras, the charge to voltage conversion is separate for each pixel and each column has independent amplifiers and analog-to-digital converters, in addition to possible pixel-to-pixel variation in quantum efficiency. The "raw" output from the CMOS image sensor includes pixel-to-pixel variability in the read noise, electronic gain, offset and dark current. Scientific camera manufacturers digitally compensate the raw signal from the CMOS image sensors to provide usable images. Statistical noise in images, unless properly modeled, can introduce errors in methods such as fluctuation correlation spectroscopy or computational imaging, for example, localization microscopy using maximum likelihood estimation. We measured the distributions and spatial maps of individual pixel offset, dark current, read noise, linearity, photoresponse non-uniformity and variance distributions of individual pixels for standard, off-the-shelf Hamamatsu ORCA-Flash4.0 V3 sCMOS cameras using highly uniform and controlled illumination conditions, from dark conditions to multiple low light levels between 20 to 1,000 photons / pixel per frame to higher light conditions. We further show that using pixel variance for flat field correction leads to errors in cameras with good factory calibration.

  6. An HDR imaging method with DTDI technology for push-broom cameras

    NASA Astrophysics Data System (ADS)

    Sun, Wu; Han, Chengshan; Xue, Xucheng; Lv, Hengyi; Shi, Junxia; Hu, Changhong; Li, Xiangzhi; Fu, Yao; Jiang, Xiaonan; Huang, Liang; Han, Hongyin

    2018-03-01

    Conventionally, high-dynamic-range (HDR) imaging is based on taking two or more pictures of the same scene with different exposures. However, due to the high-speed relative motion between the camera and the scene, this technique is hard to apply to push-broom remote sensing cameras. For HDR imaging in push-broom remote sensing applications, the present paper proposes a method which can generate HDR images without redundant image sensors or optical components. Specifically, the method images with an area-array CMOS (complementary metal oxide semiconductor) sensor operated with digital-domain time-delay-integration (DTDI) technology, instead of adding further rows of image sensors, and thereby captures more than one picture with different effective exposures; a new HDR image is then obtained by fusing two original images with a simple algorithm. In the experiment, the dynamic range (DR) of the image increased by 26.02 dB. The proposed method is shown to be effective and has potential in other imaging applications where there is relative motion between the camera and the scene.
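
    As a rough illustration of the fusion step only, a minimal sketch (hypothetical saturation-based weighting, plain NumPy; not the authors' algorithm) that merges a registered short- and long-exposure frame into one radiance-like HDR image:

        import numpy as np

        def fuse_two_exposures(short_img, long_img, exposure_ratio, sat_level=4095):
            """Fuse two registered frames taken with different effective exposures
            (long = short * exposure_ratio) into a single HDR image."""
            short_f = short_img.astype(float)
            long_f = long_img.astype(float)
            # Bring both frames onto a common radiance scale.
            short_rad = short_f * exposure_ratio
            long_rad = long_f
            # Trust the long exposure except where it is (nearly) saturated.
            w_long = np.clip((sat_level - long_f) / (0.1 * sat_level), 0.0, 1.0)
            return w_long * long_rad + (1.0 - w_long) * short_rad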

  7. Super-resolved refocusing with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Zhou, Zhiliang; Yuan, Yan; Bin, Xiangli; Qian, Lulu

    2011-03-01

    This paper presents an approach to enhancing the resolution of refocused images by super-resolution methods. In plenoptic imaging, we demonstrate that the raw sensor image can be divided into a number of low-resolution angular images with sub-pixel shifts between each other. The sub-pixel shift, which defines the super-resolving ability, is derived mathematically by considering the plenoptic camera as an equivalent camera array. We implement a simulation to demonstrate the imaging process of a plenoptic camera. A high-resolution image is then reconstructed using a maximum a posteriori (MAP) super-resolution algorithm. Without other degradation effects in the simulation, the super-resolved image achieves a resolution as high as predicted by the proposed model. We also built an experimental setup to acquire light fields. With traditional refocusing methods, the image is rendered at a rather low resolution; in contrast, with the super-resolved refocusing method we recover an image with more spatial detail. To evaluate the performance of the proposed method, we finally compare the reconstructed images using image quality metrics such as peak signal-to-noise ratio (PSNR).
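
    For reference, the PSNR metric used in such evaluations can be computed as in the short sketch below (plain NumPy; the peak value is an assumption for 8-bit images):

        import numpy as np

        def psnr(reference, test, peak=255.0):
            """Peak signal-to-noise ratio in dB between two images of equal shape."""
            mse = np.mean((reference.astype(float) - test.astype(float)) ** 2)
            return np.inf if mse == 0 else 10.0 * np.log10(peak**2 / mse)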

  8. Pedestrian Detection Based on Adaptive Selection of Visible Light or Far-Infrared Light Camera Image by Fuzzy Inference System and Convolutional Neural Network-Based Verification.

    PubMed

    Kang, Jin Kyu; Hong, Hyung Gil; Park, Kang Ryoung

    2017-07-08

    A number of studies have been conducted to enhance the pedestrian detection accuracy of intelligent surveillance systems. However, detecting pedestrians under outdoor conditions is a challenging problem due to the varying lighting, shadows, and occlusions. In recent times, a growing number of studies have been performed on visible light camera-based pedestrian detection systems using a convolutional neural network (CNN) in order to make the pedestrian detection process more resilient to such conditions. However, visible light cameras still cannot detect pedestrians during nighttime, and are easily affected by shadows and lighting. There are many studies on CNN-based pedestrian detection through the use of far-infrared (FIR) light cameras (i.e., thermal cameras) to address such difficulties. However, when the solar radiation increases and the background temperature reaches the same level as the body temperature, it remains difficult for the FIR light camera to detect pedestrians due to the insignificant difference between the pedestrian and non-pedestrian features within the images. Researchers have been trying to solve this issue by inputting both the visible light and the FIR camera images into the CNN as the input. This, however, takes a longer time to process, and makes the system structure more complex as the CNN needs to process both camera images. This research adaptively selects a more appropriate candidate between two pedestrian images from visible light and FIR cameras based on a fuzzy inference system (FIS), and the selected candidate is verified with a CNN. Three types of databases were tested, taking into account various environmental factors using visible light and FIR cameras. The results showed that the proposed method performs better than the previously reported methods.

  9. Novel computer-based endoscopic camera

    NASA Astrophysics Data System (ADS)

    Rabinovitz, R.; Hai, N.; Abraham, Martin D.; Adler, Doron; Nissani, M.; Fridental, Ron; Vitsnudel, Ilia

    1995-05-01

    We have introduced a computer-based endoscopic camera which includes (a) unique real-time digital image processing to optimize image visualization by reducing overexposed, glared areas and brightening dark areas, and by accentuating sharpness and fine structures, and (b) patient data documentation and management. The image processing is based on i Sight's iSP1000TM digital video processor chip and Adaptive SensitivityTM patented scheme for capturing and displaying images with a wide dynamic range of light, taking into account local neighborhood image conditions and global image statistics. It provides the medical user with the ability to view images under difficult lighting conditions without losing details 'in the dark' or in completely saturated areas. The patient data documentation and management allows storage of images (approximately 1 MB per image for a full 24-bit color image) to any storage device installed in the camera, or to an external host medium via network. The patient data included with every image describe essential information on the patient and procedure. The operator can assign custom data descriptors and can search for the stored image/data by typing any image descriptor. The camera optics has an extended zoom range of f = 20-45 mm, allowing control of the diameter of the field displayed on the monitor such that the complete field of view of the endoscope can be displayed over the whole area of the screen. All these features provide a versatile endoscopic camera with excellent image quality and documentation capabilities.

  10. Software for Acquiring Image Data for PIV

    NASA Technical Reports Server (NTRS)

    Wernet, Mark P.; Cheung, H. M.; Kressler, Brian

    2003-01-01

    PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter/timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process them, and then save them to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.

  11. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

    Market demand for digital cameras with higher sensitivity under low-light conditions is increasing remarkably, and the digital camera market is now a tough race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing any structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearity and noise amplification that usually arise after the ISP (Image Signal Processor) of digital cameras. The proposed method fuses multiple short-exposure images, which are noisy but less blurred. Our approach is designed to avoid the ghost artifacts caused by hand shake and object motion. In order to achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been performed on an input Bayer-pattern CFA image, the resultant Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of our proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab and visual quality with reference images whose exposure times are properly extended to reach a variety of target sensitivities.
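
    A very reduced sketch of the core idea, fusing several registered short-exposure raw frames to raise effective sensitivity while holding noise down (hypothetical, plain NumPy; it omits the Bayer-aware alignment, ghost suppression and two-scale noise filtering described above, and assumes a 14-bit raw range):

        import numpy as np

        def fuse_short_exposures(raw_frames, target_gain):
            """Average N registered short-exposure raw (Bayer) frames and apply a
            digital gain, trading the ~1/sqrt(N) noise reduction for extra ISO."""
            stack = np.stack([f.astype(float) for f in raw_frames])
            fused = stack.mean(axis=0)          # noise std drops by roughly 1/sqrt(N)
            return np.clip(fused * target_gain, 0, 2**14 - 1)  # assumed 14-bit range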

  12. Television monitor field shifter and an opto-electronic method for obtaining a stereo image of optimal depth resolution and reduced depth distortion on a single screen

    NASA Technical Reports Server (NTRS)

    Diner, Daniel B. (Inventor)

    1989-01-01

    A method and apparatus are developed for obtaining a stereo image with reduced depth distortion and optimum depth resolution. A tradeoff between static and dynamic depth distortion and depth resolution is provided. The cameras obtaining the images for a stereo view are converged at a point behind the object to be presented in the image, and the collection-surface-to-object distance, the camera separation distance, and the focal lengths of the cameras' zoom lenses are all increased. Doubling these distances cuts the static depth distortion in half while maintaining image size and depth resolution. Dynamic depth distortion is minimized by panning the stereo view-collecting camera system about a circle which passes through the convergence point and the cameras' first nodal points. Horizontal field shifting of the television fields on a television monitor brings both the monitor and the stereo views within the viewer's limit of binocular fusion.

  13. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image-guided surgery, there is a need for new NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems that are optimized only for ICG. To test alternative camera systems, a setup was developed to mimic the fluorescence light in a tissue phantom and to measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform, intensity-controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue of controlled thickness could be placed on the spot to mimic a fluorescent 'cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image-guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue; however, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled with respect to gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved to be useful for camera testing and can be used for the development and quality control of new NIR fluorescence guided surgery equipment.

  14. Design, demonstration and testing of low F-number LWIR panoramic imaging relay optics

    NASA Astrophysics Data System (ADS)

    Furxhi, Orges; Frascati, Joe; Driggers, Ronald

    2018-04-01

    Panoramic imaging is inherently wide field of view. High-sensitivity uncooled long-wave infrared (LWIR) imaging requires low F-number optics. These two requirements result in short back working distance designs that, in addition to being costly, are challenging to integrate with commercially available uncooled LWIR cameras and cores. Common challenges include the relocation of the shutter flag, custom calibration of the camera dynamic range and NUC tables, focusing, and athermalization. Solutions to these challenges add to the system cost and make panoramic uncooled LWIR cameras commercially unattractive. In this paper, we present the design of Panoramic Imaging Relay Optics (PIRO) and show imagery and test results from one of the first prototypes. PIRO designs use several reflective surfaces (generally two) to relay a panoramic scene onto a real, donut-shaped image. The PIRO donut is imaged onto the focal plane of the camera using a commercial off-the-shelf (COTS) low F-number lens. This approach results in low component cost and effortless integration with pre-calibrated, commercially available cameras and lenses.

  15. NPS assessment of color medical image displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-10-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. The R, G and B uniform color patterns were shown on the display under study and the images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed as the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, and also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
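
    One common way to carry out the final step is sketched below (plain NumPy), assuming a uniform-patch synthetic intensity image has already been formed as the weighted R/G/B sum; the ROI size, pixel pitch and mean-only detrending are assumptions, and labs differ in these details.

        import numpy as np

        def noise_power_spectrum(image, roi=128, pixel_pitch_mm=0.1):
            """Estimate the 2D noise power spectrum from non-overlapping ROIs of a
            uniform synthetic intensity image (luminance in arbitrary units)."""
            rows, cols = image.shape
            spectra = []
            for r in range(0, rows - roi + 1, roi):
                for c in range(0, cols - roi + 1, roi):
                    patch = image[r:r + roi, c:c + roi].astype(float)
                    patch -= patch.mean()                  # remove the DC component
                    f = np.fft.fftshift(np.fft.fft2(patch))
                    spectra.append(np.abs(f) ** 2)
            # Standard normalization: NPS = (pixel area / ROI size) * mean periodogram
            return np.mean(spectra, axis=0) * (pixel_pitch_mm ** 2) / (roi * roi)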

  16. Object tracking using multiple camera video streams

    NASA Astrophysics Data System (ADS)

    Mehrubeoglu, Mehrube; Rojas, Diego; McLauchlan, Lifford

    2010-05-01

    Two synchronized cameras are utilized to obtain independent video streams and detect moving objects from two different viewing angles. The video frames are directly correlated in time. Moving objects in image frames from the two cameras are identified and tagged for tracking. One advantage of such a system is overcoming the effects of occlusions, which can leave an object in partial or full view in one camera while the same object is fully visible in another camera. Object registration is achieved by determining the location of common features of the moving object across simultaneous frames, and perspective differences are adjusted. Combining information from the images of multiple cameras increases the robustness of the tracking process. Motion tracking is achieved by detecting the anomalies caused by the objects' movement across frames in time, both in each video stream and in the combined video information. The path of each object is determined heuristically. Accuracy of detection depends on the speed of the object as well as on variations in the direction of motion. Fast cameras increase accuracy but limit the allowable speed and complexity of the algorithm. Such an imaging system has applications in traffic analysis, surveillance and security, as well as object modeling from multi-view images. The system can easily be expanded by increasing the number of cameras such that there is an overlap between the scenes from at least two cameras in proximity. An object can then be tracked over long distances or across multiple cameras continuously; this is applicable, for example, in wireless sensor networks for surveillance or navigation.

  17. Clinical evaluation of pixellated NaI:Tl and continuous LaBr 3:Ce, compact scintillation cameras for breast tumors imaging

    NASA Astrophysics Data System (ADS)

    Pani, R.; Pellegrini, R.; Betti, M.; De Vincentis, G.; Cinti, M. N.; Bennati, P.; Vittorini, F.; Casali, V.; Mattioli, M.; Orsolini Cencelli, V.; Navarria, F.; Bollini, D.; Moschini, G.; Iurlaro, G.; Montani, L.; de Notaristefani, F.

    2007-02-01

    The principal limiting factor in the clinical acceptance of scintimammography is certainly its low sensitivity for cancers sized <1 cm, mainly due to the lack of equipment specifically designed for breast imaging. The National Institute of Nuclear Physics (INFN) has been developing a new scintillation camera based on a Lanthanum tri-Bromide Cerium-doped crystal (LaBr3:Ce), which has demonstrated superior imaging performance with respect to the dedicated scintillation γ-camera that was previously developed. The proposed detector consists of a continuous LaBr3:Ce scintillator crystal coupled to a Hamamatsu H8500 Flat Panel PMT. A one-centimeter-thick crystal has been chosen to increase crystal detection efficiency. In this paper, we propose a comparison and evaluation between the lanthanum γ-camera and a Multi-PSPMT camera based on discrete NaI(Tl) pixels, previously developed under the "IMI" Italian project for technological transfer of INFN. A phantom study was carried out to test both cameras before introducing them in clinical trials. High-resolution scans produced by the LaBr3:Ce camera showed higher tumor contrast, with more detailed imaging of the uptake area, than the pixellated NaI(Tl) dedicated camera. Furthermore, with the lanthanum camera the Signal-to-Noise Ratio (SNR) was increased for a lesion as small as 5 mm, with a consequent strong improvement in detectability.

  18. Photogrammetric Modeling and Image-Based Rendering for Rapid Virtual Environment Creation

    DTIC Science & Technology

    2004-12-01

    area and different methods have been proposed. Pertinent methods include: Camera Calibration, Structure from Motion, Stereo Correspondence, and Image...Based Rendering 1.1.1 Camera Calibration Determining the 3D structure of a model from multiple views becomes simpler if the intrinsic (or internal...can introduce significant nonlinearities into the image. We have found that camera calibration is a straightforward process which can simplify the

  19. Research on inosculation between master of ceremonies or players and virtual scene in virtual studio

    NASA Astrophysics Data System (ADS)

    Li, Zili; Zhu, Guangxi; Zhu, Yaoting

    2003-04-01

    A technical principle for the construction of a virtual studio is proposed, in which an orientation tracker and telemeter are used to improve a conventional BETACAM pickup camera and to connect it with the software module of the host. A virtual camera model named the Camera & Post-camera Coupling Pair is put forward; it differs from the common model in computer graphics and is bound to the real BETACAM pickup camera during shooting. A formula is derived to compute the foreground and background frame buffer images of the virtual scene, whose boundary is based on the depth of the target point along the real BETACAM pickup camera's projective ray. Real-time consistency is achieved between the video image sequences of the master of ceremonies or players and the CG image sequences of the virtual scene in spatial position, perspective relationship and image object masking. The experimental results show that the technological scheme for constructing a virtual studio presented in this paper is feasible, and is more practical and effective than the existing technique of building a virtual studio based on color-keying and image synthesis with the background using non-linear video editing.

  20. The use of low cost compact cameras with focus stacking functionality in entomological digitization projects

    PubMed Central

    Mertens, Jan E.J.; Roie, Martijn Van; Merckx, Jonas; Dekoninck, Wouter

    2017-01-01

    Digitization of specimen collections has become a key priority of many natural history museums. The camera systems built for this purpose are expensive, providing a barrier in institutes with limited funding and therefore hampering progress. An assessment is made of whether a low-cost compact camera with image-stacking functionality can help expedite the digitization process in large museums or provide smaller institutes and amateur entomologists with the means to digitize their collections. Images from a professional setup were compared with the Olympus Stylus TG-4 Tough, a low-cost compact camera with internal focus-stacking functions. Parameters considered include image quality, digitization speed, price, and ease of use. The compact camera's image quality, although inferior to the professional setup, is exceptional considering its fourfold lower price point. Producing the image slices in the compact camera is a matter of seconds, and when optimal image quality is less of a priority, the internal stacking function omits the need for dedicated stacking software altogether, further decreasing the cost and speeding up the process. In general, it is found that, aware of its limitations, this compact camera is capable of digitizing entomological collections with sufficient quality. As technology advances, more institutes and amateur entomologists will be able to easily and affordably catalogue their specimens. PMID:29134038

  1. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2015-08-05

    This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na...

  2. From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth

    NASA Image and Video Library

    2017-12-08

    This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and telescope, and the Earth, one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA). Read more: www.nasa.gov/feature/goddard/from-a-million-miles-away-na...

  3. Introducing the depth transfer curve for 3D capture system characterization

    NASA Astrophysics Data System (ADS)

    Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas

    2011-03-01

    3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information about scene depth. Scene depth is simulated through the strongest depth cue of the brain, namely retinal disparity. This can be achieved by capturing images with horizontally separated cameras: objects at different depths are projected with different horizontal displacements onto the left and right camera images, and these images, when fed separately to either eye, produce retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task, since the intended and/or unintended side effects of 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the depth capture capabilities of 3D cameras.
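
    The depth-to-disparity mapping underlying such a characterization can be sketched as follows (simple parallel pinhole stereo model, plain NumPy; the baseline and focal length values are hypothetical, and real 3D consumer cameras add toe-in and display-dependent factors):

        import numpy as np

        def disparity_px(depth_m, baseline_m=0.065, focal_px=1400.0):
            """Horizontal disparity (pixels) of a point at a given depth for a
            parallel stereo pair with the stated baseline and focal length."""
            return focal_px * baseline_m / np.asarray(depth_m, dtype=float)

        # A 'depth transfer curve' can then be sampled by sweeping the depth:
        depths = np.linspace(0.5, 10.0, 20)
        print(np.column_stack([depths, disparity_px(depths)]))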

  4. Constructing a Database from Multiple 2D Images for Camera Pose Estimation and Robot Localization

    NASA Technical Reports Server (NTRS)

    Wolf, Michael; Ansar, Adnan I.; Brennan, Shane; Clouse, Daniel S.; Padgett, Curtis W.

    2012-01-01

    The LMDB (Landmark Database) Builder software identifies persistent image features (landmarks) in a scene viewed multiple times and precisely estimates the landmarks' 3D world positions. The software receives as input multiple 2D images of approximately the same scene, along with an initial guess of the camera pose for each image and a table of features matched pair-wise in each frame. LMDB Builder aggregates landmarks across an arbitrarily large collection of frames with matched features. Range data from stereo vision processing can also be passed in to improve the initial guess of the 3D point estimates. LMDB Builder aggregates feature lists across all frames, manages the process of promoting selected features to landmarks, iteratively calculates the 3D landmark positions using the current camera pose estimates (via an optimal ray-projection method), and then improves the camera pose estimates using the 3D landmark positions. Finally, it extracts image patches for each landmark from auto-selected key frames and constructs the landmark database. The landmark database can then be used to estimate future camera poses (and therefore localize a robotic vehicle that may be carrying the cameras) by matching current imagery to the landmark database image patches and using the known 3D landmark positions to estimate the current pose.
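
    A minimal example of the geometric core of such a pipeline, triangulating one landmark's 3D position from its observations in two frames with known camera projection matrices (standard linear DLT, plain NumPy; the LMDB Builder itself iterates this kind of step jointly with pose refinement, which is not shown here):

        import numpy as np

        def triangulate(P1, P2, x1, x2):
            """Linear (DLT) triangulation of a 3D point from two views.

            P1, P2: 3x4 camera projection matrices; x1, x2: (u, v) observations.
            """
            P1, P2 = np.asarray(P1, float), np.asarray(P2, float)
            A = np.vstack([
                x1[0] * P1[2] - P1[0],
                x1[1] * P1[2] - P1[1],
                x2[0] * P2[2] - P2[0],
                x2[1] * P2[2] - P2[1],
            ])
            _, _, vt = np.linalg.svd(A)
            X = vt[-1]
            return X[:3] / X[3]      # inhomogeneous 3D landmark position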

  5. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easy-access alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, with their RGB pixel distribution, the sampling of information in DSLR cameras differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the object replication problem reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem based on Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.

  6. Evaluation of Real-Time Hand Motion Tracking Using a Range Camera and the Mean-Shift Algorithm

    NASA Astrophysics Data System (ADS)

    Lahamy, H.; Lichti, D.

    2011-09-01

    Several sensors have been tested for improving the interaction between humans and machines, including traditional web cameras, special gloves, haptic devices, cameras providing stereo pairs of images, and range cameras. Meanwhile, several methods are described in the literature for tracking hand motion: the Kalman filter, the mean-shift algorithm and the condensation algorithm. In this research, the combination of a range camera and the simple version of the mean-shift algorithm has been evaluated for its capability for hand motion tracking. The evaluation was made in terms of the position accuracy of the tracking trajectory in the x, y and z directions in camera space, and the time difference between image acquisition and image display. Three parameters have been analyzed regarding their influence on the tracking process: the speed of the hand movement, the distance between the camera and the hand, and the integration time of the camera. Prior to the evaluation, the required warm-up time of the camera was measured. This study has demonstrated the suitability of the range camera used in combination with the mean-shift algorithm for real-time hand motion tracking; however, for very high-speed hand movement in the transverse plane with respect to the camera, the tracking accuracy is low and requires improvement.
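
    A bare-bones version of the simple mean-shift step used for this kind of tracking (hypothetical flat-kernel variant, plain NumPy; the actual evaluation also exploits the range channel of the camera) is:

        import numpy as np

        def mean_shift(points, start, bandwidth=0.05, n_iter=20, tol=1e-4):
            """Shift a window centre toward the local density maximum of 2D/3D
            points (e.g. pixels segmented as 'hand' in a range-camera frame)."""
            centre = np.asarray(start, dtype=float)
            pts = np.asarray(points, dtype=float)
            for _ in range(n_iter):
                d = np.linalg.norm(pts - centre, axis=1)
                inside = pts[d < bandwidth]          # flat (uniform) kernel window
                if len(inside) == 0:
                    break
                new_centre = inside.mean(axis=0)
                shift = np.linalg.norm(new_centre - centre)
                centre = new_centre
                if shift < tol:
                    break
            return centre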

  7. Attenuated anticorrelation between the default and dorsal attention networks with aging: evidence from task and rest.

    PubMed

    Spreng, R Nathan; Stevens, W Dale; Viviano, Joseph D; Schacter, Daniel L

    2016-09-01

    Anticorrelation between the default and dorsal attention networks is a central feature of human functional brain organization. Hallmarks of aging include impaired default network modulation and declining medial temporal lobe (MTL) function. However, it remains unclear if this anticorrelation is preserved into older adulthood during task performance, or how this is related to the intrinsic architecture of the brain. We hypothesized that older adults would show reduced within- and increased between-network functional connectivity (FC) across the default and dorsal attention networks. To test this hypothesis, we examined the effects of aging on task-related and intrinsic FC using functional magnetic resonance imaging during an autobiographical planning task known to engage the default network and during rest, respectively, with young (n = 72) and older (n = 79) participants. The task-related FC analysis revealed reduced anticorrelation with aging. At rest, there was a robust double dissociation, with older adults showing a pattern of reduced within-network FC, but increased between-network FC, across both networks, relative to young adults. Moreover, older adults showed reduced intrinsic resting-state FC of the MTL with both networks suggesting a fractionation of the MTL memory system in healthy aging. These findings demonstrate age-related dedifferentiation among these competitive large-scale networks during both task and rest, consistent with the idea that age-related changes are associated with a breakdown in the intrinsic functional architecture within and among large-scale brain networks. Copyright © 2016 Elsevier Inc. All rights reserved.
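
    The within- and between-network FC values discussed in such studies are, at their core, Pearson correlations between (mean) network time series; a reduced sketch under that assumption (plain NumPy, not the authors' exact pipeline) is:

        import numpy as np

        def network_fc(timeseries_a, timeseries_b):
            """Between-network FC as the Pearson correlation of the mean BOLD
            time series of two region sets (arrays of shape (time, regions))."""
            a = np.asarray(timeseries_a, float).mean(axis=1)
            b = np.asarray(timeseries_b, float).mean(axis=1)
            r = np.corrcoef(a, b)[0, 1]
            return np.arctanh(r)   # Fisher z-transform, common before group statistics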

  8. Technology and Technique Standards for Camera-Acquired Digital Dermatologic Images: A Systematic Review.

    PubMed

    Quigley, Elizabeth A; Tokay, Barbara A; Jewell, Sarah T; Marchetti, Michael A; Halpern, Allan C

    2015-08-01

    Photographs are invaluable dermatologic diagnostic, management, research, teaching, and documentation tools. Digital Imaging and Communications in Medicine (DICOM) standards exist for many types of digital medical images, but there are no DICOM standards for camera-acquired dermatologic images to date. To identify and describe existing or proposed technology and technique standards for camera-acquired dermatologic images in the scientific literature. Systematic searches of the PubMed, EMBASE, and Cochrane databases were performed in January 2013 using photography and digital imaging, standardization, and medical specialty and medical illustration search terms and augmented by a gray literature search of 14 websites using Google. Two reviewers independently screened titles of 7371 unique publications, followed by 3 sequential full-text reviews, leading to the selection of 49 publications with the most recent (1985-2013) or detailed description of technology or technique standards related to the acquisition or use of images of skin disease (or related conditions). No universally accepted existing technology or technique standards for camera-based digital images in dermatology were identified. Recommendations are summarized for technology imaging standards, including spatial resolution, color resolution, reproduction (magnification) ratios, postacquisition image processing, color calibration, compression, output, archiving and storage, and security during storage and transmission. Recommendations are also summarized for technique imaging standards, including environmental conditions (lighting, background, and camera position), patient pose and standard view sets, and patient consent, privacy, and confidentiality. Proposed standards for specific-use cases in total body photography, teledermatology, and dermoscopy are described. The literature is replete with descriptions of obtaining photographs of skin disease, but universal imaging standards have not been developed, validated, and adopted to date. Dermatologic imaging is evolving without defined standards for camera-acquired images, leading to variable image quality and limited exchangeability. The development and adoption of universal technology and technique standards may first emerge in scenarios when image use is most associated with a defined clinical benefit.

  9. Super-resolved all-refocused image with a plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wang, Xiang; Li, Lin; Hou, Guangqi

    2015-12-01

    This paper proposes an approach to produce super-resolved, all-in-focus images with a plenoptic camera. A plenoptic camera can be built by placing a micro-lens array between the lens and the sensor of a conventional camera; it captures both the angular and spatial information of the scene in a single shot. From the 4D light field captured by the plenoptic camera, a sequence of digitally refocused images, each focused at a different depth, can be produced. The number of pixels in a refocused image equals the number of micro-lenses in the array, so the limited micro-lens count yields low-resolution refocused images that lack fine detail. These lost details, mostly high-frequency information, are important for the in-focus part of each refocused image, and it is these in-focus parts that we super-resolve. An image segmentation method based on random walks, applied to the depth map produced from the 4D light field data, separates the foreground and background in the refocused images, and a focus evaluation function determines which refocused image has the clearest foreground and which has the clearest background. We then apply a single-image super-resolution method based on sparse signal representation to the in-focus parts of these selected refocused images. Finally, the super-resolved all-in-focus image is obtained by merging the in-focus background and foreground parts through digital signal processing, so that more spatial detail is preserved in the output. The method enhances the resolution of the refocused image while only the refocused images with the clearest foreground and background need to be super-resolved.
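
    A minimal sketch of the final merging step is given below, assuming a stack of co-registered refocused images and a binary foreground mask from the depth-based segmentation; the focus measure (variance of the Laplacian) and the omission of the sparse-coding super-resolution stage are simplifications for illustration, not the authors' exact implementation.

        import numpy as np
        from scipy.ndimage import laplace

        def region_sharpness(img, mask):
            # Variance of the Laplacian over the masked region: higher means sharper.
            return laplace(img.astype(float))[mask].var()

        def merge_all_in_focus(refocused_stack, fg_mask):
            # Pick the refocused slice with the sharpest foreground and the slice with
            # the sharpest background, then composite them with the segmentation mask.
            fg_scores = [region_sharpness(img, fg_mask) for img in refocused_stack]
            bg_scores = [region_sharpness(img, ~fg_mask) for img in refocused_stack]
            best_fg = refocused_stack[int(np.argmax(fg_scores))]
            best_bg = refocused_stack[int(np.argmax(bg_scores))]
            return np.where(fg_mask, best_fg, best_bg)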

  10. Altered right anterior insular connectivity and loss of associated functions in adolescent chronic fatigue syndrome

    PubMed Central

    Wortinger, Laura Anne; Glenne Øie, Merete; Endestad, Tor; Bruun Wyller, Vegard

    2017-01-01

    Impairments in cognition, pain intolerance, and physical inactivity characterize adolescent chronic fatigue syndrome (CFS), yet little is known about its neurobiology. The right dorsal anterior insular (dAI) connectivity of the salience network provides a motivational context to stimuli. In this study, we examined regional functional connectivity (FC) patterns of the right dAI in adolescent CFS patients and healthy participants. Eighteen adolescent patients with CFS and 18 age-matched healthy adolescent control participants underwent resting-state functional magnetic resonance imaging. The right dAI region of interest was examined in a seed-to-voxel resting-state FC analysis using SPM and the CONN toolbox. Relative to healthy adolescents, CFS patients demonstrated reduced FC of the right dAI to the right posterior parietal cortex (PPC) node of the central executive network. The decreased FC of the right dAI–PPC might indicate impaired cognitive control development in adolescent CFS. Immature FC of the right dAI–PPC in patients also lacked associations with three known functional domains: cognition, pain and physical activity, which were observed in the healthy group. These results suggest a distinct biological signature of adolescent CFS and might represent a fundamental role of the dAI in motivated behavior. PMID:28880891

  11. Altered right anterior insular connectivity and loss of associated functions in adolescent chronic fatigue syndrome.

    PubMed

    Wortinger, Laura Anne; Glenne Øie, Merete; Endestad, Tor; Bruun Wyller, Vegard

    2017-01-01

    Impairments in cognition, pain intolerance, and physical inactivity characterize adolescent chronic fatigue syndrome (CFS), yet little is known about its neurobiology. The right dorsal anterior insular (dAI) connectivity of the salience network provides a motivational context to stimuli. In this study, we examined regional functional connectivity (FC) patterns of the right dAI in adolescent CFS patients and healthy participants. Eighteen adolescent patients with CFS and 18 age-matched healthy adolescent control participants underwent resting-state functional magnetic resonance imaging. The right dAI region of interest was examined in a seed-to-voxel resting-state FC analysis using SPM and the CONN toolbox. Relative to healthy adolescents, CFS patients demonstrated reduced FC of the right dAI to the right posterior parietal cortex (PPC) node of the central executive network. The decreased FC of the right dAI-PPC might indicate impaired cognitive control development in adolescent CFS. Immature FC of the right dAI-PPC in patients also lacked associations with three known functional domains: cognition, pain and physical activity, which were observed in the healthy group. These results suggest a distinct biological signature of adolescent CFS and might represent a fundamental role of the dAI in motivated behavior.

  12. Effects of gratitude meditation on neural network functional connectivity and brain-heart coupling.

    PubMed

    Kyeong, Sunghyon; Kim, Joohan; Kim, Dae Jin; Kim, Hesun Erin; Kim, Jae-Jin

    2017-07-11

    A sense of gratitude is a powerful and positive experience that can promote a happier life, whereas resentment is associated with life dissatisfaction. To explore the effects of gratitude and resentment on mental well-being, we acquired functional magnetic resonance imaging and heart rate (HR) data before, during, and after the gratitude and resentment interventions. Functional connectivity (FC) analysis was conducted to identify the modulatory effects of gratitude on the default mode, emotion, and reward-motivation networks. The average HR was significantly lower during the gratitude intervention than during the resentment intervention. Temporostriatal FC showed a positive correlation with HR during the gratitude intervention, but not during the resentment intervention. Temporostriatal resting-state FC was significantly decreased after the gratitude intervention compared to the resentment intervention. After the gratitude intervention, resting-state FC of the amygdala with the right dorsomedial prefrontal cortex and left dorsal anterior cingulate cortex were positively correlated with anxiety scale and depression scale, respectively. Taken together, our findings shed light on the effect of gratitude meditation on an individual's mental well-being, and indicate that it may be a means of improving both emotion regulation and self-motivation by modulating resting-state FC in emotion and motivation-related brain regions.

  13. Affinity of C-Reactive Protein toward FcγRI Is Strongly Enhanced by the γ-Chain

    PubMed Central

    Röcker, Carlheinz; Manolov, Dimitar E.; Kuzmenkina, Elza V.; Tron, Kyrylo; Slatosch, Holger; Torzewski, Jan; Nienhaus, G. Ulrich

    2007-01-01

    C-reactive protein (CRP), the prototype human acute phase protein, is widely regarded as a key player in cardiovascular disease, but the identity of its cellular receptor is still under debate. By using ultrasensitive confocal imaging analysis, we have studied CRP binding to transfected COS-7 cells expressing the high-affinity IgG receptor FcγRI. Here we show that CRP binds to FcγRI on intact cells, with a Kd of 10 ± 3 μmol/L. Transfection of COS-7 cells with a plasmid coding for both FcγRI and its functional counterpart, the γ-chain, markedly increases CRP affinity to FcγRI, resulting in a Kd of 0.35 ± 0.10 μmol/L. The affinity increase results from an ∼30-fold enhanced association rate coefficient. The pronounced enhancement of affinity by the γ-chain suggests its crucial involvement in the CRP receptor interaction, possibly by mediating interactions between the transmembrane moieties of the receptors. Dissociation of CRP from the cell surfaces cannot be detected throughout the time course of several hours and is thus extremely slow. Considering the pentameric structure of CRP, this result indicates that multivalent binding and receptor clustering are crucially involved in the interaction of CRP with nucleated cells. PMID:17255341
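
    As a rough consistency check inferred from the numbers above (not a statement taken from the paper): the equilibrium dissociation constant is

        K_d = \frac{k_{\mathrm{off}}}{k_{\mathrm{on}}}

    so an approximately 30-fold increase in the association rate coefficient, with the dissociation rate essentially unchanged, predicts K_d ≈ 10 μmol/L / 30 ≈ 0.33 μmol/L, in line with the reported 0.35 ± 0.10 μmol/L.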

  14. Information processing architecture of functionally defined clusters in the macaque cortex.

    PubMed

    Shen, Kelly; Bezgin, Gleb; Hutchison, R Matthew; Gati, Joseph S; Menon, Ravi S; Everling, Stefan; McIntosh, Anthony R

    2012-11-28

    Computational and empirical neuroimaging studies have suggested that the anatomical connections between brain regions primarily constrain their functional interactions. Given that the large-scale organization of functional networks is determined by the temporal relationships between brain regions, the structural limitations may extend to the global characteristics of functional networks. Here, we explored the extent to which the functional network community structure is determined by the underlying anatomical architecture. We directly compared macaque (Macaca fascicularis) functional connectivity (FC) assessed using spontaneous blood oxygen level-dependent functional magnetic resonance imaging (BOLD-fMRI) to directed anatomical connectivity derived from macaque axonal tract tracing studies. Consistent with previous reports, FC increased with increasing strength of anatomical connection, and FC was also present between regions that had no direct anatomical connection. We observed moderate similarity between the FC of each region and its anatomical connectivity. Notably, anatomical connectivity patterns, as described by structural motifs, were different within and across functional modules: partitioning of the functional network was supported by dense bidirectional anatomical connections within clusters and unidirectional connections between clusters. Together, our data directly demonstrate that the FC patterns observed in resting-state BOLD-fMRI are dictated by the underlying neuroanatomical architecture. Importantly, we show how this architecture contributes to the global organizational principles of both functional specialization and integration.

  15. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  16. Development of an Ultra-Violet Digital Camera for Volcanic Sulfur Dioxide Imaging

    NASA Astrophysics Data System (ADS)

    Bluth, G. J.; Shannon, J. M.; Watson, I. M.; Prata, F. J.; Realmuto, V. J.

    2006-12-01

    In an effort to improve monitoring of passive volcano degassing, we have constructed and tested a digital camera for quantifying the sulfur dioxide (SO2) content of volcanic plumes. The camera utilizes a bandpass filter to collect photons in the ultra-violet (UV) region where SO2 selectively absorbs UV light. SO2 is quantified by imaging calibration cells of known SO2 concentrations. Images of volcanic SO2 plumes were collected at four active volcanoes with persistent passive degassing: Villarrica, located in Chile, and Santiaguito, Fuego, and Pacaya, located in Guatemala. Images were collected from distances ranging between 4 and 28 km away, with crisp detection up to approximately 16 km. Camera set-up time in the field ranges from 5 to 10 minutes, and images can be recorded at intervals as short as 10 seconds. Variable in-plume concentrations can be observed and accurate plume speeds (or rise rates) can readily be determined by tracing individual portions of the plume within sequential images. Initial fluxes computed from camera images require a correction for the effects of environmental light scattered into the field of view. At Fuego volcano, simultaneous measurements of corrected SO2 fluxes with the camera and a Correlation Spectrometer (COSPEC) agreed within 25 percent. Experiments at the other sites were equally encouraging, and demonstrated the camera's ability to detect SO2 under demanding meteorological conditions. This early work has shown great success in imaging SO2 plumes and offers promise for volcano monitoring due to its rapid deployment and data processing capabilities, relatively low cost, and improved interpretation afforded by synoptic plume coverage from a range of distances.

  17. A semi-automatic image-based close range 3D modeling pipeline using a multi-camera configuration.

    PubMed

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically, once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to prove the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum.

  18. Optimized algorithm for the spatial nonuniformity correction of an imaging system based on a charge-coupled device color camera.

    PubMed

    de Lasarte, Marta; Pujol, Jaume; Arjona, Montserrat; Vilaseca, Meritxell

    2007-01-10

    We present an optimized linear algorithm for the spatial nonuniformity correction of a CCD color camera's imaging system and the experimental methodology developed for its implementation. We assess the influence of the algorithm's variables (the dark image, the base correction image, and the reference level) on the quality of the correction, as well as the range of application of the correction, using a uniform radiance field provided by an integrator cube. The best spatial nonuniformity correction is achieved by using a nonzero dark image, by using an image with a mean digital level placed in the linear response range of the camera as the base correction image, and by taking the mean digital level of the image as the reference digital level. The response of the CCD color camera's imaging system to the uniform radiance field shows a high level of spatial uniformity after the optimized algorithm has been applied, which also allows us to achieve a high-quality spatial nonuniformity correction of captured images under different exposure conditions.
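
    One common linear form of such a flat-field-style correction is sketched below; the variable roles follow the abstract (dark image, base correction image, reference level), but the exact expression is an illustrative assumption rather than the authors' published algorithm.

        import numpy as np

        def correct_nonuniformity(raw, dark, base, reference_level=None):
            # Dark-subtract, then rescale each pixel by the ratio of a reference level
            # to the dark-subtracted base correction image (a per-pixel gain map).
            raw = raw.astype(float)
            dark = dark.astype(float)
            gain_map = base.astype(float) - dark
            if reference_level is None:
                # The abstract suggests using the mean digital level of the image.
                reference_level = gain_map.mean()
            gain_map[gain_map == 0] = np.finfo(float).eps  # avoid division by zero
            return (raw - dark) * reference_level / gain_map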

  19. Performance evaluation of low-cost airglow cameras for mesospheric gravity wave measurements

    NASA Astrophysics Data System (ADS)

    Suzuki, S.; Shiokawa, K.

    2016-12-01

    Atmospheric gravity waves contribute significantly to the wind and thermal balances of the mesosphere and lower thermosphere (MLT) through their vertical transport of horizontal momentum. It has been reported that the gravity wave momentum flux depends preferentially on the scale of the waves; the momentum fluxes of waves with horizontal scales of 10-100 km are particularly significant. Airglow imaging is a useful technique for observing the two-dimensional structure of small-scale (<100 km) gravity waves in the MLT region and has been used to investigate the global behaviour of the waves. Recent studies with simultaneous, multiple airglow cameras have derived the spatial extent of the MLT waves, and such network imaging observations are advantageous for a better understanding of the coupling between the lower and upper atmosphere via gravity waves. In this study, we newly developed low-cost airglow cameras to enlarge the airglow imaging network. Each camera has a fish-eye lens with a 185-deg field of view and is built around a CCD video camera (WATEC WAT-910HX); the camera is small (W35.5 x H36.0 x D63.5 mm) and much less expensive than the airglow cameras used in the existing ground-based network (the Optical Mesosphere Thermosphere Imagers (OMTI) operated by the Solar-Terrestrial Environmental Laboratory, Nagoya University), and its 768 x 494 pixel CCD sensor is sensitive enough to detect perturbations in the mesospheric OH airglow emission. In this presentation, we report results of the performance evaluation of this camera made at Shigaraki (35-deg N, 136-deg E), Japan, which is one of the OMTI stations. By summing 15 images (i.e., a 1-min composite), we recognised clear gravity wave patterns of quality comparable to the OMTI images. Outreach and educational activities based on this research will also be reported.
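
    The 1-min composites mentioned above are, in essence, sums of co-registered short-exposure frames; a minimal sketch is shown below (frame alignment and dark/flat calibration are assumed to have been done already).

        import numpy as np

        def one_minute_composite(frames):
            # Sum ~15 co-registered frames to raise the signal-to-noise ratio of the
            # faint OH airglow structure; for uncorrelated noise, SNR grows roughly
            # as the square root of the number of summed frames.
            stack = np.stack([f.astype(np.float64) for f in frames])
            return stack.sum(axis=0)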

  20. Digital Camera Control for Faster Inspection

    NASA Technical Reports Server (NTRS)

    Brown, Katharine; Siekierski, James D.; Mangieri, Mark L.; Dekome, Kent; Cobarruvias, John; Piplani, Perry J.; Busa, Joel

    2009-01-01

    Digital Camera Control Software (DCCS) is a computer program for controlling a boom and a boom-mounted camera used to inspect the external surface of a space shuttle in orbit around the Earth. Running in a laptop computer in the space-shuttle crew cabin, DCCS commands integrated displays and controls. By means of a simple one-button command, a crewmember can view low- resolution images to quickly spot problem areas and can then cause a rapid transition to high- resolution images. The crewmember can command that camera settings apply to a specific small area of interest within the field of view of the camera so as to maximize image quality within that area. DCCS also provides critical high-resolution images to a ground screening team, which analyzes the images to assess damage (if any); in so doing, DCCS enables the team to clear initially suspect areas more quickly than would otherwise be possible and further saves time by minimizing the probability of re-imaging of areas already inspected. On the basis of experience with a previous version (2.0) of the software, the present version (3.0) incorporates a number of advanced imaging features that optimize crewmember capability and efficiency.

  1. Brute Force Matching Between Camera Shots and Synthetic Images from Point Clouds

    NASA Astrophysics Data System (ADS)

    Boerner, R.; Kröhnert, M.

    2016-06-01

    3D point clouds acquired by state-of-the-art terrestrial laser scanning (TLS) techniques provide spatial information with accuracies of up to a few millimetres. Unfortunately, common TLS data carry no spectral information about the covered scene. However, matching TLS data with images is important for monoplotting purposes and point cloud colouration. Well-established methods solve this issue by matching close-range images to point cloud data, either by fitting optical camera systems on top of the laser scanners or by using ground control points. The approach addressed in this paper aims at matching 2D image and 3D point cloud data for a freely moving camera within an environment covered by a large 3D point cloud, e.g. a 3D city model. The ability to move freely is a key advantage for augmented reality applications and real-time measurements. A so-called real image, captured by a smartphone camera, is therefore matched with a so-called synthetic image, generated by back-projecting the 3D point cloud data to a synthetic projection centre whose exterior orientation parameters match those of the real image, assuming an ideal distortion-free camera.
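
    A minimal sketch of generating such a synthetic image is given below, assuming an ideal distortion-free pinhole camera with an assumed exterior orientation (R, t); visibility handling (z-buffering, occlusions) is omitted, so this is illustrative rather than the authors' renderer.

        import numpy as np

        def render_synthetic_image(points_xyz, intensities, R, t, f, cx, cy, width, height):
            # Transform the point cloud into the assumed camera frame and project it
            # with a pinhole model to build the synthetic image used for matching.
            cam = (R @ points_xyz.T).T + t            # world -> camera coordinates
            in_front = cam[:, 2] > 0
            cam, vals = cam[in_front], intensities[in_front]
            u = (f * cam[:, 0] / cam[:, 2] + cx).astype(int)
            v = (f * cam[:, 1] / cam[:, 2] + cy).astype(int)
            ok = (u >= 0) & (u < width) & (v >= 0) & (v < height)
            img = np.zeros((height, width))
            img[v[ok], u[ok]] = vals[ok]              # last point written wins; no z-buffer
            return img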

  2. Dense Region of Impact Craters

    NASA Image and Video Library

    2011-09-23

    NASA Dawn spacecraft obtained this image of the giant asteroid Vesta with its framing camera on Aug. 14 2011. This image was taken through the camera clear filter. The image has a resolution of about 260 meters per pixel.

  3. Low-cost printing of computerised tomography (CT) images where there is no dedicated CT camera.

    PubMed

    Tabari, Abdulkadir M

    2007-01-01

    Many developing countries still rely on conventional hard copy images to transfer information among physicians. We have developed a low-cost alternative method of printing computerised tomography (CT) scan images where there is no dedicated camera. A digital camera is used to photograph images from the CT scan screen monitor. The images are then transferred to a PC via a USB port, before being printed on glossy paper using an inkjet printer. The method can be applied to other imaging modalities like ultrasound and MRI and appears worthy of emulation elsewhere in the developing world where resources and technical expertise are scarce.

  4. A small field of view camera for hybrid gamma and optical imaging

    NASA Astrophysics Data System (ADS)

    Lees, J. E.; Bugby, S. L.; Bhatia, B. S.; Jambi, L. K.; Alqahtani, M. S.; McKnight, W. R.; Ng, A. H.; Perkins, A. C.

    2014-12-01

    The development of compact low profile gamma-ray detectors has allowed the production of small field of view, hand held imaging devices for use at the patient bedside and in operating theatres. The combination of an optical and a gamma camera, in a co-aligned configuration, offers high spatial resolution multi-modal imaging giving a superimposed scintigraphic and optical image. This innovative introduction of hybrid imaging offers new possibilities for assisting surgeons in localising the site of uptake in procedures such as sentinel node detection. Recent improvements to the camera system along with results of phantom and clinical imaging are reported.

  5. Sensor noise camera identification: countering counter-forensics

    NASA Astrophysics Data System (ADS)

    Goljan, Miroslav; Fridrich, Jessica; Chen, Mo

    2010-01-01

    In camera identification using sensor noise, the camera that took a given image can be determined with high certainty by establishing the presence of the camera's sensor fingerprint in the image. In this paper, we develop methods to reveal counter-forensic activities in which an attacker estimates the camera fingerprint from a set of images and pastes it onto an image from a different camera with the intent to introduce a false alarm and, in doing so, frame an innocent victim. We start by classifying different scenarios based on the sophistication of the attacker's activity and the means available to her and to the victim, who wishes to defend herself. The key observation is that at least some of the images that were used by the attacker to estimate the fake fingerprint will likely be available to the victim as well. We describe the so-called "triangle test" that helps the victim reveal the attacker's malicious activity with high certainty under a wide range of conditions. This test is then extended to the case when none of the images that the attacker used to create the fake fingerprint are available to the victim but the victim has at least two forged images to analyze. We demonstrate the test's performance experimentally and investigate its limitations. The conclusion that can be made from this study is that planting a sensor fingerprint in an image without leaving a trace is significantly more difficult than previously thought.
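
    The fingerprint machinery behind such tests can be sketched as follows; real PRNU pipelines use wavelet denoising and more careful normalization, so this Gaussian-residual version is only an illustrative simplification.

        import numpy as np
        from scipy.ndimage import gaussian_filter

        def noise_residual(img, sigma=1.0):
            # Approximate noise residual: the image minus a denoised version of itself.
            img = img.astype(float)
            return img - gaussian_filter(img, sigma)

        def estimate_fingerprint(images):
            # Average the residuals of many images from one camera to estimate its
            # sensor (PRNU-like) fingerprint.
            return np.mean([noise_residual(im) for im in images], axis=0)

        def fingerprint_correlation(img, fingerprint):
            # Normalized correlation between a query image's residual and the
            # fingerprint; large values suggest the fingerprint is present.
            r = noise_residual(img).ravel()
            g = fingerprint.ravel()
            r = (r - r.mean()) / (r.std() + 1e-12)
            g = (g - g.mean()) / (g.std() + 1e-12)
            return float(np.dot(r, g) / r.size)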

  6. Bundle Adjustment-Based Stability Analysis Method with a Case Study of a Dual Fluoroscopy Imaging System

    NASA Astrophysics Data System (ADS)

    Al-Durgham, K.; Lichti, D. D.; Detchev, I.; Kuntze, G.; Ronsky, J. L.

    2018-05-01

    A fundamental task in photogrammetry is the temporal stability analysis of a camera/imaging-system's calibration parameters. This is essential to validate the repeatability of the parameters' estimation, to detect any behavioural changes in the camera/imaging system and to ensure precise photogrammetric products. Many stability analysis methods exist in the photogrammetric literature; each one has different methodological bases, and advantages and disadvantages. This paper presents a simple and rigorous stability analysis method that can be straightforwardly implemented for a single camera or an imaging system with multiple cameras. The basic collinearity model is used to capture differences between two calibration datasets, and to establish the stability analysis methodology. Geometric simulation is used as a tool to derive image and object space scenarios. Experiments were performed on real calibration datasets from a dual fluoroscopy (DF; X-ray-based) imaging system. The calibration data consisted of hundreds of images and thousands of image observations from six temporal points over a two-day period for a precise evaluation of the DF system stability. The stability of the DF system, for a single-camera analysis, was found to be within a range of 0.01 to 0.66 mm in terms of 3D coordinate root-mean-square error (RMSE), and 0.07 to 0.19 mm for the dual-camera analysis. To the authors' best knowledge, this work is the first to address the topic of DF stability analysis.
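
    For reference, the basic collinearity model mentioned above is usually written in the following standard form (notation varies between texts; this is the generic textbook statement, not the paper's exact parameterization):

        x = x_0 - f \frac{r_{11}(X - X_c) + r_{12}(Y - Y_c) + r_{13}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}, \qquad
        y = y_0 - f \frac{r_{21}(X - X_c) + r_{22}(Y - Y_c) + r_{23}(Z - Z_c)}{r_{31}(X - X_c) + r_{32}(Y - Y_c) + r_{33}(Z - Z_c)}

    where (x_0, y_0, f) are the interior orientation parameters, (X_c, Y_c, Z_c) is the perspective centre, and r_ij are the elements of the object-to-image rotation matrix; differences between calibration datasets propagate into image-space and object-space discrepancies through these equations.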

  7. A survey of camera error sources in machine vision systems

    NASA Astrophysics Data System (ADS)

    Jatko, W. B.

    In machine vision applications, such as an automated inspection line, television cameras are commonly used to record scene intensity in a computer memory or frame buffer. Scene data from the image sensor can then be analyzed with a wide variety of feature-detection techniques. Many algorithms found in textbooks on image processing make the implicit simplifying assumption of an ideal input image with clearly defined edges and uniform illumination. The ideal image model is helpful to aid the student in understanding the principles of operation, but when these algorithms are blindly applied to real-world images the results can be unsatisfactory. This paper examines some common measurement errors found in camera sensors and their underlying causes, and possible methods of error compensation. The role of the camera in a typical image-processing system is discussed, with emphasis on the origination of signal distortions. The effects of such things as lighting, optics, and sensor characteristics are considered.

  8. Brandaris 128 ultra-high-speed imaging facility: 10 years of operation, updates, and enhanced features

    NASA Astrophysics Data System (ADS)

    Gelderblom, Erik C.; Vos, Hendrik J.; Mastik, Frits; Faez, Telli; Luan, Ying; Kokhuis, Tom J. A.; van der Steen, Antonius F. W.; Lohse, Detlef; de Jong, Nico; Versluis, Michel

    2012-10-01

    The Brandaris 128 ultra-high-speed imaging facility has been updated over the last 10 years through modifications made to the camera's hardware and software. At its introduction the camera was able to record 6 sequences of 128 images (500 × 292 pixels) at a maximum frame rate of 25 Mfps. The segmented mode of the camera was revised to allow for subdivision of the 128 image sensors into arbitrary segments (1-128) with an inter-segment time of 17 μs. Furthermore, a region of interest can be selected to increase the number of recordings within a single run of the camera from 6 up to 125. By extending the imaging system with a laser-induced fluorescence setup, time-resolved ultra-high-speed fluorescence imaging of microscopic objects has been enabled. Minor updates to the system are also reported here.

  9. Flame Imaging System

    NASA Technical Reports Server (NTRS)

    Barnes, Heidi L. (Inventor); Smith, Harvey S. (Inventor)

    1998-01-01

    A system for imaging a flame and the background scene is discussed. The flame imaging system consists of two charge-coupled-device (CCD) cameras. One camera uses an 800 nm long pass filter, which during overcast conditions blocks sufficient background light so the hydrogen flame is brighter than the background light, and the second CCD camera uses a 1100 nm long pass filter, which blocks the solar background in full sunshine conditions such that the hydrogen flame is brighter than the solar background. Two electronic viewfinders convert the signal from the cameras into a visible image. The operator can select the appropriate filtered camera to use depending on the current light conditions. In addition, a narrow band pass filtered InGaAs sensor at 1360 nm triggers an audible alarm and a flashing LED if the sensor detects a flame, providing additional flame detection so the operator does not overlook a small flame.

  10. Single-camera stereo-digital image correlation with a four-mirror adapter: optimized design and validation

    NASA Astrophysics Data System (ADS)

    Yu, Liping; Pan, Bing

    2016-12-01

    A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system can convert a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system only needs a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential in determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.

  11. Disinfection of human enteric viruses in water by copper and silver in combination with low levels of chlorine.

    PubMed Central

    Abad, F X; Pintó, R M; Diez, J M; Bosch, A

    1994-01-01

    The efficacy of copper and silver ions, in combination with low levels of free chlorine (FC), was evaluated for the disinfection of hepatitis A virus (HAV), human rotavirus (HRV), human adenovirus, and poliovirus (PV) in water. HAV and HRV showed little inactivation in all conditions. PV showed more than a 4 log10 titer reduction in the presence of copper and silver combined with 0.5 mg of FC per liter or in the presence of 1 mg of FC per liter alone. Human adenovirus persisted longer than PV with the same treatments, although it persisted significantly less than HRV or HAV. The addition of 700 micrograms of copper and 70 micrograms of silver per liter did not enhance the inactivation rates after the exposure to 0.5 or 0.2 mg of FC per liter, although on some occasions it produced a level of inactivation similar to that induced by a higher dose of FC alone. Virus aggregates were observed in the presence of copper and silver ions, although not in the presence of FC alone. Our data indicate that the use of copper and silver ions in water systems may not provide a reliable alternative to high levels of FC for the disinfection of viral pathogens. Gene probe-based procedures were not adequate to monitor the presence of infectious HAV after disinfection. PV does not appear to be an adequate model viral strain to be used in disinfection studies. Bacteroides fragilis bacteriophages were consistently more resistant to disinfection than PV, suggesting that they would be more suitable indicators, although they survived significantly less than HAV or HRV. PMID:8074518

  12. Integration and Segregation of Default Mode Network Resting-State Functional Connectivity in Transition-Age Males with High-Functioning Autism Spectrum Disorder: A Proof-of-Concept Study.

    PubMed

    Joshi, Gagan; Arnold Anteraper, Sheeba; Patil, Kaustubh R; Semwal, Meha; Goldin, Rachel L; Furtak, Stephannie L; Chai, Xiaoqian Jenny; Saygin, Zeynep M; Gabrieli, John D E; Biederman, Joseph; Whitfield-Gabrieli, Susan

    2017-11-01

    The aim of this study is to assess the resting-state functional connectivity (RsFc) profile of the default mode network (DMN) in transition-age males with autism spectrum disorder (ASD). Resting-state blood oxygen level-dependent functional magnetic resonance imaging data were acquired from adolescent and young adult males with high-functioning ASD (n = 15) and from age-, sex-, and intelligence quotient-matched healthy controls (HCs; n = 16). The DMN was examined by assessing the positive and negative RsFc correlations of an average of the literature-based conceptualized major DMN nodes (medial prefrontal cortex [mPFC], posterior cingulate cortex, bilateral angular, and inferior temporal gyrus regions). RsFc data analysis was performed using a seed-driven approach. ASD was characterized by an altered pattern of RsFc in the DMN. The ASD group exhibited a weaker pattern of intra- and extra-DMN-positive and -negative RsFc correlations, respectively. In ASD, the strength of intra-DMN coupling was significantly reduced with the mPFC and the bilateral angular gyrus regions. In addition, the polarity of the extra-DMN correlation with the right hemispheric task-positive regions of fusiform gyrus and supramarginal gyrus was reversed from typically negative to positive in the ASD group. A wide variability was observed in the presentation of the RsFc profile of the DMN in both HC and ASD groups that revealed a distinct pattern of subgrouping using pattern recognition analyses. These findings imply that the functional architecture profile of the DMN is altered in ASD with weaker than expected integration and segregation of DMN RsFc. Future studies with larger sample sizes are warranted.

  13. Development of Automated Tracking System with Active Cameras for Figure Skating

    NASA Astrophysics Data System (ADS)

    Haraguchi, Tomohiko; Taki, Tsuyoshi; Hasegawa, Junichi

    This paper presents a system based on the control of PTZ cameras for automated real-time tracking of individual figure skaters moving on an ice rink. Video images of figure skating contain irregular trajectories, various postures, rapid movements, and varied costume colors, which makes it difficult to define features useful for image tracking. On the other hand, an ice rink has a limited area and uniform high intensity, and skating is always performed on ice. In the proposed system, an ice rink region is first extracted from a video image by the region growing method, and then a skater region is extracted using the rink shape information. In the camera control process, each camera is automatically panned and/or tilted so that the skater region is as close to the center of the image as possible; further, the camera is zoomed to maintain the skater image at an appropriate scale. The results of experiments performed for 10 training scenes show that the skater extraction rate is approximately 98%. Thus, it was concluded that tracking with camera control was successful for almost all the cases considered in the study.
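
    The camera control step described above can be caricatured by a simple proportional controller that drives the centroid of the extracted skater region toward the image centre; the gain, sign conventions, and omission of zoom handling here are assumptions for illustration, not the paper's controller.

        import numpy as np

        def pan_tilt_command(skater_mask, image_shape, gain=0.1):
            # Proportional pan/tilt correction from the offset between the skater-region
            # centroid and the image centre.
            ys, xs = np.nonzero(skater_mask)
            if xs.size == 0:
                return 0.0, 0.0                  # no skater region found; hold position
            cx, cy = xs.mean(), ys.mean()
            h, w = image_shape[:2]
            pan = gain * (cx - w / 2.0)          # positive -> pan right (assumed convention)
            tilt = gain * (cy - h / 2.0)         # positive -> tilt down (assumed convention)
            return pan, tilt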

  14. Achieving thermography with a thermal security camera using uncooled amorphous silicon microbolometer image sensors

    NASA Astrophysics Data System (ADS)

    Wang, Yu-Wei; Tesdahl, Curtis; Owens, Jim; Dorn, David

    2012-06-01

    Advancements in uncooled microbolometer technology over the last several years have opened up many commercial applications which had been previously cost prohibitive. Thermal technology is no longer limited to the military and government market segments. One type of thermal sensor with low NETD which is available in the commercial market segment is the uncooled amorphous silicon (α-Si) microbolometer image sensor. Typical thermal security cameras focus on providing the best image quality by auto tonemapping (contrast enhancing) the image, which provides the best contrast depending on the temperature range of the scene. While this may provide enough information to detect objects and activities, there are further benefits of being able to estimate the actual object temperatures in a scene. This thermographic ability can provide functionality beyond typical security cameras by being able to monitor processes. Example applications of thermography[2] with a thermal camera include: monitoring electrical circuits, industrial machinery, building thermal leaks, oil/gas pipelines, power substations, etc...[3][5] This paper discusses the methodology of estimating object temperatures by characterizing/calibrating different components inside a thermal camera utilizing an uncooled amorphous silicon microbolometer image sensor. Plots of system performance across camera operating temperatures will be shown.

  15. Calibration of Action Cameras for Photogrammetric Purposes

    PubMed Central

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-01-01

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult task because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution. PMID:25237898

  16. Calibration of action cameras for photogrammetric purposes.

    PubMed

    Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo

    2014-09-18

    The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and more importantly (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult task because of the short focal length of the camera and its wide-angle lens that is used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3 camera, which can provide still images at up to 12 Mp and video at up to 8 Mp resolution.
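
    A generic OpenCV-based calibration workflow of the kind the abstract alludes to is sketched below; the chessboard pattern size and file paths are hypothetical, and for very wide-angle lenses the cv2.fisheye model may be a better fit than the standard distortion model used here.

        import glob
        import cv2
        import numpy as np

        pattern = (9, 6)                                   # inner chessboard corners (assumed)
        objp = np.zeros((pattern[0] * pattern[1], 3), np.float32)
        objp[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

        obj_points, img_points = [], []
        for path in glob.glob("calib_images/*.jpg"):       # hypothetical folder of GoPro stills
            gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
            found, corners = cv2.findChessboardCorners(gray, pattern)
            if found:
                obj_points.append(objp)
                img_points.append(corners)

        # Estimate the camera matrix and distortion coefficients, then undistort a frame.
        rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
            obj_points, img_points, gray.shape[::-1], None, None)
        undistorted = cv2.undistort(cv2.imread("scene.jpg"), K, dist)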

  17. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  18. Can Commercial Digital Cameras Be Used as Multispectral Sensors? A Crop Monitoring Test

    PubMed Central

    Lebourgeois, Valentine; Bégué, Agnès; Labbé, Sylvain; Mallavan, Benjamin; Prévot, Laurent; Roux, Bruno

    2008-01-01

    The use of consumer digital cameras or webcams to characterize and monitor different features has become prevalent in various domains, especially in environmental applications. Despite some promising results, such digital camera systems generally suffer from signal aberrations due to the on-board image processing systems and thus offer limited quantitative data acquisition capability. The objective of this study was to test a series of radiometric corrections having the potential to reduce radiometric distortions linked to camera optics and environmental conditions, and to quantify the effects of these corrections on our ability to monitor crop variables. In 2007, we conducted a five-month experiment on sugarcane trial plots using original RGB and modified RGB (Red-Edge and NIR) cameras fitted onto a light aircraft. The camera settings were kept unchanged throughout the acquisition period and the images were recorded in JPEG and RAW formats. These images were corrected to eliminate the vignetting effect, and normalized between acquisition dates. Our results suggest that 1) the use of unprocessed image data did not improve the results of image analyses; 2) vignetting had a significant effect, especially for the modified camera, and 3) normalized vegetation indices calculated with vignetting-corrected images were sufficient to correct for scene illumination conditions. These results are discussed in the light of the experimental protocol and recommendations are made for the use of these versatile systems for quantitative remote sensing of terrestrial surfaces. PMID:27873930
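
    Two of the corrections discussed above, dividing out a vignetting (flat-field) image and computing a normalized vegetation index from the corrected bands, can be sketched as follows; the centre-normalized flat field and the NDVI formula are standard choices assumed here, not necessarily the exact processing used in the study.

        import numpy as np

        def correct_vignetting(band, flat_field):
            # Divide out a per-pixel vignetting image, normalized to its centre pixel.
            flat = flat_field.astype(float)
            flat = flat / flat[flat.shape[0] // 2, flat.shape[1] // 2]
            return band.astype(float) / np.clip(flat, 1e-6, None)

        def ndvi(nir, red):
            # Normalized Difference Vegetation Index from corrected NIR and red bands.
            nir = nir.astype(float)
            red = red.astype(float)
            return (nir - red) / np.clip(nir + red, 1e-6, None)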

  19. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  20. Earth on the Horizon

    NASA Image and Video Library

    2004-03-13

    This is the first image ever taken of Earth from the surface of a planet beyond the Moon. It was taken by the Mars Exploration Rover Spirit one hour before sunrise on the 63rd martian day, or sol, of its mission. Earth is the tiny white dot in the center. The image is a mosaic of images taken by the rover's navigation camera showing a broad view of the sky, and an image taken by the rover's panoramic camera of Earth. The contrast in the panoramic camera image was increased two times to make Earth easier to see. http://photojournal.jpl.nasa.gov/catalog/PIA05560

  1. Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.

    2007-09-01

    We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency seems lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal components by subtracting images taken at exactly the same exposure. Temporal noise appears to be significantly lower than spatial noise.
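
    The MTF measurement described above (display a thin line, Fourier transform, post-process) can be sketched as a line-spread-function calculation; the background handling and normalization below are simplifications of the actual pre- and post-processing.

        import numpy as np

        def mtf_from_line_image(line_img, pixel_pitch):
            # Collapse the image along the direction of the displayed vertical line to get
            # the line spread function (LSF), then take the normalized magnitude of its FFT.
            lsf = line_img.astype(float).mean(axis=0)
            lsf = lsf - lsf.min()                             # crude background removal
            spectrum = np.abs(np.fft.rfft(lsf))
            mtf = spectrum / spectrum[0]                      # unity at zero frequency
            freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch)  # cycles per unit length
            return freqs, mtf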

  2. PSMA-Specific Theranostic Nanoplex for Combination of TRAIL Gene and 5-FC Prodrug Therapy of Prostate Cancer

    PubMed Central

    Chen, Zhihang; Penet, Marie-France; Krishnamachary, Balaji; Banerjee, Sangeeta R.; Pomper, Martin G.; Bhujwalla, Zaver M.

    2015-01-01

    Metastatic prostate cancer causes significant morbidity and mortality and there is a critical unmet need for effective treatments. We have developed a theranostic nanoplex platform for combined imaging and therapy of prostate cancer. Our prostate-specific membrane antigen (PSMA) targeted nanoplex is designed to deliver plasmid DNA encoding tumor necrosis factor related apoptosis-inducing ligand (TRAIL), together with bacterial cytosine deaminase (bCD) as a prodrug enzyme. Nanoplex specificity was tested using two variants of human PC3 prostate cancer cells in culture and in tumor xenografts, one with high PSMA expression and the other with negligible expression levels. The expression of EGFP-TRAIL was demonstrated by fluorescence optical imaging and real-time PCR. Noninvasive 19F MR spectroscopy detected the conversion of the nontoxic prodrug 5-fluorocytosine (5-FC) to cytotoxic 5-fluorouracil (5-FU) by bCD. The combination strategy of TRAIL gene and 5-FC/bCD therapy showed significant inhibition of the growth of prostate cancer cells and tumors. These data demonstrate that the PSMA-specific theranostic nanoplex can deliver gene therapy and prodrug enzyme therapy concurrently for precision medicine in metastatic prostate cancer. PMID:26706476

  3. PubMed Central

    Baum, S.; Sillem, M.; Ney, J. T.; Baum, A.; Friedrich, M.; Radosa, J.; Kramer, K. M.; Gronwald, B.; Gottschling, S.; Solomayer, E. F.; Rody, A.; Joukhadar, R.

    2017-01-01

    Introduction Minimally invasive operative techniques are being used increasingly in gynaecological surgery. The expansion of the laparoscopic operation spectrum is in part the result of improved imaging. This study investigates the practical advantages of using 3D cameras in routine surgical practice. Materials and Methods Two different 3-dimensional camera systems were compared with a 2-dimensional HD system; the operating surgeon's experiences were documented immediately postoperatively using a questionnaire. Results Significant advantages were reported for suturing and cutting of anatomical structures when using the 3D compared to 2D camera systems. There was only a slight advantage for coagulating. The use of 3D cameras significantly improved the general operative visibility and in particular the representation of spatial depth compared to 2-dimensional images. There was no significant advantage for image width. Depiction of adhesions and retroperitoneal neural structures was significantly improved by the stereoscopic cameras, though this did not apply to blood vessels, ureter, uterus or ovaries. Conclusion 3-dimensional cameras were particularly advantageous for the depiction of fine anatomical structures due to improved spatial depth representation compared to 2D systems. 3D cameras provide the operating surgeon with a monitor image that more closely resembles actual anatomy, thus simplifying laparoscopic procedures. PMID:28190888

  4. Line-Constrained Camera Location Estimation in Multi-Image Stereomatching.

    PubMed

    Donné, Simon; Goossens, Bart; Philips, Wilfried

    2017-08-23

    Stereomatching is an effective way of acquiring dense depth information from a scene when active measurements are not possible. So-called lightfield methods take a snapshot from many camera locations along a defined trajectory (usually uniformly spaced along a line or on a regular grid; we will assume a linear trajectory) and use this information to compute accurate depth estimates. However, they require the locations for each of the snapshots to be known: the disparity of an object between images is related to both the distance of the camera to the object and the distance between the camera positions for both images. Existing solutions use sparse feature matching for camera location estimation. In this paper, we propose a novel method that uses dense correspondences to do the same, leveraging an existing depth estimation framework to also yield the camera locations along the line. We illustrate the effectiveness of the proposed technique for camera location estimation both visually for the rectification of epipolar plane images and quantitatively with its effect on the resulting depth estimation. Our proposed approach yields a valid alternative to sparse techniques, while still being executed in a reasonable time on a graphics card due to its highly parallelizable nature.
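
    The geometric relation alluded to above is the standard stereo disparity equation (a textbook relation, not specific to this paper):

        d = \frac{f\,B}{Z}

    where d is the disparity of a point between two views, f the focal length in pixels, B the baseline between the two camera positions, and Z the depth of the point; given dense correspondences and a depth estimate, the unknown baselines along the linear trajectory are therefore constrained.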

  5. Exploring the feasibility of iris recognition for visible spectrum iris images obtained using smartphone camera

    NASA Astrophysics Data System (ADS)

    Trokielewicz, Mateusz; Bartuzi, Ewelina; Michowska, Katarzyna; Andrzejewska, Antonina; Selegrat, Monika

    2015-09-01

    In the age of modern, hyperconnected society that increasingly relies on mobile devices and solutions, implementing a reliable and accurate biometric system employing iris recognition presents new challenges. Typical biometric systems employing iris analysis require expensive and complicated hardware. We therefore explore an alternative way using visible spectrum iris imaging. This paper aims at answering several questions related to applying iris biometrics for images obtained in the visible spectrum using a smartphone camera. Can irides be successfully and effortlessly imaged using a smartphone's built-in camera? Can existing iris recognition methods perform well when presented with such images? The main advantage of using near-infrared (NIR) illumination in dedicated iris recognition cameras is good performance almost independent of the iris color and pigmentation. Are the images obtained from a smartphone's camera of sufficient quality even for the dark irides? We present experiments incorporating simple image preprocessing to find the best visibility of iris texture, followed by a performance study to assess whether iris recognition methods originally aimed at NIR iris images perform well with visible light images. To the best of our knowledge, this is the first comprehensive analysis of iris recognition performance using a database of high-quality images collected in visible light using the smartphone's flashlight together with the application of commercial off-the-shelf (COTS) iris recognition methods.

  6. Semi-automated camera trap image processing for the detection of ungulate fence crossing events.

    PubMed

    Janzen, Michael; Visser, Kaitlyn; Visscher, Darcy; MacLeod, Ian; Vujnovic, Dragomir; Vujnovic, Ksenija

    2017-09-27

    Remote cameras are an increasingly important tool for ecological research. While remote camera traps collect field data with minimal human attention, the images they collect require post-processing and characterization before they can be ecologically and statistically analyzed, requiring the input of substantial time and money from researchers. The need for post-processing is due, in part, to a high incidence of non-target images. We developed a stand-alone, semi-automated computer program to aid in image processing, categorization, and data reduction by employing background subtraction and histogram rules. Unlike previous work that uses video as input, our program uses still camera trap images. The program was developed for an ungulate fence crossing project and tested against an image dataset which had been previously processed by a human operator. Our program placed images into categories representing the confidence that a particular sequence of images contained a fence crossing event. This resulted in a reduction of 54.8% of images that required further human operator characterization while retaining 72.6% of the known fence crossing events. This program can provide researchers using remote camera data the ability to reduce the time and cost required for image post-processing and characterization. Further, we discuss how this procedure might be generalized to situations not specifically related to animal use of linear features.
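    As a rough illustration of the kind of background-subtraction and changed-pixel rule described above (the authors' program is not reproduced here; the file names, thresholds, and the assumption of a fixed, registered camera view are all hypothetical), a minimal Python sketch might look like this:

```python
# Minimal sketch (assumed workflow, not the authors' program): flag camera-trap
# stills that differ from an empty-scene reference using background subtraction
# and a simple changed-pixel-count rule.
import cv2
import numpy as np

def likely_event(reference_path, image_path, diff_thresh=30, frac_thresh=0.02):
    ref = cv2.imread(reference_path, cv2.IMREAD_GRAYSCALE)
    img = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    diff = cv2.absdiff(ref, img)                       # background subtraction
    changed = diff > diff_thresh                       # per-pixel change rule
    frac = np.count_nonzero(changed) / changed.size    # fraction of changed pixels
    return frac > frac_thresh, frac

# Example call (hypothetical file names):
# event, frac = likely_event("empty_scene.jpg", "IMG_0142.jpg")
```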

  7. iPhone 4s and iPhone 5s Imaging of the Eye

    PubMed Central

    Jalil, Maaz; Ferenczy, Sandor R.; Shields, Carol L.

    2017-01-01

    Background/Aims To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. Methods A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. Results In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using the standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through the oculars. Both iPhones achieved fundus imaging using the standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. Conclusions iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable. PMID:28275604

  8. Reduced integration and differentiation of the imitation network in autism: A combined functional connectivity magnetic resonance imaging and diffusion-weighted imaging study.

    PubMed

    Fishman, Inna; Datko, Michael; Cabrera, Yuliana; Carper, Ruth A; Müller, Ralph-Axel

    2015-12-01

    Converging evidence indicates that brain abnormalities in autism spectrum disorder (ASD) involve atypical network connectivity, but few studies have integrated functional with structural connectivity measures. This multimodal investigation examined functional and structural connectivity of the imitation network in children and adolescents with ASD, and its links with clinical symptoms. Resting state functional magnetic resonance imaging and diffusion-weighted imaging were performed in 35 participants with ASD and 35 typically developing controls, aged 8 to 17 years, matched for age, gender, intelligence quotient, and head motion. Within-network analyses revealed overall reduced functional connectivity (FC) between distributed imitation regions in the ASD group. Whole brain analyses showed that underconnectivity in ASD occurred exclusively in regions belonging to the imitation network, whereas overconnectivity was observed between imitation nodes and extraneous regions. Structurally, reduced fractional anisotropy and increased mean diffusivity were found in white matter tracts directly connecting key imitation regions with atypical FC in ASD. These differences in microstructural organization of white matter correlated with weaker FC and greater ASD symptomatology. Findings demonstrate atypical connectivity of the brain network supporting imitation in ASD, characterized by a highly specific pattern. This pattern of underconnectivity within, but overconnectivity outside the functional network is in contrast with typical development and suggests reduced network integration and differentiation in ASD. Our findings also indicate that atypical connectivity of the imitation network may contribute to ASD clinical symptoms, highlighting the role of this fundamental social cognition ability in the pathophysiology of ASD. © 2015 American Neurological Association.

  9. Portable, low-priced retinal imager for eye disease screening

    NASA Astrophysics Data System (ADS)

    Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto

    2014-02-01

    The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) unique optical and illumination design for a small form factor; 4) exploitation of autofocus technology built into present consumer digital SLR cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.

  10. Optical Transient Monitor (OTM) for BOOTES Project

    NASA Astrophysics Data System (ADS)

    Páta, P.; Bernas, M.; Castro-Tirado, A. J.; Hudec, R.

    2003-04-01

    The Optical Transient Monitor (OTM) is software for controlling the three wide- and ultra-wide-field cameras of the BOOTES (Burst Observer and Optical Transient Exploring System) station. The OTM is PC based and is a powerful tool for taking images from two SBIG CCD cameras at the same time or from one camera only. The control program for the BOOTES cameras is Windows 98 or MS-DOS based; a version for Windows 2000 is now in preparation. There are five main supported modes of operation. The OTM program can control the cameras and evaluate image data without human interaction.

  11. Noise and sensitivity of x-ray framing cameras at Nike (abstract)

    NASA Astrophysics Data System (ADS)

    Pawley, C. J.; Deniz, A. V.; Lehecka, T.

    1999-01-01

    X-ray framing cameras are the most widely used tool for radiographing density distributions in laser and Z-pinch driven experiments. The x-ray framing cameras that were developed specifically for experiments on the Nike laser system are described. One of these cameras has been coupled to a CCD camera and was tested for resolution and image noise using both electrons and x rays. The largest source of noise in the images was found to be due to low quantum detection efficiency of x-ray photons.

  12. Recognizable-image selection for fingerprint recognition with a mobile-device camera.

    PubMed

    Lee, Dongjae; Choi, Kyoungtaek; Choi, Heeseung; Kim, Jaihie

    2008-02-01

    This paper proposes a recognizable-image selection algorithm for fingerprint-verification systems that use a camera embedded in a mobile device. A recognizable image is defined as a fingerprint image that includes characteristics sufficient to discriminate an individual from other people. While general camera systems obtain focused images by using various gradient measures to estimate high-frequency components, mobile cameras cannot acquire recognizable images in the same way because the obtained images may not be adequate for fingerprint recognition, even if they are properly focused. A recognizable image has to meet the following two conditions. First, the valid region in a recognizable image should be sufficiently large compared with that of nonrecognizable images. Here, a valid region is a well-focused part, and ridges in the region are clearly distinguishable from valleys. In order to select valid regions, this paper proposes a new focus-measurement algorithm using the secondary partial derivatives and a quality estimation utilizing the coherence and symmetry of the gradient distribution. Second, the rolling and pitching degrees of a finger measured from the camera plane should be within some limit for a recognizable image. The position of a core point and the contour of the finger are used to estimate the degrees of rolling and pitching. Experimental results show that our proposed method selects valid regions and estimates the degrees of rolling and pitching properly. In addition, fingerprint-verification performance is improved by detecting the recognizable images.
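    A block-wise focus measure built from second partial derivatives can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' algorithm; the block size and variance threshold are assumed values. It scores each block by the variance of its Laplacian and reports the fraction of blocks judged well focused.

```python
# Hedged sketch: a block-wise focus measure based on second partial derivatives
# (here, Laplacian variance). Block size and threshold are assumed values.
import cv2
import numpy as np

def focus_map(gray, block=32):
    lap = cv2.Laplacian(gray.astype(np.float64), cv2.CV_64F)
    h, w = gray.shape
    scores = np.zeros((h // block, w // block))
    for i in range(scores.shape[0]):
        for j in range(scores.shape[1]):
            patch = lap[i*block:(i+1)*block, j*block:(j+1)*block]
            scores[i, j] = patch.var()       # high variance = well focused
    return scores

def valid_region_fraction(gray, block=32, thresh=50.0):
    scores = focus_map(gray, block)
    return float(np.mean(scores > thresh))   # fraction of in-focus blocks
```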

  13. NPS assessment of color medical displays using a monochromatic CCD camera

    NASA Astrophysics Data System (ADS)

    Roehrig, Hans; Gu, Xiliang; Fan, Jiahua

    2012-02-01

    This paper presents an approach to Noise Power Spectrum (NPS) assessment of color medical displays without using an expensive imaging colorimeter. Uniform R, G, and B color patterns were shown on the display under study, and the images were taken using a high-resolution monochromatic camera. A colorimeter was used to calibrate the camera images. Synthetic intensity images were formed by the weighted sum of the R, G, B and dark-screen images. Finally, the NPS analysis was conducted on the synthetic images. The proposed method replaces an expensive imaging colorimeter for NPS evaluation, which also suggests a potential solution for routine color medical display QA/QC in the clinical area, especially when imaging of display devices is desired.
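    Once a synthetic intensity image has been formed, a 2-D NPS estimate is typically obtained by averaging the squared FFT magnitude of mean-subtracted regions of interest. The Python sketch below assumes one common normalization convention (pixel area divided by the number of ROI samples); the paper's exact normalization is not stated here, so treat the scaling as illustrative.

```python
# Rough sketch (assumed normalization; conventions vary between labs):
# estimate a 2-D noise power spectrum from a flat-field intensity image by
# averaging |FFT|^2 over mean-subtracted ROIs.
import numpy as np

def nps_2d(image, roi=128, pixel_pitch=1.0):
    h, w = image.shape
    spectra = []
    for y in range(0, h - roi + 1, roi):
        for x in range(0, w - roi + 1, roi):
            patch = image[y:y+roi, x:x+roi].astype(np.float64)
            patch -= patch.mean()                     # remove the mean level
            f = np.fft.fftshift(np.fft.fft2(patch))
            spectra.append(np.abs(f) ** 2)
    # scale by pixel area over the number of samples in one ROI
    return np.mean(spectra, axis=0) * (pixel_pitch ** 2) / (roi * roi)
```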

  14. Concept of a photon-counting camera based on a diffraction-addressed Gray-code mask

    NASA Astrophysics Data System (ADS)

    Morel, Sébastien

    2004-09-01

    A new concept of a photon-counting camera for fast and low-light-level imaging applications is introduced. The possible spectrum covered by this camera ranges from visible light to gamma rays, depending on the device used to transform an incoming photon into a burst of visible photons (photo-event spot) localized in an (x,y) image plane. It is actually an evolution of the existing "PAPA" (Precision Analog Photon Address) camera that was designed for visible photons. The improvement comes from simplified optics. The new camera transforms, by diffraction, each photo-event spot from an image intensifier or a scintillator into a cross-shaped pattern, which is projected onto a specific Gray-code mask. The photo-event position is then extracted from the signal given by an array of avalanche photodiodes (or photomultiplier tubes, alternatively) downstream of the mask. After a detailed explanation of this camera concept, which we have called "DIAMICON" (DIffraction Addressed Mask ICONographer), we briefly discuss technical solutions for building such a camera.

  15. Bupropion Shows Different Effects on Brain Functional Connectivity in Patients With Internet-Based Gambling Disorder and Internet Gaming Disorder.

    PubMed

    Bae, Sujin; Hong, Ji Sun; Kim, Sun Mi; Han, Doug Hyun

    2018-01-01

    Internet gaming disorder (IGD) and gambling disorder (GD) share similar clinical characteristics but show different brain functional connectivity patterns. Bupropion is known to be effective for the treatment of patients with IGD and GD. We hypothesized that bupropion may be effective for the treatment of Internet-based gambling disorder (ibGD) and IGD and that the connections between the default mode network (DMN) and cognitive control network (CCN) would differ between ibGD and IGD patients after 12 weeks of bupropion treatment. Sixteen patients with IGD, 15 patients with ibGD, and 15 healthy subjects were recruited for this study. At baseline and after 12 weeks of bupropion treatment, the clinical symptoms of patients with IGD or ibGD were assessed, and brain activity was evaluated using resting state functional magnetic resonance imaging. After the 12-week bupropion treatment, clinical symptoms, including the severity of IGD or GD, depressive symptoms, attention, and impulsivity improved in both groups. In the IGD group, the functional connectivity (FC) within the posterior DMN as well as the FC between the DMN and the CCN decreased following treatment. Moreover, the FC within the DMN in the IGD group was positively correlated with changes in Young Internet Addiction Scale scores after the bupropion treatment period. In the ibGD group, the FC within the posterior DMN decreased while the FC within the CCN increased after the bupropion treatment period. Moreover, the FC within the CCN in the ibGD group was significantly greater than that in the IGD group. Bupropion was effective in improving clinical symptoms in patients with IGD and ibGD. However, there were differences in the pharmacodynamics between the two groups. After 12 weeks of bupropion treatment, the FC within the DMN as well as between the DMN and CCN decreased in patients with IGD, whereas the FC within the CCN increased in patients with ibGD.

  16. Reduced anterior temporal and hippocampal functional connectivity during face processing discriminates individuals with social anxiety disorder from healthy controls and panic disorder, and increases following treatment.

    PubMed

    Pantazatos, Spiro P; Talati, Ardesheer; Schneier, Franklin R; Hirsch, Joy

    2014-01-01

    Group functional magnetic resonance imaging (fMRI) studies suggest that anxiety disorders are associated with anomalous brain activation and functional connectivity (FC). However, brain-based features sensitive enough to discriminate individual subjects with a specific anxiety disorder and that track symptom severity longitudinally, desirable qualities for putative disorder-specific biomarkers, remain to be identified. Blood oxygen level-dependent (BOLD) fMRI during emotional face perceptual tasks and a new, large-scale and condition-dependent FC and machine learning approach were used to identify features (pair-wise correlations) that discriminated patients with social anxiety disorder (SAD, N=16) from controls (N=19). We assessed whether these features discriminated SAD from panic disorder (PD, N=16), and SAD from controls in an independent replication sample that performed a similar task at baseline (N: SAD=15, controls=17) and following 8-weeks paroxetine treatment (N: SAD=12, untreated controls=7). High SAD vs HCs discrimination (area under the ROC curve, AUC, arithmetic mean of sensitivity and specificity) was achieved with two FC features during unattended neutral face perception (AUC=0.88, P<0.05 corrected). These features also discriminated SAD vs PD (AUC=0.82, P=0.0001) and SAD vs HCs in the independent replication sample (FC during unattended angry face perception, AUC=0.71, P=0.01). The most informative FC was left hippocampus-left temporal pole, which was reduced in both SAD samples (replication sample P=0.027), and this FC increased following the treatment (post>pre, t(11)=2.9, P=0.007). In conclusion, SAD is associated with reduced FC between left temporal pole and left hippocampus during face perception, and results suggest promise for emerging FC-based biomarkers for SAD diagnosis and treatment effects.

  17. High-speed imaging using 3CCD camera and multi-color LED flashes

    NASA Astrophysics Data System (ADS)

    Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis

    2017-11-01

    This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light-emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral shuttering, where a high-speed image sequence is captured using short-duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low-cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found, and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of sufficient quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.

  18. A position and attitude vision measurement system for wind tunnel slender model

    NASA Astrophysics Data System (ADS)

    Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi

    2014-11-01

    A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one is placed at the side of the model and the other is placed where it can look up at the model. Simple symbols are set on the model. The main idea of the system is based on an image-matching technique between projection images of the 3D digital model and the images captured by the cameras. First, we estimate the pitch angle, the roll angle, and the position of the centroid of the model by recognizing the symbols in the images captured by the side camera. Then, based on the estimated attitude information and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched with the image captured by the upward-looking camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments are conducted and the results show that the maximal error of attitude measurement is less than 0.05°, which can meet the demand of tests in the wind tunnel.

  19. Applying image quality in cell phone cameras: lens distortion

    NASA Astrophysics Data System (ADS)

    Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje

    2009-01-01

    This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism which tries to predict overall image quality from individual image quality attributes and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image used is JPEG compressed and the cell-phone camera is set to 'auto' mode. As the framework requires that the individual attributes be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberration (LCA). The goal of this paper is to present the framework of this pilot project, starting with the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; therefore both objective and subjective evaluations were used. A major distinction in the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, therefore a radial mapping/modeling cannot be used in this case.

  20. Stepping over obstacles: gait patterns of healthy young and old adults.

    PubMed

    Chen, H C; Ashton-Miller, J A; Alexander, N B; Schultz, A B

    1991-11-01

    Falls associated with tripping over an obstacle can be devastating to elderly individuals, yet little is known about the strategies used for stepping over obstacles by either old or young adults. The gait of gender-matched groups of 24 young and 24 old healthy adults (mean ages 22 and 71 years) was studied during a 4 m approach to and while stepping over obstacles of 0, 25, 51, or 152 mm height and in level obstacle-free walking. Optoelectronic cameras and recorders were used to record approach and obstacle crossing speeds as well as bilateral lower extremity kinematic parameters that described foot placement and movement trajectories relative to the obstacle. The results showed that age had no effect on minimum swing foot clearance (FC) over an obstacle. For the 25 mm obstacle, mean FC was 64 mm, or approximately three times that used in level gait; FC increased nonlinearly with obstacle height for all subjects. Although no age differences were found in obstacle-free gait, old adults exhibited a significantly more conservative strategy when crossing obstacles, with slower crossing speed, shorter step length, and shorter obstacle-heel strike distance. In addition, the old adults crossed the obstacle so that it was 10% further forward in their obstacle-crossing step. Although all subjects successfully avoided the riskiest form of obstacle contact, tripping, 4/24 healthy old adults stepped on an obstacle, demonstrating an increased risk for obstacle contact with age.

  1. Abnormal brain functional connectivity leads to impaired mood and cognition in hyperthyroidism: a resting-state functional MRI study

    PubMed Central

    Li, Ling; Zhi, Mengmeng; Hou, Zhenghua; Zhang, Yuqun; Yue, Yingying; Yuan, Yonggui

    2017-01-01

    Patients with hyperthyroidism frequently have neuropsychiatric complaints such as lack of concentration, poor memory, depression, anxiety, nervousness, and irritability, suggesting brain dysfunction. However, the underlying process of these symptoms remains unclear. Using resting-state functional magnetic resonance imaging (rs-fMRI), we depicted the altered graph theoretical metric degree centrality (DC) and seed-based resting-state functional connectivity (FC) in 33 hyperthyroid patients relative to 33 healthy controls. The peak points of significantly altered DC between the two groups were defined as the seed regions to calculate FC to the whole brain. Then, partial correlation analyses were performed between abnormal DC, FC and neuropsychological performances, as well as some clinical indexes. The decreased intrinsic functional connectivity in the posterior lobe of cerebellum (PLC) and medial frontal gyrus (MeFG), as well as the abnormal seed-based FC anchored in default mode network (DMN), attention network, visual network and cognitive network in this study, possibly constitutes the latent mechanism for emotional and cognitive changes in hyperthyroidism, including anxiety and impaired processing speed. PMID:28009983

  2. Abnormal brain functional connectivity leads to impaired mood and cognition in hyperthyroidism: a resting-state functional MRI study.

    PubMed

    Li, Ling; Zhi, Mengmeng; Hou, Zhenghua; Zhang, Yuqun; Yue, Yingying; Yuan, Yonggui

    2017-01-24

    Patients with hyperthyroidism frequently have neuropsychiatric complaints such as lack of concentration, poor memory, depression, anxiety, nervousness, and irritability, suggesting brain dysfunction. However, the underlying process of these symptoms remains unclear. Using resting-state functional magnetic resonance imaging (rs-fMRI), we depicted the altered graph theoretical metric degree centrality (DC) and seed-based resting-state functional connectivity (FC) in 33 hyperthyroid patients relative to 33 healthy controls. The peak points of significantly altered DC between the two groups were defined as the seed regions to calculate FC to the whole brain. Then, partial correlation analyses were performed between abnormal DC, FC and neuropsychological performances, as well as some clinical indexes. The decreased intrinsic functional connectivity in the posterior lobe of cerebellum (PLC) and medial frontal gyrus (MeFG), as well as the abnormal seed-based FC anchored in default mode network (DMN), attention network, visual network and cognitive network in this study, possibly constitutes the latent mechanism for emotional and cognitive changes in hyperthyroidism, including anxiety and impaired processing speed.

  3. Embedded processor extensions for image processing

    NASA Astrophysics Data System (ADS)

    Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy

    2008-04-01

    The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. Use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a high degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is possible for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.

  4. A new compact, high sensitivity neutron imaging system

    NASA Astrophysics Data System (ADS)

    Caillaud, T.; Landoas, O.; Briat, M.; Rossé, B.; Thfoin, I.; Philippe, F.; Casner, A.; Bourgade, J. L.; Disdier, L.; Glebov, V. Yu.; Marshall, F. J.; Sangster, T. C.; Park, H. S.; Robey, H. F.; Amendt, P.

    2012-10-01

    We have developed a new small neutron imaging system (SNIS) diagnostic for the OMEGA laser facility. The SNIS uses a penumbral coded aperture and has been designed to record images from low-yield (10^9-10^10 neutrons) implosions such as those using deuterium as the fuel. This camera was tested at OMEGA in 2009 on a rugby hohlraum energetics experiment where it recorded an image at a yield of 1.4 × 10^10. The resolution of this image was 54 μm and the camera was located only 4 meters from target chamber centre. We recently improved the instrument by adding a cooled CCD camera. The sensitivity of the new camera has been fully characterized using a linear accelerator and a 60Co γ-ray source. The calibration showed that the signal-to-noise ratio could be improved by using raw binning detection.

  5. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving imaging quality in low-light shooting is one of users' needs. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously captures both a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer to only those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. By constructing an experimental system of the color-plus-mono camera, we demonstrate that the BJND-aware denoising and selective detail transfer is helpful in improving the image quality during low light shooting.
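    To make the basic idea of detail transfer concrete, the following greatly simplified Python sketch adds the high-frequency layer of the mono image to the luminance channel of the color image. It is not the BJND-aware, guided-filter-based method of the paper; it assumes the two images are already registered and the same size, and the Gaussian sigma is an arbitrary example value.

```python
# Greatly simplified sketch of the general detail-transfer idea only (not the
# BJND-aware method of the paper). Assumes the mono image is registered to,
# and the same size as, the color image.
import cv2
import numpy as np

def fuse_color_mono(color_bgr, mono_gray, sigma=3):
    ycrcb = cv2.cvtColor(color_bgr, cv2.COLOR_BGR2YCrCb)
    mono = mono_gray.astype(np.float32)
    base = cv2.GaussianBlur(mono, (0, 0), sigma)     # low-frequency base layer
    detail = mono - base                             # high-frequency detail layer
    y = np.clip(ycrcb[..., 0].astype(np.float32) + detail, 0, 255)
    ycrcb[..., 0] = y.astype(np.uint8)
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```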

  6. Functional Connectivity of Human Chewing

    PubMed Central

    Quintero, A.; Ichesco, E.; Schutt, R.; Myers, C.; Peltier, S.; Gerstner, G.E.

    2013-01-01

    Mastication is one of the most important orofacial functions. The neurobiological mechanisms of masticatory control have been investigated in animal models, but less so in humans. This project used functional connectivity magnetic resonance imaging (fcMRI) to assess the positive temporal correlations among activated brain areas during a gum-chewing task. Twenty-nine healthy young-adults underwent an fcMRI scanning protocol while they chewed gum. Seed-based fcMRI analyses were performed with the motor cortex and cerebellum as regions of interest. Both left and right motor cortices were reciprocally functionally connected and functionally connected with the post-central gyrus, cerebellum, cingulate cortex, and precuneus. The cerebellar seeds showed functional connections with the contralateral cerebellar hemispheres, bilateral sensorimotor cortices, left superior temporal gyrus, and left cingulate cortex. These results are the first to identify functional central networks engaged during mastication. PMID:23355525

  7. Novel Robotic Tools for Piping Inspection and Repair, Phase 1

    DTIC Science & Technology

    2014-02-13

    [The extracted text for this record consists only of list-of-figures fragments, referencing an Accowle ODVS cross section and reflective path, a Leopard Imaging HD camera mounted to an iPhone and to a Kogeto optic, and Leopard Imaging HD camera pipe tests; no abstract is recoverable.]

  8. A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i

    USGS Publications Warehouse

    Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.

    2015-01-01

    We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity. 
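    The abstract notes that custom Python scripts handle capture, time-stamping, and data management. As a generic illustration only (not the USGS scripts; the camera index, output directory, and 10-minute interval are assumptions), a timestamped time-lapse loop can be sketched as follows:

```python
# Generic sketch of a timestamped time-lapse loop (illustration only; the real
# system uses a Raspberry Pi camera module and GPS-disciplined time, which are
# not reproduced here). OpenCV is used for capture purely for convenience.
import cv2
import time
from datetime import datetime, timezone

def timelapse(interval_s=600, out_dir="."):
    cap = cv2.VideoCapture(0)                 # camera index is an assumption
    try:
        while True:
            ok, frame = cap.read()
            if ok:
                stamp = datetime.now(timezone.utc).strftime("%Y%m%d_%H%M%S")
                cv2.imwrite(f"{out_dir}/img_{stamp}.jpg", frame)
            time.sleep(interval_s)            # e.g. every 10 minutes
    finally:
        cap.release()
```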

  9. Two-Camera Acquisition and Tracking of a Flying Target

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter

    2008-01-01

    A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of-view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
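    The pixel-to-pointing step described above can be illustrated with a small interpolation sketch. The calibration grids below are hypothetical placeholders (real values would come from the azimuth/elevation calibration of the stationary camera), and the detector size is assumed to be 1024 × 1024 pixels.

```python
# Hedged sketch: map a target centroid on the wide-field detector to an
# approximate azimuth/elevation using a pre-measured calibration grid.
# The calibration values below are hypothetical placeholders.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

px = np.linspace(0, 1023, 9)
py = np.linspace(0, 1023, 9)
az_cal = np.tile(np.linspace(0, 90, 9), (9, 1))                  # placeholder: azimuth varies with x
el_cal = np.tile(np.linspace(0, 60, 9).reshape(-1, 1), (1, 9))   # placeholder: elevation varies with y

az_of = RegularGridInterpolator((py, px), az_cal)
el_of = RegularGridInterpolator((py, px), el_cal)

def pointing_for_centroid(x, y):
    """Approximate gimbal command (azimuth, elevation) for a centroid at pixel (x, y)."""
    return az_of((y, x)).item(), el_of((y, x)).item()
```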

  10. High speed photography, videography, and photonics III; Proceedings of the Meeting, San Diego, CA, August 22, 23, 1985

    NASA Technical Reports Server (NTRS)

    Ponseggi, B. G. (Editor); Johnson, H. C. (Editor)

    1985-01-01

    Papers are presented on the picosecond electronic framing camera, photogrammetric techniques using high-speed cineradiography, picosecond semiconductor lasers for characterizing high-speed image shutters, the measurement of dynamic strain by high-speed moire photography, the fast framing camera with independent frame adjustments, design considerations for a data recording system, and nanosecond optical shutters. Consideration is given to boundary-layer transition detectors, holographic imaging, laser holographic interferometry in wind tunnels, heterodyne holographic interferometry, a multispectral video imaging and analysis system, a gated intensified camera, a charge-injection-device profile camera, a gated silicon-intensified-target streak tube and nanosecond-gated photoemissive shutter tubes. Topics discussed include high time-space resolved photography of lasers, time-resolved X-ray spectrographic instrumentation for laser studies, a time-resolving X-ray spectrometer, a femtosecond streak camera, streak tubes and cameras, and a short pulse X-ray diagnostic development facility.

  11. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. A conventional AEC and AGC algorithm is not suitable for this aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic gamma correction is applied before the image is output, so that the image is better suited to viewing and analysis by human eyes. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast motion or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment, high adaptability, and high reliability in severe and complex environments.
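    The general control idea can be sketched in a few lines: servo the shutter (and, only when the shutter saturates, the gain) toward a target mean brightness, then apply a gamma look-up table before output. The target level, limits, and gamma value below are assumed example parameters, not the system's actual settings.

```python
# Simple sketch of the general AEC/AGC and gamma-correction idea only.
# All numeric parameters are assumed example values.
import numpy as np

def update_exposure_gain(mean_level, exposure, gain, target=120.0,
                         exp_max=1e-2, gain_limits=(1.0, 16.0)):
    ratio = target / max(mean_level, 1.0)
    new_exposure = min(exposure * ratio, exp_max)     # prefer shutter over gain
    residual = (exposure * ratio) / new_exposure      # > 1 only if shutter saturated
    new_gain = float(np.clip(gain * residual, *gain_limits))
    return new_exposure, new_gain

def gamma_correct(img_u8, gamma=0.45):
    lut = (255.0 * (np.arange(256) / 255.0) ** gamma).astype(np.uint8)
    return lut[img_u8]                                # per-pixel look-up table
```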

  12. NGEE Arctic Zero Power Warming PhenoCamera Images, Barrow, Alaska, 2016

    DOE Data Explorer

    Shawn Serbin; Andrew McMahon; Keith Lewin; Kim Ely; Alistair Rogers

    2016-11-14

    StarDot NetCam SC pheno camera images collected from the top of the Barrow, BEO Sled Shed. The camera was installed to monitor the BNL TEST group's prototype ZPW (Zero Power Warming) chambers during the growing season of 2016 (including early spring and late fall). Images were uploaded to the BNL FTP server every 10 minutes and renamed with the date and time of the image. See associated data "Zero Power Warming (ZPW) Chamber Prototype Measurements, Barrow, Alaska, 2016" http://dx.doi.org/10.5440/1343066.

  13. Low-cost laser speckle contrast imaging of blood flow using a webcam.

    PubMed

    Richards, Lisa M; Kazmi, S M Shams; Davis, Janel L; Olin, Katherine E; Dunn, Andrew K

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion.
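    Regardless of whether a scientific CCD or a webcam supplies the raw frames, laser speckle contrast imaging rests on the same local statistic: the speckle contrast K, the ratio of the local standard deviation to the local mean of the raw speckle image. A minimal Python sketch (the 7 × 7 window is an assumed, typical choice) is shown below.

```python
# Sketch of the standard local speckle-contrast computation underlying such
# measurements: K = (local standard deviation) / (local mean) over a small
# sliding window. The window size is an assumed, typical choice.
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(raw, window=7):
    img = raw.astype(np.float64)
    mean = uniform_filter(img, window)
    mean_sq = uniform_filter(img * img, window)
    var = np.maximum(mean_sq - mean * mean, 0.0)
    return np.sqrt(var) / np.maximum(mean, 1e-9)   # lower K indicates faster flow
```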

  14. Low-cost laser speckle contrast imaging of blood flow using a webcam

    PubMed Central

    Richards, Lisa M.; Kazmi, S. M. Shams; Davis, Janel L.; Olin, Katherine E.; Dunn, Andrew K.

    2013-01-01

    Laser speckle contrast imaging has become a widely used tool for dynamic imaging of blood flow, both in animal models and in the clinic. Typically, laser speckle contrast imaging is performed using scientific-grade instrumentation. However, due to recent advances in camera technology, these expensive components may not be necessary to produce accurate images. In this paper, we demonstrate that a consumer-grade webcam can be used to visualize changes in flow, both in a microfluidic flow phantom and in vivo in a mouse model. A two-camera setup was used to simultaneously image with a high performance monochrome CCD camera and the webcam for direct comparison. The webcam was also tested with inexpensive aspheric lenses and a laser pointer for a complete low-cost, compact setup ($90, 5.6 cm length, 25 g). The CCD and webcam showed excellent agreement with the two-camera setup, and the inexpensive setup was used to image dynamic blood flow changes before and after a targeted cerebral occlusion. PMID:24156082

  15. HIGH SPEED KERR CELL FRAMING CAMERA

    DOEpatents

    Goss, W.C.; Gilley, L.F.

    1964-01-01

    The present invention relates to a high speed camera utilizing a Kerr cell shutter and a novel optical delay system having no moving parts. The camera can selectively photograph at least 6 frames within 9 × 10^-8 seconds during any such time interval of an occurring event. The invention particularly utilizes an optical system which views and transmits 6 images of an event to a multi-channeled optical delay relay system. The delay relay system has optical paths of successively increased length, in whole multiples of the first channel optical path length, into which optical paths the 6 images are transmitted. The successively delayed images are accepted from the exit of the delay relay system by an optical image focusing means, which in turn directs the images into a Kerr cell shutter disposed to intercept the image paths. A camera is disposed to simultaneously view and record the 6 images during a single exposure of the Kerr cell shutter. (AEC)

  16. Design of a high-numerical-aperture digital micromirror device camera with high dynamic range.

    PubMed

    Qiao, Yang; Xu, Xiping; Liu, Tao; Pan, Yue

    2015-01-01

    A high-NA imaging system with high dynamic range is presented based on a digital micromirror device (DMD). The DMD camera consists of an objective imaging system and a relay imaging system, connected by a DMD chip. With the introduction of a total internal reflection prism system, the objective imaging system is designed with a working F/# of 1.97, breaking through the F/2.45 limitation of conventional DMD projection lenses. As for the relay imaging system, an off-axis design that corrects the off-axis aberrations of the tilted relay imaging system is developed. This structure has the advantage of increasing the NA of the imaging system while maintaining a compact size. Investigation revealed that the dynamic range of a DMD camera could be greatly increased, by 2.41 times. We built one prototype DMD camera with a working F/# of 1.23, and the field experiments proved the validity and reliability of our work.

  17. Performance benefits and limitations of a camera network

    NASA Astrophysics Data System (ADS)

    Carr, Peter; Thomas, Paul J.; Hornsey, Richard

    2005-06-01

    Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.

  18. Developments in mercuric iodide gamma ray imaging

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Patt, B.E.; Beyerle, A.G.; Dolin, R.C.

    A mercuric iodide gamma-ray imaging array and camera system previously described has been characterized for spatial and energy resolution. Based on this data a new camera is being developed to more fully exploit the potential of the array. Characterization results and design criterion for the new camera will be presented. 2 refs., 7 figs.

  19. Seeing the Light: A Classroom-Sized Pinhole Camera Demonstration for Teaching Vision

    ERIC Educational Resources Information Center

    Prull, Matthew W.; Banks, William P.

    2005-01-01

    We describe a classroom-sized pinhole camera demonstration (camera obscura) designed to enhance students' learning of the visual system. The demonstration consists of a suspended rear-projection screen onto which the outside environment projects images through a small hole in a classroom window. Students can observe these images in a darkened…

  20. An airborne multispectral imaging system based on two consumer-grade cameras for agricultural remote sensing

    USDA-ARS?s Scientific Manuscript database

    This paper describes the design and evaluation of an airborne multispectral imaging system based on two identical consumer-grade cameras for agricultural remote sensing. The cameras are equipped with a full-frame complementary metal oxide semiconductor (CMOS) sensor with 5616 × 3744 pixels. One came...

  1. Completely optical orientation determination for an unstabilized aerial three-line camera

    NASA Astrophysics Data System (ADS)

    Wohlfeil, Jürgen

    2010-10-01

    Aerial line cameras allow the fast acquisition of high-resolution images at low cost. Unfortunately, measuring the camera's orientation at the necessary rate and precision involves considerable effort unless extensive camera stabilization is used, and stabilization in turn entails high cost, weight, and power consumption. This contribution shows that it is possible to derive the absolute exterior orientation of an unstabilized line camera entirely from its images and global position measurements. The presented approach is based on previous work on the determination of the relative orientation of subsequent lines using optical information from the remote sensing system. The relative orientation is used to pre-correct the line images, in which homologous points can then reliably be determined using the SURF operator. Together with the position measurements these points are used to determine the absolute orientation from the relative orientations via bundle adjustment of a block of overlapping line images. The approach was tested on a flight with DLR's RGB three-line camera MFC. To evaluate the precision of the resulting orientation, the measurements of a high-end navigation system and ground control points are used.
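    The homologous-point step can be sketched with OpenCV. The paper uses the SURF operator (available in opencv-contrib as cv2.xfeatures2d.SURF_create, but patent-encumbered); the sketch below substitutes ORB as a freely available stand-in and omits the bundle adjustment entirely, so it only illustrates how matched points between overlapping pre-corrected images would be gathered.

```python
# Hedged sketch of only the homologous-point step, with ORB used as a freely
# available stand-in for the SURF operator of the paper. The bundle adjustment
# of overlapping line images is not shown.
import cv2

def homologous_points(img_a, img_b, n_features=2000):
    orb = cv2.ORB_create(nfeatures=n_features)
    kp_a, des_a = orb.detectAndCompute(img_a, None)
    kp_b, des_b = orb.detectAndCompute(img_b, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
    pts_a = [kp_a[m.queryIdx].pt for m in matches]
    pts_b = [kp_b[m.trainIdx].pt for m in matches]
    return pts_a, pts_b   # candidate inputs to the bundle adjustment
```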

  2. Flow Interactions and Control

    DTIC Science & Technology

    2012-03-08

    [The extracted text for this record consists only of presentation fragments describing an easy-to-use 3-D plenoptic (lightfield) camera for measurements in turbulent flow fields (B. Thurow, Auburn); the fragments note that conventional imaging trades depth of field against blur and that a reduced aperture (restricted angular information) leads to low signal levels, whereas the plenoptic camera records the lightfield. No abstract is recoverable.]

  3. Tenth Anniversary Image from Camera on NASA Mars Orbiter

    NASA Image and Video Library

    2012-02-29

    NASA Mars Odyssey spacecraft captured this image on Feb. 19, 2012, 10 years to the day after the camera recorded its first view of Mars. This image covers an area in the Nepenthes Mensae region north of the Martian equator.

  4. Full-Frame Reference for Test Photo of Moon

    NASA Technical Reports Server (NTRS)

    2005-01-01

    This pair of views shows how little of the full image frame was taken up by the Moon in test images taken Sept. 8, 2005, by the High Resolution Imaging Science Experiment (HiRISE) camera on NASA's Mars Reconnaissance Orbiter. The Mars-bound camera imaged Earth's Moon from a distance of about 10 million kilometers (6 million miles) away -- 26 times the distance between Earth and the Moon -- as part of an activity to test and calibrate the camera. The images are very significant because they show that the Mars Reconnaissance Orbiter spacecraft and this camera can properly operate together to collect very high-resolution images of Mars. The target must move through the camera's telescope view in just the right direction and speed to acquire a proper image. The day's test images also demonstrate that the focus mechanism works properly with the telescope to produce sharp images.

    Out of the 20,000-pixel-by-6,000-pixel full frame, the Moon's diameter is about 340 pixels, if the full Moon could be seen. The illuminated crescent is about 60 pixels wide, and the resolution is about 10 kilometers (6 miles) per pixel. At Mars, the entire image region will be filled with high-resolution information.

    The Mars Reconnaissance Orbiter, launched on Aug. 12, 2005, is on course to reach Mars on March 10, 2006. After gradually adjusting the shape of its orbit for half a year, it will begin its primary science phase in November 2006. From the mission's planned science orbit about 300 kilometers (186 miles) above the surface of Mars, the high resolution camera will be able to discern features as small as one meter or yard across.

    The Mars Reconnaissance Orbiter mission is managed by NASA's Jet Propulsion Laboratory, a division of the California Institute of Technology, Pasadena, for the NASA Science Mission Directorate. Lockheed Martin Space Systems, Denver, prime contractor for the project, built the spacecraft. Ball Aerospace & Technologies Corp., Boulder, Colo., built the High Resolution Imaging Science Experiment instrument for the University of Arizona, Tucson, to provide to the mission. The HiRISE Operations Center at the University of Arizona processes images from the camera.

  5. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered for evaluating their performance, and describes some of the key features of different camera formats. The chapter also presents a basic understanding of how electronic cameras function and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high-intensity illumination. When video rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. The variable integration time video cameras are very attractive options if one needs to acquire images at video rate, as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.

  6. Plume propagation direction determination with SO2 cameras

    NASA Astrophysics Data System (ADS)

    Klein, Angelika; Lübcke, Peter; Bobrowski, Nicole; Kuhn, Jonas; Platt, Ulrich

    2017-03-01

    SO2 cameras are becoming an established tool for measuring sulfur dioxide (SO2) fluxes in volcanic plumes with good precision and high temporal resolution. The primary result of SO2 camera measurements are time series of two-dimensional SO2 column density distributions (i.e. SO2 column density images). However, it is frequently overlooked that, in order to determine the correct SO2 fluxes, not only the SO2 column density, but also the distance between the camera and the volcanic plume, has to be precisely known. This is because cameras only measure angular extents of objects while flux measurements require knowledge of the spatial plume extent. The distance to the plume may vary within the image array (i.e. the field of view of the SO2 camera) since the plume propagation direction (i.e. the wind direction) might not be parallel to the image plane of the SO2 camera. If the wind direction and thus the camera-plume distance are not well known, this error propagates into the determined SO2 fluxes and can cause errors exceeding 50 %. This is a source of error which is independent of the frequently quoted (approximate) compensation of apparently higher SO2 column densities and apparently lower plume propagation velocities at non-perpendicular plume observation angles.Here, we propose a new method to estimate the propagation direction of the volcanic plume directly from SO2 camera image time series by analysing apparent flux gradients along the image plane. From the plume propagation direction and the known location of the SO2 source (i.e. volcanic vent) and camera position, the camera-plume distance can be determined. Besides being able to determine the plume propagation direction and thus the wind direction in the plume region directly from SO2 camera images, we additionally found that it is possible to detect changes of the propagation direction at a time resolution of the order of minutes. In addition to theoretical studies we applied our method to SO2 flux measurements at Mt Etna and demonstrate that we obtain considerably more precise (up to a factor of 2 error reduction) SO2 fluxes. We conclude that studies on SO2 flux variability become more reliable by excluding the possible influences of propagation direction variations.
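    The distance dependence described above can be made explicit with a short sketch: the physical width subtended by one pixel grows linearly with the camera-plume distance, so a flux computed by integrating column densities across a plume cross-section scales linearly with that distance as well. The function below is illustrative only (not the authors' retrieval code), and all variable names and units are assumptions.

```python
# Illustrative sketch (not the authors' retrieval code): the physical width of
# one pixel scales linearly with the camera-plume distance, so the flux from a
# column-density cross-section scales linearly with that distance too.
import numpy as np

def so2_flux(column_densities, plume_speed, distance_m, ifov_rad):
    """Flux across one image column: sum(column density) * pixel size * plume speed.

    column_densities : SO2 column densities along the cross-section [molecules/m^2]
    plume_speed      : plume propagation speed [m/s]
    distance_m       : camera-plume distance [m]
    ifov_rad         : angular size of one pixel [rad]
    """
    pixel_size_m = distance_m * ifov_rad          # small-angle approximation
    return np.sum(column_densities) * pixel_size_m * plume_speed

# A 30% error in the assumed distance propagates directly into a 30% flux error.
```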

  7. Can we Use Low-Cost 360 Degree Cameras to Create Accurate 3d Models?

    NASA Astrophysics Data System (ADS)

    Barazzetti, L.; Previtali, M.; Roncoroni, F.

    2018-05-01

    360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.

  8. A hands-free region-of-interest selection interface for solo surgery with a wide-angle endoscope: preclinical proof of concept.

    PubMed

    Jung, Kyunghwa; Choi, Hyunseok; Hong, Hanpyo; Adikrishna, Arnold; Jeon, In-Ho; Hong, Jaesung

    2017-02-01

    A hands-free region-of-interest (ROI) selection interface is proposed for solo surgery using a wide-angle endoscope. A wide-angle endoscope provides images with a larger field of view than a conventional endoscope. With an appropriate selection interface for a ROI, surgeons can also obtain a detailed local view as if they had moved a conventional endoscope to a specific position and direction. To manipulate the endoscope without releasing the surgical instrument in hand, a mini-camera is attached to the instrument, and the images taken by the attached camera are analyzed. When a surgeon moves the instrument, the instrument orientation is calculated by image processing. Surgeons can select the ROI with this instrument movement after switching from 'task mode' to 'selection mode.' The accelerated KAZE (AKAZE) algorithm is used to track the features of the camera images once the instrument is moved. Both the wide-angle and detailed local views are displayed simultaneously, and a surgeon can move the local view area by moving the mini-camera attached to the surgical instrument. Local view selection for a solo surgery was performed without releasing the instrument. The accuracy of camera pose estimation was not significantly different between camera resolutions, but it was significantly different between background camera images with different numbers of features (P < 0.01). The success rate of ROI selection diminished as the number of separated regions increased. However, separated regions up to 12, with a region size of 160 × 160 pixels, were selected with no failure. Surgical tasks on a phantom model and a cadaver were attempted to verify the feasibility in a clinical environment. Hands-free endoscope manipulation without releasing the instruments in hand was achieved. The proposed method requires only a small, low-cost camera and image processing. The technique enables surgeons to perform solo surgeries without a camera assistant.
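    The AKAZE tracking step mentioned above can be sketched with OpenCV's built-in cv2.AKAZE_create. The snippet below only estimates a median image shift between successive frames of the instrument-mounted camera; the full interface (mode switching, ROI mapping, orientation estimation) is not reproduced, and the use of a median shift is an assumption for illustration.

```python
# Hedged sketch of the AKAZE matching step only (not the full ROI-selection
# interface): match features between successive frames of the instrument-mounted
# camera to estimate how the view has shifted.
import cv2
import numpy as np

def frame_motion(prev_gray, curr_gray):
    akaze = cv2.AKAZE_create()
    kp1, des1 = akaze.detectAndCompute(prev_gray, None)
    kp2, des2 = akaze.detectAndCompute(curr_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des1, des2)
    if not matches:
        return np.zeros(2)
    shifts = [np.subtract(kp2[m.trainIdx].pt, kp1[m.queryIdx].pt) for m in matches]
    return np.median(shifts, axis=0)   # median image shift (dx, dy) in pixels
```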

  9. Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras

    NASA Technical Reports Server (NTRS)

    Amer, Tahani R.; Goad, William K.

    2005-01-01

    Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written using Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer is able to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.

  10. Integration of USB and firewire cameras in machine vision applications

    NASA Astrophysics Data System (ADS)

    Smith, Timothy E.; Britton, Douglas F.; Daley, Wayne D.; Carey, Richard

    1999-08-01

    Digital cameras have been around for many years, but a new breed of consumer market cameras is hitting the mainstream. By using these devices, system designers and integrators will be well positioned to take advantage of technological advances developed to support multimedia and imaging applications on the PC platform. Having these new cameras on the consumer market means lower cost, but it does not necessarily guarantee ease of integration. There are many issues that need to be accounted for, such as image quality, maintainable frame rates, image size and resolution, supported operating systems, and ease of software integration. This paper will briefly describe a couple of the consumer digital standards and then discuss some of the advantages and pitfalls of integrating both USB and Firewire cameras into computer/machine vision applications.

  11. Modulated CMOS camera for fluorescence lifetime microscopy.

    PubMed

    Chen, Hongtao; Holst, Gerhard; Gratton, Enrico

    2015-12-01

    Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g., modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring a large frame size and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
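
    The phasor analysis mentioned above reduces each pixel's modulated response to two coordinates per harmonic. The sketch below computes per-pixel phasor coordinates from a stack of phase-stepped frequency-domain images; it is a generic illustration (normalization conventions vary between tools) and is not the SimFCS code or the camera's pixel-level calibration.

```python
import numpy as np

def phasor_from_stack(stack):
    """Per-pixel phasor coordinates (g, s) from a stack of phase-stepped
    frequency-domain FLIM images.

    stack: array of shape (K, H, W), one frame per equally spaced phase step.
    g and s are the first-harmonic cosine/sine components divided by the
    DC (mean) image.
    """
    K = stack.shape[0]
    phases = 2.0 * np.pi * np.arange(K) / K
    dc = stack.mean(axis=0)
    g = 2.0 * (stack * np.cos(phases)[:, None, None]).mean(axis=0) / dc
    s = 2.0 * (stack * np.sin(phases)[:, None, None]).mean(axis=0) / dc
    return g, s

# For a single-exponential decay, the phase angle gives a lifetime estimate:
#   tau_phase = (s / g) / (2 * pi * f_mod)
```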

  12. Research on auto-calibration technology of the image plane's center of 360-degree and all round looking camera

    NASA Astrophysics Data System (ADS)

    Zhang, Shaojun; Xu, Xiping

    2015-10-01

    The 360-degree all-round looking camera, being suitable for automatic analysis and judgment of the carrier's ambient environment by image recognition algorithms, is usually applied to the opto-electronic radar of robots and smart cars. In order to ensure the stability and consistency of image processing results in mass production, it is necessary to make sure the centers of the image planes of different cameras are coincident, which requires calibrating the position of the image plane's center. The traditional mechanical calibration method and the electronic adjustment mode of inputting offsets manually both suffer from reliance on human eyes, inefficiency, and a large error distribution. In this paper, an approach for auto-calibration of this camera's image plane is presented. The image formed by the 360-degree all-round looking camera is ring-shaped, consisting of two concentric circles: a smaller circle at the center and a bigger circle outside. The technique exploits exactly this characteristic. By recognizing the two circles with the Hough transform algorithm and calculating the center position, we obtain the accurate image center, i.e., the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip through the I2C bus automatically, so the center of the image plane can be adjusted automatically and accurately. The technique has been applied in practice; it improves productivity and guarantees consistent product quality.
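
    A minimal OpenCV sketch of the circle-detection step described above: the Hough transform locates the concentric circles of the ring image, and their shared center gives the offset from the sensor midpoint. Parameter values and the register-writing comment are illustrative assumptions, not the production implementation.

```python
import cv2

def find_ring_center(gray):
    """Locate the centre of the annular (ring) image produced by a
    360-degree all-round looking camera via Hough circle detection."""
    blurred = cv2.GaussianBlur(gray, (9, 9), 2)
    circles = cv2.HoughCircles(blurred, cv2.HOUGH_GRADIENT, dp=1.5,
                               minDist=gray.shape[0] // 4,
                               param1=100, param2=60,
                               minRadius=20, maxRadius=0)
    if circles is None:
        return None
    # The inner and outer circles share a centre; average the detections.
    cx, cy = circles[0][:, :2].mean(axis=0)
    return float(cx), float(cy)

# The offset between (cx, cy) and the sensor midpoint is the correction that
# would then be written to the sensor's windowing registers over I2C.
```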

  13. Immediate Changes Following Manual Therapy in Resting State Functional Connectivity As Measured By Magnetic Resonance Imaging (fMRI) In Subjects With Induced Low Back Pain

    PubMed Central

    Gay, Charles W.; Robinson, Michael E.; George, Steven Z.; Perlstein, William M.; Bishop, Mark D.

    2014-01-01

    Objective The purpose of this study was to use functional magnetic resonance imaging (fMRI) to investigate the immediate changes in functional connectivity (FC) between brain regions that process and modulate the pain experience following 3 different types of manual therapies (MT) and to identify reductions in experimentally induced myalgia and changes in local and remote pressure pain sensitivity. Methods Twenty-four participants (17 females, mean age ± SD = 21.6 ± 4.2 years), who completed an exercise-injury protocol to induce low back pain, were randomized into 3 groups: chiropractic spinal manipulation (n=6), spinal mobilization (n=8), or therapeutic touch (n=10). The primary outcome was the immediate change in FC as measured on fMRI between the following brain regions: somatosensory cortex, secondary somatosensory cortex, thalamus, anterior and posterior cingulate cortices, anterior and posterior insula, and periaqueductal grey. Secondary outcomes were immediate changes in pain intensity, measured with a 101-point numeric rating scale, and pain sensitivity, measured with a hand-held dynamometer. Repeated measures ANOVA models and correlation analyses were conducted to examine treatment effects and the relationship between within-person changes across outcome measures. Results Changes in FC were found between several brain regions that were common to all 3 manual therapy interventions. Treatment-dependent changes in FC were also observed between several brain regions. Improvement was seen in pain intensity following all interventions (p<0.05), with no difference between groups (p>0.05). There were no observed changes in pain sensitivity, nor any association between primary and secondary outcome measures. Conclusion These results suggest that manual therapies (chiropractic spinal manipulation, spinal mobilization, and therapeutic touch) have an immediate effect on the FC between brain regions involved in processing and modulating the pain experience. This suggests that neurophysiological changes following MT may be an underlying mechanism of pain relief. PMID:25284739
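
    Functional connectivity of the kind compared before and after these interventions is typically a matrix of correlations between regional BOLD time series. Below is a minimal sketch, assuming region-of-interest time series have already been extracted; it illustrates the general quantity, not the study's exact pipeline or statistics.

```python
import numpy as np

def roi_functional_connectivity(timeseries):
    """Pairwise functional connectivity between regional BOLD time series.

    timeseries: array of shape (T, R) -- T time points, R regions of interest.
    Returns an R x R matrix of Fisher z-transformed Pearson correlations,
    the usual quantity compared before and after an intervention.
    """
    r = np.corrcoef(timeseries, rowvar=False)
    np.fill_diagonal(r, 0.0)   # ignore self-connections before the z-transform
    return np.arctanh(r)       # Fisher z, convenient for group statistics
```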

  14. Performance evaluation of a two detector camera for real-time video.

    PubMed

    Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo

    2016-12-20

    Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. The obtained video framerates were doubled compared to state-of-the-art systems, ranging from 22 Hz at 32×32 resolution to 0.75 Hz at 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.

  15. Use of a color CMOS camera as a colorimeter

    NASA Astrophysics Data System (ADS)

    Dallas, William J.; Roehrig, Hans; Redford, Gary R.

    2006-08-01

    In radiology diagnosis, film is being quickly replaced by computer monitors as the display medium for all imaging modalities. Increasingly, these monitors are color instead of monochrome. It is important to have instruments available to characterize the display devices in order to guarantee reproducible presentation of image material. We are developing an imaging colorimeter based on a commercially available color digital camera. The camera uses a sensor that has co-located pixels in all three primary colors.

  16. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled high optical attenuation of the scene irradiance (a factor of 200) to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data provided by this CMOS sensor are used to acquire a more robust, un-attenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode using up to 4096-bit Walsh-design CAOS pixel codes with a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one micro-mirror, i.e., a square pixel of 13.68 μm per side. The CDMA mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
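
    In the CDMA mode, each CAOS pixel is time-modulated with its own orthogonal code and a single point detector records the summed signal; correlating that signal with each code recovers the per-pixel irradiance. The sketch below shows that decoding step in its simplest form, using Hadamard rows as stand-ins for Walsh codes; it illustrates the principle only and is not the demonstrated camera's DSP chain.

```python
import numpy as np
from scipy.linalg import hadamard

def caos_cdma_decode(detector_signal, n_pixels, code_length):
    """Recover per-pixel irradiances from a single point-detector signal in
    which each CAOS pixel was time-modulated with its own orthogonal code.

    detector_signal: length `code_length` array, one sample per code bit.
    Rows of a Hadamard matrix stand in for the Walsh codes (the same set of
    codes in a different ordering).
    """
    codes = hadamard(code_length)[:n_pixels]       # +/-1 valued code sequences
    # Time-domain correlation (matched filtering) of the summed signal
    # with each assigned code isolates that pixel's contribution.
    return codes @ detector_signal / code_length
```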

  17. Feasibility of a high-speed gamma-camera design using the high-yield-pileup-event-recovery method.

    PubMed

    Wong, W H; Li, H; Uribe, J; Baghaei, H; Wang, Y; Yokoyama, S

    2001-04-01

    Higher count-rate gamma cameras than are currently used are needed if the technology is to fulfill its promise in positron coincidence imaging, radionuclide therapy dosimetry imaging, and cardiac first-pass imaging. The present single-crystal design, coupled with conventional detector electronics and the traditional Anger-positioning algorithm, hinders higher count-rate imaging because of the pileup of gamma-ray signals in the detector and electronics. At an interaction rate of 2 million events per second, the fraction of nonpileup events is < 20% of the total incident events. Hence, the recovery of pileup events can significantly increase the count-rate capability, increase the yield of imaging photons, and minimize image artifacts associated with pileups. A new technology to significantly enhance the performance of gamma cameras in this area is introduced. We introduce a new electronic design called high-yield-pileup-event-recovery (HYPER) electronics for processing the detector signal in gamma cameras so that the individual gamma energies and positions of pileup events, including multiple pileups, can be resolved and recovered despite the mixing of signals. To illustrate the feasibility of the design concept, we have developed a small gamma-camera prototype with the HYPER-Anger electronics. The camera has a 10 x 10 x 1 cm NaI(Tl) crystal with four photomultipliers. Hot-spot and line sources with very high 99mTc activities were imaged. The phantoms were imaged continuously from 60,000 to 3,500,000 counts per second to illustrate the efficacy of the method as a function of counting rate. At 2-3 million events per second, all phantoms were imaged with little distortion, pileup, or dead-time loss. At these counting rates, multiple pileup events (> or = 3 events piling together) were the predominant occurrences, and the HYPER circuit functioned well to resolve and recover these events. The full width at half maximum of the line-spread function at 3,000,000 counts per second was 1.6 times that at 60,000 counts per second. This feasibility study showed that the HYPER electronic concept works; it can significantly increase the count-rate capability and dose efficiency of gamma cameras. In a larger clinical camera, multiple HYPER-Anger circuits may be implemented to improve the imaging count rates shown here by multiple times. This technology would facilitate the use of gamma cameras for radionuclide therapy dosimetry imaging, cardiac first-pass imaging, and positron coincidence imaging, and the simultaneous acquisition of transmission and emission data using different isotopes with less cross-contamination between transmission and emission data.

  18. High-resolution ophthalmic imaging system

    DOEpatents

    Olivier, Scot S.; Carrano, Carmen J.

    2007-12-04

    A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.

  19. Blood pulsation measurement using cameras operating in visible light: limitations.

    PubMed

    Koprowski, Robert

    2016-10-03

    The paper presents an automatic method for analysis and processing of images from a camera operating in visible light. This analysis applies to images containing the human facial area (body) and enables measurement of the blood pulse rate. Special attention was paid to the limitations of this measurement method, taking into account the possibility of using consumer cameras in real conditions (different types of lighting, different camera resolutions, camera movement). The proposed new method of image analysis and processing involves three stages: (1) image pre-processing, allowing for image filtration and stabilization (object location tracking); (2) main image processing, allowing for segmentation of human skin areas and acquisition of brightness changes; (3) signal analysis: filtration, FFT (fast Fourier transform) analysis, and pulse calculation. The presented algorithm and method for measuring the pulse rate has the following advantages: (1) it allows for non-contact and non-invasive measurement; (2) it can be carried out using almost any camera, including webcams; (3) it enables tracking of the object in the scene, which allows the heart rate to be measured while the patient is moving; (4) for a minimum of 40,000 pixels, it provides a measurement error of less than ±2 beats per minute for p < 0.01 and sunlight, or a slightly larger error (±3 beats per minute) for artificial lighting; (5) analysis of a single image takes about 40 ms in Matlab Version 7.11.0.584 (R2010b) with Image Processing Toolbox Version 7.1 (R2010b).
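
    The final signal-analysis stage described above reduces to finding the dominant frequency of the skin-region brightness within a plausible cardiac band. A minimal sketch of that step, assuming skin segmentation has already produced one mean-brightness value per frame; the band limits and the function name are illustrative choices, not the paper's parameters.

```python
import numpy as np

def pulse_rate_from_brightness(brightness, fps, band=(0.7, 3.0)):
    """Estimate pulse rate (beats per minute) from the mean brightness of the
    segmented skin region over time, via the dominant FFT peak in the cardiac band.

    brightness: 1-D array, one mean-brightness sample per video frame.
    fps: camera frame rate in Hz.
    band: plausible heart-rate band in Hz (0.7-3.0 Hz is roughly 42-180 bpm).
    """
    signal = brightness - np.mean(brightness)          # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fps)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    peak_hz = freqs[mask][np.argmax(spectrum[mask])]
    return peak_hz * 60.0
```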

  20. Characterizing dynamic amplitude of low-frequency fluctuation and its relationship with dynamic functional connectivity: An application to schizophrenia.

    PubMed

    Fu, Zening; Tu, Yiheng; Di, Xin; Du, Yuhui; Pearlson, G D; Turner, J A; Biswal, Bharat B; Zhang, Zhiguo; Calhoun, V D

    2017-09-20

    The human brain is a highly dynamic system with non-stationary neural activity and rapidly-changing neural interaction. Resting-state dynamic functional connectivity (dFC) has been widely studied during recent years, and the emerging aberrant dFC patterns have been identified as important features of many mental disorders such as schizophrenia (SZ). However, focusing only on the time-varying patterns in FC is not enough, since research using high-temporal-resolution imaging techniques has found that local neural activity itself (in contrast to the inter-connectivity) is also highly fluctuating. Exploring the time-varying patterns in brain activity and their relationships with time-varying brain connectivity is important for advancing our understanding of the co-evolutionary property of brain networks and the underlying mechanisms of brain dynamics. In this study, we introduced a framework for characterizing time-varying brain activity and exploring its associations with time-varying brain connectivity, and applied this framework to a resting-state fMRI dataset including 151 SZ patients and 163 age- and gender-matched healthy controls (HCs). In this framework, 48 brain regions were first identified as intrinsic connectivity networks (ICNs) using group independent component analysis (GICA). A sliding window approach was then adopted for the estimation of dynamic amplitude of low-frequency fluctuation (dALFF) and dFC, which were used to measure time-varying brain activity and time-varying brain connectivity, respectively. The dALFF was further clustered into six reoccurring states by the k-means clustering method, and the group difference in occurrences of dALFF states was explored. Lastly, correlation coefficients between dALFF and dFC were calculated and the group difference in these dALFF-dFC correlations was explored. Our results suggested that 1) the ALFF of brain regions fluctuated strongly during the resting state, and such dynamic patterns are altered in SZ, and 2) dALFF and dFC were correlated in time, and their correlations are altered in SZ. The overall results support and expand prior work on abnormalities of brain activity, static FC (sFC) and dFC in SZ, and provide new evidence on aberrant time-varying brain activity and its associations with brain connectivity in SZ, which might underlie the disrupted cognitive functions in this mental disorder. Copyright © 2017 Elsevier Inc. All rights reserved.
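
    The core of such a framework is a windowed estimate of low-frequency fluctuation amplitude computed alongside windowed connectivity. Below is a minimal sketch of sliding-window ALFF for a single region's time series, using the mean FFT amplitude in the 0.01-0.08 Hz band; window length, stride, and band are illustrative assumptions, and the study's exact estimator may differ.

```python
import numpy as np

def dynamic_alff(ts, tr, win_len, step, band=(0.01, 0.08)):
    """Sliding-window ALFF for a single region's BOLD time series.

    ts: 1-D time series; tr: repetition time in seconds;
    win_len, step: window length and stride in volumes;
    band: low-frequency band of interest in Hz.
    Returns one ALFF value per window (the dALFF trace for that region).
    """
    values = []
    for start in range(0, len(ts) - win_len + 1, step):
        seg = ts[start:start + win_len]
        seg = seg - seg.mean()
        amp = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win_len, d=tr)
        mask = (freqs >= band[0]) & (freqs <= band[1])
        values.append(amp[mask].mean())    # mean spectral amplitude in the band
    return np.array(values)
```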

  1. Variable high-resolution color CCD camera system with online capability for professional photo studio application

    NASA Astrophysics Data System (ADS)

    Breitfelder, Stefan; Reichel, Frank R.; Gaertner, Ernst; Hacker, Erich J.; Cappellaro, Markus; Rudolf, Peter; Voelk, Ute

    1998-04-01

    Digital cameras are of increasing significance for professional applications in photo studios where fashion, portrait, product and catalog photographs or advertising photos of high quality have to be taken. The eyelike is a digital camera system which has been developed for such applications. It is capable of working online with high frame rates and images of full sensor size, and it provides a resolution that can be varied between 2048 by 2048 and 6144 by 6144 pixels at an RGB color depth of 12 bit per channel, with an exposure time that can likewise be varied from 1/60 s to 1 s. With an exposure time of 100 ms, digitization takes approx. 2 seconds for an image of 2048 by 2048 pixels (12 MByte), 8 seconds for an image of 4096 by 4096 pixels (48 MByte), and 40 seconds for an image of 6144 by 6144 pixels (108 MByte). The eyelike can be used in various configurations. Used as a camera body, it accepts most commercial lenses via existing lens adaptors. Alternatively, the eyelike can be used as a back on most commercial 4 by 5 inch view cameras. This paper describes the eyelike camera concept with the essential system components. The article finishes with a description of the software, which is needed to bring the high quality of the camera to the user.

  2. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised: mast camera frames are in general not parallel to the masthead base frame, and the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with Matlab routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels of error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels of error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.

  3. Thermographic measurements of high-speed metal cutting

    NASA Astrophysics Data System (ADS)

    Mueller, Bernhard; Renz, Ulrich

    2002-03-01

    Thermographic measurements of a high-speed cutting process have been performed with an infrared camera. To obtain images without motion blur, the integration times were reduced to a few microseconds. Since the high tool wear influences the measured temperatures, a set-up has been realized which enables small cutting lengths. Only single images have been recorded because the process is too fast to acquire a sequence of images even at the frame rate of the very fast infrared camera which has been used. To expose the camera when the rotating tool is in the middle of the camera image, an experimental set-up with a light barrier and a digital delay generator with a time resolution of 1 ns has been realized. This enables very exact triggering of the camera at the desired position of the tool in the image. Since the cutting depth is between 0.1 and 0.2 mm, a high spatial resolution was also necessary; this was obtained with a special close-up lens allowing a resolution of approximately 45 microns. The experimental set-up will be described, and infrared images and evaluated temperatures of a titanium alloy and a carbon steel will be presented for cutting speeds up to 42 m/s.

  4. Establishing imaging sensor specifications for digital still cameras

    NASA Astrophysics Data System (ADS)

    Kriss, Michael A.

    2007-02-01

    Digital Still Cameras, DSCs, have now displaced conventional still cameras in most markets. The heart of a DSC is thought to be the imaging sensor, be it a Full Frame CCD, an Interline CCD, a CMOS sensor, or the newer Foveon buried-photodiode sensor. There is a strong tendency by consumers to consider only the number of mega-pixels in a camera and not to consider the overall performance of the imaging system, including sharpness, artifact control, noise, color reproduction, exposure latitude and dynamic range. This paper will provide a systematic method to characterize the physical requirements of an imaging sensor and supporting system components based on the desired usage. The analysis is based on two software programs that determine the "sharpness", potential for artifacts, sensor "photographic speed", dynamic range and exposure latitude based on the physical nature of the imaging optics and the sensor characteristics (including size of pixels, sensor architecture, noise characteristics, surface states that cause dark current, quantum efficiency, effective MTF, and the intrinsic full well capacity in terms of electrons per square centimeter). Examples will be given for consumer, pro-consumer, and professional camera systems. Where possible, these results will be compared to imaging systems currently on the market.
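
    To make the kind of sensor sizing described above concrete, the sketch below turns full-well capacity, read noise, and dark current into the dynamic range and shot-noise-limited peak SNR figures such an analysis trades off. These are standard back-of-the-envelope relations, not the two software programs referenced in the abstract, and the example numbers are hypothetical.

```python
import math

def sensor_metrics(full_well_e, read_noise_e, dark_current_e_per_s, exposure_s):
    """Back-of-the-envelope figures used when sizing a DSC imaging sensor.

    full_well_e: full-well capacity in electrons per pixel
    read_noise_e: read noise in electrons (rms)
    dark_current_e_per_s: dark current in electrons per pixel per second
    """
    noise_floor = math.sqrt(read_noise_e**2 + dark_current_e_per_s * exposure_s)
    dynamic_range_db = 20.0 * math.log10(full_well_e / noise_floor)
    peak_snr_db = 20.0 * math.log10(math.sqrt(full_well_e))   # shot-noise limited
    return dynamic_range_db, peak_snr_db

# e.g. a 20,000 e- full well with 5 e- read noise gives roughly 72 dB of DR.
print(sensor_metrics(20000, 5, 1, 0.01))
```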

  5. Thermal feature extraction of servers in a datacenter using thermal image registration

    NASA Astrophysics Data System (ADS)

    Liu, Hang; Ran, Jian; Xie, Ting; Gao, Shan

    2017-09-01

    Thermal cameras provide fine-grained thermal information that enhances monitoring and enables automatic thermal management in large datacenters. Recent approaches employing mobile robots or thermal camera networks can already identify the physical locations of hot spots. Other distribution information used to optimize datacenter management can also be obtained automatically using pattern recognition technology. However, most of the features extracted from thermal images, such as shape and gradient, may be affected by changes in the position and direction of the thermal camera. This paper presents a method for extracting the thermal features of a hot spot or a server in a container datacenter. First, thermal and visual images are registered based on textural characteristics extracted from images acquired in datacenters. Then, the thermal distribution of each server is standardized. The features of a hot spot or server extracted from the standard distribution can reduce the impact of camera position and direction. The results of experiments show that image registration is efficient for aligning the corresponding visual and thermal images in the datacenter, and the standardization procedure reduces the impacts of camera position and direction on hot spot or server features.
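
    Registering the thermal and visual images, as described above, can be done with any feature-based alignment. The sketch below uses ORB features and a RANSAC homography as a generic stand-in for the texture-based registration used in the paper, so the function name and all parameters are assumptions rather than the authors' method.

```python
import cv2
import numpy as np

def register_thermal_to_visual(thermal, visual):
    """Warp a thermal image into the visual image's frame using ORB feature
    matches and a RANSAC-fitted homography, so per-server regions outlined in
    the visual image can be transferred onto the thermal data."""
    orb = cv2.ORB_create(2000)
    kp_t, des_t = orb.detectAndCompute(thermal, None)
    kp_v, des_v = orb.detectAndCompute(visual, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_t, des_v)
    matches = sorted(matches, key=lambda m: m.distance)[:200]   # keep best matches
    src = np.float32([kp_t[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_v[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    h, w = visual.shape[:2]
    return cv2.warpPerspective(thermal, H, (w, h))
```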

  6. In vitro near-infrared imaging of occlusal dental caries using a germanium-enhanced CMOS camera

    NASA Astrophysics Data System (ADS)

    Lee, Chulsung; Darling, Cynthia L.; Fried, Daniel

    2010-02-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.

  7. In vitro near-infrared imaging of occlusal dental caries using germanium enhanced CMOS camera.

    PubMed

    Lee, Chulsung; Darling, Cynthia L; Fried, Daniel

    2010-03-01

    The high transparency of dental enamel in the near-infrared (NIR) at 1310-nm can be exploited for imaging dental caries without the use of ionizing radiation. The objective of this study was to determine whether the lesion contrast derived from NIR transillumination can be used to estimate lesion severity. Another aim was to compare the performance of a new Ge enhanced complementary metal-oxide-semiconductor (CMOS) based NIR imaging camera with the InGaAs focal plane array (FPA). Extracted human teeth (n=52) with natural occlusal caries were imaged with both cameras at 1310-nm and the image contrast between sound and carious regions was calculated. After NIR imaging, teeth were sectioned and examined using more established methods, namely polarized light microscopy (PLM) and transverse microradiography (TMR) to calculate lesion severity. Lesions were then classified into 4 categories according to the lesion severity. Lesion contrast increased significantly with lesion severity for both cameras (p<0.05). The Ge enhanced CMOS camera equipped with the larger array and smaller pixels yielded higher contrast values compared with the smaller InGaAs FPA (p<0.01). Results demonstrate that NIR lesion contrast can be used to estimate lesion severity.
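
    The lesion contrast analyzed in both of the preceding records is a simple ratio between the mean intensities of sound enamel and of the carious region in the transillumination image. A minimal sketch, assuming the lesion and sound regions have already been outlined as masks and using the common (I_sound - I_lesion)/I_sound definition; the papers' exact contrast formula is not given in the abstracts.

```python
import numpy as np

def lesion_contrast(image, lesion_mask, sound_mask):
    """Contrast between a carious (lesion) region and the surrounding sound
    enamel in a NIR transillumination image.

    Uses (I_sound - I_lesion) / I_sound, so values run from 0 (no contrast)
    toward 1; in transillumination the demineralized lesion appears darker.
    """
    i_lesion = image[lesion_mask].mean()
    i_sound = image[sound_mask].mean()
    return (i_sound - i_lesion) / i_sound
```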

  8. An optimal algorithm for reconstructing images from binary measurements

    NASA Astrophysics Data System (ADS)

    Yang, Feng; Lu, Yue M.; Sbaiz, Luciano; Vetterli, Martin

    2010-01-01

    We have studied a camera with a very large number of binary pixels, referred to as the gigavision camera [1] or the gigapixel digital film camera [2, 3]. Potential advantages of this new camera design include improved dynamic range, thanks to its logarithmic sensor response curve, and reduced exposure time in low light conditions, due to its highly sensitive photon detection mechanism. We use a maximum likelihood estimator (MLE) to reconstruct a high quality conventional image from the binary sensor measurements of the gigavision camera. We prove that when the threshold T is "1", the negative log-likelihood function is convex. Therefore, the optimal solution can be achieved using convex optimization. Based on filter bank techniques, fast algorithms are given for computing the gradient and the multiplication of a vector with the Hessian matrix of the negative log-likelihood function. We show that with a minor change, our algorithm also works for estimating conventional images from multiple binary images. Numerical experiments with synthetic 1-D signals and images verify the effectiveness and quality of the proposed algorithm. Experimental results also show that estimation performance can be improved by increasing the oversampling factor or the number of binary images.
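
    For intuition about why the threshold-one case is tractable: each binary pixel fires iff it detects at least one photon, so for a locally constant light intensity the likelihood takes the simple form sketched below, which is convex in the intensity. This toy version covers only a constant patch and ignores the spatially varying image and the filter-bank machinery of the paper; all names are illustrative.

```python
import numpy as np

def neg_log_likelihood(light_intensity, binary_pixels, oversampling):
    """Negative log-likelihood of a constant light intensity given binary
    (threshold T = 1) readings from an oversampled binary sensor.

    Each binary pixel fires iff it detects at least one photon, so
    P(b = 1) = 1 - exp(-lam) with lam = light_intensity / oversampling.
    This function is convex in light_intensity, which is what makes the
    MLE reconstruction a convex optimization problem.
    """
    lam = light_intensity / oversampling
    ones = binary_pixels.sum()
    zeros = binary_pixels.size - ones
    return zeros * lam - ones * np.log1p(-np.exp(-lam))

# For a single constant patch the MLE has the closed form:
#   lam_hat = -log(1 - fraction_of_ones)
```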

  9. Recent technology and usage of plastic lenses in image taking objectives

    NASA Astrophysics Data System (ADS)

    Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko

    2005-09-01

    Recently, plastic lenses produced by injection molding have been widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease of obtaining the advantages of aspherical surfaces. For digital camera and camcorder objectives, it is desirable that there be no image point variation with temperature change, in spite of employing several plastic lenses. At the same time, due to the shrinking pixel size of solid-state image sensors, there is now a requirement to assemble lenses with high accuracy. In order to satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially; for mobile phone cameras, the consideration of productivity is therefore more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, utilizing the advantage that a plastic lens can be given a mechanically functional shape on its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, achieving both high productivity and high performance. Reported here are the constructions and technical topics of the image-taking objectives described above.

  10. Curiosity ChemCam Removes Dust

    NASA Image and Video Library

    2013-04-08

    This pair of images, taken a few minutes apart, shows how laser firing by NASA's Mars rover Curiosity removes dust from the surface of a rock. The images were taken by the remote micro-imager camera in the laser-firing Chemistry and Camera (ChemCam) instrument.

  11. What Is an Image?

    ERIC Educational Resources Information Center

    Zetie, K. P.

    2017-01-01

    In basic physics, often in their first year of study of the subject, students meet the concept of an image, for example when using pinhole cameras and finding the position of an image in a mirror. They are also familiar with the term in photography and design, through software which allows image manipulation, even "in-camera" on most…

  12. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  13. Capturing the plenoptic function in a swipe

    NASA Astrophysics Data System (ADS)

    Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi

    2016-09-01

    Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.

  14. Pain modulation is affected differently in medication-overuse headache and chronic myofascial pain - A multimodal MRI study.

    PubMed

    Michels, Lars; Christidi, Foteini; Steiger, Vivian R; Sándor, Peter S; Gantenbein, Andreas R; Landmann, Gunther; Schreglmann, Sebastian R; Kollias, Spyros; Riederer, Franz

    2017-07-01

    Background Neuroimaging studies revealed structural and functional changes in medication-overuse headache (MOH), but it remains unclear whether similar changes could be observed in other chronic pain disorders. Methods In this cross-sectional study, we investigated functional connectivity (FC) with resting-state functional magnetic resonance imaging (fMRI) and white matter integrity using diffusion tensor imaging (DTI) to measure fractional anisotropy (FA) and mean diffusivity (MD) in patients with MOH (N = 12) relative to two control groups: patients with chronic myofascial pain (MYO; N = 11) and healthy controls (CN; N = 16). Results In a data-driven approach we found hypoconnectivity in the fronto-parietal attention network in both pain groups relative to CN (i.e. MOH < CN and MYO < CN). In contrast, hyperconnectivity in the saliency network (SN) was detected only in MOH, which correlated with FA in the insula. In a seed-based analysis we investigated FC between the periaqueductal grey (PAG) and all other brain regions. In addition to overlapping hyperconnectivity seen in patient groups (relative to CN), MOH had a distinct connectivity pattern with lower FC to parieto-occipital regions and higher FC to orbitofrontal regions compared to controls. FA and MD abnormalities were mostly observed in MOH, involving the insula. Conclusions Hyperconnectivity within the SN along with associated white matter changes therein suggest a particular role of this network in MOH. In addition, abnormal connectivity between the PAG and other pain modulatory (frontal) regions in MOH are consistent with dysfunctional central pain control.

  15. An optical assessment of the effects of glioma growth on resting state networks in mice (Conference Presentation)

    NASA Astrophysics Data System (ADS)

    Orukari, Inema E.; Bauer, Adam Q.; Baxter, Grant A.; Rubin, Joshua B.; Culver, Joseph P.

    2017-02-01

    Gliomas are known to cause significant changes in normal brain function that lead to cognitive deficits. Disruptions in resting state networks (RSNs) are thought to underlie these changes. However, investigating the effects of glioma growth on RSNs in humans is complicated by the heterogeneity in lesion size, type, and location across subjects. In this study, we evaluated the effects of tumor growth on RSNs over time in a controlled mouse model of glioma growth. Methods: Glioma cells (5×10^4-10^5 U87s) were stereotactically injected into the forepaw somatosensory cortex of adult nude mice (n=5). Disruptions in RSNs were evaluated weekly with functional connectivity optical intrinsic signal imaging (fcOIS). Tumor growth was monitored with MRI and weekly bioluminescence imaging (BLI). In order to characterize how tumor growth affected different RSNs over time, we calculated a number of functional connectivity (fc) metrics, including homotopic (bilateral) connectivity, spatial similarity, and node degree. Results: Deficits in fc initiate near the lesion, and over a period of several weeks, extend more globally. The reductions in spatial similarity were found to strongly correlate with the BLI signal indicating that increased tumor size is associated with increased RSN disruption. Conclusions: We have shown that fcOIS is capable of detecting alterations in mouse RSNs due to brain tumor growth. A better understanding of how RSN disruption contributes to the development of cognitive deficits in brain tumor patients may lead to better patient risk stratification and consequently improved cognitive outcomes.

  16. Lymphoscintigraphy

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  17. Hepatobiliary

    MedlinePlus

    ... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...

  18. Presence capture cameras - a new challenge to the image quality

    NASA Astrophysics Data System (ADS)

    Peltoketo, Veli-Tapani

    2016-04-01

    Commercial presence capture cameras are coming to the markets and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary a lot. Presence capture cameras still have the same quality issues to tackle as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system which can record 3D audio-visual reality as it is must have several camera modules, several microphones, and, especially, technology which can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution and dynamic range create the base of virtual reality stream quality. However, the co-operation of several cameras brings a new dimension to these quality factors, and new quality features can also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how should the stitching be validated? This work describes the quality factors which remain valid for presence capture cameras and defines their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view, and the work considers how well current measurement methods can be applied to presence capture cameras.

  19. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F. M.; Jorge, Jorge M.

    1998-01-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes; it is refracted by the ocular media, strikes the retina (focused or not), reflects off, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type, synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.

  20. Digital photorefraction

    NASA Astrophysics Data System (ADS)

    Costa, Manuel F.; Jorge, Jorge M.

    1997-12-01

    The early evaluation of the visual status of human infants is of critical importance. It is of utmost importance to the development of the child's visual system that she perceives clear, focused retinal images. Furthermore, if refractive problems are not corrected in due time, amblyopia may occur. Photorefraction is a non-invasive clinical tool rather convenient for application to this kind of population. Qualitative or semi-quantitative information about refractive errors, accommodation, strabismus, amblyogenic factors, and some pathologies (cataracts) can then be easily obtained. The photorefraction experimental setup we established, using new technological breakthroughs in the fields of imaging devices, image processing, and fiber optics, allows the implementation of both the isotropic and eccentric photorefraction approaches. Essentially, both methods consist of delivering a light beam into the eyes; it is refracted by the ocular media, strikes the retina (focused or not), reflects off, and is collected by a camera. The system is formed by one CCD color camera and a light source. A beam splitter in front of the camera's objective allows coaxial illumination and observation. An optomechanical system also allows eccentric illumination. The light source is a flash type, synchronized with the camera's image acquisition. The camera's image is digitized and displayed in real time. Image processing routines are applied for image enhancement and feature extraction.
