Sample records for target guide camera

  1. Camera calibration: active versus passive targets

    NASA Astrophysics Data System (ADS)

    Schmalz, Christoph; Forster, Frank; Angelopoulou, Elli

    2011-11-01

    Traditionally, most camera calibrations rely on a planar target with well-known marks. However, the localization error of the marks in the image is a source of inaccuracy. We propose the use of high-resolution digital displays as active calibration targets to obtain more accurate calibration results for all types of cameras. The display shows a series of coded patterns to generate correspondences between world points and image points. This has several advantages. No special calibration hardware is necessary because suitable displays are practically ubiquitous. The method is fully automatic, and no identification of marks is necessary. For a coding scheme based on phase shifting, the localization accuracy is approximately independent of the camera's focus settings. Most importantly, higher accuracy can be achieved compared to passive targets, such as printed checkerboards. A rigorous evaluation is performed to substantiate this claim. Our active target method is compared to standard calibrations using a checkerboard target. We perform camera calibrations with different combinations of displays, cameras, and lenses, as well as with simulated images, and find markedly lower reprojection errors when using active targets. For example, in a stereo reconstruction task, the accuracy of a system calibrated with an active target is five times better.
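
    The reprojection-error metric used in this comparison can be sketched in a few lines: project known world points through a pinhole model and take the RMS distance to the detected image points. The intrinsics and points below are illustrative values, not data from the paper.

```python
import numpy as np

def project(K, R, t, world_pts):
    """Project Nx3 world points through a pinhole camera (no distortion)."""
    cam = world_pts @ R.T + t          # world -> camera coordinates
    uv = cam @ K.T                     # apply intrinsic matrix
    return uv[:, :2] / uv[:, 2:3]      # perspective divide

def rms_reprojection_error(K, R, t, world_pts, measured_uv):
    """RMS distance between projected and measured image points."""
    residuals = project(K, R, t, world_pts) - measured_uv
    return np.sqrt(np.mean(np.sum(residuals ** 2, axis=1)))

# Hypothetical camera and target points.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 5.0])
pts = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0]])
uv = project(K, R, t, pts)             # perfect "measurements"
```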

  2. Two-Camera Acquisition and Tracking of a Flying Target

    NASA Technical Reports Server (NTRS)

    Biswas, Abhijit; Assad, Christopher; Kovalik, Joseph M.; Pain, Bedabrata; Wrigley, Chris J.; Twiss, Peter

    2008-01-01

    A method and apparatus have been developed to solve the problem of automated acquisition and tracking, from a location on the ground, of a luminous moving target in the sky. The method involves the use of two electronic cameras: (1) a stationary camera having a wide field of view, positioned and oriented to image the entire sky; and (2) a camera that has a much narrower field of view (a few degrees wide) and is mounted on a two-axis gimbal. The wide-field-of-view stationary camera is used to initially identify the target against the background sky. So that the approximate position of the target can be determined, pixel locations on the image-detector plane in the stationary camera are calibrated with respect to azimuth and elevation. The approximate target position is used to initially aim the gimballed narrow-field-of-view camera in the approximate direction of the target. Next, the narrow-field-of view camera locks onto the target image, and thereafter the gimbals are actuated as needed to maintain lock and thereby track the target with precision greater than that attainable by use of the stationary camera.
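
    The calibration step described above (mapping pixel locations in the stationary camera to azimuth and elevation) can be sketched as a least-squares fit over reference points with known directions. A real all-sky camera needs a nonlinear (e.g. fisheye) model; the affine fit and calibration points below are only illustrative.

```python
import numpy as np

def fit_pixel_to_azel(pixels, azel):
    """Least-squares affine map [u, v, 1] -> [az, el]."""
    A = np.hstack([pixels, np.ones((len(pixels), 1))])
    coeffs, *_ = np.linalg.lstsq(A, azel, rcond=None)
    return coeffs                       # shape (3, 2)

def pixel_to_azel(coeffs, uv):
    """Convert one pixel coordinate to (azimuth, elevation)."""
    return np.append(uv, 1.0) @ coeffs

# Hypothetical calibration points: pixel -> (az, el) in degrees.
pixels = np.array([[0, 0], [640, 0], [0, 480], [640, 480]], float)
azel = np.array([[0, 10], [90, 10], [0, 80], [90, 80]], float)
coeffs = fit_pixel_to_azel(pixels, azel)
```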

  3. Calibration Target for Curiosity Arm Camera

    NASA Image and Video Library

    2012-09-10

    This view of the calibration target for the MAHLI camera aboard NASA's Mars rover Curiosity combines two images taken by that camera on Sept. 9, 2012. Part of Curiosity's left-front and center wheels and a patch of Martian ground are also visible.

  4. Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery

    NASA Astrophysics Data System (ADS)

    Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng

    2012-10-01

    In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using an extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the AR marker is obtained by a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by solid models of tumors or anatomical structures, as well as the part of the tool hidden inside the skull.

  5. Lincoln Penny on Mars in Camera Calibration Target

    NASA Image and Video Library

    2012-09-10

    The penny in this image is part of a camera calibration target on NASA's Mars rover Curiosity. The MAHLI camera on the rover took this image of the MAHLI calibration target during the 34th Martian day of Curiosity's work on Mars, Sept. 9, 2012.

  6. Setup for testing cameras for image guided surgery using a controlled NIR fluorescence mimicking light source and tissue phantom

    NASA Astrophysics Data System (ADS)

    Georgiou, Giota; Verdaasdonk, Rudolf M.; van der Veen, Albert; Klaessens, John H.

    2017-02-01

    In the development of new near-infrared (NIR) fluorescence dyes for image-guided surgery, there is a need for new NIR-sensitive camera systems that can easily be adjusted to specific wavelength ranges, in contrast to the present clinical systems, which are optimized only for ICG. To test alternative camera systems, a setup was developed that mimics the fluorescence light in a tissue phantom in order to measure sensitivity and resolution. Selected narrow-band NIR LEDs were used to illuminate a 6 mm diameter circular diffuse plate, creating a uniform-intensity, controllable light spot (μW-mW) as a target/source for NIR cameras. Layers of (artificial) tissue of controlled thickness could be placed on the spot to mimic a fluorescent `cancer' embedded in tissue. This setup was used to compare a range of NIR-sensitive consumer cameras for potential use in image-guided surgery. The image of the spot obtained with each camera was captured and analyzed using ImageJ software. Enhanced CCD night-vision cameras were the most sensitive, capable of showing intensities < 1 μW through 5 mm of tissue. However, there was no control over the automatic gain and hence the noise level. NIR-sensitive DSLR cameras proved relatively less sensitive but could be fully manually controlled in gain (ISO 25600) and exposure time, and are therefore preferred for a clinical setting in combination with Wi-Fi remote control. The NIR fluorescence testing setup proved useful for camera testing and can be used for development and quality control of new NIR fluorescence guided surgery equipment.

  7. Mast Camera and Its Calibration Target on Curiosity Rover

    NASA Image and Video Library

    2013-03-18

    This set of images illustrates the twin cameras of the Mastcam instrument on NASA's Curiosity Mars rover (upper left), the Mastcam calibration target (lower center), and the locations of the cameras and target on the rover.

  8. Target-Tracking Camera for a Metrology System

    NASA Technical Reports Server (NTRS)

    Liebe, Carl; Bartman, Randall; Chapsky, Jacob; Abramovici, Alexander; Brown, David

    2009-01-01

    An analog electronic camera that is part of a metrology system measures the varying direction to a light-emitting diode that serves as a bright point target. In the original application for which the camera was developed, the metrology system is used to determine the varying relative positions of radiating elements of an airborne synthetic-aperture radar (SAR) antenna as the airplane flexes during flight; precise knowledge of the relative positions as a function of time is needed for processing SAR readings. It has been common metrology-system practice to measure the varying direction to a bright target by use of an electronic camera of the charge-coupled-device or active-pixel-sensor type. A major disadvantage of this practice arises from the necessity of reading out and digitizing the outputs from a large number of pixels and processing the resulting digital values in a computer to determine the centroid of a target: because of the time taken by the readout, digitization, and computation, the update rate is limited to tens of hertz. In contrast, the analog nature of the present camera makes it possible to achieve an update rate of hundreds of hertz, and no computer is needed to determine the centroid. The camera is based on a position-sensitive detector (PSD), which is a rectangular photodiode with output contacts at opposite ends. PSDs are usually used in triangulation for measuring small distances, and are manufactured in both one- and two-dimensional versions. Because it is very difficult to calibrate two-dimensional PSDs accurately, the focal-plane sensors used in this camera are two orthogonally mounted one-dimensional PSDs.
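
    The way a one-dimensional PSD reports position can be sketched directly: the photocurrents at the two end contacts divide in proportion to where the light spot lands, so their normalized difference gives the position along the strip. The currents and strip length below are illustrative.

```python
def psd_position(i1, i2, length):
    """Spot position relative to the strip center, from the two end-contact
    photocurrents of a 1-D position-sensitive detector."""
    return (i2 - i1) / (i1 + i2) * (length / 2.0)
```

    Because this is a simple analog ratio, it can be computed with op-amp circuitry at high update rates, which is the advantage the abstract describes.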

  9. User guide for the USGS aerial camera Report of Calibration.

    USGS Publications Warehouse

    Tayman, W.P.

    1984-01-01

    Calibration and testing of aerial mapping cameras includes the measurement of optical constants and the check for proper functioning of a number of complicated mechanical and electrical parts. For this purpose the US Geological Survey performs an operational type photographic calibration. This paper is not strictly a scientific paper but rather a 'user guide' to the USGS Report of Calibration of an aerial mapping camera for compliance with both Federal and State mapping specifications. -Author

  10. Rover mast calibration, exact camera pointing, and camera handoff for visual target tracking

    NASA Technical Reports Server (NTRS)

    Kim, Won S.; Ansar, Adnan I.; Steele, Robert D.

    2005-01-01

    This paper presents three technical elements that we have developed to improve the accuracy of visual target tracking for single-sol approach-and-instrument placement in future Mars rover missions. An accurate, straightforward method of rover mast calibration is achieved by using a total station, a camera calibration target, and four prism targets mounted on the rover. The method was applied to the Rocky8 rover mast calibration and yielded a 1.1-pixel rms residual error. Camera pointing requires inverse kinematic solutions for mast pan and tilt angles such that the target image appears right at the center of the camera image. Two issues were raised. Mast camera frames are in general not parallel to the masthead base frame. Further, the optical axis of the camera model in general does not pass through the center of the image. Despite these issues, we managed to derive non-iterative closed-form exact solutions, which were verified with MATLAB routines. Actual camera pointing experiments over 50 random target image points yielded less than 1.3-pixel rms pointing error. Finally, a purely geometric method for camera handoff using stereo views of the target has been developed. Experimental test runs show less than 2.5 pixels error on the high-resolution Navcam for Pancam-to-Navcam handoff, and less than 4 pixels error on the lower-resolution Hazcam for Navcam-to-Hazcam handoff.
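
    In the idealized case (camera frame aligned with the masthead base frame, optical axis through the image center), the pan/tilt solution the paper generalizes has a simple closed form. This sketch covers only that ideal case; the paper's contribution is the exact solution when those assumptions fail.

```python
import math

def pan_tilt(x, y, z):
    """Pan and tilt angles (radians) that point an ideal pan/tilt camera at a
    target located at (x, y, z) in the masthead base frame."""
    pan = math.atan2(y, x)
    tilt = math.atan2(z, math.hypot(x, y))
    return pan, tilt
```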

  11. Field-of-View Guiding Camera on the HISAKI (SPRINT-A) Satellite

    NASA Astrophysics Data System (ADS)

    Yamazaki, A.; Tsuchiya, F.; Sakanoi, T.; Uemizu, K.; Yoshioka, K.; Murakami, G.; Kagitani, M.; Kasaba, Y.; Yoshikawa, I.; Terada, N.; Kimura, T.; Sakai, S.; Nakaya, K.; Fukuda, S.; Sawai, S.

    2014-11-01

    The HISAKI (SPRINT-A) satellite is an Earth-orbiting extreme-ultraviolet (EUV) spectroscopic mission, launched on 14 Sep. 2013 by the Epsilon-1 launch vehicle. The Extreme Ultraviolet Spectroscope (EXCEED) onboard the satellite will investigate plasma dynamics in Jupiter's inner magnetosphere and atmospheric escape from Venus and Mars. EUV spectroscopy is useful for measuring electron density, electron temperature, and ion composition in a plasma environment. EXCEED also has the advantage of measuring the spatial distribution of plasmas around the planets. To measure the radial plasma distribution in the Jovian inner magnetosphere, and to measure plasma emissions from the ionosphere, exosphere, and tail separately (for Venus and Mars), the pointing accuracy of the spectroscope should be smaller than the spatial structures of interest (20 arc-seconds). For satellites in low Earth orbit (LEO), pointing displacement is generally caused by a change of alignment between the satellite bus module and the telescope due to changing thermal inputs from the Sun and Earth. The HISAKI satellite is designed to compensate for this displacement by tracking the target using a field-of-view (FOV) guiding camera. Initial checkout of the attitude control for the EXCEED observation shows that the pointing accuracy is kept within 2 arc-seconds in the "track mode" used for Jupiter observations. For observations of Mercury, Venus, Mars, and Saturn, the entire disk will be guided inside the slit to observe plasma around the planets. Since the FOV camera does not capture the disk in this case, the satellite uses a star tracker (STT) to hold the attitude ("hold mode"). Pointing accuracy during this mode has been 20-25 arc-seconds. It has been confirmed that the attitude control works as designed.

  12. Occlusion handling framework for tracking in smart camera networks by per-target assistance task assignment

    NASA Astrophysics Data System (ADS)

    Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried

    2017-09-01

    Occlusion is one of the most difficult challenges in the area of visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view in a multicamera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target, taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations over all targets being tracked. When it cannot adequately observe a target, the smart camera estimates the quality of observation of that target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison to state-of-the-art trackers, which our method outperforms.
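
    A per-target observation-quality score in the spirit of the energy function above can be sketched with a toy occlusion model: penalize a target by how much of its bounding box is covered by occluders. This stand-in is not the paper's formulation; boxes are illustrative (x1, y1, x2, y2) tuples.

```python
def overlap_fraction(box, occluders):
    """Fraction of `box` covered by occluder boxes, summed pairwise
    (overlaps between occluders are double-counted in this toy model)."""
    x1, y1, x2, y2 = box
    area = (x2 - x1) * (y2 - y1)
    covered = 0.0
    for ox1, oy1, ox2, oy2 in occluders:
        w = max(0.0, min(x2, ox2) - max(x1, ox1))
        h = max(0.0, min(y2, oy2) - max(y1, oy1))
        covered += w * h
    return min(1.0, covered / area)

def observation_quality(box, occluders):
    """1.0 = fully visible target, 0.0 = fully occluded."""
    return 1.0 - overlap_fraction(box, occluders)
```

    A camera would compute this score for every target it tracks and request assistance from a camera reporting a higher score whenever its own score drops too low.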

  13. Multi-Target Camera Tracking, Hand-off and Display LDRD 158819 Final Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Robert J.

    2014-10-01

    Modern security control rooms gather video and sensor feeds from tens to hundreds of cameras. Advanced camera analytics can detect motion from individual video streams and convert unexpected motion into alarms, but the interpretation of these alarms depends heavily upon human operators. Unfortunately, these operators can be overwhelmed when a large number of events happen simultaneously, or lulled into complacency due to frequent false alarms. This LDRD project has focused on improving video surveillance-based security systems by changing the fundamental focus from the cameras to the targets being tracked. If properly integrated, more cameras shouldn't lead to more alarms, more monitors, more operators, and increased response latency but instead should lead to better information and more rapid response times. For the course of the LDRD we have been developing algorithms that take live video imagery from multiple video cameras, identify individual moving targets from the background imagery, and then display the results in a single 3D interactive video. In this document we summarize the work in developing this multi-camera, multi-target system, including lessons learned, tools developed, technologies explored, and a description of current capability.

  15. Laser guide star pointing camera for ESO LGS Facilities

    NASA Astrophysics Data System (ADS)

    Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.

    2014-08-01

    Every observatory using LGS-AO routinely experiences the long time needed to bring and acquire the laser guide star in the wavefront sensor field of view. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope. The LGS acquisition time is even longer in the case of multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of LGS systems is relevant to boost the time efficiency of both science and technical observations. In this paper we show the rationale, the design, and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT performs its initial active-optics cycles to adjust its own optics on a natural star target after a preset. The LPC minimizes the accuracy needed for LGS pointing model calibrations while reaching sub-arcsec LGS absolute pointing accuracy. This considerably reduces the LGS acquisition time and operational overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture Maksutov telescope mounted on the top ring of the VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera is able to recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct, once calibrated, the LPC can calculate on request, for each LGS, its return flux, its FWHM, and the uplink beam scattering levels.

  16. Analysis of calibration accuracy of cameras with different target sizes for large field of view

    NASA Astrophysics Data System (ADS)

    Zhang, Jin; Chai, Zhiwen; Long, Changyu; Deng, Huaxia; Ma, Mengchao; Zhong, Xiang; Yu, Huan

    2018-03-01

    Visual measurement plays an increasingly important role in the fields of aerospace, shipbuilding, and machinery manufacturing. Camera calibration for a large field of view is a critical part of visual measurement. The issue is that a large-scale target is difficult to produce and its precision cannot be guaranteed, while a small target can be produced with high precision but yields only locally optimal solutions. Therefore, the most suitable ratio of target size to camera field of view must be studied to ensure that the calibration precision requirement of the wide field of view is met. In this paper, cameras are calibrated with a series of checkerboard calibration targets and circular calibration targets of different dimensions. The ratios of target size to camera field of view are 9%, 18%, 27%, 36%, 45%, 54%, 63%, 72%, 81% and 90%. The target is placed at different positions in the camera field to obtain the camera parameters at each position. Then, the distribution curves of the mean reprojection error of the reconstructed feature points at the different ratios are analyzed. The experimental data demonstrate that as the ratio of target size to camera field of view increases, the calibration precision improves accordingly, and the mean reprojection error changes only slightly once the ratio is above 45%.

  17. Development of digital shade guides for color assessment using a digital camera with ring flashes.

    PubMed

    Tung, Oi-Hong; Lai, Yu-Lin; Ho, Yi-Ching; Chou, I-Chiang; Lee, Shyh-Yuan

    2011-02-01

    Digital photographs taken with cameras and ring flashes are commonly used for dental documentation. We hypothesized that different illuminants and camera white balance setups would influence the color rendering of digital images and affect the effectiveness of color matching using digital images. Fifteen ceramic disks of different shades were fabricated and photographed with a digital camera in both automatic white balance (AWB) and custom white balance (CWB) under either light-emitting diode (LED) or electronic ring flash illumination. The Commission Internationale de l'Éclairage L*a*b* parameters of the captured images were derived from Photoshop software and served as digital shade guides. We found significantly high correlation coefficients (r² > 0.96) between the respective spectrophotometer standards and the shade guides generated in CWB setups. Moreover, the accuracy of color matching of another set of ceramic disks using the digital shade guides, verified by ten operators, improved from 67% in AWB to 93% in CWB under LED illuminants. Probably because of the inconsistent performance of the flashlight and specular reflection, the digital images captured under the electronic ring flash in both white balance setups proved less reliable and showed relatively low matching ability. In conclusion, the reliability of color matching with digital images is strongly influenced by the illuminants and the camera's white balance setup, while digital shade guides derived under LED illuminants with CWB show applicable potential in the field of color assessment.
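
    Once L*a*b* values are read for each disk, shade matching reduces to finding the digital shade guide with the smallest color difference. A minimal sketch using the CIE76 ΔE*ab metric follows; the guide values and sample are hypothetical, and the paper does not specify which ΔE formula it used.

```python
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two L*a*b* triples."""
    return math.dist(lab1, lab2)

def best_match(sample, guides):
    """Index of the shade guide closest to the sampled L*a*b* value."""
    return min(range(len(guides)), key=lambda i: delta_e(sample, guides[i]))

# Hypothetical digital shade guides (L*, a*, b*).
guides = [(70.0, 2.0, 18.0), (65.0, 4.0, 20.0), (60.0, 6.0, 22.0)]
```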

  18. A calibration method based on virtual large planar target for cameras with large FOV

    NASA Astrophysics Data System (ADS)

    Yu, Lei; Han, Yangyang; Nie, Hong; Ou, Qiaofeng; Xiong, Bangshu

    2018-02-01

    In order to obtain high precision in camera calibration, a target should be large enough to cover the whole field of view (FOV). For cameras with a large FOV, using a small target seriously reduces the precision of calibration, but a large target causes many difficulties in manufacturing, carrying, and employing it. To solve this problem, a calibration method based on a virtual large planar target (VLPT), virtually constructed from multiple small targets (STs), is proposed for cameras with large FOV. In the VLPT-based calibration method, first, the positions and directions of the STs are changed several times to obtain a number of calibration images. Secondly, the VLPT of each calibration image is created by finding the virtual points corresponding to the feature points of the STs. Finally, the intrinsic and extrinsic parameters of the camera are calculated using the VLPTs. Experimental results show that the proposed method not only achieves calibration precision similar to that obtained with a large target, but also has good stability over the whole measurement area. Thus, the difficulty of accurately calibrating cameras with large FOV can be effectively tackled by the proposed method, which also offers good operability.

  19. Practical target location and accuracy indicator in digital close range photogrammetry using consumer grade cameras

    NASA Astrophysics Data System (ADS)

    Moriya, Gentaro; Chikatsu, Hirofumi

    2011-07-01

    Recently, the pixel counts and capabilities of consumer-grade digital cameras have been increasing rapidly thanks to modern semiconductor and digital technology, and there are many low-priced consumer-grade digital cameras with more than 10 megapixels on the market in Japan. In these circumstances, digital photogrammetry using consumer-grade cameras is in great demand in various application fields. There is a large body of literature on the calibration of consumer-grade digital cameras and on circular target location. Target location with subpixel accuracy has been investigated as a star tracker issue, and many target location algorithms have been developed. It is widely accepted that least-squares ellipse fitting is the most accurate algorithm. However, problems remain for efficient digital close-range photogrammetry: reconfirmation of the subpixel target location algorithms for consumer-grade digital cameras, the relationship between the number of edge points along the target boundary and accuracy, and an indicator for estimating the accuracy of normal digital close-range photogrammetry using consumer-grade cameras. With this motive, an empirical test of several subpixel target location algorithms and an indicator for estimating accuracy are investigated in this paper, using real data acquired indoors with 7 consumer-grade digital cameras ranging from 7.2 to 14.7 megapixels.
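
    The simplest subpixel target-location algorithm of the kind compared above is the intensity-weighted centroid over a small window; the paper finds least-squares ellipse fitting more accurate, but the centroid is the usual baseline. The windows below are illustrative.

```python
import numpy as np

def centroid(window):
    """Intensity-weighted centroid of a 2-D window, in (row, col) pixels.
    Subpixel precision comes from the fractional weighting."""
    w = np.asarray(window, float)
    total = w.sum()
    rows, cols = np.indices(w.shape)
    return (rows * w).sum() / total, (cols * w).sum() / total
```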

  20. Target recognitions in multiple-camera closed-circuit television using color constancy

    NASA Astrophysics Data System (ADS)

    Soori, Umair; Yuen, Peter; Han, Ji Wen; Ibrahim, Izzati; Chen, Wentao; Hong, Kan; Merfort, Christian; James, David; Richardson, Mark

    2013-04-01

    People tracking in crowded scenes from closed-circuit television (CCTV) footage has been a popular and challenging task in computer vision. Due to the limited spatial resolution of CCTV footage, the color of people's dress may offer an alternative feature for their recognition and tracking. However, there are many factors, such as variable illumination conditions, viewing angles, and camera calibration, that may induce illusive modification of the intrinsic color signatures of the target. Our objective is to recognize and track targets in multiple camera views using color as the detection feature, and to understand whether a color constancy (CC) approach may help to reduce these color illusions due to illumination and camera artifacts and thereby improve target recognition performance. We have tested a number of CC algorithms using various color descriptors to assess the efficiency of target recognition on a real multicamera Imagery Library for Intelligent Detection Systems (i-LIDS) data set. Various classifiers have been used for target detection, and the figure of merit to assess the efficiency of target recognition is the area under the receiver operating characteristic curve (AUROC). We have proposed two modifications of luminance-based CC algorithms: one with a color transfer mechanism and the other using a pixel-wise sigmoid function for adaptive dynamic range compression, a method termed enhanced luminance reflectance CC (ELRCC). We found that both algorithms improve the efficiency of target recognition substantially over that of the raw data without CC treatment, and in some cases the ELRCC improves target tracking by over 100% within the AUROC assessment metric. The performance of the ELRCC has been assessed over 10 selected targets from three different camera views of the i-LIDS footage, and the averaged target recognition efficiency over all these targets is found to be improved by about 54% in AUROC after the data are processed by the ELRCC.
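
    The AUROC figure of merit used above can be computed from classifier scores via the rank-sum (Mann-Whitney) identity; the sketch below ignores tied scores, and the labels and scores are illustrative.

```python
import numpy as np

def auroc(labels, scores):
    """AUROC = probability that a random positive outranks a random negative.
    Ties in `scores` are not handled in this sketch."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = np.asarray(labels) == 1
    n_pos, n_neg = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```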

  1. Global calibration of multi-cameras with non-overlapping fields of view based on photogrammetry and reconfigurable target

    NASA Astrophysics Data System (ADS)

    Xia, Renbo; Hu, Maobang; Zhao, Jibin; Chen, Songlin; Chen, Yueling

    2018-06-01

    Multi-camera vision systems are often needed to achieve large-scale and high-precision measurement because these systems have larger fields of view (FOV) than a single camera. Multiple cameras may have no or narrow overlapping FOVs in many applications, which pose a huge challenge to global calibration. This paper presents a global calibration method for multi-cameras without overlapping FOVs based on photogrammetry technology and a reconfigurable target. Firstly, two planar targets are fixed together and made into a long target according to the distance between the two cameras to be calibrated. The relative positions of the two planar targets can be obtained by photogrammetric methods and used as invariant constraints in global calibration. Then, the reprojection errors of target feature points in the two cameras’ coordinate systems are calculated at the same time and optimized by the Levenberg–Marquardt algorithm to find the optimal solution of the transformation matrix between the two cameras. Finally, all the camera coordinate systems are converted to the reference coordinate system in order to achieve global calibration. Experiments show that the proposed method has the advantages of high accuracy (the RMS error is 0.04 mm) and low cost and is especially suitable for on-site calibration.
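
    The final step described above, converting every camera coordinate system to one reference frame, amounts to chaining 4×4 homogeneous transforms. A minimal sketch with hypothetical poses (identity rotations, made-up translations):

```python
import numpy as np

def make_T(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

# Hypothetical poses: camera2 w.r.t. the target, target w.r.t. camera1.
T_target_cam2 = make_T(np.eye(3), [0.5, 0.0, 0.0])
T_cam1_target = make_T(np.eye(3), [0.0, 0.0, 2.0])
# Chaining expresses camera2 in camera1's (reference) frame.
T_cam1_cam2 = T_cam1_target @ T_target_cam2
```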

  2. A target detection multi-layer matched filter for color and hyperspectral cameras

    NASA Astrophysics Data System (ADS)

    Miyanishi, Tomoya; Preece, Bradley L.; Reynolds, Joseph P.

    2018-05-01

    In this article, a method for applying matched filters to a 3-dimensional hyperspectral data cube is discussed. In many applications, color visible cameras or hyperspectral cameras are used for target detection where the color or spectral optical properties of the imaged materials are partially known in advance. Therefore, matched filtering on spectral data along with shape data is an effective method for detecting certain targets. Since many methods for 2D image filtering have been researched, we propose a multi-layer filter in which ordinary spatially matched filters are applied before the spectral filters. We discuss a way to layer the spectral filters for a 3D hyperspectral data cube, accompanied by a detectability metric for calculating the SNR of the filter. This method is appropriate for visible color cameras and hyperspectral cameras. We also demonstrate an analysis using the Night Vision Integrated Performance Model (NV-IPM) and a Monte Carlo simulation in order to confirm the effectiveness of the filtering in providing a higher output SNR and a lower false alarm rate.
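
    The spectral building block of such a multi-layer scheme is the standard per-pixel spectral matched filter; this sketch is the textbook form, not necessarily the paper's exact filter, and the target signature and background statistics below are illustrative.

```python
import numpy as np

def spectral_matched_filter(cube, target, background_mean, background_cov):
    """Per-pixel matched-filter score for an (H, W, bands) cube, normalized
    so that a pixel equal to the target signature scores 1.0."""
    d = target - background_mean
    w = np.linalg.solve(background_cov, d)        # whitened signature
    centered = cube - background_mean
    return centered @ w / (d @ w)

# Tiny illustrative cube: one pixel holds the target signature.
cube = np.zeros((1, 2, 3))
cube[0, 0] = [1.0, 0.0, 0.0]
scores = spectral_matched_filter(cube, np.array([1.0, 0.0, 0.0]),
                                 np.zeros(3), np.eye(3))
```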

  3. Design of an infrared camera based aircraft detection system for laser guide star installations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Friedman, H.; Macintosh, B.

    1996-03-05

    There have been incidents in which the irradiance from laser guide stars has temporarily blinded pilots or passengers of aircraft. An aircraft detection system based on passive near-infrared cameras (instead of active radar) is described in this report.

  4. Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri

    2012-01-01

    An aerial multiple-camera tracking paradigm must not only spot and track unknown targets, but also handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system, which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion, allowing it to find moving targets even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features, which can be combined into a variety of different fingerprints. To keep data transmission and storage requirements low for camera handoff of targets, we try several simple representations: a histogram, a spatiogram and a single Gaussian model. These are tested by simulating a very large number of target losses in six videos, over an interval of 1000 frames each, from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints, i.e., how long a fingerprint remains usable when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us whether a fingerprint method retains accuracy over longer periods.
In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared
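The single-Gaussian-model fingerprint can be illustrated with a generic sketch (not the paper's exact formulation): the fingerprint is just the mean and covariance of the target's colour pixels, and two fingerprints are compared with the Bhattacharyya distance between the corresponding Gaussians.

```python
import numpy as np

def gaussian_fingerprint(pixels):
    """Compact appearance fingerprint: mean and covariance of the
    target's colour pixels (N x 3 array)."""
    return pixels.mean(axis=0), np.cov(pixels, rowvar=False)

def bhattacharyya(fp1, fp2):
    """Bhattacharyya distance between two Gaussian fingerprints;
    smaller means more similar."""
    m1, S1 = fp1
    m2, S2 = fp2
    S = (S1 + S2) / 2
    d = m1 - m2
    term1 = 0.125 * d @ np.linalg.solve(S, d)
    term2 = 0.5 * np.log(np.linalg.det(S) /
                         np.sqrt(np.linalg.det(S1) * np.linalg.det(S2)))
    return term1 + term2
```

Because only a 3-vector and a 3x3 matrix are transmitted per target, this representation is cheap enough for camera handoff, at the cost of the limited shelf life discussed in the abstract.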

  5. High-resolution mini gamma camera for diagnosis and radio-guided surgery in diabetic foot infection

    NASA Astrophysics Data System (ADS)

    Scopinaro, F.; Capriotti, G.; Di Santo, G.; Capotondi, C.; Micarelli, A.; Massari, R.; Trotta, C.; Soluri, A.

    2006-12-01

    The diagnosis of diabetic foot osteomyelitis is often difficult. 99mTc-WBC (White Blood Cell) scintigraphy plays a key role in the diagnosis of bone infections, but the spatial resolution of the Anger camera is not always sufficient to differentiate soft-tissue from bone infection. The aim of the present study was to verify whether an HRD (High-Resolution Detector) can improve diagnosis and help guide surgery. Patients were studied with an HRD having a 25.7×25.7 mm² FOV, 2 mm spatial resolution and 18% energy resolution. The patients underwent surgery and, when necessary, bone biopsy, both guided by the HRD. Four patients were positive at the Anger camera without specific signs of osteomyelitis. HRS (High-Resolution Scintigraphy) showed hot spots in the same patients. In two of them the hot spot was bar-shaped and localized over the small phalanx. The presence of bone infection was confirmed at surgery, which was successfully guided by HRS. 99mTc-WBC HRS was able to diagnose pedal infection and to guide surgery of the diabetic foot, opening a new way in the treatment of the infected diabetic foot.

  6. Hand-eye calibration using a target registration error model.

    PubMed

    Chen, Elvis C S; Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M

    2017-10-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand-eye calibration between the camera and the tracking system. The authors introduce the concept of 'guided hand-eye calibration', where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand-eye calibration as a registration problem between homologous point-line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera.

  7. Hand–eye calibration using a target registration error model

    PubMed Central

    Morgan, Isabella; Jayarathne, Uditha; Ma, Burton; Peters, Terry M.

    2017-01-01

    Surgical cameras are prevalent in modern operating theatres and are often used as a surrogate for direct vision. Visualisation techniques (e.g. image fusion) made possible by tracking the camera require accurate hand–eye calibration between the camera and the tracking system. The authors introduce the concept of ‘guided hand–eye calibration’, where calibration measurements are facilitated by a target registration error (TRE) model. They formulate hand–eye calibration as a registration problem between homologous point–line pairs. For each measurement, the position of a monochromatic ball-tip stylus (a point) and its projection onto the image (a line) is recorded, and the TRE of the resulting calibration is predicted using a TRE model. The TRE model is then used to guide the placement of the calibration tool, so that the subsequent measurement minimises the predicted TRE. Assessing TRE after each measurement produces accurate calibration using a minimal number of measurements. As a proof of principle, they evaluated guided calibration using a webcam and an endoscopic camera. Their endoscopic camera results suggest that millimetre TRE is achievable when at least 15 measurements are acquired with the tracker sensor ∼80 cm away on the laparoscope handle for a target ∼20 cm away from the camera. PMID:29184657

  8. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.

    PubMed

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-03-20

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods.
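The cluster head (CH) selection step can be sketched as a scoring problem over the candidate camera nodes; the weights, the node dictionary structure, and the linear score below are illustrative assumptions, not the paper's exact optimization:

```python
import math

def select_cluster_head(nodes, target, w_energy=0.6, w_dist=0.4):
    """Pick the cluster head among candidate camera nodes by a weighted
    score of remaining energy (higher is better) and distance to the
    target (lower is better). `nodes` is a list of dicts with 'id',
    'pos' and 'energy'; weights are illustrative, not from the paper."""
    def score(n):
        dist = math.dist(n["pos"], target)
        return w_energy * n["energy"] - w_dist * dist
    return max(nodes, key=score)["id"]
```

With these weights, a slightly more distant node with plenty of remaining energy can win over a nearby but nearly depleted one, which is the trade-off the scheme aims to balance.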

  9. An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks

    PubMed Central

    Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing

    2017-01-01

    In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance the energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, regarding the characteristics and constraints of camera nodes in WCNs, some special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that the tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected by adopting a greedy on-line decision approach based on the defined contribution decision (CD) considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts such selection problem as an optimization problem based on the remaining energy and distance-to-target. Finally, we also perform analysis on the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and observation limitations in field of view (FOV) of camera nodes in WCNs. From simulation results, the proposed tracking scheme shows an obvious improvement in balancing the energy consumption and tracking accuracy over the existing methods. PMID:28335537

  10. Multi-target detection and positioning in crowds using multiple camera surveillance

    NASA Astrophysics Data System (ADS)

    Huang, Jiahu; Zhu, Qiuyu; Xing, Yufeng

    2018-04-01

    In this study, we propose a pixel correspondence algorithm for positioning in crowds based on constraints on the distance between lines of sight, grayscale differences, and height in a world coordinate system. First, a Gaussian mixture model is used to obtain the background and foreground from multi-camera videos. Second, the hair and skin regions are extracted as regions of interest. Finally, the correspondences between the pixels in the regions of interest are found under multiple constraints and the targets are positioned by pixel clustering. The algorithm provides appropriate redundancy information for each target, which decreases the risk of losing targets due to a large viewing angle and wide baseline. To address the correspondence problem for multiple pixels, we construct a pixel-based correspondence model based on a similar permutation matrix, which converts the correspondence problem into a linear programming problem in which a similar permutation matrix is found by minimizing an objective function. The correct pixel correspondences are obtained from the optimal solution of this linear programming problem, and the three-dimensional positions of the targets are then obtained by pixel clustering. We verified the algorithm with multiple cameras in experiments, which showed that the algorithm has high accuracy and robustness.
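The permutation-matrix formulation can be illustrated with a small sketch. The LP relaxation of minimizing a linear cost over permutation matrices has integral optima (the assignment polytope's vertices are permutations), so the Hungarian method in `scipy.optimize.linear_sum_assignment` returns the same optimum as the linear program. The cost values below are invented for illustration, standing in for the combined line-of-sight, grayscale and height terms:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def correspond(cost):
    """Solve min_P sum_ij P_ij * cost_ij over permutation matrices P
    and return P as a 0/1 matrix. Equivalent to the LP relaxation,
    whose optimum is integral for assignment problems."""
    rows, cols = linear_sum_assignment(cost)
    P = np.zeros_like(cost, dtype=int)
    P[rows, cols] = 1
    return P

# Toy cost combining (e.g.) sight-line distance, grayscale difference, height
cost = np.array([[0.1, 0.9, 0.8],
                 [0.7, 0.2, 0.9],
                 [0.8, 0.7, 0.1]])
```

Here the cheapest matching is the diagonal, i.e. pixel i in one view corresponds to pixel i in the other.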

  11. DNA targeting specificity of RNA-guided Cas9 nucleases.

    PubMed

    Hsu, Patrick D; Scott, David A; Weinstein, Joshua A; Ran, F Ann; Konermann, Silvana; Agarwala, Vineeta; Li, Yinqing; Fine, Eli J; Wu, Xuebing; Shalem, Ophir; Cradick, Thomas J; Marraffini, Luciano A; Bao, Gang; Zhang, Feng

    2013-09-01

    The Streptococcus pyogenes Cas9 (SpCas9) nuclease can be efficiently targeted to genomic loci by means of single-guide RNAs (sgRNAs) to enable genome editing. Here, we characterize SpCas9 targeting specificity in human cells to inform the selection of target sites and avoid off-target effects. Our study evaluates >700 guide RNA variants and SpCas9-induced indel mutation levels at >100 predicted genomic off-target loci in 293T and 293FT cells. We find that SpCas9 tolerates mismatches between guide RNA and target DNA at different positions in a sequence-dependent manner, sensitive to the number, position and distribution of mismatches. We also show that SpCas9-mediated cleavage is unaffected by DNA methylation and that the dosage of SpCas9 and sgRNA can be titrated to minimize off-target modification. To facilitate mammalian genome engineering applications, we provide a web-based software tool to guide the selection and validation of target sequences as well as off-target analyses.
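The sequence- and position-dependent mismatch tolerance can be caricatured in a toy score; the linear position weighting below is invented for illustration and is not the paper's empirical model, which was fitted to measured indel frequencies:

```python
def tolerance_score(guide, site):
    """Toy position-weighted specificity score for a 20-nt guide against
    a candidate site: mismatches near the PAM-proximal 3' end are
    penalised more, reflecting the seed region's greater sensitivity.
    Weights are illustrative only."""
    n = len(guide)
    score = 1.0
    for i, (g, s) in enumerate(zip(guide, site)):
        if g != s:
            score *= 1.0 - 0.8 * (i + 1) / n   # heavier penalty toward the 3' end
    return score
```

A perfect match scores 1.0; a single PAM-distal mismatch barely lowers the score, while the same mismatch at the PAM-proximal end suppresses it strongly.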

  12. Astatine-211 imaging by a Compton camera for targeted radiotherapy.

    PubMed

    Nagao, Yuto; Yamaguchi, Mitsutaka; Watanabe, Shigeki; Ishioka, Noriko S; Kawachi, Naoki; Watabe, Hiroshi

    2018-05-24

    Astatine-211 is a promising radionuclide for targeted radiotherapy. Imaging the distribution of targeted radiotherapeutic agents in a patient's body is required for optimization of treatment strategies. We proposed to image 211At via its high-energy photons to overcome some problems of conventional planar or single-photon emission computed tomography imaging. We performed an imaging experiment on a point-like 211At source using a Compton camera, and demonstrated the capability of imaging 211At with high-energy photons for the first time.

  13. Metric Calibration of a Focused Plenoptic Camera Based on a 3d Calibration Target

    NASA Astrophysics Data System (ADS)

    Zeller, N.; Noury, C. A.; Quint, F.; Teulière, C.; Stilla, U.; Dhome, M.

    2016-06-01

    In this paper we present a new calibration approach for focused plenoptic cameras. We derive a new mathematical projection model of a focused plenoptic camera which considers lateral as well as depth distortion; the depth distortion model is derived directly from the theory of depth estimation in a focused plenoptic camera. In total the model consists of five intrinsic parameters, the parameters for radial and tangential distortion in the image plane, and two new depth distortion parameters. In the proposed calibration we perform a complete bundle adjustment based on a 3D calibration target. The residual of our optimization approach is three-dimensional, where the depth residual is defined by a scaled version of the inverse virtual depth difference and thus conforms well to the measured data. Our method is evaluated on different camera setups and shows good accuracy. For a better characterization of our approach we evaluate the accuracy of virtual image points projected back to 3D space.

  14. Heliostat calibration using attached cameras and artificial targets

    NASA Astrophysics Data System (ADS)

    Burisch, Michael; Sanchez, Marcelino; Olarra, Aitor; Villasante, Cristobal

    2016-05-01

    The efficiency of the solar field greatly depends on the ability of the heliostats to precisely reflect solar radiation onto a central receiver. To control the heliostats with such a precision requires the accurate knowledge of the motion of each of them. The motion of each heliostat can be described by a set of parameters, most notably the position and axis configuration. These parameters have to be determined individually for each heliostat during a calibration process. With the ongoing development of small sized heliostats, the ability to automatically perform such a calibration becomes more and more crucial as possibly hundreds of thousands of heliostats are involved. Furthermore, efficiency becomes an important factor as small sized heliostats potentially have to be recalibrated far more often, due to the limited stability of the components. In the following we present an automatic calibration procedure using cameras attached to each heliostat which are observing different targets spread throughout the solar field. Based on a number of observations of these targets under different heliostat orientations, the parameters describing the heliostat motion can be estimated with high precision.

  15. Video camera system for locating bullet holes in targets at a ballistics tunnel

    NASA Technical Reports Server (NTRS)

    Burner, A. W.; Rummler, D. R.; Goad, W. K.

    1990-01-01

    A system consisting of a single charge-coupled device (CCD) video camera, a computer-controlled video digitizer, and software to automate the measurement was developed to measure the location of bullet holes in targets at the International Shooters Development Fund (ISDF)/NASA Ballistics Tunnel. The camera/digitizer system is a crucial component of a highly instrumented indoor 50 meter rifle range which is being constructed to support development of wind-resistant, ultra-match ammunition. The system was designed to take data rapidly (10 s between shots) and automatically with little operator intervention. The system description, measurement concept, and procedure are presented along with laboratory tests of repeatability and bias error. The long-term (1 hour) repeatability of the system was found to be 4 microns (one standard deviation) at the target and the bias error was found to be less than 50 microns. An analysis of potential errors and a technique for calibration of the system are presented.

  16. Adaptive target binarization method based on a dual-camera system

    NASA Astrophysics Data System (ADS)

    Lei, Jing; Zhang, Ping; Xu, Jiangtao; Gao, Zhiyuan; Gao, Jing

    2018-01-01

    An adaptive target binarization method based on a dual-camera system that contains two dynamic vision sensors was proposed. First, a preprocessing procedure of denoising is introduced to remove the noise events generated by the sensors. Then, the complete edge of the target is retrieved and represented by events based on an event mosaicking method. Third, the region of the target is confirmed by an event-to-event method. Finally, a postprocessing procedure of image open and close operations of morphology methods is adopted to remove the artifacts caused by event-to-event mismatching. The proposed binarization method has been extensively tested on numerous degraded images with nonuniform illumination, low contrast, noise, or light spots and successfully compared with other well-known binarization methods. The experimental results, which are based on visual and misclassification error criteria, show that the proposed method performs well and has better robustness on the binarization of degraded images.
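The final open/close postprocessing step can be sketched with standard binary morphology; `scipy.ndimage` is used here as an assumed stand-in for the authors' implementation:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def postprocess(mask, size=3):
    """Morphological open-then-close clean-up of a binary target mask:
    opening removes isolated artefact pixels left by event-to-event
    mismatching, closing fills small holes inside the target region."""
    st = np.ones((size, size), dtype=bool)
    return binary_closing(binary_opening(mask, st), st)
```

A 3x3 structuring element removes single-pixel artefacts and fills single-pixel holes; larger elements trade off artefact suppression against erosion of thin target structures.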

  17. Opto-mechanical design of the G-CLEF flexure control camera system

    NASA Astrophysics Data System (ADS)

    Oh, Jae Sok; Park, Chan; Kim, Jihun; Kim, Kang-Min; Chun, Moo-Young; Yu, Young Sam; Lee, Sungho; Nah, Jakyoung; Park, Sung-Joon; Szentgyorgyi, Andrew; McMuldroch, Stuart; Norton, Timothy; Podgorski, William; Evans, Ian; Mueller, Mark; Uomoto, Alan; Crane, Jeffrey; Hare, Tyson

    2016-08-01

    The GMT-Consortium Large Earth Finder (G-CLEF) is the first-light instrument of the Giant Magellan Telescope (GMT). The G-CLEF is a fiber-fed, optical-band echelle spectrograph capable of extremely precise radial velocity measurement. KASI (Korea Astronomy and Space Science Institute) is responsible for the Flexure Control Camera (FCC) included in the G-CLEF Front End Assembly (GCFEA). The FCC is a kind of guide camera: it monitors the field images focused on a fiber mirror to control the flexure and focus errors within the GCFEA. The FCC consists of five optical components: a collimator including triple lenses for producing a pupil, neutral density filters allowing a much brighter star to be used as a target or a guide, a tent prism as a focus analyzer for measuring the focus offset at the fiber mirror, a reimaging camera with three pairs of lenses for focusing the beam on a CCD focal plane, and a CCD detector for capturing the image on the fiber mirror. In this article, we present the optical and mechanical FCC designs, which have been modified after the PDR in April 2015.

  18. Texture-based measurement of spatial frequency response using the dead leaves target: extensions, and application to real camera systems

    NASA Astrophysics Data System (ADS)

    McElvain, Jon; Campbell, Scott P.; Miller, Jonathan; Jin, Elaine W.

    2010-01-01

    The dead leaves model was recently introduced as a method for measuring the spatial frequency response (SFR) of camera systems. The target consists of a series of overlapping opaque circles with a uniform gray-level distribution and radii distributed as r^-3. Unlike the traditional knife-edge target, the SFR derived from the dead leaves target penalizes systems that employ aggressive noise reduction. Initial studies have shown that the dead leaves SFR correlates well with sharpness/texture-blur preference, and thus the target can potentially be used as a surrogate for more expensive subjective image quality evaluations. In this paper, the dead leaves target is analyzed for measurement of camera system spatial frequency response. It was determined that the power spectral density (PSD) of the ideal dead leaves target does not exhibit simple power-law dependence, and scale invariance is only loosely obeyed. An extension to the ideal dead leaves PSD model is proposed, including a correction term to account for system noise. With this extended model, the SFR of several camera systems, with formats ranging from 3 to 10 megapixels, was measured; the effects of handshake motion blur are also analyzed via the dead leaves target.
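A minimal sketch of synthesising such a target follows; the occlusion order, image size and radius bounds are illustrative assumptions. The r^-3 radius law is drawn by inverse-transform sampling of the density p(r) ∝ r^-3 on [rmin, rmax]:

```python
import numpy as np

def dead_leaves(size=256, n=2000, rmin=2.0, rmax=60.0, seed=0):
    """Synthesise a dead-leaves chart: opaque discs with uniformly
    distributed grey levels and radii drawn from a density proportional
    to r^-3 (via inverse-transform sampling of its CDF)."""
    rng = np.random.default_rng(seed)
    u = rng.random(n)
    radii = 1.0 / np.sqrt(rmin**-2 - u * (rmin**-2 - rmax**-2))
    img = np.full((size, size), 0.5)
    yy, xx = np.mgrid[:size, :size]
    for r in radii:                      # later discs occlude earlier ones
        cx, cy = rng.random(2) * size
        gray = rng.random()
        img[(xx - cx) ** 2 + (yy - cy) ** 2 <= r * r] = gray
    return img
```

The resulting chart is (approximately) scale invariant and fully occluding, which is what makes it suitable for texture-based SFR measurement.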

  19. Multiple-target tracking implementation in the ebCMOS camera system: the LUSIPHER prototype

    NASA Astrophysics Data System (ADS)

    Doan, Quang Tuyen; Barbier, Remi; Dominjon, Agnes; Cajgfinger, Thomas; Guerin, Cyrille

    2012-06-01

    The domain of low-light imaging systems is progressing very fast thanks to the evolution of detection and electron-multiplication technology, such as the emCCD (electron-multiplying CCD) or the ebCMOS (electron-bombarded CMOS). We present an ebCMOS camera system that is able to track, every 2 ms, more than 2000 targets with a mean number of photons per target lower than two. The point light sources (targets) are spots generated by a microlens array (Shack–Hartmann) used in adaptive optics. The multiple-target-tracking system designed and implemented on a rugged workstation is described. The results and the performance of the system on identification and tracking are presented and discussed.

  20. Radiation effects on active camera electronics in the target chamber at the National Ignition Facility

    NASA Astrophysics Data System (ADS)

    Dayton, M.; Datte, P.; Carpenter, A.; Eckart, M.; Manuel, A.; Khater, H.; Hargrove, D.; Bell, P.

    2017-08-01

    The National Ignition Facility's (NIF) harsh radiation environment can cause electronics to malfunction during high-yield DT shots. Until now there has been little experience fielding electronic-based cameras in the target chamber under these conditions; hence, the performance of electronic components in NIF's radiation environment was unknown. It is possible to purchase radiation tolerant devices, however, they are usually qualified for radiation environments different to NIF, such as space flight or nuclear reactors. This paper presents the results from a series of online experiments that used two different prototype camera systems built from non-radiation hardened components and one commercially available camera that permanently failed at relatively low total integrated dose. The custom design built in Livermore endured a 5 × 10^15 neutron shot without upset, while the other custom design upset at 2 × 10^14 neutrons. These results agreed with offline testing done with a flash x-ray source and a 14 MeV neutron source, which suggested a methodology for developing and qualifying electronic systems for NIF. Further work will likely lead to the use of embedded electronic systems in the target chamber during high-yield shots.

  1. Robotic Camera Assistance and Its Benefit in 1033 Traditional Laparoscopic Procedures: Prospective Clinical Trial Using a Joystick-guided Camera Holder.

    PubMed

    Holländer, Sebastian W; Klingen, Hans Joachim; Fritz, Marliese; Djalali, Peter; Birk, Dieter

    2014-11-01

    Despite advances in instruments and techniques in laparoscopic surgery, one thing remains uncomfortable: the camera assistance. The aim of this study was to investigate the benefit of a joystick-guided camera holder (SoloAssist®, Aktormed, Barbing, Germany) for laparoscopic surgery and to compare the robotic assistance to human assistance. 1033 consecutive laparoscopic procedures were performed assisted by the SoloAssist®. Failures and aborts were documented and nine surgeons were interviewed by questionnaire regarding their experiences. In 71 of 1033 procedures, robotic assistance was aborted and the procedure was continued manually, mostly because of frequent changes of position, narrow spaces, and adverse angular degrees. One case of short circuit was reported. Emergency stop was necessary in three cases due to uncontrolled movement into the abdominal cavity. Eight of nine surgeons prefer robotic to human assistance, mostly because of a steady image and self-control. The SoloAssist® robot is a reliable system for laparoscopic procedures. Emergency shutdown was necessary in only three cases. Some minor weak spots could have been identified. Most surgeons prefer robotic assistance to human assistance. We feel that the SoloAssist® makes standard laparoscopic surgery more comfortable and further development is desirable, but it cannot fully replace a human assistant.

  2. Homography-based multiple-camera person-tracking

    NASA Astrophysics Data System (ADS)

    Turk, Matthew R.

    2009-01-01

    Multiple video cameras are cheaply installed overlooking an area of interest. While computerized single-camera tracking is well-developed, multiple-camera tracking is a relatively new problem. The main multi-camera problem is to give the same tracking label to all projections of a real-world target. This is called the consistent labelling problem. Khan and Shah (2003) introduced a method to use field of view lines to perform multiple-camera tracking. The method creates inter-camera meta-target associations when objects enter at the scene edges. They also said that a plane-induced homography could be used for tracking, but this method was not well described. Their homography-based system would not work if targets use only one side of a camera to enter the scene. This paper overcomes this limitation and fully describes a practical homography-based tracker. A new method to find the feet feature is introduced. The method works especially well if the camera is tilted, when using the bottom centre of the target's bounding-box would produce inaccurate results. The new method is more accurate than the bounding-box method even when the camera is not tilted. Next, a method is presented that uses a series of corresponding point pairs "dropped" by oblivious, live human targets to find a plane-induced homography. The point pairs are created by tracking the feet locations of moving targets that were associated using the field of view line method. Finally, a homography-based multiple-camera tracking algorithm is introduced. Rules governing when to create the homography are specified. The algorithm ensures that homography-based tracking only starts after a non-degenerate homography is found. The method works when not all four field of view lines are discoverable; only one line needs to be found to use the algorithm. To initialize the system, the operator must specify pairs of overlapping cameras. Aside from that, the algorithm is fully automatic and uses the natural movement of
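The plane-induced homography recovered from the "dropped" corresponding feet-point pairs can be sketched with the standard direct linear transform (DLT); this is a generic implementation, not the paper's code, and it assumes noise-free pairs (with noisy data one would normalize coordinates and use more pairs):

```python
import numpy as np

def find_homography(src, dst):
    """Plane-induced homography H (dst ~ H * src) from >= 4
    corresponding point pairs via the direct linear transform:
    stack two rows per pair and take the SVD null vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def map_point(H, p):
    """Apply H to a 2D point in homogeneous coordinates."""
    q = H @ [p[0], p[1], 1.0]
    return q[:2] / q[2]
```

Once H is known, a target's feet location in one camera's ground plane maps directly to the other camera, which is what gives every projection of the same person a consistent label.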

  3. Nonholonomic camera-space manipulation using cameras mounted on a mobile base

    NASA Astrophysics Data System (ADS)

    Goodwine, Bill; Seelinger, Michael J.; Skaar, Steven B.; Ma, Qun

    1998-10-01

    The body of work called 'Camera Space Manipulation' is an effective and proven method of robotic control. Essentially, this technique identifies and refines the input-output relationship of the plant using estimation methods and drives the plant open-loop to its target state. 3D 'success' of the desired motion, i.e., the end effector of the manipulator engages a target at a particular location with a particular orientation, is guaranteed when there is camera space success in two cameras which are adequately separated. Very accurate, sub-pixel positioning of a robotic end effector is possible using this method. To date, however, most efforts in this area have primarily considered holonomic systems. This work addresses the problem of nonholonomic camera space manipulation by considering the problem of a nonholonomic robot with two cameras and a holonomic manipulator on board the nonholonomic platform. While perhaps not as common in robotics, such a combination of holonomic and nonholonomic degrees of freedom is ubiquitous in industry: fork lifts and earth moving equipment are common examples of a nonholonomic system with an on-board holonomic actuator. The nonholonomic nature of the system makes the automation problem more difficult for a variety of reasons; in particular, the target location is not fixed in the image planes, as it is for holonomic systems (since the cameras are attached to a moving platform), and there is a fundamental 'path-dependent' nature to nonholonomic kinematics. This work focuses on the sensor space or camera-space-based control laws necessary for effectively implementing an autonomous system of this type.

  4. Enviropod handbook: A guide to preparation and use of the Environmental Protection Agency's light-weight aerial camera system. [Weber River, Utah

    NASA Technical Reports Server (NTRS)

    Brower, S. J.; Ridd, M. K.

    1984-01-01

    The use of the Environmental Protection Agency (EPA) Enviropod camera system is detailed in this handbook which contains a step-by-step guide for mission planning, flights, film processing, indexing, and documentation. Information regarding Enviropod equipment and specifications is included.

  5. Ground moving target geo-location from monocular camera mounted on a micro air vehicle

    NASA Astrophysics Data System (ADS)

    Guo, Li; Ang, Haisong; Zheng, Xiangming

    2011-08-01

The usual approaches to unmanned air vehicle (UAV)-to-ground target geo-location impose severe constraints on the system, such as stationary objects, an accurate geo-referenced terrain database, or a ground-plane assumption. A micro air vehicle (MAV) operates at low altitude with a limited payload and low-accuracy onboard sensors. Accordingly, a method is developed to determine the location of a ground moving target imaged from the air by a monocular camera mounted on a MAV. This method eliminates the requirements for a terrain database (elevation maps) and for altimeters that provide the MAV's and the target's altitude. Instead, the proposed method only requires the MAV flight status provided by its inherent onboard navigation system, which includes an inertial measurement unit (IMU) and a global positioning system (GPS). The key is to obtain accurate information on the altitude of the ground moving target. First, an optical-flow method extracts static background feature points. Within a local region set around the target in the current image, features that lie on the same plane as the target are extracted and retained as aided features. Then, an inverse-velocity method, integrated with the aircraft state, calculates the location of these points. The altitude of the target, computed from the position information of these aided features together with the aircraft state and image coordinates, geo-locates the target. Meanwhile, a framework with a Bayesian estimator is employed to reduce noise caused by the camera, IMU, and GPS. First, an extended Kalman filter (EKF) provides a simultaneous localization and mapping solution for the estimation of the aircraft states and the aided-feature locations that define the moving target's local environment.
Second, an unscented transformation (UT) method determines the estimated mean and covariance of the target location from the aircraft states and aided-feature locations, and then exports them for the
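The unscented-transformation step, propagating a mean and covariance through a nonlinear mapping via sigma points, can be sketched in pure Python. This is an illustrative 2-D version, not the paper's implementation; the function name, the scaling parameters, and the linear-map sanity check are all assumptions:

```python
import math

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a 2-D Gaussian (mean, cov) through a nonlinear function f
    using sigma points; returns the transformed mean and covariance."""
    n = 2
    lam = alpha ** 2 * (n + kappa) - n
    # Cholesky factor of the 2x2 covariance matrix (lower triangular)
    l11 = math.sqrt(cov[0][0])
    l21 = cov[1][0] / l11
    l22 = math.sqrt(cov[1][1] - l21 ** 2)
    cols = [(l11, l21), (0.0, l22)]
    s = math.sqrt(n + lam)
    pts = [tuple(mean)]
    for cx, cy in cols:
        pts.append((mean[0] + s * cx, mean[1] + s * cy))
        pts.append((mean[0] - s * cx, mean[1] - s * cy))
    wm = [lam / (n + lam)] + [1.0 / (2 * (n + lam))] * (2 * n)
    wc = [lam / (n + lam) + (1.0 - alpha ** 2 + beta)] + [1.0 / (2 * (n + lam))] * (2 * n)
    ys = [f(p) for p in pts]
    m = [sum(w * y[i] for w, y in zip(wm, ys)) for i in range(n)]
    c = [[sum(w * (y[i] - m[i]) * (y[j] - m[j]) for w, y in zip(wc, ys))
          for j in range(n)] for i in range(n)]
    return m, c

# Sanity check with a linear map, for which the UT is exact:
# f(x) = (2*x0, x0 + x1), mean (1, 2), identity covariance.
m, c = unscented_transform((1.0, 2.0), [[1.0, 0.0], [0.0, 1.0]],
                           lambda p: (2.0 * p[0], p[0] + p[1]))
```

For a linear map the sigma-point estimate reproduces the exact transformed mean and covariance; the same machinery then applies unchanged to a nonlinear geo-location function.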

  6. Highly Complementary Target RNAs Promote Release of Guide RNAs from Human Argonaute2

    PubMed Central

    De, Nabanita; Young, Lisa; Lau, Pick-Wei; Meisner, Nicole-Claudia; Morrissey, David V.; MacRae, Ian J.

    2013-01-01

Argonaute proteins use small RNAs to guide the silencing of complementary target RNAs in many eukaryotes. Although small RNA biogenesis pathways are well studied, mechanisms for removal of guide RNAs from Argonaute are poorly understood. Here we show that the Argonaute2 (Ago2) guide RNA complex is extremely stable, with a half-life on the order of days. However, highly complementary target RNAs destabilize the complex and significantly accelerate release of the guide RNA from Ago2. This “unloading” activity can be enhanced by mismatches between the target and the guide 5′ end and attenuated by mismatches to the guide 3′ end. The introduction of 3′ mismatches leads to more potent silencing of abundant mRNAs in mammalian cells. These findings help to explain why the 3′ ends of mammalian microRNAs (miRNAs) rarely match their targets, suggest a mechanism for sequence-specific small RNA turnover, and offer insights for controlling small RNAs in mammalian cells. PMID:23664376

  7. Light-reflection random-target method for measurement of the modulation transfer function of a digital video-camera

    NASA Astrophysics Data System (ADS)

    Pospisil, J.; Jakubik, P.; Machala, L.

    2005-11-01

This article reports the development, realization, and verification of a new means of measuring the noiseless, locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, in particular of its combined imaging, detection, sampling, and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing, and quantization noises. The method uses the camera's automatic still-imaging regime and a static, two-dimensional, spatially continuous light-reflection random target with white-noise properties. A theoretical justification of the random-target method is given, based on a simulation model of the linear optical-intensity response and on expressing the resultant MTF as a normalized, smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed on a PC with computation programs developed in MATLAB 6.5. The presented examples and other measurement results demonstrate sufficient repeatability and the acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
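The core of the random-target method, estimating the MTF as the square root of the ratio of the output to input power spectral densities, can be illustrated in one dimension. This is a sketch only: the 3-tap circular moving average stands in for the camera's combined imaging steps, and all names are invented for illustration:

```python
import cmath
import math
import random

def dft(x):
    """Direct O(n^2) discrete Fourier transform of a real sequence."""
    n = len(x)
    return [sum(x[m] * cmath.exp(-2j * cmath.pi * k * m / n) for m in range(n))
            for k in range(n)]

def mtf_from_random_target(target, image):
    """MTF estimate: square root of the ratio of output to input power
    spectral density, normalized to unity at zero spatial frequency."""
    t, i = dft(target), dft(image)
    raw = [math.sqrt(abs(ik) ** 2 / abs(tk) ** 2) for tk, ik in zip(t, i)]
    return [v / raw[0] for v in raw]

# A 1-D white-noise "target" and a simulated camera blur
# (3-tap circular moving average standing in for the imaging chain).
random.seed(1)
n = 32
target = [random.uniform(0.0, 1.0) for _ in range(n)]
image = [(target[(k - 1) % n] + target[k] + target[(k + 1) % n]) / 3.0
         for k in range(n)]
mtf = mtf_from_random_target(target, image)
```

Because the blur is a circular convolution, the estimated curve matches the filter's analytic transfer magnitude |1 + 2cos(2πk/n)|/3 at every frequency bin.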

  8. Heterogeneous Vision Data Fusion for Independently Moving Cameras

    DTIC Science & Technology

    2010-03-01

target detection, tracking, and identification over a large terrain. The goal of the project is to investigate and evaluate the existing image...fusion algorithms, develop new real-time algorithms for Category-II image fusion, and apply these algorithms in moving target detection and tracking. The...moving target detection and classification. Subject terms: Image Fusion, Target Detection, Moving Cameras, IR Camera, EO Camera

  9. Dermoscopy-guided reflectance confocal microscopy of skin using high-NA objective lens with integrated wide-field color camera

    NASA Astrophysics Data System (ADS)

    Dickensheets, David L.; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind

    2016-02-01

Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.

  10. Dermoscopy-guided reflectance confocal microscopy of skin using high-NA objective lens with integrated wide-field color camera.

    PubMed

    Dickensheets, David L; Kreitinger, Seth; Peterson, Gary; Heger, Michael; Rajadhyaksha, Milind

    2016-02-01

Reflectance Confocal Microscopy, or RCM, is being increasingly used to guide diagnosis of skin lesions. The combination of widefield dermoscopy (WFD) with RCM is highly sensitive (~90%) and specific (~90%) for noninvasively detecting melanocytic and non-melanocytic skin lesions. The combined WFD and RCM approach is being implemented on patients to triage lesions into benign (with no biopsy) versus suspicious (followed by biopsy and pathology). Currently, however, WFD and RCM imaging are performed with separate instruments, while using an adhesive ring attached to the skin to sequentially image the same region and co-register the images. The latest small handheld RCM instruments offer no provision yet for a co-registered wide-field image. This paper describes an innovative solution that integrates an ultra-miniature dermoscopy camera into the RCM objective lens, providing simultaneous wide-field color images of the skin surface and RCM images of the subsurface cellular structure. The objective lens (0.9 NA) includes a hyperhemisphere lens and an ultra-miniature CMOS color camera, commanding a 4 mm wide dermoscopy view of the skin surface. The camera obscures the central portion of the aperture of the objective lens, but the resulting annular aperture provides excellent RCM optical sectioning and resolution. Preliminary testing on healthy volunteers showed the feasibility of combined WFD and RCM imaging to concurrently show the skin surface in wide-field and the underlying microscopic cellular-level detail. The paper describes this unique integrated dermoscopic WFD/RCM lens, and shows representative images. The potential for dermoscopy-guided RCM for skin cancer diagnosis is discussed.

  11. Partial DNA-guided Cas9 enables genome editing with reduced off-target activity

    PubMed Central

    Yin, Hao; Song, Chun-Qing; Suresh, Sneha; Kwan, Suet-Yan; Wu, Qiongqiong; Walsh, Stephen; Ding, Junmei; Bogorad, Roman L; Zhu, Lihua Julie; Wolfe, Scot A; Koteliansky, Victor; Xue, Wen; Langer, Robert; Anderson, Daniel G

    2018-01-01

CRISPR–Cas9 is a versatile RNA-guided genome editing tool. Here we demonstrate that partial replacement of RNA nucleotides with DNA nucleotides in CRISPR RNA (crRNA) enables efficient gene editing in human cells. This strategy of partial DNA replacement retains on-target activity when used with both crRNA and sgRNA, as well as with multiple guide sequences. Partial DNA replacement also works for crRNA of Cpf1, another CRISPR system. We find that partial DNA replacement in the guide sequence significantly reduces off-target genome editing through focused analysis of off-target cleavage, measurement of mismatch tolerance and genome-wide profiling of off-target sites. Using the structure of the Cas9–sgRNA complex as a guide, the majority of the 3′ end of crRNA can be replaced with DNA nucleotides, and the 5′- and 3′-DNA-replaced crRNA enables efficient genome editing. Cas9 guided by a DNA–RNA chimera may provide a generalized strategy to reduce both the cost and the off-target genome editing in human cells. PMID:29377001

  12. Coherent infrared imaging camera (CIRIC)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.

    1995-07-01

New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP) coupled with Monolithic Microwave Integrated Circuit (MMIC) technology are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras, which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.

  13. A method of camera calibration in the measurement process with reference mark for approaching observation space target

    NASA Astrophysics Data System (ADS)

    Zhang, Hua; Zeng, Luan

    2017-11-01

Binocular stereoscopic vision can be used for space-based close-range observation of space targets. In order to solve the problem that a traditional binocular vision system cannot work normally after being disturbed, an online calibration method for a binocular stereo measuring camera with a self-reference is proposed. The method uses an auxiliary optical imaging device to insert the image of a standard reference object into the edge of the main optical path, imaged with the target on the same focal plane; this is equivalent to placing a standard reference inside the binocular imaging optical system. When the position of the system and the imaging device parameters are disturbed, the image of the standard reference changes accordingly in the imaging plane, while the position of the standard reference object itself does not change. The camera's external parameters can then be re-calibrated from the visual relationship to the standard reference object. The experimental results show that the maximum mean square error for the same object can be reduced from the original 72.88 mm to 1.65 mm when the right camera is deflected by 0.4° and the left camera is rotated by 0.2° in elevation. This method realizes online calibration of a binocular stereoscopic vision measurement system and can effectively improve the anti-jamming ability of the system.

  14. GUIDE-Seq enables genome-wide profiling of off-target cleavage by CRISPR-Cas nucleases

    PubMed Central

    Nguyen, Nhu T.; Liebers, Matthew; Topkar, Ved V.; Thapar, Vishal; Wyvekens, Nicolas; Khayter, Cyd; Iafrate, A. John; Le, Long P.; Aryee, Martin J.; Joung, J. Keith

    2014-01-01

CRISPR RNA-guided nucleases (RGNs) are widely used genome-editing reagents, but methods to delineate their genome-wide off-target cleavage activities have been lacking. Here we describe an approach for global detection of DNA double-stranded breaks (DSBs) introduced by RGNs and potentially other nucleases. This method, called Genome-wide Unbiased Identification of DSBs Enabled by Sequencing (GUIDE-Seq), relies on capture of double-stranded oligodeoxynucleotides into breaks. Application of GUIDE-Seq to thirteen RGNs in two human cell lines revealed wide variability in RGN off-target activities and unappreciated characteristics of off-target sequences. The majority of identified sites were not detected by existing computational methods or ChIP-Seq. GUIDE-Seq also identified RGN-independent genomic breakpoint ‘hotspots’. Finally, GUIDE-Seq revealed that truncated guide RNAs exhibit substantially reduced RGN-induced off-target DSBs. Our experiments define the most rigorous framework for genome-wide identification of RGN off-target effects to date and provide a method for evaluating the safety of these nucleases prior to clinical use. PMID:25513782

  15. Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation

    USGS Publications Warehouse

    Bell, J.F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.N.; Arneson, H.M.; Brown, D.; Collins, S.A.; Dingizian, A.; Elliot, S.T.; Hagerott, E.C.; Hayes, A.G.; Johnson, M.J.; Johnson, J. R.; Joseph, J.; Kinch, K.; Lemmon, M.T.; Morris, R.V.; Scherr, L.; Schwochert, M.; Shepard, M.K.; Smith, G.H.; Sohl-Dickstein, J. N.; Sullivan, R.J.; Sullivan, W.T.; Wadsworth, M.

    2003-01-01

The Panoramic Camera (Pancam) investigation is part of the Athena science payload launched to Mars in 2003 on NASA's twin Mars Exploration Rover (MER) missions. The scientific goals of the Pancam investigation are to assess the high-resolution morphology, topography, and geologic context of each MER landing site, to obtain color images to constrain the mineralogic, photometric, and physical properties of surface materials, and to determine dust and aerosol opacity and physical properties from direct imaging of the Sun and sky. Pancam also provides mission support measurements for the rovers, including Sun-finding for rover navigation, hazard identification and digital terrain modeling to help guide long-term rover traverse decisions, high-resolution imaging to help guide the selection of in situ sampling targets, and acquisition of education and public outreach products. The Pancam optical, mechanical, and electronics designs were optimized to achieve these science and mission support goals. Pancam is a multispectral, stereoscopic, panoramic imaging system consisting of two digital cameras mounted on a mast 1.5 m above the Martian surface. The mast allows Pancam to image the full 360° in azimuth and ±90° in elevation. Each Pancam camera utilizes a 1024 × 1024 active imaging area frame transfer CCD detector array. The Pancam optics have an effective focal length of 43 mm and a focal ratio of f/20, yielding an instantaneous field of view of 0.27 mrad/pixel and a field of view of 16° × 16°. Each rover's two Pancam "eyes" are separated by 30 cm and have a 1° toe-in to provide adequate stereo parallax. Each eye also includes a small eight-position filter wheel to allow surface mineralogic studies, multispectral sky imaging, and direct Sun imaging in the 400-1100 nm wavelength region. Pancam was designed and calibrated to operate within specifications on Mars at temperatures from -55°C to +5°C.
An onboard calibration target and fiducial marks provide the capability

  16. High-frequency ultrasound-guided disruption of glycoprotein VI-targeted microbubbles targets atheroprogression in mice.

    PubMed

    Metzger, Katja; Vogel, Sebastian; Chatterjee, Madhumita; Borst, Oliver; Seizer, Peter; Schönberger, Tanja; Geisler, Tobias; Lang, Florian; Langer, Harald; Rheinlaender, Johannes; Schäffer, Tilman E; Gawaz, Meinrad

    2015-01-01

Targeted contrast-enhanced ultrasound (CEU) using microbubble agents is a promising non-invasive imaging technique to evaluate atherosclerotic lesions. In this study, we decipher the diagnostic and therapeutic potential of targeted-CEU with soluble glycoprotein (GP)-VI in vivo. Microbubbles were conjugated with the recombinant fusion protein GPVI-Fc (MBGPVI) that binds with high affinity to atherosclerotic lesions. MBGPVI or control microbubbles (MBC) were intravenously administered into ApoE(-/-) or wild type mice and binding of the microbubbles to the vessel wall was visualized by high-resolution CEU. CEU molecular imaging signals of MBGPVI were substantially enhanced in the aortic arch and in the truncus brachiocephalicus in ApoE(-/-) as compared to wild type mice. High-frequency ultrasound (HFU)-guided disruption of MBGPVI enhanced accumulation of GPVI in the atherosclerotic lesions, which may interfere with atheroprogression. Thus, we establish targeted-CEU with soluble GPVI as a novel non-invasive molecular imaging method for atherosclerosis. Further, HFU-guided disruption of GPVI-targeted microbubbles is an innovative therapeutic approach that potentially prevents progression of atherosclerotic disease. Copyright © 2014 Elsevier Ltd. All rights reserved.

  17. Characterization of SWIR cameras by MRC measurements

    NASA Astrophysics Data System (ADS)

    Gerken, M.; Schlemmer, H.; Haan, Hubertus A.; Siemens, Christofer; Münzberg, M.

    2014-05-01

Cameras for the SWIR wavelength range are becoming more and more important because of the better observation range for daylight operation under adverse weather conditions (haze, fog, rain). In order to choose the most suitable SWIR camera, or to qualify a camera for a given application, characterization of the camera by means of the minimum resolvable contrast (MRC) concept is favorable, as the MRC comprises all relevant properties of the instrument. With the MRC known for a given camera device, the achievable observation range can be calculated for every combination of target size, illumination level, or weather conditions. MRC measurements in the SWIR wavelength band can be performed largely along the guidelines of MRC measurements for a visual camera. Typically, measurements are performed with a set of resolution targets (e.g., the USAF 1951 target) manufactured with different contrast values from 50% down to less than 1%. For a given illumination level, the achievable spatial resolution is then measured for each target. The resulting curve shows the minimum contrast that is necessary to resolve the structure of a target as a function of spatial frequency. To perform MRC measurements for SWIR cameras, first the irradiation parameters have to be given in radiometric instead of photometric units, which are limited in their use to the visible range; to do so, SWIR illumination levels for typical daylight and twilight conditions have to be defined. Second, a radiation source is necessary with appropriate emission in the SWIR range (e.g., an incandescent lamp), and the irradiance has to be measured in W/m2 instead of lux (lumen/m2). Third, the contrast values of the targets have to be calibrated anew for the SWIR range because they typically differ from the values determined for the visual range.
Measured MRC values of three cameras are compared to the specified performance data of the devices and the results of a multi-band in-house designed Vis-SWIR camera

  18. Night Vision Camera

    NASA Technical Reports Server (NTRS)

    1996-01-01

    PixelVision, Inc. developed the Night Video NV652 Back-illuminated CCD Camera, based on the expertise of a former Jet Propulsion Laboratory employee and a former employee of Scientific Imaging Technologies, Inc. The camera operates without an image intensifier, using back-illuminated and thinned CCD technology to achieve extremely low light level imaging performance. The advantages of PixelVision's system over conventional cameras include greater resolution and better target identification under low light conditions, lower cost and a longer lifetime. It is used commercially for research and aviation.

  19. Automated Camera Array Fine Calibration

    NASA Technical Reports Server (NTRS)

    Clouse, Daniel; Padgett, Curtis; Ansar, Adnan; Cheng, Yang

    2008-01-01

    Using aerial imagery, the JPL FineCalibration (JPL FineCal) software automatically tunes a set of existing CAHVOR camera models for an array of cameras. The software finds matching features in the overlap region between images from adjacent cameras, and uses these features to refine the camera models. It is not necessary to take special imagery of a known target and no surveying is required. JPL FineCal was developed for use with an aerial, persistent surveillance platform.

  20. Using queuing models to aid design and guide research effort for multimodality buried target detection systems

    NASA Astrophysics Data System (ADS)

    Malof, Jordan M.; Collins, Leslie M.

    2016-05-01

Many remote sensing modalities have been developed for buried target detection (BTD), each offering relative advantages over the others. There has been interest in combining several modalities into a single BTD system that benefits from the advantages of each constituent sensor. Recently an approach was developed, called multi-state management (MSM), that aims to achieve this goal by separating BTD system operation into discrete states, each with different sensor activity and system velocity. Additionally, a modeling approach, called Q-MSM, was developed to quickly analyze multi-modality BTD systems operating with MSM. This work extends previous work by demonstrating how Q-MSM modeling can be used to design BTD systems operating with MSM, and to guide research to yield the most performance benefits. In this work an MSM system is considered that combines a forward-looking infrared (FLIR) camera and a ground penetrating radar (GPR). Experiments are conducted using a dataset of real, field-collected data, which demonstrates how the Q-MSM model can be used to evaluate the performance benefits of altering, or improving via research investment, various characteristics of the GPR and FLIR systems. Q-MSM permits fast analysis that can determine where system improvements will have the greatest impact, and can therefore help guide BTD research.
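The Q-MSM model itself is not reproduced here; as a hedged illustration of the kind of steady-state quantity a queuing analysis supplies, the textbook M/M/1 formulas can be sketched as follows (the rates and the alarm-clearing interpretation are assumptions, not taken from the paper):

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state M/M/1 queue metrics: server utilization, mean number
    in system, and mean time in system (consistent with Little's law)."""
    if arrival_rate >= service_rate:
        raise ValueError("unstable queue: arrivals must be slower than service")
    rho = arrival_rate / service_rate                 # utilization
    mean_in_system = rho / (1.0 - rho)                # L
    mean_time = 1.0 / (service_rate - arrival_rate)   # W = L / arrival_rate
    return rho, mean_in_system, mean_time

# Illustrative numbers: alarms handed to a confirmation state arrive at
# 2 per minute, and the confirmation sensor clears 4 per minute.
rho, L, W = mm1_metrics(2.0, 4.0)
```

Comparing such metrics across candidate sensor configurations is the sort of trade-off analysis a state-based queuing model makes cheap, since no field experiment is needed per configuration.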

  1. Fuzzy System-Based Target Selection for a NIR Camera-Based Gaze Tracker

    PubMed Central

    Naqvi, Rizwan Ali; Arsalan, Muhammad; Park, Kang Ryoung

    2017-01-01

Gaze-based interaction (GBI) techniques have been a popular subject of research in the last few decades. Among other applications, GBI can be used by persons with disabilities to perform everyday tasks, as a game interface, and can play a pivotal role in the human-computer interface (HCI) field. While gaze tracking systems have shown high accuracy in GBI, detecting a user’s gaze for target selection is a challenging problem that needs to be considered while using a gaze detection system. Past research has used the blinking of the eyes for this purpose as well as dwell time-based methods, but these techniques are either inconvenient for the user or require a long time for target selection. Therefore, in this paper, we propose a method for fuzzy system-based target selection for near-infrared (NIR) camera-based gaze trackers. The results of experiments performed in addition to tests of the usability and on-screen keyboard use of the proposed method show that it is better than previous methods. PMID:28420114
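A fuzzy selection rule of the general kind the paper proposes can be sketched as follows; the membership shapes, thresholds, and input features below are invented for illustration and are not the authors' actual system:

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peaking at 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def selection_score(dwell_ms, jitter_px):
    """Toy rule: select when dwell is long AND gaze jitter is small,
    using min() as the fuzzy AND. All thresholds are illustrative."""
    long_dwell = tri(dwell_ms, 200.0, 600.0, 1000.0)
    low_jitter = tri(jitter_px, -1.0, 0.0, 30.0)
    return min(long_dwell, low_jitter)
```

A crisp selection would then threshold this score, replacing a fixed dwell-time cutoff with a graded decision over several gaze features.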

  2. Precision targeting in guided munition using IR sensor and MmW radar

    NASA Astrophysics Data System (ADS)

    Sreeja, S.; Hablani, H. B.; Arya, H.

    2015-10-01

Conventional munitions are not guided by sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a precision guided munition (PGM) equipped with an infrared sensor and a millimeter wave radar (IR and MmW, for short). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight rate to intercept it, an extended Kalman filter is composed whose state vector consists of the cascaded state vectors of missile dynamics and target dynamics. The line-of-sight angle measurement from the infrared seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz: every 100 ms, the centroids of four consecutive images are averaged, yielding a time-averaged centroid and implying some measurement delay. The miss distance achieved when image processing delays are included is 1.45 m.

  3. Precision targeting in guided munition using infrared sensor and millimeter wave radar

    NASA Astrophysics Data System (ADS)

    Sulochana, Sreeja; Hablani, Hari B.; Arya, Hemendra

    2016-07-01

Conventional munitions are not guided with sensors and therefore miss the target, particularly if the target is mobile. The miss distance of these munitions can be decreased by incorporating sensors to detect the target and guide the munition during flight. This paper is concerned with a precision guided munition equipped with an infrared (IR) sensor and a millimeter wave radar (MmW). Three-dimensional flight of the munition and its pitch and yaw motion models are developed and simulated. The forward and lateral motion of a target tank on the ground is modeled as two independent second-order Gauss-Markov processes. To estimate the target location on the ground and the line-of-sight (LOS) rate to intercept it, an extended Kalman filter is composed whose state vector consists of cascaded state vectors of missile dynamics and target dynamics. The LOS angle measurement from the IR seeker is obtained by centroiding the target image at 40 Hz. The centroid estimation of the images in the focal plane is at a frequency of 10 Hz: every 100 ms, centroids of four consecutive images are averaged, yielding a time-averaged centroid, implying some measurement delay. The miss distance achieved by including image processing delays is 1.45 m.
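The centroiding-and-averaging measurement described above can be sketched as follows; the image size, the drifting-spot example, and all function names are illustrative assumptions, not the authors' code:

```python
def centroid(img):
    """Intensity-weighted centroid (row, col) of a 2-D image."""
    total = float(sum(v for row in img for v in row))
    r = sum(i * v for i, row in enumerate(img) for v in row) / total
    c = sum(j * v for row in img for j, v in enumerate(row)) / total
    return r, c

def averaged_centroid(frames):
    """Time-averaged centroid over consecutive frames, mimicking the
    averaging of four 40 Hz centroids into one 10 Hz measurement."""
    cents = [centroid(f) for f in frames]
    return (sum(r for r, _ in cents) / len(cents),
            sum(c for _, c in cents) / len(cents))

# Four frames of a small bright spot drifting along the diagonal.
def frame(k):
    img = [[0.0] * 4 for _ in range(4)]
    img[k][k] = 1.0
    return img

measurement = averaged_centroid([frame(k) for k in range(4)])
```

For a moving spot the averaged centroid lags the latest frame, which is exactly the measurement delay the abstracts attribute to the averaging step.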

  4. SU-E-J-44: A Novel Approach to Quantify Patient Setup and Target Motion for Real-Time Image-Guided Radiotherapy (IGRT)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Li, S; Charpentier, P; Sayler, E

    2015-06-15

Purpose: Isocenter shifts and rotations to correct patient setup errors and organ motion cannot remedy some shape changes of large targets. We are investigating new methods for quantifying target deformation for real-time IGRT of breast and chest wall cancer. Methods: Ninety-five patients with breast or chest wall cancer were accrued in an IRB-approved clinical trial of IGRT using 3D surface images acquired at daily setup and beam-on time via an in-room camera. Shifts and rotations relative to the planned reference surface were determined using iterative-closest-point alignment. Local surface displacements and target deformation are measured via a ray-surface intersection and principal component analysis (PCA) of the external surface, respectively. Isocenter shift, upper-abdominal displacement, and vectors of the surface projected onto the two principal components, PC1 and PC2, were evaluated for sensitivity and accuracy in detecting target deformation. Setup errors for some deformed targets were estimated by separately registering the target volume, inner surface, or external surface in weekly CBCT, or their outlines on weekly EPI. Results: The setup difference according to the inner surface, external surface, or target volume could be as large as 1.5 cm. Video surface-guided setup agreed with EPI results to within 0.5 cm, while CBCT results sometimes (~20%) differed from EPI results (>0.5 cm) due to target deformation for some large breasts and some chest walls undergoing deep-breath-hold irradiation. The square root of PC1 and PC2 is very sensitive to external surface deformation and irregular breathing. Conclusion: PCA of external surfaces is a quick and simple way to detect target deformation in IGRT of breast and chest wall cancer. Setup corrections based on the target volume, inner surface, and external surface could be significantly different. Thus, checking for target shape changes is essential for accurate image-guided patient setup and motion tracking of large
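The PCA step can be sketched with a 2-D toy example (real surface data are 3-D point sets; this simplification and all names are assumptions, not the authors' code):

```python
import math

def pca_2d(points):
    """Principal component analysis of 2-D points via the 2x2 covariance
    matrix: returns eigenvalues (descending) and the first principal axis."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    sxx = sum((p[0] - mx) ** 2 for p in points) / n
    syy = sum((p[1] - my) ** 2 for p in points) / n
    sxy = sum((p[0] - mx) * (p[1] - my) for p in points) / n
    # Closed-form eigenvalues of the symmetric matrix [[sxx, sxy], [sxy, syy]]
    tr = sxx + syy
    det = sxx * syy - sxy * sxy
    d = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    l1, l2 = tr / 2.0 + d, tr / 2.0 - d
    if abs(sxy) > 1e-12:
        axis = (sxy, l1 - sxx)
    elif sxx >= syy:
        axis = (1.0, 0.0)
    else:
        axis = (0.0, 1.0)
    norm = math.hypot(axis[0], axis[1])
    return (l1, l2), (axis[0] / norm, axis[1] / norm)

# Surface sample points that deform along a single direction: the first
# principal component then captures essentially all of the variation.
evals, axis = pca_2d([(0.0, 0.0), (1.0, 1.0), (2.0, 2.0), (3.0, 3.0)])
```

A deformation check would monitor how the leading eigenvalues of the daily surface change relative to the planning surface; a sudden growth in variance along a component flags a shape change rather than a rigid shift.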

  5. Convergent transmission of RNAi guide-target mismatch information across Argonaute internal allosteric network.

    PubMed

    Joseph, Thomas T; Osman, Roman

    2012-01-01

    In RNA interference, a guide strand derived from a short dsRNA such as a microRNA (miRNA) is loaded into Argonaute, the central protein in the RNA Induced Silencing Complex (RISC) that silences messenger RNAs on a sequence-specific basis. The positions of any mismatched base pairs in an miRNA determine which Argonaute subtype is used. Subsequently, the Argonaute-guide complex binds and silences complementary target mRNAs; certain Argonautes cleave the target. Mismatches between guide strand and the target mRNA decrease cleavage efficiency. Thus, loading and silencing both require that signals about the presence of a mismatched base pair are communicated from the mismatch site to effector sites. These effector sites include the active site, to prevent target cleavage; the binding groove, to modify nucleic acid binding affinity; and surface allosteric sites, to control recruitment of additional proteins to form the RISC. To examine how such signals may be propagated, we analyzed the network of internal allosteric pathways in Argonaute exhibited through correlations of residue-residue interactions. The emerging network can be described as a set of pathways emanating from the core of the protein near the active site, distributed into the bulk of the protein, and converging upon a distributed cluster of surface residues. Nucleotides in the guide strand "seed region" have a stronger relationship with the protein than other nucleotides, concordant with their importance in sequence selectivity. Finally, any of several seed region guide-target mismatches cause certain Argonaute residues to have modified correlations with the rest of the protein. This arises from the aggregation of relatively small interaction correlation changes distributed across a large subset of residues. 
These residues are in effector sites: the active site, binding groove, and surface, implying that direct functional consequences of guide-target mismatches are mediated through the cumulative effects of

  6. Mechanism of duplex DNA destabilization by RNA-guided Cas9 nuclease during target interrogation

    PubMed Central

    Mekler, Vladimir; Minakhin, Leonid; Severinov, Konstantin

    2017-01-01

    The prokaryotic clustered regularly interspaced short palindromic repeats (CRISPR)-associated 9 (Cas9) endonuclease cleaves double-stranded DNA sequences specified by guide RNA molecules and flanked by a protospacer adjacent motif (PAM) and is widely used for genome editing in various organisms. The RNA-programmed Cas9 locates the target site by scanning genomic DNA. We sought to elucidate the mechanism of initial DNA interrogation steps that precede the pairing of target DNA with guide RNA. Using fluorometric and biochemical assays, we studied Cas9/guide RNA complexes with model DNA substrates that mimicked early intermediates on the pathway to the final Cas9/guide RNA–DNA complex. The results show that Cas9/guide RNA binding to PAM favors separation of a few PAM-proximal protospacer base pairs allowing initial target interrogation by guide RNA. The duplex destabilization is mediated, in part, by Cas9/guide RNA affinity for unpaired segments of nontarget strand DNA close to PAM. Furthermore, our data indicate that the entry of double-stranded DNA beyond a short threshold distance from PAM into the Cas9/single-guide RNA (sgRNA) interior is hindered. We suggest that the interactions unfavorable for duplex DNA binding promote DNA bending in the PAM-proximal region during early steps of Cas9/guide RNA–DNA complex formation, thus additionally destabilizing the protospacer duplex. The mechanism that emerges from our analysis explains how the Cas9/sgRNA complex is able to locate the correct target sequence efficiently while interrogating numerous nontarget sequences associated with correct PAMs. PMID:28484024

  7. Mechanism of duplex DNA destabilization by RNA-guided Cas9 nuclease during target interrogation.

    PubMed

    Mekler, Vladimir; Minakhin, Leonid; Severinov, Konstantin

    2017-05-23

    The prokaryotic clustered regularly interspaced short palindromic repeats (CRISPR)-associated 9 (Cas9) endonuclease cleaves double-stranded DNA sequences specified by guide RNA molecules and flanked by a protospacer adjacent motif (PAM) and is widely used for genome editing in various organisms. The RNA-programmed Cas9 locates the target site by scanning genomic DNA. We sought to elucidate the mechanism of initial DNA interrogation steps that precede the pairing of target DNA with guide RNA. Using fluorometric and biochemical assays, we studied Cas9/guide RNA complexes with model DNA substrates that mimicked early intermediates on the pathway to the final Cas9/guide RNA-DNA complex. The results show that Cas9/guide RNA binding to PAM favors separation of a few PAM-proximal protospacer base pairs allowing initial target interrogation by guide RNA. The duplex destabilization is mediated, in part, by Cas9/guide RNA affinity for unpaired segments of nontarget strand DNA close to PAM. Furthermore, our data indicate that the entry of double-stranded DNA beyond a short threshold distance from PAM into the Cas9/single-guide RNA (sgRNA) interior is hindered. We suggest that the interactions unfavorable for duplex DNA binding promote DNA bending in the PAM-proximal region during early steps of Cas9/guide RNA-DNA complex formation, thus additionally destabilizing the protospacer duplex. The mechanism that emerges from our analysis explains how the Cas9/sgRNA complex is able to locate the correct target sequence efficiently while interrogating numerous nontarget sequences associated with correct PAMs.

  8. The NASA 2003 Mars Exploration Rover Panoramic Camera (Pancam) Investigation

    NASA Astrophysics Data System (ADS)

    Bell, J. F.; Squyres, S. W.; Herkenhoff, K. E.; Maki, J.; Schwochert, M.; Morris, R. V.; Athena Team

    2002-12-01

    the ability to validate the radiometric and geometric calibration on Mars. Pancam relies heavily on use of the JPL ICER wavelet compression algorithm to maximize data return within stringent mission downlink limits. The scientific goals of the Pancam investigation are to: (a) obtain monoscopic and stereoscopic image mosaics to assess the morphology, topography, and geologic context of each MER landing site; (b) obtain multispectral visible to short-wave near-IR images of selected regions to determine surface color and mineralogic properties; (c) obtain multispectral images over a range of viewing geometries to constrain surface photometric and physical properties; and (d) obtain images of the Martian sky, including direct images of the Sun, to determine dust and aerosol opacity and physical properties. In addition, Pancam also serves a variety of operational functions on the MER mission, including (e) serving as the primary Sun-finding camera for rover navigation; (f) resolving objects on the scale of the rover wheels to distances of ~100 m to help guide navigation decisions; (g) providing stereo coverage adequate for the generation of digital terrain models to help guide and refine rover traverse decisions; (h) providing high resolution images and other context information to guide the selection of the most interesting in situ sampling targets; and (i) supporting acquisition and release of exciting E/PO products.

  9. Utilization and viability of biologically-inspired algorithms in a dynamic multiagent camera surveillance system

    NASA Astrophysics Data System (ADS)

    Mundhenk, Terrell N.; Dhavale, Nitin; Marmol, Salvador; Calleja, Elizabeth; Navalpakkam, Vidhya; Bellman, Kirstie; Landauer, Chris; Arbib, Michael A.; Itti, Laurent

    2003-10-01

In view of the growing complexity of computational tasks and their design, we propose that certain interactive systems may be better designed by utilizing computational strategies based on the study of the human brain. Compared with current engineering paradigms, brain theory offers the promise of improved self-organization and adaptation to the current environment, freeing the programmer from having to address those issues in a procedural manner when designing and implementing large-scale complex systems. To advance this hypothesis, we discuss a multi-agent surveillance system in which 12 agent CPUs, each with its own camera, compete and cooperate to monitor a large room. To cope with the overload of image data streaming from 12 cameras, we take inspiration from the primate's visual system, which allows the animal to perform a real-time selection of the few most conspicuous locations in visual input. This is accomplished by having each camera agent utilize the bottom-up, saliency-based visual attention algorithm of Itti and Koch (Vision Research 2000;40(10-12):1489-1506) to scan the scene for objects of interest. Real-time operation is achieved using a distributed version that runs on a 16-CPU Beowulf cluster composed of the agent computers. The algorithm guides cameras to track and monitor salient objects based on maps of color, orientation, intensity, and motion. To spread camera viewpoints or create cooperation in monitoring highly salient targets, camera agents bias each other by increasing or decreasing the weight of different feature vectors in other cameras, using mechanisms similar to excitation and suppression that have been documented in electrophysiology, psychophysics and imaging studies of low-level visual processing. In addition, if cameras need to compete for computing resources, allocation of computational time is weighted based upon the history of each camera. A camera agent that has a history of seeing more salient targets is more likely to obtain
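
    The cross-camera biasing described in this record amounts to re-weighting each camera's feature maps before they are summed into a saliency map. Below is a minimal illustrative sketch of that combination step; the function and map names are hypothetical and do not come from the Itti-Koch implementation:

    ```python
    def combined_saliency(feature_maps, weights):
        """Weighted element-wise sum of equally sized feature maps.

        feature_maps: dict name -> 2-D list of floats (e.g. color, motion)
        weights: dict name -> float bias, raised or lowered by peer cameras
        """
        first = next(iter(feature_maps.values()))
        rows, cols = len(first), len(first[0])
        saliency = [[0.0] * cols for _ in range(rows)]
        for name, fmap in feature_maps.items():
            w = weights.get(name, 1.0)  # unbiased features keep weight 1.0
            for r in range(rows):
                for c in range(cols):
                    saliency[r][c] += w * fmap[r][c]
        return saliency
    ```

    A peer camera that wants this agent to de-emphasize motion would simply lower `weights['motion']`, mimicking the suppression mechanism the abstract mentions.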

  10. CAOS-CMOS camera.

    PubMed

    Riza, Nabeel A; La Torre, Juan Pablo; Amin, M Junaid

    2016-06-13

Proposed and experimentally demonstrated is the CAOS-CMOS camera design that combines the coded access optical sensor (CAOS) imager platform with the CMOS multi-pixel optical sensor. The unique CAOS-CMOS camera engages the classic CMOS sensor light staring mode with the time-frequency-space agile pixel CAOS imager mode within one programmable optical unit to realize a high dynamic range imager for extreme light contrast conditions. The experimentally demonstrated CAOS-CMOS camera is built using a digital micromirror device, a silicon point photodetector with a variable gain amplifier, and a silicon CMOS sensor with a maximum rated 51.3 dB dynamic range. White-light imaging of three simultaneously viewed targets of different brightness levels, which is not possible with the CMOS sensor alone, is achieved by the CAOS-CMOS camera, demonstrating an 82.06 dB dynamic range. Applications for the camera include industrial machine vision, welding, laser analysis, automotive, night vision, surveillance and multispectral military systems.
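
    The dynamic-range figures quoted above follow the usual 20·log₁₀ convention for optical intensity ratios. A small helper (hypothetical, for illustration only) converts between a max/min signal ratio and dB:

    ```python
    import math

    def dynamic_range_db(i_max, i_min):
        """Dynamic range in dB for an intensity ratio (20*log10 convention)."""
        return 20.0 * math.log10(i_max / i_min)

    def ratio_from_db(db):
        """Inverse: the max/min signal ratio implied by a dB figure."""
        return 10.0 ** (db / 20.0)
    ```

    By this convention the 51.3 dB CMOS sensor spans a signal ratio of roughly 367:1, while the 82.06 dB CAOS-CMOS result spans roughly 12,700:1.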

  11. Architecture of PAU survey camera readout electronics

    NASA Astrophysics Data System (ADS)

    Castilla, Javier; Cardiel-Sas, Laia; De Vicente, Juan; Illa, Joseph; Jimenez, Jorge; Maiorino, Marino; Martinez, Gustavo

    2012-07-01

PAUCam is a new camera for studying the physics of the accelerating universe. The camera will consist of eighteen 2Kx4K HPK CCDs: sixteen for science and two for guiding. The camera will be installed at the prime focus of the WHT (William Herschel Telescope). In this contribution, the architecture of the readout electronics system is presented. Back-End and Front-End electronics are described. The Back-End consists of clock, bias and video processing boards, mounted on Monsoon crates. The Front-End is based on patch panel boards. These boards are plugged outside the camera feed-through panel for signal distribution. Inside the camera, individual preamplifier boards plus Kapton cables complete the path to each CCD. The overall signal distribution and grounding scheme is shown in this paper.

  12. Effects of camera location on the reconstruction of 3D flare trajectory with two cameras

    NASA Astrophysics Data System (ADS)

    Özsaraç, Seçkin; Yeşilkaya, Muhammed

    2015-05-01

Flares are used as valuable electronic warfare assets for the battle against infrared guided missiles. The trajectory of the flare is one of the most important factors that determine the effectiveness of the countermeasure. Reconstruction of the three dimensional (3D) position of a point, which is seen by multiple cameras, is a common problem. Camera placement, camera calibration, corresponding pixel determination in between the images of different cameras and also the triangulation algorithm affect the performance of 3D position estimation. In this paper, we specifically investigate the effects of camera placement on the flare trajectory estimation performance by simulations. First, the 3D trajectories of a flare and of the aircraft that dispenses it are generated with simple motion models. Then, we place two virtual ideal pinhole camera models on different locations. Assuming the cameras are tracking the aircraft perfectly, the view vectors of the cameras are computed. Afterwards, using the view vector of each camera and also the 3D position of the flare, image plane coordinates of the flare on both cameras are computed using the field of view (FOV) values. To increase the fidelity of the simulation, we have used two sources of error. One is used to model the uncertainties in the determination of the camera view vectors, i.e., the orientations of the cameras are measured with noise. The second noise source is used to model the imperfections of the corresponding pixel determination of the flare in between the two cameras. Finally, 3D position of the flare is estimated using the corresponding pixel indices, view vector and also the FOV of the cameras by triangulation. All the processes mentioned so far are repeated for different relative camera placements so that the optimum estimation error performance is found for the given aircraft and flare trajectories.
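
    The triangulation step described above can be sketched as a closest-point computation between the two camera rays: with noisy view vectors the rays no longer intersect, so the midpoint of the shortest segment between them serves as the 3D estimate. A minimal pure-Python sketch (names are illustrative, not from the paper):

    ```python
    def triangulate_midpoint(p1, d1, p2, d2):
        """Estimate the 3D point seen by two cameras.

        p1, p2: camera positions; d1, d2: view (ray) direction vectors.
        Returns the midpoint of the shortest segment between the two rays.
        """
        dot = lambda a, b: sum(x * y for x, y in zip(a, b))
        sub = lambda a, b: [x - y for x, y in zip(a, b)]
        w0 = sub(p1, p2)
        a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
        d, e = dot(d1, w0), dot(d2, w0)
        denom = a * c - b * b          # approaches 0 when the rays are parallel
        t = (b * e - c * d) / denom    # parameter of closest point on ray 1
        s = (a * e - b * d) / denom    # parameter of closest point on ray 2
        q1 = [p + t * v for p, v in zip(p1, d1)]
        q2 = [p + s * v for p, v in zip(p2, d2)]
        return [(x + y) / 2.0 for x, y in zip(q1, q2)]
    ```

    The residual distance between `q1` and `q2` is itself a useful indicator of the estimation error that the paper studies as a function of camera placement.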

  13. Structural Basis for Guide RNA Processing and Seed-Dependent DNA Targeting by CRISPR-Cas12a.

    PubMed

    Swarts, Daan C; van der Oost, John; Jinek, Martin

    2017-04-20

    The CRISPR-associated protein Cas12a (Cpf1), which has been repurposed for genome editing, possesses two distinct nuclease activities: endoribonuclease activity for processing its own guide RNAs and RNA-guided DNase activity for target DNA cleavage. To elucidate the molecular basis of both activities, we determined crystal structures of Francisella novicida Cas12a bound to guide RNA and in complex with an R-loop formed by a non-cleavable guide RNA precursor and a full-length target DNA. Corroborated by biochemical experiments, these structures reveal the mechanisms of guide RNA processing and pre-ordering of the seed sequence in the guide RNA that primes Cas12a for target DNA binding. Furthermore, the R-loop complex structure reveals the strand displacement mechanism that facilitates guide-target hybridization and suggests a mechanism for double-stranded DNA cleavage involving a single active site. Together, these insights advance our mechanistic understanding of Cas12a enzymes and may contribute to further development of genome editing technologies. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Target motion tracking in MRI-guided transrectal robotic prostate biopsy.

    PubMed

    Tadayyon, Hadi; Lasso, Andras; Kaushal, Aradhana; Guion, Peter; Fichtinger, Gabor

    2011-11-01

MRI-guided prostate needle biopsy requires compensation for organ motion between target planning and needle placement. Two questions are studied and answered in this paper: 1) is rigid registration sufficient to track the targets with an error smaller than the clinically significant size of prostate cancer, and 2) what is the effect of the number of intraoperative slices on registration accuracy and speed? We propose multislice-to-volume registration algorithms for tracking the biopsy targets within the prostate. Three orthogonal plus additional transverse intraoperative slices are acquired in the approximate center of the prostate and registered with a high-resolution target planning volume. Both rigid and deformable scenarios were implemented. Both simulated and clinical MRI-guided robotic prostate biopsy data were used to assess tracking accuracy. Average registration errors in clinical patient data were 2.6 mm for the rigid algorithm and 2.1 mm for the deformable algorithm. Rigid tracking appears to be promising. Three tracking slices yield significantly higher registration speed with an acceptable error.

  15. ACT-Vision: active collaborative tracking for multiple PTZ cameras

    NASA Astrophysics Data System (ADS)

    Broaddus, Christopher; Germano, Thomas; Vandervalk, Nicholas; Divakaran, Ajay; Wu, Shunguang; Sawhney, Harpreet

    2009-04-01

    We describe a novel scalable approach for the management of a large number of Pan-Tilt-Zoom (PTZ) cameras deployed outdoors for persistent tracking of humans and vehicles, without resorting to the large fields of view of associated static cameras. Our system, Active Collaborative Tracking - Vision (ACT-Vision), is essentially a real-time operating system that can control hundreds of PTZ cameras to ensure uninterrupted tracking of target objects while maintaining image quality and coverage of all targets using a minimal number of sensors. The system ensures the visibility of targets between PTZ cameras by using criteria such as distance from sensor and occlusion.
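
    The hand-off criteria mentioned above (distance from sensor, occlusion) can be illustrated with a toy greedy assignment. The cost function and names here are hypothetical stand-ins, not what ACT-Vision actually optimizes:

    ```python
    def assign_cameras(targets, cameras, occluded):
        """Greedily assign each target to the nearest non-occluded camera.

        targets, cameras: dicts name -> (x, y) position
        occluded: set of (camera, target) pairs with a blocked line of sight
        Returns dict target -> chosen camera (None if every view is blocked).
        """
        dist2 = lambda a, b: (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2
        assignment = {}
        for t, tpos in targets.items():
            best, best_d = None, float("inf")
            for c, cpos in cameras.items():
                if (c, t) in occluded:
                    continue  # occlusion criterion: skip blocked views
                d = dist2(cpos, tpos)
                if d < best_d:  # distance criterion: prefer the closer sensor
                    best, best_d = d, c
            assignment[t] = best
        return assignment
    ```

    A real system would also weigh image quality, zoom limits, and load balancing across the sensor pool, as the abstract implies.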

  16. Convergent Transmission of RNAi Guide-Target Mismatch Information across Argonaute Internal Allosteric Network

    PubMed Central

    Joseph, Thomas T.; Osman, Roman

    2012-01-01

    In RNA interference, a guide strand derived from a short dsRNA such as a microRNA (miRNA) is loaded into Argonaute, the central protein in the RNA Induced Silencing Complex (RISC) that silences messenger RNAs on a sequence-specific basis. The positions of any mismatched base pairs in an miRNA determine which Argonaute subtype is used. Subsequently, the Argonaute-guide complex binds and silences complementary target mRNAs; certain Argonautes cleave the target. Mismatches between guide strand and the target mRNA decrease cleavage efficiency. Thus, loading and silencing both require that signals about the presence of a mismatched base pair are communicated from the mismatch site to effector sites. These effector sites include the active site, to prevent target cleavage; the binding groove, to modify nucleic acid binding affinity; and surface allosteric sites, to control recruitment of additional proteins to form the RISC. To examine how such signals may be propagated, we analyzed the network of internal allosteric pathways in Argonaute exhibited through correlations of residue-residue interactions. The emerging network can be described as a set of pathways emanating from the core of the protein near the active site, distributed into the bulk of the protein, and converging upon a distributed cluster of surface residues. Nucleotides in the guide strand “seed region” have a stronger relationship with the protein than other nucleotides, concordant with their importance in sequence selectivity. Finally, any of several seed region guide-target mismatches cause certain Argonaute residues to have modified correlations with the rest of the protein. This arises from the aggregation of relatively small interaction correlation changes distributed across a large subset of residues. These residues are in effector sites: the active site, binding groove, and surface, implying that direct functional consequences of guide-target mismatches are mediated through the cumulative

  17. THE DARK ENERGY CAMERA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Flaugher, B.; Diehl, H. T.; Alvarez, O.

    2015-11-15

The Dark Energy Camera is a new imager with a 2.2° diameter field of view mounted at the prime focus of the Victor M. Blanco 4 m telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five-element optical corrector, seven filters, a shutter with a 60 cm aperture, and a charge-coupled device (CCD) focal plane of 250 μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 megapixel focal plane comprises 62 2k × 4k CCDs for imaging and 12 2k × 2k CCDs for guiding and focus. The CCDs have 15 μm × 15 μm pixels with a plate scale of 0.263″ pixel⁻¹. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 s with 6–9 electron readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.

  18. The Dark Energy Camera

    DOE PAGES

    Flaugher, B.

    2015-04-11

The Dark Energy Camera is a new imager with a 2.2-degree diameter field of view mounted at the prime focus of the Victor M. Blanco 4-meter telescope on Cerro Tololo near La Serena, Chile. The camera was designed and constructed by the Dark Energy Survey Collaboration, and meets or exceeds the stringent requirements designed for the wide-field and supernova surveys for which the collaboration uses it. The camera consists of a five element optical corrector, seven filters, a shutter with a 60 cm aperture, and a CCD focal plane of 250-μm thick fully depleted CCDs cooled inside a vacuum Dewar. The 570 Mpixel focal plane comprises 62 2k x 4k CCDs for imaging and 12 2k x 2k CCDs for guiding and focus. The CCDs have 15μm x 15μm pixels with a plate scale of 0.263" per pixel. A hexapod system provides state-of-the-art focus and alignment capability. The camera is read out in 20 seconds with 6-9 electrons readout noise. This paper provides a technical description of the camera's engineering, construction, installation, and current status.
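
    The plate scale quoted in both DECam records follows from the small-angle relation scale["/px] ≈ 206265 × pixel size / focal length. A quick numerical check; note the ~11.76 m effective focal length is inferred here from the quoted numbers (15 μm pixels, 0.263"/px), not stated in the abstract:

    ```python
    def plate_scale_arcsec_per_px(pixel_um, focal_length_m):
        """Small-angle plate scale: 206265 arcsec/radian times pixel/focal ratio."""
        return 206265.0 * (pixel_um * 1e-6) / focal_length_m
    ```

    15 μm pixels at an effective focal length of about 11.76 m give ≈ 0.263" per pixel, matching the abstract and consistent with the Blanco prime focus plus corrector.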

  19. Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.

    PubMed

    Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John

    2018-01-01

In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real-time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected once the needle's hub visually overlaps its tip in the camera view. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smartphone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.
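
    The app's core computation, comparing a planned angle against the phone's current orientation, reduces to reading the device's gravity vector and converting it to an in-plane tilt. A hedged sketch follows; the real app's sensor handling is not described in the abstract, and `gx`, `gy` are assumed gravity components in the display plane:

    ```python
    import math

    def tilt_deg(gx, gy):
        """In-plane tilt of the device's vertical axis relative to gravity."""
        return math.degrees(math.atan2(gx, gy))

    def angle_error(planned_deg, gx, gy):
        """Signed difference between the planned path and the current tilt."""
        return planned_deg - tilt_deg(gx, gy)
    ```

    Overlaying `planned_deg` as a guideline on the camera preview, and showing `angle_error` to the operator, reproduces the Guideline-mode workflow described above.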

  20. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes

    PubMed Central

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-yung

    2016-01-01

    Wilderness search and rescue entails performing a wide-range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency. PMID:27792156

  1. A Camera-Based Target Detection and Positioning UAV System for Search and Rescue (SAR) Purposes.

    PubMed

    Sun, Jingxuan; Li, Boyang; Jiang, Yifan; Wen, Chih-Yung

    2016-10-25

    Wilderness search and rescue entails performing a wide-range of work in complex environments and large regions. Given the concerns inherent in large regions due to limited rescue distribution, unmanned aerial vehicle (UAV)-based frameworks are a promising platform for providing aerial imaging. In recent years, technological advances in areas such as micro-technology, sensors and navigation have influenced the various applications of UAVs. In this study, an all-in-one camera-based target detection and positioning system is developed and integrated into a fully autonomous fixed-wing UAV. The system presented in this paper is capable of on-board, real-time target identification, post-target identification and location and aerial image collection for further mapping applications. Its performance is examined using several simulated search and rescue missions, and the test results demonstrate its reliability and efficiency.

  2. Image-guided ex-vivo targeting accuracy using a laparoscopic tissue localization system

    NASA Astrophysics Data System (ADS)

    Bieszczad, Jerry; Friets, Eric; Knaus, Darin; Rauth, Thomas; Herline, Alan; Miga, Michael; Galloway, Robert; Kynor, David

    2007-03-01

    In image-guided surgery, discrete fiducials are used to determine a spatial registration between the location of surgical tools in the operating theater and the location of targeted subsurface lesions and critical anatomic features depicted in preoperative tomographic image data. However, the lack of readily localized anatomic landmarks has greatly hindered the use of image-guided surgery in minimally invasive abdominal procedures. To address these needs, we have previously described a laser-based system for localization of internal surface anatomy using conventional laparoscopes. During a procedure, this system generates a digitized, three-dimensional representation of visible anatomic surfaces in the abdominal cavity. This paper presents the results of an experiment utilizing an ex-vivo bovine liver to assess subsurface targeting accuracy achieved using our system. During the experiment, several radiopaque targets were inserted into the liver parenchyma. The location of each target was recorded using an optically-tracked insertion probe. The liver surface was digitized using our system, and registered with the liver surface extracted from post-procedure CT images. This surface-based registration was then used to transform the position of the inserted targets into the CT image volume. The target registration error (TRE) achieved using our surface-based registration (given a suitable registration algorithm initialization) was 2.4 mm +/- 1.0 mm. A comparable TRE (2.6 mm +/- 1.7 mm) was obtained using a registration based on traditional fiducial markers placed on the surface of the same liver. These results indicate the potential of fiducial-free, surface-to-surface registration for image-guided lesion targeting in minimally invasive abdominal surgery.
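
    Target registration error as used above is simply the distance between each physically recorded target and its registered counterpart in the CT volume. A minimal sketch, assuming the surface registration has already produced a rotation matrix `R` and translation `t` (names are illustrative):

    ```python
    import math

    def transform(R, t, p):
        """Apply a 3x3 rotation matrix R and translation t to a 3-D point p."""
        return [sum(R[i][j] * p[j] for j in range(3)) + t[i] for i in range(3)]

    def mean_tre(R, t, probe_pts, ct_pts):
        """Mean Euclidean distance between registered probe points and CT points."""
        total = 0.0
        for p, q in zip(probe_pts, ct_pts):
            m = transform(R, t, p)
            total += math.sqrt(sum((a - b) ** 2 for a, b in zip(m, q)))
        return total / len(probe_pts)
    ```

    The 2.4 mm ± 1.0 mm figure reported above would be the mean (± SD) of these per-target distances across the inserted radiopaque targets.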

  3. A dual-targeting upconversion nanoplatform for two-color fluorescence imaging-guided photodynamic therapy.

    PubMed

    Wang, Xu; Yang, Cheng-Xiong; Chen, Jia-Tong; Yan, Xiu-Ping

    2014-04-01

The targetability of a theranostic probe is one of the keys to assuring its theranostic efficiency. Here we show the design and fabrication of a dual-targeting upconversion nanoplatform for two-color fluorescence imaging-guided photodynamic therapy (PDT). The nanoplatform was prepared from 3-aminophenylboronic acid functionalized upconversion nanocrystals (APBA-UCNPs) and hyaluronated fullerene (HAC60) via a specific diol-borate condensation. The two specific ligands of aminophenylboronic acid and hyaluronic acid provide synergistic targeting effects, high targetability, and hence a dramatically elevated uptake of the nanoplatform by cancer cells. The high generation yield of ¹O₂ due to multiplexed Förster resonance energy transfer between APBA-UCNPs (donor) and HAC60 (acceptor) allows effective therapy. The present nanoplatform shows great potential for highly selective tumor-targeted imaging-guided PDT.

  4. Structural basis for the recognition of guide RNA and target DNA heteroduplex by Argonaute.

    PubMed

    Miyoshi, Tomohiro; Ito, Kosuke; Murakami, Ryo; Uchiumi, Toshio

    2016-06-21

    Argonaute proteins are key players in the gene silencing mechanisms mediated by small nucleic acids in all domains of life from bacteria to eukaryotes. However, little is known about the Argonaute protein that recognizes guide RNA/target DNA. Here, we determine the 2 Å crystal structure of Rhodobacter sphaeroides Argonaute (RsAgo) in a complex with 18-nucleotide guide RNA and its complementary target DNA. The heteroduplex maintains Watson-Crick base-pairing even in the 3'-region of the guide RNA between the N-terminal and PIWI domains, suggesting a recognition mode by RsAgo for stable interaction with the target strand. In addition, the MID/PIWI interface of RsAgo has a system that specifically recognizes the 5' base-U of the guide RNA, and the duplex-recognition loop of the PAZ domain is important for the DNA silencing activity. Furthermore, we show that Argonaute discriminates the nucleic acid type (RNA/DNA) by recognition of the duplex structure of the seed region.

  5. Guide star targeting success for the HEAO-B observatory

    NASA Technical Reports Server (NTRS)

    Farrenkopf, R. L.; Hoffman, D. P.

    1977-01-01

    The statistics associated with the successful selection and acquisition of guide stars as attitude benchmarks for use in reorientation maneuvers of the HEAO-B observatory are considered as a function of the maneuver angle, initial attitude uncertainties, and the pertinent celestial region. Success likelihoods in excess of 0.99 are predicted assuming anticipated gyro and star tracker error sources. The maneuver technique and guide star selection constraints are described in detail. The results presented are specialized numerically to the HEAO-B observatory. However, the analytical techniques developed are considered applicable to broader classes of spacecraft requiring celestial targeting.

  6. Daytime Aspect Camera for Balloon Altitudes

    NASA Technical Reports Server (NTRS)

    Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.

    2002-01-01

    We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600- to 1000-nm region of the spectrum, successfully provides daytime aspect information of approx. 10 arcsec resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models used to design the camera, but the daytime stellar magnitude limit was lower than expected due to longitudinal chromatic aberration in the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.

  7. CRISPRdirect: software for designing CRISPR/Cas guide RNA with reduced off-target sites

    PubMed Central

    Naito, Yuki; Hino, Kimihiro; Bono, Hidemasa; Ui-Tei, Kumiko

    2015-01-01

Summary: CRISPRdirect is a simple and functional web server for selecting rational CRISPR/Cas targets from an input sequence. The CRISPR/Cas system is a promising technique for genome engineering which allows target-specific cleavage of genomic DNA guided by the Cas9 nuclease in complex with a guide RNA (gRNA), which binds complementarily to a ∼20 nt targeted sequence. The target sequence requirements are twofold. First, the 5′-NGG protospacer adjacent motif (PAM) sequence must be located adjacent to the target sequence. Second, the target sequence should be specific within the entire genome in order to avoid off-target editing. CRISPRdirect enables users to easily select rational target sequences with minimized off-target sites by performing exhaustive searches against genomic sequences. The server currently incorporates the genomic sequences of human, mouse, rat, marmoset, pig, chicken, frog, zebrafish, Ciona, fruit fly, silkworm, Caenorhabditis elegans, Arabidopsis, rice, Sorghum and budding yeast. Availability: Freely available at http://crispr.dbcls.jp/. Contact: y-naito@dbcls.rois.ac.jp Supplementary information: Supplementary data are available at Bioinformatics online. PMID:25414360
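
    The two target-sequence requirements stated above (a ~20 nt target immediately followed by an NGG PAM) are easy to enumerate on one strand. Below is a toy version of the kind of search CRISPRdirect performs genome-wide; the function name and scope are illustrative only, and the off-target specificity filtering is omitted:

    ```python
    import re

    def candidate_cas9_targets(seq, target_len=20):
        """All sites where target_len bases are followed by an NGG PAM.

        Returns (position, protospacer, PAM) tuples. A lookahead is used
        so that overlapping candidates are all reported.
        """
        pattern = r"(?=([ACGT]{%d})([ACGT]GG))" % target_len
        return [(m.start(), m.group(1), m.group(2))
                for m in re.finditer(pattern, seq.upper())]
    ```

    A real tool would also scan the reverse complement and score each candidate's uniqueness against the whole genome, which is exactly the exhaustive-search step the abstract describes.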

  8. Structure and specificity of the RNA-guided endonuclease Cas9 during DNA interrogation, target binding and cleavage

    PubMed Central

    Josephs, Eric A.; Kocak, D. Dewran; Fitzgibbon, Christopher J.; McMenemy, Joshua; Gersbach, Charles A.; Marszalek, Piotr E.

    2015-01-01

    CRISPR-associated endonuclease Cas9 cuts DNA at variable target sites designated by a Cas9-bound RNA molecule. Cas9's ability to be directed by single ‘guide RNA’ molecules to target nearly any sequence has been recently exploited for a number of emerging biological and medical applications. Therefore, understanding the nature of Cas9's off-target activity is of paramount importance for its practical use. Using atomic force microscopy (AFM), we directly resolve individual Cas9 and nuclease-inactive dCas9 proteins as they bind along engineered DNA substrates. High-resolution imaging allows us to determine their relative propensities to bind with different guide RNA variants to targeted or off-target sequences. Mapping the structural properties of Cas9 and dCas9 to their respective binding sites reveals a progressive conformational transformation at DNA sites with increasing sequence similarity to its target. With kinetic Monte Carlo (KMC) simulations, these results provide evidence of a ‘conformational gating’ mechanism driven by the interactions between the guide RNA and the 14th–17th nucleotide region of the targeted DNA, the stabilities of which we find correlate significantly with reported off-target cleavage rates. KMC simulations also reveal potential methodologies to engineer guide RNA sequences with improved specificity by considering the invasion of guide RNAs into targeted DNA duplex. PMID:26384421

  9. Design criteria for a high energy Compton Camera and possible application to targeted cancer therapy

    NASA Astrophysics Data System (ADS)

    Conka Nurdan, T.; Nurdan, K.; Brill, A. B.; Walenta, A. H.

    2015-07-01

    The proposed research focuses on the design criteria for a Compton Camera with high spatial resolution and sensitivity, operating at high gamma energies, and on its possible application to molecular imaging. This application mainly concerns the detection and visualization of the pharmacokinetics of tumor-targeting substances specific to particular cancer sites. The expected high spatial resolution (<0.5 mm) permits monitoring the pharmacokinetics of labeled gene constructs in vivo in small animals with a human tumor xenograft, one of the first steps in evaluating the potential utility of a candidate gene. The additional benefit of high-sensitivity detection will be improved cancer treatment strategies in patients, based on the use of specific molecules binding to cancer sites for early detection of tumors and identification of metastases, monitoring of drug delivery, and radionuclide therapy for optimum cell killing at the tumor site. This new technology can provide high-resolution, high-sensitivity imaging over a wide range of gamma energies and will significantly extend the range of radiotracers that can be investigated and used clinically. The small and compact construction of the proposed camera system allows flexible application, which will be particularly useful for monitoring residual tumor around the resection site during surgery. The camera is also envisaged for testing the tumor-targeting efficacy of new drug- and gene-based therapies in vitro and in vivo using automatic large-scale screening methods.

  10. Detecting Target Objects by Natural Language Instructions Using an RGB-D Camera

    PubMed Central

    Bao, Jiatong; Jia, Yunyi; Cheng, Yu; Tang, Hongru; Xi, Ning

    2016-01-01

    Controlling robots by natural language (NL) is increasingly attracting attention for its versatility, convenience, and minimal user-training requirements. Grounding, i.e., enabling robots to understand NL instructions from humans, is a crucial challenge of this problem. This paper mainly explores the object-grounding problem and concretely studies how to detect target objects from NL instructions using an RGB-D camera in robotic manipulation applications. In particular, a simple yet robust vision algorithm is applied to segment objects of interest. With the metric information of all segmented objects, the object attributes and the relations between objects are further extracted. The NL instructions, which incorporate multiple cues for object specification, are parsed into domain-specific annotations. The annotations from NL and the information extracted from the RGB-D camera are matched in a computational state-estimation framework to search all possible object-grounding states. The final grounding is accomplished by selecting the states with the maximum probabilities. An RGB-D scene dataset associated with different groups of NL instructions, based on different cognition levels of the robot, is collected. Quantitative evaluations on the dataset illustrate the advantages of the proposed method. Experiments on NL-controlled object manipulation and NL-based task programming using a mobile manipulator show its effectiveness and practicability in robotic applications. PMID:27983604

  11. Demonstration of the CDMA-mode CAOS smart camera.

    PubMed

    Riza, Nabeel A; Mazhar, Mohsin A

    2017-12-11

    Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, this CMOS sensor-provided image data is used to acquire a more robust, unattenuated true target image of a focused zone using the time-modulated CDMA mode of the CAOS camera. Using four different bright-light test target scenes, successfully demonstrated is a proof-of-concept visible-band CAOS smart camera operating in the CDMA mode using Walsh-design CAOS pixel codes of up to 4096 bits in length, with a maximum 10 kHz code bit rate giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micro-mirror pixel, 13.68 μm on a side. The CDMA mode of the CAOS smart camera is suited for applications where robust high-dynamic-range (DR) imaging is needed for unattenuated, unspoiled, bright, spectrally diverse targets.
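    The CDMA-mode principle can be sketched in miniature: each CAOS pixel is time-modulated by its own orthogonal Walsh code, a single point detector records the summed signal, and correlation against each code recovers that pixel's irradiance. The toy below uses 4-bit codes in pure Python; the camera's 4096-bit codes, ADC, and DSP chain are not modeled.

```python
def hadamard(n):
    """Build an n x n Hadamard matrix (n a power of 2); its rows are
    mutually orthogonal ±1 sequences (Walsh codes in natural order)."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

def caos_cdma_decode(signal, codes):
    """Recover per-pixel irradiance from the point detector's time
    signal by correlating against each pixel's Walsh code."""
    n = len(codes[0])
    return [sum(s * c for s, c in zip(signal, code)) / n for code in codes]
```

Because distinct Walsh codes have zero dot product, the correlation isolates each pixel's contribution exactly in the noise-free case.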

  12. Analytic Guided-Search Model of Human Performance Accuracy in Target-Localization Search Tasks

    NASA Technical Reports Server (NTRS)

    Eckstein, Miguel P.; Beutter, Brent R.; Stone, Leland S.

    2000-01-01

    Current models of human visual search have extended the traditional serial/parallel search dichotomy. Two successful models for predicting human visual search are the Guided Search model and the Signal Detection Theory model. Although these models are inherently different, it has been difficult to compare them because the Guided Search model is designed to predict response time, while Signal Detection Theory models are designed to predict performance accuracy. Moreover, current implementations of the Guided Search model require the use of Monte-Carlo simulations, a method that makes fitting the model's performance quantitatively to human data more computationally time consuming. We have extended the Guided Search model to predict human accuracy in target-localization search tasks. We have also developed analytic expressions that simplify simulation of the model to the evaluation of a small set of equations using only three free parameters. This new implementation and extension of the Guided Search model will enable direct quantitative comparisons with human performance in target-localization search experiments and with the predictions of Signal Detection Theory and other search accuracy models.
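    A minimal illustration of the accuracy-based framing is the standard signal-detection computation for an m-location search: the target is localized correctly when its internal response (mean d′, unit-variance Gaussian noise) exceeds all m−1 distractor responses. This is the generic SDT max rule, shown as a hedged sketch rather than the paper's three-parameter Guided Search equations.

```python
import math

def phi(x):
    """Standard normal probability density."""
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def Phi(x):
    """Standard normal cumulative distribution."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def p_correct_localization(d_prime, m, lo=-8.0, hi=12.0, steps=4000):
    """Probability of correct localization among m locations:
    P = integral of phi(x - d') * Phi(x)**(m - 1) dx (trapezoid rule)."""
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        x = lo + i * h
        w = 0.5 if i in (0, steps) else 1.0
        total += w * phi(x - d_prime) * Phi(x) ** (m - 1)
    return total * h
```

Sanity check: with d′ = 0 the observer is at chance, so accuracy is 1/m for any m.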

  13. Targeted training of the decision rule benefits rule-guided behavior in Parkinson's disease.

    PubMed

    Ell, Shawn W

    2013-12-01

    The impact of Parkinson's disease (PD) on rule-guided behavior has received considerable attention in cognitive neuroscience. The majority of research has used PD as a model of dysfunction in frontostriatal networks, but very few attempts have been made to investigate the possibility of adapting common experimental techniques in an effort to identify the conditions that are most likely to facilitate successful performance. The present study investigated a targeted training paradigm designed to facilitate rule learning and application using rule-based categorization as a model task. Participants received targeted training in which there was no selective-attention demand (i.e., stimuli varied along a single, relevant dimension) or nontargeted training in which there was selective-attention demand (i.e., stimuli varied along a relevant dimension as well as an irrelevant dimension). Following training, all participants were tested on a rule-based task with selective-attention demand. During the test phase, PD patients who received targeted training performed similarly to control participants and outperformed patients who did not receive targeted training. As a preliminary test of the generalizability of the benefit of targeted training, a subset of the PD patients were tested on the Wisconsin card sorting task (WCST). PD patients who received targeted training outperformed PD patients who did not receive targeted training on several WCST performance measures. These data further characterize the contribution of frontostriatal circuitry to rule-guided behavior. Importantly, these data also suggest that PD patient impairment, on selective-attention-demanding tasks of rule-guided behavior, is not inevitable and highlight the potential benefit of targeted training.

  14. Structural basis for the recognition of guide RNA and target DNA heteroduplex by Argonaute

    PubMed Central

    Miyoshi, Tomohiro; Ito, Kosuke; Murakami, Ryo; Uchiumi, Toshio

    2016-01-01

    Argonaute proteins are key players in the gene silencing mechanisms mediated by small nucleic acids in all domains of life from bacteria to eukaryotes. However, little is known about how Argonaute recognizes a guide RNA/target DNA heteroduplex. Here, we determine the 2 Å crystal structure of Rhodobacter sphaeroides Argonaute (RsAgo) in a complex with 18-nucleotide guide RNA and its complementary target DNA. The heteroduplex maintains Watson–Crick base-pairing even in the 3′-region of the guide RNA between the N-terminal and PIWI domains, suggesting a recognition mode by RsAgo for stable interaction with the target strand. In addition, the MID/PIWI interface of RsAgo has a system that specifically recognizes the 5′ base-U of the guide RNA, and the duplex-recognition loop of the PAZ domain is important for the DNA silencing activity. Furthermore, we show that Argonaute discriminates the nucleic acid type (RNA/DNA) by recognition of the duplex structure of the seed region. PMID:27325485

  15. Situational Awareness from a Low-Cost Camera System

    NASA Technical Reports Server (NTRS)

    Freudinger, Lawrence C.; Ward, David; Lesage, John

    2010-01-01

    A method gathers scene information from a low-cost camera system. Existing surveillance systems using enough cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system places cameras on a combined power-and-data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security cameras. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than the high-bandwidth bitmap data constantly generated by the cameras. By using many small, low-cost cameras with overlapping fields of view, this approach offers greater flexibility than conventional systems without compromising performance. This means significantly increased coverage without the surveillance gaps that can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
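    The Cartesian event localization can be sketched as the intersection of bearing rays from two cameras with overlapping fields of view. This is a simplified 2-D illustration with assumed camera positions and bearings, not the system's actual multi-camera correlation code.

```python
import math

def locate_event(cam1, bearing1, cam2, bearing2):
    """Intersect two bearing rays (world-frame angles in radians) from
    two calibrated cameras; returns the event's (x, y) position, or
    None if the bearings are parallel."""
    d1 = (math.cos(bearing1), math.sin(bearing1))
    d2 = (math.cos(bearing2), math.sin(bearing2))
    det = d2[0] * d1[1] - d2[1] * d1[0]
    if abs(det) < 1e-12:
        return None  # parallel rays: no unique intersection
    rx, ry = cam2[0] - cam1[0], cam2[1] - cam1[1]
    t1 = (d2[0] * ry - d2[1] * rx) / det  # range along camera 1's ray
    return (cam1[0] + t1 * d1[0], cam1[1] + t1 * d1[1])
```

With more than two cameras, a least-squares intersection of all rays would give a more robust estimate.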

  16. [CT-guided intervention by means of a laser marking and targeting aid].

    PubMed

    Klöppel, R; Wilke, W; Weisse, T; Steinecke, R

    1997-08-01

    The present study evaluates the use of a laser guidance system for CT-guided intervention. 94 cases of diagnostic biopsies and lumbar sympathectomies (54 cases with laser guidance system and 40 without) were compared. Using the laser guidance system, the number of control scans decreased by 30 to 50%, and necessary corrections of needle location were reduced by a maximum of 30%. The average target deviation of the needle decreased to less than 5 mm in 50% of cases. The laser guidance system is strongly recommended in CT-guided interventions for quality assurance and higher efficiency. The advantage is especially marked if the target area is small.

  17. Opto-mechanical system design of test system for near-infrared and visible target

    NASA Astrophysics Data System (ADS)

    Wang, Chunyan; Zhu, Guodong; Wang, Yuchao

    2014-12-01

    Guidance precision is a key index of guided-weapon accuracy. The factors affecting guidance precision include information-processing precision, control-system accuracy, and laser-irradiation accuracy, among which laser-irradiation precision is an important one. Addressing the demand for precision testing of laser irradiators, this paper develops a laser precision test system. The system consists of a modified Cassegrain telescope, a wide-range CCD camera, a tracking turntable, and an industrial PC, and it images visible-light and near-infrared targets simultaneously by means of a near-IR camera. Analysis of the design results shows that, for a target at 1000 meters, the system measurement precision is 43 mm, fully meeting the needs of laser precision testing.

  18. Development of a novel handheld intra-operative laparoscopic Compton camera for 18F-Fluoro-2-deoxy-2-D-glucose-guided surgery

    NASA Astrophysics Data System (ADS)

    Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Yoshimura, S.; Seto, Y.; Kato, S.; Takahashi, M.; Momose, T.

    2016-08-01

    As well as pre-operative roadmapping by 18F-Fluoro-2-deoxy-2-D-glucose (FDG) positron emission tomography, intra-operative localization of the tracer is important to identify local margins for less-invasive surgery, especially FDG-guided surgery. The objective of this paper is to develop a laparoscopic Compton camera and system aimed at intra-operative FDG imaging for accurate and less-invasive dissections. The laparoscopic Compton camera consists of four layers of a 12-pixel cross-shaped array of GFAG crystals (2 × 2 × 3 mm³) with through-silicon-via multi-pixel photon counters and dedicated individual readout electronics based on a dynamic time-over-threshold method. Experimental results yielded a spatial resolution of 4 mm (FWHM) at a 10 mm working distance and an absolute detection efficiency of 0.11 cps kBq⁻¹, corresponding to an intrinsic detection efficiency of ∼0.18%. In an experiment using a NEMA-like well-shaped FDG phantom, a φ5 × 10 mm cylindrical hot spot was clearly obtained even in the presence of a background distribution surrounding the Compton camera and the hot spot. We successfully obtained reconstructed images of a resected lymph node and primary tumor ex vivo after FDG administration to a patient with esophageal cancer. These performance characteristics indicate a new possibility of FDG-directed surgery using a Compton camera intra-operatively.

  19. The development of large-aperture test system of infrared camera and visible CCD camera

    NASA Astrophysics Data System (ADS)

    Li, Yingwen; Geng, Anbing; Wang, Bo; Wang, Haitao; Wu, Yanying

    2015-10-01

    Infrared camera and CCD camera dual-band imaging systems are widely used in many kinds of equipment. If such a system is tested with a traditional infrared-camera test system and a separate visible-CCD test system, two rounds of installation and alignment are needed in the test procedure. The large-aperture test system for infrared cameras and visible CCD cameras shares a common large-aperture reflective collimator, target wheel, frame grabber, and computer, which reduces both the cost and the time spent on installation and alignment. A multiple-frame averaging algorithm is used to reduce the influence of random noise. An athermal optical design is adopted to reduce the shift of the collimator's focal position as the environmental temperature changes, improving both the image quality of the wide-field collimator and the test accuracy. Its performance matches that of foreign counterparts at a much lower cost, and it has good market prospects.

  20. Bioluminescence Tomography–Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Bin; Wang, Ken Kang-Hsin, E-mail: kwang27@jhmi.edu; Yu, Jingjing

    Purpose: In preclinical radiation research, it is challenging to localize soft tissue targets based on cone beam computed tomography (CBCT) guidance. As a more effective method to localize soft tissue targets, we developed an online bioluminescence tomography (BLT) system for the small-animal radiation research platform (SARRP). We demonstrated BLT-guided radiation therapy and validated targeting accuracy based on a newly developed reconstruction algorithm. Methods and Materials: The BLT system was designed to dock with the SARRP for image acquisition and to be detached before radiation delivery. A 3-mirror system was devised to reflect the bioluminescence emitted from the subject to a stationary charge-coupled device (CCD) camera. Multispectral BLT and the incomplete variables truncated conjugate gradient method with a permissible region shrinking strategy were used as the optimization scheme to reconstruct bioluminescent source distributions. To validate BLT targeting accuracy, a small cylindrical light source with high CBCT contrast was placed in a phantom and also in the abdomen of a mouse carcass. The center of mass (CoM) of the source was recovered from BLT and used to guide radiation delivery. The accuracy of the BLT-guided targeting was validated with films and compared with the CBCT-guided delivery. In vivo experiments were conducted to demonstrate BLT localization capability for various source geometries. Results: Online BLT was able to recover the CoM of the embedded light source with an average accuracy of 1 mm compared to that with CBCT localization. Differences between BLT- and CBCT-guided irradiation shown on the films were consistent with the source localization revealed in the BLT and CBCT images. In vivo results demonstrated that our BLT system could potentially be applied for multiple targets and tumors. Conclusions: The online BLT/CBCT/SARRP system provides an effective solution for soft tissue targeting, particularly for small-animal radiation research.
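    The targeting step reduces to an intensity-weighted center of mass over the reconstructed source distribution. A minimal sketch, assuming a plain (position, intensity) voxel list as input rather than the paper's actual reconstruction output format:

```python
def center_of_mass(voxels):
    """voxels: iterable of ((x, y, z), intensity) pairs from a
    reconstructed source distribution; returns the intensity-weighted
    center of mass used to aim the beam."""
    total = sum(w for _, w in voxels)
    return tuple(sum(p[k] * w for p, w in voxels) / total for k in range(3))
```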

  1. Dust deposition and removal at the MER landing sites from observations of the Panoramic Camera (Pancam) calibration targets

    NASA Astrophysics Data System (ADS)

    Kinch, K. M.; Bell, J. F.; Madsen, M. B.

    2012-12-01

    The Panoramic Cameras (Pancams) [1] on NASA's Mars Exploration Rovers have each returned in excess of 17,000 images of their external calibration targets (caltargets), a set of optically well-characterized patches of materials with differing reflectance properties. During the mission, dust deposition on the caltargets changed their optical reflectance properties [2]. The thickness of dust on the caltargets can be derived with high confidence from the contrast between brighter and darker colored patches: the dustier the caltarget, the less the contrast. We present a new history of dust deposition and removal at the two MER landing sites. Our data reveal two quite distinct dust environments. At the Spirit landing site, half the Martian year is dominated by dust deposition, the other half by dust removal that usually happens during brief, sharp wind events. At the Opportunity landing site, the Martian year has a four-season cycle of deposition-removal-deposition-removal, with dust removal happening gradually throughout the two removal seasons. Comparison to atmospheric optical depth measurements [3] shows that dust removals happen during dusty high-wind periods and that dust deposition rates are roughly proportional to the atmospheric dust load. We compare with dust deposition studies from other Mars landers and also present some early results from observation of dust on a similar camera calibration target on the Mars Science Laboratory mission. References: 1. Bell, J.F., III, et al., Mars Exploration Rover Athena Panoramic Camera (Pancam) investigation. J. Geophys. Res., 2003. 108(E12): p. 8063. 2. Kinch, K.M., et al., Dust Deposition on the Mars Exploration Rover Panoramic Camera (Pancam) Calibration Targets. J. Geophys. Res., 2007. 112(E06S03): doi:10.1029/2006JE002807. 3. Lemmon, M., et al., Atmospheric Imaging Results from the Mars Exploration Rovers: Spirit and Opportunity. Science, 2004. 306: p. 1753-1756.
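    The contrast idea can be made concrete with a toy two-component mixing model (not the diffusive reflectance model used in the actual analysis): if observed reflectance is a linear mix of clean-patch and dust reflectances with a dust covering fraction f = 1 − exp(−τ), the dust term cancels in the bright-minus-dark contrast, and τ follows from the contrast ratio alone.

```python
import math

def dust_optical_depth(r_bright_obs, r_dark_obs, r_bright_clean, r_dark_clean):
    """Toy contrast-based dust estimate. Assumes observed reflectance
    R_obs = (1 - f) * R_clean + f * R_dust with covering fraction
    f = 1 - exp(-tau); R_dust cancels in the bright-dark contrast."""
    ratio = (r_bright_obs - r_dark_obs) / (r_bright_clean - r_dark_clean)
    return -math.log(ratio)
```

Note that the dust's own reflectance never enters: only the loss of contrast between patches does, which is why the method is robust.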

  2. Evaluation of the Intel RealSense SR300 camera for image-guided interventions and application in vertebral level localization

    NASA Astrophysics Data System (ADS)

    House, Rachael; Lasso, Andras; Harish, Vinyas; Baum, Zachary; Fichtinger, Gabor

    2017-03-01

    PURPOSE: Optical pose tracking of medical instruments is often used in image-guided interventions. Unfortunately, compared to commonly used computing devices, optical trackers tend to be large, heavy, and expensive devices. Compact 3D vision systems, such as Intel RealSense cameras, can capture 3D pose information at several magnitudes lower cost, size, and weight. We propose to use the Intel SR300 device for applications where it is not practical or feasible to use conventional trackers and limited range and tracking accuracy are acceptable. We also put forward a vertebral level localization application utilizing the SR300 to reduce the risk of wrong-level surgery. METHODS: The SR300 was utilized as an object tracker by extending the PLUS toolkit to support data collection from RealSense cameras. Accuracy of the camera was tested by comparing to a high-accuracy optical tracker. CT images of a lumbar spine phantom were obtained and used to create a 3D model in 3D Slicer. The SR300 was used to obtain a surface model of the phantom. Markers were attached to the phantom and a pointer and tracked using the Intel RealSense SDK's built-in object tracking feature. 3D Slicer was used to align the CT image with the phantom using landmark registration and display the CT image overlaid on the optical image. RESULTS: Accuracy tests yielded a median position error of 3.3 mm (95th percentile 6.7 mm) and an orientation error of 1.6° (95th percentile 4.3°) in a 20 × 16 × 10 cm workspace, with proper marker orientation constantly maintained. The model and surface correctly aligned, demonstrating the vertebral level localization application. CONCLUSION: The SR300 may be usable for pose tracking in medical procedures where limited accuracy is acceptable. Initial results suggest the SR300 is suitable for vertebral level localization.

  3. Optimising Camera Traps for Monitoring Small Mammals

    PubMed Central

    Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce

    2013-01-01

    Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera’s field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera’s field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera’s field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790

  4. Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1997-01-01

    A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
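    The spot-isolation and disparity steps can be sketched as follows, using plain lists of grayscale pixel values and an assumed change threshold; this is an illustration of the frame-differencing idea, not the patented implementation.

```python
def locate_laser_spot(before, after, threshold=40):
    """Difference two grayscale frames (lists of rows of 0-255 values);
    pixels that changed by more than `threshold` are attributed to the
    laser spot. Returns the spot centroid (x, y), or None."""
    xs = ys = n = 0
    for y, (rb, ra) in enumerate(zip(before, after)):
        for x, (pb, pa) in enumerate(zip(rb, ra)):
            if abs(pa - pb) > threshold:
                xs += x; ys += y; n += 1
    return (xs / n, ys / n) if n else None

def disparity(spot, reference):
    """Pixel offset between the detected spot and the fixed reference
    point, the input to the subsequent ranging analysis."""
    return (spot[0] - reference[0], spot[1] - reference[1])
```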

  5. A novel method to reduce time investment when processing videos from camera trap studies.

    PubMed

    Swinnen, Kristijn R R; Reijniers, Jonas; Breno, Matteo; Leirs, Herwig

    2014-01-01

    Camera traps have proven very useful in ecological, conservation and behavioral research. Camera traps non-invasively record presence and behavior of animals in their natural environment. Since the introduction of digital cameras, large amounts of data can be stored. Unfortunately, processing protocols did not evolve as fast as the technical capabilities of the cameras. We used camera traps to record videos of Eurasian beavers (Castor fiber). However, a large number of recordings did not contain the target species, but instead empty recordings or other species (together non-target recordings), making the removal of these recordings unacceptably time consuming. In this paper we propose a method to partially eliminate non-target recordings without having to watch the recordings, in order to reduce workload. Discrimination between recordings of target species and non-target recordings was based on detecting variation (changes in pixel values from frame to frame) in the recordings. Because of the size of the target species, we supposed that recordings with the target species contain on average much more movements than non-target recordings. Two different filter methods were tested and compared. We show that a partial discrimination can be made between target and non-target recordings based on variation in pixel values and that environmental conditions and filter methods influence the amount of non-target recordings that can be identified and discarded. By allowing a loss of 5% to 20% of recordings containing the target species, in ideal circumstances, 53% to 76% of non-target recordings can be identified and discarded. We conclude that adding an extra processing step in the camera trap protocol can result in large time savings. Since we are convinced that the use of camera traps will become increasingly important in the future, this filter method can benefit many researchers, using it in different contexts across the globe, on both videos and photographs.
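    The variation-based filter described above can be sketched as a frame-differencing score with a discard threshold; a minimal illustration on grayscale frames, with the threshold value an assumption to be tuned per site, as the paper's environment-dependent results suggest.

```python
def motion_score(frames):
    """Mean absolute per-pixel change between consecutive grayscale
    frames (each frame a list of rows of pixel values)."""
    total = count = 0
    for f0, f1 in zip(frames, frames[1:]):
        for r0, r1 in zip(f0, f1):
            for p0, p1 in zip(r0, r1):
                total += abs(p1 - p0)
                count += 1
    return total / count if count else 0.0

def keep_recording(frames, threshold):
    """Flag a recording for human review if it shows enough movement;
    low-variation clips are candidates for automatic discard."""
    return motion_score(frames) >= threshold
```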

  6. Control of target-normal-sheath-accelerated protons from a guiding cone

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zou, D. B.; Institut für Theoretische Physik I, Heinrich-Heine-Universität Düsseldorf, Düsseldorf 40225; Zhuo, H. B., E-mail: hongbin.zhuo@gmail.com

    2015-06-15

    It is demonstrated through particle-in-cell simulations that target-normal-sheath-accelerated protons can be well controlled by using a guiding cone. Compared to a conventional planar target, both the collimation and the number density of the proton beams are substantially improved, giving a high-quality proton beam that is maintained over a longer distance without degradation. The effect is attributed to the radial electric field resulting from the charge of the hot target electrons propagating along the cone surface. This electric field can effectively suppress the spatial spread of the protons after the expansion of the hot electrons.

  7. Vacuum compatible miniature CCD camera head

    DOEpatents

    Conder, Alan D.

    2000-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.

  8. Single-Molecule View of Small RNA-Guided Target Search and Recognition.

    PubMed

    Globyte, Viktorija; Kim, Sung Hyun; Joo, Chirlmin

    2018-05-20

    Most everyday processes in life involve a necessity for an entity to locate its target. On a cellular level, many proteins have to find their target to perform their function. From gene-expression regulation to DNA repair to host defense, numerous nucleic acid-interacting proteins use distinct target search mechanisms. Several proteins achieve that with the help of short RNA strands known as guides. This review focuses on single-molecule advances studying the target search and recognition mechanism of Argonaute and CRISPR (clustered regularly interspaced short palindromic repeats) systems. We discuss different steps involved in search and recognition, from the initial complex prearrangement into the target-search competent state to the final proofreading steps. We focus on target search mechanisms that range from weak interactions, to one- and three-dimensional diffusion, to conformational proofreading. We compare the mechanisms of Argonaute and CRISPR with a well-studied target search system, RecA.

  9. Dust deposition on the Mars Exploration Rover Panoramic Camera (Pancam) calibration targets

    USGS Publications Warehouse

    Kinch, K.M.; Sohl-Dickstein, J.; Bell, J.F.; Johnson, J. R.; Goetz, W.; Landis, G.A.

    2007-01-01

    The Panoramic Camera (Pancam) on the Mars Exploration Rover mission has acquired in excess of 20,000 images of the Pancam calibration targets on the rovers. Analysis of this data set allows estimates of the rate of deposition and removal of aeolian dust on both rovers. During the first 150-170 sols there was gradual dust accumulation on the rovers but no evidence for dust removal. After that time there is ample evidence for both dust removal and dust deposition on both rover decks. We analyze data from early in both rover missions using a diffusive reflectance mixing model. Assuming a dust settling rate proportional to the atmospheric optical depth, we derive spectra of optically thick layers of airfall dust that are consistent with spectra from dusty regions on the Martian surface. Airfall dust reflectance at the Opportunity site appears greater than at the Spirit site, consistent with other observations. We estimate the optical depth of dust deposited on the Spirit calibration target by sol 150 to be 0.44 ± 0.13. For Opportunity the value was 0.39 ± 0.12. Assuming 80% pore space, we estimate that the dust layer grew at a rate of one grain diameter per ~100 sols on the Spirit calibration target. On Opportunity the rate was one grain diameter per ~125 sols. These numbers are consistent with dust deposition rates observed by Mars Pathfinder taking into account the lower atmospheric dust optical depth during the Mars Pathfinder mission. Copyright 2007 by the American Geophysical Union.
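The deposition-rate bookkeeping in the abstract reduces to simple arithmetic; a minimal sketch, assuming the constant per-sol deposition rate implied by the sol-150 estimates (the rate constants below are back-computed from the abstract, not taken from the paper's model):

```python
def deposited_tau(rate_per_sol, sols):
    """Optical depth of the deposited dust layer after `sols` sols, for a
    constant deposition rate (the abstract assumes a settling rate
    proportional to atmospheric optical depth, roughly constant early on)."""
    return rate_per_sol * sols

# Per-sol rates implied by the sol-150 estimates in the abstract
rate_spirit = 0.44 / 150       # optical depth per sol, Spirit
rate_opportunity = 0.39 / 150  # optical depth per sol, Opportunity

print(round(deposited_tau(rate_spirit, 150), 2))       # 0.44
print(round(deposited_tau(rate_opportunity, 150), 2))  # 0.39
```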

  10. Targeted left ventricular lead placement to guide cardiac resynchronization therapy: the TARGET study: a randomized, controlled trial.

    PubMed

    Khan, Fakhar Z; Virdee, Mumohan S; Palmer, Christopher R; Pugh, Peter J; O'Halloran, Denis; Elsik, Maros; Read, Philip A; Begley, David; Fynn, Simon P; Dutka, David P

    2012-04-24

    This study sought to assess the impact of targeted left ventricular (LV) lead placement on outcomes of cardiac resynchronization therapy (CRT). Placement of the LV lead to the latest sites of contraction and away from the scar confers the best response to CRT. We conducted a randomized, controlled trial to compare a targeted approach to LV lead placement with usual care. A total of 220 patients scheduled for CRT underwent baseline echocardiographic speckle-tracking 2-dimensional radial strain imaging and were then randomized 1:1 into 2 groups. In group 1 (TARGET [Targeted Left Ventricular Lead Placement to Guide Cardiac Resynchronization Therapy]), the LV lead was positioned at the latest site of peak contraction with an amplitude of >10% to signify freedom from scar. In group 2 (control), patients underwent standard unguided CRT. Patients were classified by the relationship of the LV lead to the optimal site as concordant (at optimal site), adjacent (within 1 segment), or remote (≥2 segments away). The primary endpoint was a ≥15% reduction in LV end-systolic volume at 6 months. Secondary endpoints were clinical response (≥1 improvement in New York Heart Association functional class), all-cause mortality, and combined all-cause mortality and heart failure-related hospitalization. The groups were balanced at randomization. In the TARGET group, there was a greater proportion of responders at 6 months (70% vs. 55%, p = 0.031), giving an absolute difference in the primary endpoint of 15% (95% confidence interval: 2% to 28%). Compared with controls, TARGET patients had a higher clinical response (83% vs. 65%, p = 0.003) and lower rates of the combined endpoint (log-rank test, p = 0.031). Compared with standard CRT treatment, the use of speckle-tracking echocardiography to target LV lead placement yields significantly improved response and clinical status and lower rates of combined death and heart failure-related hospitalization. (Targeted Left Ventricular Lead

  11. Genome-wide determination of on-target and off-target characteristics for RNA-guided DNA methylation by dCas9 methyltransferases

    PubMed Central

    Lin, Lin; Liu, Yong; Xu, Fengping; Huang, Jinrong; Daugaard, Tina Fuglsang; Petersen, Trine Skov; Hansen, Bettina; Ye, Lingfei; Zhou, Qing; Fang, Fang; Yang, Ling; Li, Shengting; Fløe, Lasse; Jensen, Kristopher Torp; Shrock, Ellen; Chen, Fang; Yang, Huanming; Wang, Jian; Liu, Xin; Xu, Xun; Bolund, Lars; Nielsen, Anders Lade; Luo, Yonglun

    2018-01-01

    Background: Fusion of DNA methyltransferase domains to the nuclease-deficient clustered regularly interspaced short palindromic repeat (CRISPR) associated protein 9 (dCas9) has been used for epigenome editing, but the specificities of these dCas9 methyltransferases have not been fully investigated. Findings: We generated CRISPR-guided DNA methyltransferases by fusing the catalytic domain of DNMT3A or DNMT3B to the C terminus of the dCas9 protein from Streptococcus pyogenes and validated their on-target and global off-target characteristics. Using targeted quantitative bisulfite pyrosequencing, we prove that dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B can efficiently methylate the CpG dinucleotides flanking their target sites at different genomic loci (uPA and TGFBR3) in human embryonic kidney cells (HEK293T). Furthermore, we conducted whole genome bisulfite sequencing (WGBS) to address the specificity of our dCas9 methyltransferases. WGBS revealed that although dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B did not cause global methylation changes, a substantial number (more than 1000) of the off-target differentially methylated regions (DMRs) were identified. The off-target DMRs, which were hypermethylated in cells expressing dCas9 methyltransferase and guide RNAs, were predominantly found in promoter regions, 5′ untranslated regions, CpG islands, and DNase I hypersensitivity sites, whereas unexpected hypomethylated off-target DMRs were significantly enriched in repeated sequences. Through chromatin immunoprecipitation with massively parallel DNA sequencing analysis, we further revealed that these off-target DMRs were weakly correlated with dCas9 off-target binding sites. Using quantitative polymerase chain reaction, RNA sequencing, and fluorescence reporter cells, we also found that dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B can mediate transient inhibition of gene expression, which might be caused by dCas9-mediated de novo DNA methylation as well as interference with

  12. Genome-wide determination of on-target and off-target characteristics for RNA-guided DNA methylation by dCas9 methyltransferases.

    PubMed

    Lin, Lin; Liu, Yong; Xu, Fengping; Huang, Jinrong; Daugaard, Tina Fuglsang; Petersen, Trine Skov; Hansen, Bettina; Ye, Lingfei; Zhou, Qing; Fang, Fang; Yang, Ling; Li, Shengting; Fløe, Lasse; Jensen, Kristopher Torp; Shrock, Ellen; Chen, Fang; Yang, Huanming; Wang, Jian; Liu, Xin; Xu, Xun; Bolund, Lars; Nielsen, Anders Lade; Luo, Yonglun

    2018-03-01

    Fusion of DNA methyltransferase domains to the nuclease-deficient clustered regularly interspaced short palindromic repeat (CRISPR) associated protein 9 (dCas9) has been used for epigenome editing, but the specificities of these dCas9 methyltransferases have not been fully investigated. We generated CRISPR-guided DNA methyltransferases by fusing the catalytic domain of DNMT3A or DNMT3B to the C terminus of the dCas9 protein from Streptococcus pyogenes and validated their on-target and global off-target characteristics. Using targeted quantitative bisulfite pyrosequencing, we prove that dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B can efficiently methylate the CpG dinucleotides flanking their target sites at different genomic loci (uPA and TGFBR3) in human embryonic kidney cells (HEK293T). Furthermore, we conducted whole genome bisulfite sequencing (WGBS) to address the specificity of our dCas9 methyltransferases. WGBS revealed that although dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B did not cause global methylation changes, a substantial number (more than 1000) of the off-target differentially methylated regions (DMRs) were identified. The off-target DMRs, which were hypermethylated in cells expressing dCas9 methyltransferase and guide RNAs, were predominantly found in promoter regions, 5′ untranslated regions, CpG islands, and DNase I hypersensitivity sites, whereas unexpected hypomethylated off-target DMRs were significantly enriched in repeated sequences. Through chromatin immunoprecipitation with massively parallel DNA sequencing analysis, we further revealed that these off-target DMRs were weakly correlated with dCas9 off-target binding sites. Using quantitative polymerase chain reaction, RNA sequencing, and fluorescence reporter cells, we also found that dCas9-BFP-DNMT3A and dCas9-BFP-DNMT3B can mediate transient inhibition of gene expression, which might be caused by dCas9-mediated de novo DNA methylation as well as interference with transcription. Our results prove that d

  13. Mini gamma camera, camera system and method of use

    DOEpatents

    Majewski, Stanislaw; Weisenberger, Andrew G.; Wojcik, Randolph F.

    2001-01-01

    A gamma camera comprising essentially and in order from the front outer or gamma ray impinging surface: 1) a collimator, 2) a scintillator layer, 3) a light guide, 4) an array of position sensitive, high resolution photomultiplier tubes, and 5) printed circuitry for receipt of the output of the photomultipliers. There is also described a system wherein the output supplied by the high resolution, position sensitive photomultiplier tubes is communicated to: a) a digitizer and b) a computer where it is processed using advanced image processing techniques and a specific algorithm to calculate the center of gravity of any abnormality observed during imaging, and c) optional image display and telecommunications ports.
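The patent's "center of gravity" computation is, at its core, an intensity-weighted centroid over the imaged counts; a minimal sketch of that idea (not the patent's specific algorithm, which sits inside its full image-processing pipeline):

```python
import numpy as np

def center_of_gravity(image):
    """Intensity-weighted centroid (row, col) of a 2D count image."""
    image = np.asarray(image, dtype=float)
    total = image.sum()
    rows, cols = np.indices(image.shape)
    return (rows * image).sum() / total, (cols * image).sum() / total

# A small synthetic "hot spot" at pixel (2, 3)
img = np.zeros((5, 5))
img[2, 3] = 10.0
print(center_of_gravity(img))  # (2.0, 3.0)
```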

  14. Video-Camera-Based Position-Measuring System

    NASA Technical Reports Server (NTRS)

    Lane, John; Immer, Christopher; Brink, Jeffrey; Youngquist, Robert

    2005-01-01

    A prototype optoelectronic system measures the three-dimensional relative coordinates of objects of interest or of targets affixed to objects of interest in a workspace. The system includes a charge-coupled-device video camera mounted in a known position and orientation in the workspace, a frame grabber, and a personal computer running image-data-processing software. Relative to conventional optical surveying equipment, this system can be built and operated at much lower cost; however, it is less accurate. It is also much easier to operate than are conventional instrumentation systems. In addition, there is no need to establish a coordinate system through cooperative action by a team of surveyors. The system operates in real time at around 30 frames per second (limited mostly by the frame rate of the camera). It continuously tracks targets as long as they remain in the field of the camera. In this respect, it emulates more expensive, elaborate laser tracking equipment that costs on the order of 100 times as much. Unlike laser tracking equipment, this system does not pose a hazard of laser exposure. Images acquired by the camera are digitized and processed to extract all valid targets in the field of view. The three-dimensional coordinates (x, y, and z) of each target are computed from the pixel coordinates of the targets in the images to an accuracy of the order of millimeters over distances of the order of meters. The system was originally intended specifically for real-time position measurement of payload transfers from payload canisters into the payload bay of the Space Shuttle Orbiters (see Figure 1). The system may be easily adapted to other applications that involve similar coordinate-measuring requirements. Examples of such applications include manufacturing, construction, preliminary approximate land surveying, and aerial surveying. For some applications with rectangular symmetry, it is feasible and desirable to attach a target composed of black and white
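The pixel-to-workspace computation can be illustrated with a pinhole back-projection onto a plane of known depth; the focal length, principal point, and plane depth below are hypothetical illustration values (the actual system also needs the camera's calibrated pose in the workspace):

```python
# Minimal pinhole back-projection sketch: recover (x, y) of a target lying
# on a known plane z = Z in the camera frame, from its pixel coordinates.
def pixel_to_plane(u, v, fx, fy, cx, cy, Z):
    """Back-project pixel (u, v) through a pinhole camera with focal
    lengths (fx, fy) in pixels and principal point (cx, cy), onto the
    plane z = Z (same units as Z)."""
    x = (u - cx) / fx * Z
    y = (v - cy) / fy * Z
    return x, y, Z

# A target imaged at pixel (960, 540) by a camera with f = 1000 px,
# principal point (640, 480), lying on the plane Z = 2 m:
print(pixel_to_plane(960, 540, 1000, 1000, 640, 480, 2.0))  # (0.64, 0.12, 2.0)
```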

  15. Calibration of asynchronous smart phone cameras from moving objects

    NASA Astrophysics Data System (ADS)

    Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel

    2015-04-01

    Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application on a pair of low-cost portable cameras with different parameters that are found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.

  16. Reliable vision-guided grasping

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1992-01-01

    Automated assembly of truss structures in space requires vision-guided servoing for grasping a strut when its position and orientation are uncertain. This paper presents a methodology for efficient and robust vision-guided robot grasping alignment. The vision-guided grasping problem is related to vision-guided 'docking' problems. It differs from other hand-in-eye visual servoing problems, such as tracking, in that the distance from the target is a relevant servo parameter. The methodology described in this paper is a hierarchy of levels in which the vision/robot interface is decreasingly 'intelligent,' and increasingly fast. Speed is achieved primarily by information reduction. This reduction exploits the use of region-of-interest windows in the image plane and feature motion prediction. These reductions invariably require stringent assumptions about the image. Therefore, at a higher level, these assumptions are verified using slower, more reliable methods. This hierarchy provides for robust error recovery in that when a lower-level routine fails, the next-higher routine will be called and so on. A working system is described which visually aligns a robot to grasp a cylindrical strut. The system uses a single camera mounted on the end effector of a robot and requires only crude calibration parameters. The grasping procedure is fast and reliable, with a multi-level error recovery system.

  17. Object-based target templates guide attention during visual search.

    PubMed

    Berggren, Nick; Eimer, Martin

    2018-05-03

    During visual search, attention is believed to be controlled in a strictly feature-based fashion, without any guidance by object-based target representations. To challenge this received view, we measured electrophysiological markers of attentional selection (N2pc component) and working memory (sustained posterior contralateral negativity; SPCN) in search tasks where two possible targets were defined by feature conjunctions (e.g., blue circles and green squares). Critically, some search displays also contained nontargets with two target features (incorrect conjunction objects, e.g., blue squares). Because feature-based guidance cannot distinguish these objects from targets, any selective bias for targets will reflect object-based attentional control. In Experiment 1, where search displays always contained only one object with target-matching features, targets and incorrect conjunction objects elicited identical N2pc and SPCN components, demonstrating that attentional guidance was entirely feature-based. In Experiment 2, where targets and incorrect conjunction objects could appear in the same display, clear evidence for object-based attentional control was found. The target N2pc became larger than the N2pc to incorrect conjunction objects from 250 ms poststimulus, and only targets elicited SPCN components. This demonstrates that after an initial feature-based guidance phase, object-based templates are activated when they are required to distinguish target and nontarget objects. These templates modulate visual processing and control access to working memory, and their activation may coincide with the start of feature integration processes. Results also suggest that while multiple feature templates can be activated concurrently, only a single object-based target template can guide attention at any given time. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  18. Calibration Method for IATS and Application in Multi-Target Monitoring Using Coded Targets

    NASA Astrophysics Data System (ADS)

    Zhou, Yueyin; Wagner, Andreas; Wunderlich, Thomas; Wasmeier, Peter

    2017-06-01

    The technique of Image Assisted Total Stations (IATS) has been studied for over ten years and is composed of two major parts: one is the calibration procedure, which establishes the relationship between the camera system and the theodolite system; the other is the automatic target detection on the image by various methods of photogrammetry or computer vision. Several calibration methods have been developed, mostly using prototypes with an add-on camera rigidly mounted on the total station. However, these prototypes are not commercially available. This paper proposes a calibration method based on the Leica MS50, which has two built-in cameras, each with a resolution of 2560 × 1920 px: an overview camera and a telescope (on-axis) camera. Our work in this paper is based on the on-axis camera, which uses the 30-times magnification of the telescope. The calibration consists of 7 parameters to estimate. We use coded targets, which are common tools in photogrammetry for orientation, to detect different targets in IATS images instead of prisms and traditional ATR functions. We test and verify the efficiency and stability of this monitoring method with multiple targets.

  19. FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.

    PubMed

    Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu

    2017-07-18

    Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.

  20. A Daytime Aspect Camera for Balloon Altitudes

    NASA Technical Reports Server (NTRS)

    Dietz, Kurt L.; Ramsey, Brian D.; Alexander, Cheryl D.; Apple, Jeff A.; Ghosh, Kajal K.; Swift, Wesley R.; Six, N. Frank (Technical Monitor)

    2001-01-01

    We have designed, built, and flight-tested a new star camera for daytime guiding of pointed balloon-borne experiments at altitudes around 40 km. The camera and lens are commercially available, off-the-shelf components, but require a custom-built baffle to reduce stray light, especially near the sunlit limb of the balloon. This new camera, which operates in the 600-1000 nm region of the spectrum, successfully provided daytime aspect information of approximately 10 arcsecond resolution for two distinct star fields near the galactic plane. The detected scattered-light backgrounds show good agreement with the Air Force MODTRAN models, but the daytime stellar magnitude limit was lower than expected due to dispersion of red light by the lens. Replacing the commercial lens with a custom-built lens should allow the system to track stars in any arbitrary area of the sky during the daytime.

  1. Autonomous pedestrian localization technique using CMOS camera sensors

    NASA Astrophysics Data System (ADS)

    Chun, Chanwoo

    2014-09-01

    We present a pedestrian localization technique that does not need infrastructure. The proposed angle-only measurement method needs specially manufactured shoes. Each shoe has two CMOS cameras and two markers, such as LEDs, attached on the inward side. The line-of-sight (LOS) angles towards the two markers on the forward shoe are measured using the two cameras on the rear shoe. Our simulation results show that a pedestrian walking in a shopping mall wearing this device can be accurately guided to the front of a destination store located 100 m away, if the floor plan of the mall is available.
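The angle-only geometry can be illustrated in 2D: two cameras a known baseline apart each measure a bearing to the same marker, and the marker position follows from intersecting the two rays. This is a textbook triangulation sketch under that assumption, not the paper's exact formulation:

```python
import math

def triangulate(baseline, theta1, theta2):
    """2D angle-only fix: camera 1 at (0, 0) and camera 2 at (baseline, 0)
    each measure a bearing (radians, from the +x axis) to the same marker;
    return the intersection of the two rays."""
    t1, t2 = math.tan(theta1), math.tan(theta2)
    # Ray 1: y = t1*x; Ray 2: y = t2*(x - baseline). Solve for x.
    x = baseline * t2 / (t2 - t1)
    y = t1 * x
    return x, y

# Marker at (1, 1) seen from two cameras 0.3 m apart:
print(triangulate(0.3, math.atan2(1, 1), math.atan2(1, 0.7)))  # ~ (1.0, 1.0)
```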

  2. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    NASA Astrophysics Data System (ADS)

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application in our augmented reality visualization system for laparoscopic surgery.
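TRE as used here is the distance between registered and ground-truth target positions; a minimal sketch of the mean-TRE computation (the point values below are invented for illustration):

```python
import numpy as np

def mean_tre(registered, ground_truth):
    """Mean target registration error: average Euclidean distance between
    registered target positions and their ground-truth positions, in the
    same units as the inputs (e.g., mm)."""
    registered = np.asarray(registered, dtype=float)
    ground_truth = np.asarray(ground_truth, dtype=float)
    return np.linalg.norm(registered - ground_truth, axis=1).mean()

# Two illustrative targets, each registered 1 mm off along z:
reg = [[0.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
true = [[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]]
print(mean_tre(reg, true))  # 1.0
```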

  3. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization

    PubMed Central

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj

    2017-01-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application in our augmented reality visualization system for laparoscopic surgery. PMID:28943703

  4. Application of single-image camera calibration for ultrasound augmented laparoscopic visualization.

    PubMed

    Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D; Shekhar, Raj

    2015-03-01

    Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce a satisfactory result. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method shows promise for application in our augmented reality visualization system for laparoscopic surgery.

  5. Joint Calibration of 3d Laser Scanner and Digital Camera Based on Dlt Algorithm

    NASA Astrophysics Data System (ADS)

    Gao, X.; Li, M.; Xing, L.; Liu, Y.

    2018-04-01

    We designed a calibration target that can be scanned by a 3D laser scanner while being photographed by a digital camera, yielding a point cloud and photos of the same target. A method for jointly calibrating the 3D laser scanner and the digital camera based on the Direct Linear Transformation (DLT) algorithm is proposed. The method adds a digital-camera distortion model to the traditional DLT algorithm; after repeated iteration, it solves for the interior and exterior orientation elements of the camera as well as the joint calibration of the 3D laser scanner and the digital camera. Experiments show that the method is reliable.
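The distortion-free core of the DLT that the paper extends can be sketched in a few lines: stack two linear equations per 3D-to-2D correspondence and take the null-space vector of the system as the 3×4 projection matrix. This is the standard textbook DLT, not the paper's full iterative scheme with the distortion model:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Estimate the 3x4 projection matrix P (up to scale) from at least six
    non-coplanar 3D<->2D correspondences with the basic DLT (no distortion
    model; the paper's method adds distortion and iterates)."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    # The last right-singular vector of A holds the twelve entries of P.
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    return Vt[-1].reshape(3, 4)

# Demo: recover a known projection from exact synthetic correspondences.
P_true = np.array([[800.0, 0, 320, 10], [0, 800, 240, 20], [0, 0, 1, 2]])
world = [(0, 0, 1), (1, 0, 2), (0, 1, 3), (1, 1, 1), (2, 1, 2), (1, 2, 3)]
image = []
for X, Y, Z in world:
    x = P_true @ np.array([X, Y, Z, 1.0])
    image.append((x[0] / x[2], x[1] / x[2]))
P_est = dlt_projection_matrix(world, image)
print(np.round(P_est / P_est[2, 3] * P_true[2, 3], 3))  # ~ P_true
```

With exact correspondences the recovered matrix matches the true one up to the overall scale fixed here by normalizing the (2, 3) entry.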

  6. Inflight Calibration of the Lunar Reconnaissance Orbiter Camera Wide Angle Camera

    NASA Astrophysics Data System (ADS)

    Mahanti, P.; Humm, D. C.; Robinson, M. S.; Boyd, A. K.; Stelling, R.; Sato, H.; Denevi, B. W.; Braden, S. E.; Bowman-Cisneros, E.; Brylow, S. M.; Tschimmel, M.

    2016-04-01

    The Lunar Reconnaissance Orbiter Camera (LROC) Wide Angle Camera (WAC) has acquired more than 250,000 images of the illuminated lunar surface and over 190,000 observations of space and non-illuminated Moon since 1 January 2010. These images, along with images from the Narrow Angle Camera (NAC) and other Lunar Reconnaissance Orbiter instrument datasets are enabling new discoveries about the morphology, composition, and geologic/geochemical evolution of the Moon. Characterizing the inflight WAC system performance is crucial to scientific and exploration results. Pre-launch calibration of the WAC provided a baseline characterization that was critical for early targeting and analysis. Here we present an analysis of WAC performance from the inflight data. In the course of our analysis we compare and contrast with the pre-launch performance wherever possible and quantify the uncertainty related to various components of the calibration process. We document the absolute and relative radiometric calibration, point spread function, and scattered light sources and provide estimates of sources of uncertainty for spectral reflectance measurements of the Moon across a range of imaging conditions.

  7. Attitude identification for SCOLE using two infrared cameras

    NASA Technical Reports Server (NTRS)

    Shenhar, Joram

    1991-01-01

    An algorithm is presented that incorporates real time data from two infrared cameras and computes the attitude parameters of the Spacecraft COntrol Lab Experiment (SCOLE), a lab apparatus representing an offset feed antenna attached to the Space Shuttle by a flexible mast. The algorithm uses camera position data of three miniature light emitting diodes (LEDs), mounted on the SCOLE platform, permitting arbitrary camera placement and on-line attitude extraction. The continuous nature of the algorithm allows identification of the placement of the two cameras with respect to some initial position of the three reference LEDs, followed by on-line six degrees of freedom attitude tracking, regardless of the attitude time history. The algorithm is described in both its camera identification mode and its target tracking mode. Experimental data from a reduced size SCOLE-like lab model, reflecting the performance of the camera identification and the tracking processes, are presented. Computer code for camera placement identification and SCOLE attitude tracking is listed.

  8. Fuzzy logic control for camera tracking system

    NASA Technical Reports Server (NTRS)

    Lea, Robert N.; Fritz, R. H.; Giarratano, J.; Jani, Yashvant

    1992-01-01

    A concept utilizing fuzzy theory has been developed for a camera tracking system to provide support for proximity operations and traffic management around the Space Station Freedom. Fuzzy sets and fuzzy logic based reasoning are used in a control system which utilizes images from a camera and generates required pan and tilt commands to track and maintain a moving target in the camera's field of view. This control system can be implemented on a fuzzy chip to provide an intelligent sensor for autonomous operations. Capabilities of the control system can be expanded to include approach, handover to other sensors, caution and warning messages.
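A minimal sketch of the kind of fuzzy pan control described above, with triangular membership functions over the target's pixel error and weighted-average defuzzification; the membership breakpoints and rule consequents are invented for illustration, not the actual system's rule base:

```python
def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def pan_command(pixel_error):
    """Map the target's horizontal pixel error to a pan rate (deg/s) using
    three fuzzy rules and weighted-average (centroid) defuzzification."""
    memberships = {
        "left":   tri(pixel_error, -200, -100, 0),
        "center": tri(pixel_error, -100, 0, 100),
        "right":  tri(pixel_error, 0, 100, 200),
    }
    consequents = {"left": -5.0, "center": 0.0, "right": 5.0}  # deg/s
    num = sum(memberships[k] * consequents[k] for k in memberships)
    den = sum(memberships.values())
    return num / den if den else 0.0

print(pan_command(50))  # halfway between "center" and "right" -> 2.5
```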

  9. Application of infrared uncooled cameras in surveillance systems

    NASA Astrophysics Data System (ADS)

    Dulski, R.; Bareła, J.; Trzaskawka, P.; Piątkowski, T.

    2013-10-01

    The recent need to protect military bases, convoys, and patrols has given serious impetus to the development of multisensor security systems for perimeter protection. Among the most important devices used in such systems are IR cameras. The paper discusses technical possibilities and limitations of using an uncooled IR camera in a multi-sensor surveillance system for perimeter protection. Effective ranges of detection depend on the class of the sensor used and the observed scene itself. Application of an IR camera increases the probability of intruder detection regardless of the time of day or weather conditions. It also decreases the false alarm rate produced by the surveillance system. The role of IR cameras in the system is discussed, as well as the technical possibilities for detecting a human being. A comparison of commercially available IR cameras capable of achieving the desired ranges was made. The required spatial resolution for detection, recognition and identification was calculated. The simulation of detection ranges was done using a new model for predicting target acquisition performance which uses the Targeting Task Performance (TTP) metric. Like its predecessor, the Johnson criteria, the new model bounds the range performance with image quality. The scope of the presented analysis is limited to the estimation of detection, recognition and identification ranges for typical thermal cameras with uncooled microbolometer focal plane arrays. This type of camera is most widely used in security systems because of its competitive price-to-performance ratio. Detection, recognition and identification range calculations were made, and the results for the devices with selected technical specifications were compared and discussed.
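The Johnson-criteria side of such range estimates reduces to counting resolvable cycles across the target; a back-of-envelope sketch (the TTP metric used in the paper additionally weights image quality, which this ignores, and the IFOV, target size, and N50 values are illustrative):

```python
def johnson_range(target_size_m, ifov_mrad, n50_cycles):
    """Range (m) at which `n50_cycles` resolvable cycles fit across a target
    of the given critical dimension, for a detector-limited sensor with the
    given instantaneous field of view. One cycle ~ two pixels."""
    ifov_rad = ifov_mrad * 1e-3
    return target_size_m / (2 * ifov_rad * n50_cycles)

# Human critical dimension ~0.75 m, uncooled-camera IFOV ~0.5 mrad,
# Johnson detection criterion N50 ~ 1 cycle:
print(round(johnson_range(0.75, 0.5, 1.0)))  # 750 m
```

Recognition and identification use larger N50 values (classically ~4 and ~8 cycles), which shrink the range proportionally.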

  10. Active 3D camera design for target capture on Mars orbit

    NASA Astrophysics Data System (ADS)

    Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno

    2010-04-01

    During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims to provide the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field of view without moving parts (scannerless). The concept exploits the sensitivity and gating capability of a gated intensified camera, supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by precisely controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate different levels of ranging and bearing accuracy and of 3D data refresh rate. To produce a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed, with an emphasis on the extreme lighting conditions. Its use for other space missions and terrestrial applications is also highlighted. The design is implemented in a prototype with shorter ranging capabilities for concept validation, and preliminary results obtained with this prototype are presented. This work was financed by the Canadian Space Agency.
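
    The timing-based ranging principle can be sketched as follows: with the intensifier gated open a delay t after the laser pulse, returns are accepted only from ranges near R = c·t/2, so stepping the gate timing sweeps a depth slice through the scene. The gate timings below are illustrative, not mission values.

```python
C = 299_792_458.0  # speed of light [m/s]

def gate_range(delay_s):
    """Convert a round-trip delay into a one-way range."""
    return C * delay_s / 2.0

def range_slice(gate_open_s, gate_close_s):
    """Near/far bounds of the depth slice imaged by one pulse/gate timing."""
    return gate_range(gate_open_s), gate_range(gate_close_s)

# A gate held open from 13.3 ns to 33.3 ns after pulse emission
# images returns from roughly 2 m to 5 m:
near, far = range_slice(13.3e-9, 33.3e-9)
```

    Finer range resolution within a slice is then recovered by processing several 2D images taken at shifted timings, which is why each acquisition mode needs a different number of frames per 3D image.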

  11. RNA-guided genome editing for target gene mutations in wheat.

    PubMed

    Upadhyay, Santosh Kumar; Kumar, Jitesh; Alok, Anshu; Tuli, Rakesh

    2013-12-09

    The clustered, regularly interspaced, short palindromic repeats (CRISPR) and CRISPR-associated protein (Cas) system has been used as an efficient tool for genome editing. We report the application of CRISPR-Cas-mediated genome editing to wheat (Triticum aestivum), the most important food crop plant, with a very large and complex genome. Mutations were targeted in the inositol oxygenase (inox) and phytoene desaturase (pds) genes using cell suspension cultures of wheat, and in the pds gene in leaves of Nicotiana benthamiana. The expression of chimeric guide RNAs (cgRNA) targeting single and multiple sites resulted in indel mutations in all the tested samples. The expression of Cas9 or sgRNA alone did not cause any mutation. The expression of duplex cgRNA with Cas9 targeting two sites in the same gene resulted in deletion of the DNA fragment between the targeted sequences. Multiplexing the cgRNA could target two genes at a time. Target specificity analysis of cgRNA showed that mismatches at the 3' end of the target site abolished cleavage activity completely, whereas mismatches at the 5' end reduced cleavage, suggesting that off-target effects can be abolished in vivo by selecting target sites with unique sequences at the 3' end. This approach provides a powerful method for genome engineering in plants.

  12. saRNA-guided Ago2 targets the RITA complex to promoters to stimulate transcription.

    PubMed

    Portnoy, Victoria; Lin, Szu Hua Sharon; Li, Kathy H; Burlingame, Alma; Hu, Zheng-Hui; Li, Hao; Li, Long-Cheng

    2016-03-01

    Small activating RNAs (saRNAs) targeting specific promoter regions are able to stimulate gene expression at the transcriptional level, a phenomenon known as RNA activation (RNAa). It is known that RNAa depends on Ago2 and is associated with epigenetic changes at the target promoters. However, the precise molecular mechanism of RNAa remains elusive. Using human CDKN1A (p21) as a model gene, we characterized the molecular nature of RNAa. We show that saRNAs guide Ago2 to and associate with target promoters. saRNA-loaded Ago2 facilitates the assembly of an RNA-induced transcriptional activation (RITA) complex, which, in addition to saRNA-Ago2 complex, includes RHA and CTR9, the latter being a component of the PAF1 complex. RITA interacts with RNA polymerase II to stimulate transcription initiation and productive elongation, accompanied by monoubiquitination of histone 2B. Our results establish the existence of a cellular RNA-guided genome-targeting and transcriptional activation mechanism and provide important new mechanistic insights into the RNAa process.

  13. Soft x-ray streak camera for laser fusion applications

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stradling, G.L.

    This thesis reviews the development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development. A brief introduction to laser fusion and laser fusion diagnostics is presented. The need for a soft x-ray streak camera as a laser fusion diagnostic is shown. Basic x-ray streak camera characteristics, design, and operation are reviewed. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained, and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  14. Target coverage in image-guided stereotactic body radiotherapy of liver tumors.

    PubMed

    Wunderink, Wouter; Méndez Romero, Alejandra; Vásquez Osorio, Eliana M; de Boer, Hans C J; Brandwijk, René P; Levendag, Peter C; Heijmen, Ben J M

    2007-05-01

    To determine the effect of image-guided procedures (with computed tomography [CT] and electronic portal images before each treatment fraction) on target coverage in stereotactic body radiotherapy of liver patients using a stereotactic body frame (SBF) and abdominal compression. CT guidance was used to correct for day-to-day variations in the tumor's mean position in the SBF. By retrospectively evaluating 57 treatment sessions, tumor coverage, as obtained with the clinically applied CT-guided protocol, was compared with that of alternative procedures. The internal target volume-plus (ITV+) was introduced to explicitly include uncertainties in tumor delineation resulting from CT-imaging artifacts caused by residual respiratory motion. Tumor coverage was defined as the volume overlap of the ITV+, derived from a tumor delineated in a treatment CT scan, and the planning target volume. Patient stability in the SBF, after acquisition of the treatment CT scan, was evaluated by measuring the displacement of the bony anatomy in the electronic portal images relative to CT. Application of our clinical protocol (with setup corrections derived from manual measurements of the distances between the contours of the planning target volume and the daily clinical target volume in three orthogonal planes; a multiple two-dimensional method) increased the frequency of nearly full (≥99%) ITV+ coverage to 77%, compared with 63% without setup correction. An automated three-dimensional method further improved this frequency to 96%. Patient displacements in the SBF were generally small (≤2 mm, 1 standard deviation), but large craniocaudal displacements (maximally 7.2 mm) were occasionally observed. Daily CT-assisted patient setup may substantially improve tumor coverage, especially with the automated three-dimensional procedure. In the present treatment design, patient stability in the SBF should be verified with portal imaging.
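
    The coverage figure of merit used above (volume overlap of the ITV+ with the planning target volume, expressed as a fraction of the ITV+) can be sketched on boolean voxel masks. The spherical masks below are synthetic stand-ins for clinical contours, not patient data:

```python
import numpy as np

def coverage(itv_mask, ptv_mask):
    """Fraction of the ITV+ volume covered by the PTV (voxel counting)."""
    itv = itv_mask.astype(bool)
    return (itv & ptv_mask.astype(bool)).sum() / itv.sum()

# Synthetic geometry: tumor-plus-uncertainty sphere (radius 10 voxels)
# inside a planning volume (radius 12 voxels) shifted by a 2-voxel
# setup error along x.
z, y, x = np.ogrid[-20:21, -20:21, -20:21]
itv = x**2 + y**2 + z**2 <= 10**2
ptv = (x - 2)**2 + y**2 + z**2 <= 12**2
c = coverage(itv, ptv)   # fully covered for this margin and shift
```

    With a larger simulated setup error the coverage fraction drops below 1, which is the quantity the CT-guided setup corrections are meant to restore.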

  15. CasA mediates Cas3-catalyzed target degradation during CRISPR RNA-guided interference.

    PubMed

    Hochstrasser, Megan L; Taylor, David W; Bhat, Prashant; Guegler, Chantal K; Sternberg, Samuel H; Nogales, Eva; Doudna, Jennifer A

    2014-05-06

    In bacteria, the clustered regularly interspaced short palindromic repeats (CRISPR)-associated (Cas) DNA-targeting complex Cascade (CRISPR-associated complex for antiviral defense) uses CRISPR RNA (crRNA) guides to bind complementary DNA targets at sites adjacent to a trinucleotide signature sequence called the protospacer adjacent motif (PAM). The Cascade complex then recruits Cas3, a nuclease-helicase that catalyzes unwinding and cleavage of foreign double-stranded DNA (dsDNA) bearing a sequence matching that of the crRNA. Cascade comprises the CasA-E proteins and one crRNA, forming a structure that binds and unwinds dsDNA to form an R loop in which the target strand of the DNA base pairs with the 32-nt RNA guide sequence. Single-particle electron microscopy reconstructions of dsDNA-bound Cascade with and without Cas3 reveal that Cascade positions the PAM-proximal end of the DNA duplex at the CasA subunit and near the site of Cas3 association. The finding that the DNA target and Cas3 colocalize with CasA implicates this subunit in a key target-validation step during DNA interference. We show biochemically that base pairing of the PAM region is unnecessary for target binding but critical for Cas3-mediated degradation. In addition, the L1 loop of CasA, previously implicated in PAM recognition, is essential for Cas3 activation following target binding by Cascade. Together, these data show that the CasA subunit of Cascade functions as an essential partner of Cas3 by recognizing DNA target sites and positioning Cas3 adjacent to the PAM to ensure cleavage.

  16. Line following using a two camera guidance system for a mobile robot

    NASA Astrophysics Data System (ADS)

    Samu, Tayib; Kelkar, Nikhal; Perdue, David; Ruthemeyer, Michael A.; Matthews, Bradley O.; Hall, Ernest L.

    1996-10-01

    Automated unmanned guided vehicles have many potential applications in manufacturing, medicine, space, and defense. A mobile robot was designed for the 1996 Automated Unmanned Vehicle Society competition, held in Orlando, Florida on July 15, 1996. The competition required the vehicle to follow solid and dashed lines around an approximately 800 ft path while avoiding obstacles, overcoming terrain changes such as inclines and sand traps, and attempting to maximize speed. The purpose of this paper is to describe the algorithm developed for line following. The algorithm images two windows, locates their centroids, and, with the knowledge that the points lie on the ground plane, establishes a mathematical and geometrical relationship between the image coordinates of the points and their corresponding ground coordinates. The angle of the line and its minimum distance from the robot centroid are then calculated and used in the steering control. Two cameras are mounted on the robot, one on each side. One camera guides the robot, and when it loses track of the line on its side, the robot control system automatically switches to the other camera. The test bed system has provided an educational experience for all involved and permits understanding and extending the state of the art in autonomous vehicle design.
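
    The flat-ground back-projection and line-state computation described above can be sketched as follows. The pinhole-camera parameters and the slant-range lateral-offset approximation are assumptions for illustration, not the authors' exact geometry:

```python
import math

def image_to_ground(u, v, f_px, cx, cy, cam_h, tilt_rad):
    """Back-project pixel (u, v) onto the flat ground plane.

    Assumes a pinhole camera at height cam_h [m], tilted down by
    tilt_rad from horizontal, with focal length f_px in pixels.
    """
    # Ray angle below horizontal for this image row.
    pitch = tilt_rad + math.atan2(v - cy, f_px)
    y = cam_h / math.tan(pitch)                  # forward distance [m]
    x = (u - cx) * math.hypot(cam_h, y) / f_px   # lateral offset (approx.)
    return x, y

def line_state(p1, p2):
    """Heading angle and perpendicular distance of the line from the robot.

    p1, p2 are the two window centroids in ground coordinates
    (x lateral, y forward); the robot sits at the origin.
    """
    (x1, y1), (x2, y2) = p1, p2
    angle = math.atan2(x2 - x1, y2 - y1)   # relative to the forward axis
    # Perpendicular distance of the origin from the line through p1, p2.
    dist = abs(x1 * y2 - x2 * y1) / math.hypot(x2 - x1, y2 - y1)
    return angle, dist
```

    The steering controller then drives both the heading angle and the lateral distance toward their setpoints.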

  17. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera.

    PubMed

    Miller, Brian W; Frost, Sofia H L; Frayo, Shani L; Kenoyer, Aimee L; Santos, Erlinda; Jones, Jon C; Green, Damian J; Hamlin, Donald K; Wilbur, D Scott; Fisher, Darrell R; Orozco, Johnnie J; Press, Oliver W; Pagel, John M; Sandmaier, Brenda M

    2015-07-01

    Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50-80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolution and background count rates at various detector configurations, and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 (²¹¹At) activity distributions in cryosections of murine and canine tissue samples. The highest spatial resolution was measured at ∼20 μm full width at half maximum, and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10⁻⁴ cpm/cm² (40 mm diameter detector area). Simultaneous imaging of multiple tissue sections was

  18. IR Camera Report for the 7 Day Production Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Holloway, Michael Andrew

    2016-02-22

    The following report gives a summary of the IR camera performance results and data for the 7 day production run that occurred from 10 Sep 2015 thru 16 Sep 2015. During this production run our goal was to see how well the camera performed its task of monitoring the target window temperature with our improved alignment procedure and emissivity measurements. We also wanted to see if the increased shielding would be effective in protecting the camera from damage and failure.

  19. Conceptual design of a neutron camera for MAST Upgrade

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Weiszflog, M., E-mail: matthias.weiszflog@physics.uu.se; Sangaroon, S.; Cecconello, M.

    2014-11-15

    This paper presents two different conceptual designs of neutron cameras for Mega Ampere Spherical Tokamak (MAST) Upgrade. The first consists of two horizontal cameras, one equatorial and one vertically down-shifted by 65 cm. The second design, viewing the plasma in a poloidal section, also consists of two cameras, one radial and the other with a diagonal view. Design parameters for the different cameras were selected on the basis of neutron transport calculations and a set of target measurement requirements, taking into account the predicted neutron emissivities in the different MAST Upgrade operating scenarios. Based on a comparison of the cameras' profile resolving power, the horizontal cameras are suggested as the best option.

  20. Tumor acidity-activatable TAT targeted nanomedicine for enlarged fluorescence/magnetic resonance imaging-guided photodynamic therapy.

    PubMed

    Gao, Meng; Fan, Feng; Li, Dongdong; Yu, Yue; Mao, Kuirong; Sun, Tianmeng; Qian, Haisheng; Tao, Wei; Yang, Xianzhu

    2017-07-01

    Nanoparticles that simultaneously integrate photosensitizers and diagnostic agents represent an emerging approach to imaging-guided photodynamic therapy (PDT). However, the diagnostic sensitivity and therapeutic efficacy of nanoparticles, as well as the heterogeneity of tumors, pose tremendous challenges for clinical imaging-guided PDT treatment. Herein, a polymeric nanoparticle with a tumor acidity (pHe)-activatable TAT targeting ligand that encapsulates the photosensitizer chlorin e6 (Ce6) and chelates the contrast agent Gd³⁺ is successfully developed for fluorescence/magnetic resonance (MR) dual-modal imaging-guided precision PDT. We show clear evidence that the resulting nanoparticle DA TAT-NP [the amines of its TAT lysine residues were modified with 2,3-dimethylmaleic anhydride (DA)] efficiently avoids rapid clearance by the reticuloendothelial system (RES) through masking of the TAT peptide, resulting in significantly prolonged circulation time in the blood. Once accumulated in tumor tissue, DA TAT-NP is reactivated by tumor acidity to promote cellular uptake, resulting in enlarged fluorescence/MR imaging signal intensity and an elevated in vivo PDT therapeutic effect. This concept provides new avenues for designing tumor acidity-activatable targeted nanoparticles for imaging-guided cancer therapy. Copyright © 2017 Elsevier Ltd. All rights reserved.

  1. Printed circuit board for a CCD camera head

    DOEpatents

    Conder, Alan D.

    2002-01-01

    A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. It uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, and for a variety of military, industrial, and medical imaging applications.

  2. Moving Object Detection on a Vehicle Mounted Back-Up Camera

    PubMed Central

    Kim, Dong-Sun; Kwon, Jinsan

    2015-01-01

    In the detection of moving objects from vision sources, one usually assumes that the scene has been captured by stationary cameras. When backing up a vehicle, however, the camera mounted on the vehicle moves with the vehicle, producing ego-motion in the background. This results in mixed motion in the scene and makes it difficult to distinguish between target objects and background motion. Without further treatment of the mixed motion, traditional fixed-viewpoint object detection methods will produce many false-positive detections. In this paper, we suggest a procedure to be used with traditional moving object detection methods that relaxes the stationary-camera restriction by introducing additional steps before and after detection. We also describe an implementation of the algorithm on an FPGA platform. The target application is a road vehicle's rear-view camera system. PMID:26712761

  3. Development of CCD Cameras for Soft X-ray Imaging at the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Teruya, A. T.; Palmer, N. E.; Schneider, M. B.

    2013-09-01

    The Static X-Ray Imager (SXI) is a National Ignition Facility (NIF) diagnostic that uses a CCD camera to record time-integrated X-ray images of target features such as the laser entrance hole of hohlraums. SXI has two dedicated positioners on the NIF target chamber for viewing the target from above and below, and the X-ray energies of interest are 870 eV for the “soft” channel and 3-5 keV for the “hard” channels. The original cameras utilize a large-format back-illuminated 2048 x 2048 CCD sensor with 24 micron pixels. Since the original sensor is no longer available, an effort was recently undertaken to build replacement cameras with suitable new sensors. Three of the new cameras use a commercially available front-illuminated CCD of similar size to the original, which has adequate sensitivity for the hard X-ray channels but not for the soft. For sensitivity below 1 keV, Lawrence Livermore National Laboratory (LLNL) had additional CCDs back-thinned and converted to back-illumination for use in the other two new cameras. In this paper we describe the characteristics of the new cameras and present performance data (quantum efficiency, flat field, and dynamic range) for the front- and back-illuminated cameras, with comparisons to the original cameras.

  4. The Zwicky Transient Facility Camera

    NASA Astrophysics Data System (ADS)

    Dekany, Richard; Smith, Roger M.; Belicki, Justin; Delacroix, Alexandre; Duggan, Gina; Feeney, Michael; Hale, David; Kaye, Stephen; Milburn, Jennifer; Murphy, Patrick; Porter, Michael; Reiley, Daniel J.; Riddle, Reed L.; Rodriguez, Hector; Bellm, Eric C.

    2016-08-01

    The Zwicky Transient Facility Camera (ZTFC) is a key element of the ZTF Observing System, the integrated system of optoelectromechanical instrumentation tasked to acquire the wide-field, high-cadence time-domain astronomical data at the heart of the Zwicky Transient Facility. The ZTFC consists of a compact cryostat with a large vacuum window protecting a mosaic of 16 large, wafer-scale science CCDs and 4 smaller guide/focus CCDs, a sophisticated vacuum interface board which carries data as electrical signals out of the cryostat, an electromechanical window frame for securing externally inserted optical filter selections, and associated cryo-thermal/vacuum system support elements. The ZTFC provides an instantaneous 47 deg² field of view, limited by primary mirror vignetting in its Schmidt telescope prime focus configuration. We report here on the design and performance of the ZTF CCD camera cryostat and report results from extensive Joule-Thomson cryocooler tests that may be of broad interest to the instrumentation community.

  5. Technology development: Future use of NASA's large format camera is uncertain

    NASA Astrophysics Data System (ADS)

    Rey, Charles F.; Fliegel, Ilene H.; Rohner, Karl A.

    1990-06-01

    The Large Format Camera, developed as a project to verify an engineering concept or design, has been flown only once, in 1984, on the shuttle Challenger. Since this flight, the camera has been in storage. NASA had expected that, following the camera's successful demonstration, other government agencies or private companies with special interests in photographic applications would absorb the costs for further flights using the Large Format Camera. But, because shuttle transportation costs for the Large Format Camera were estimated at approximately $20 million (in 1987 dollars) per flight and the market for selling Large Format Camera products was limited, NASA was not successful in interesting other agencies or private companies in paying the costs. Using the camera on the space station does not appear to be a realistic alternative. Using the camera aboard NASA's Earth Resources Research (ER-2) aircraft may be feasible. Until the final disposition of the camera is decided, NASA has taken actions to protect it from environmental deterioration. The General Accounting Office (GAO) recommends that the NASA Administrator should consider, first, using the camera on an aircraft such as the ER-2. NASA plans to solicit the private sector for expressions of interest in such use of the camera, at no cost to the government, and will be guided by the private sector response. Second, GAO recommends that if aircraft use is determined to be infeasible, NASA should consider transferring the camera to a museum, such as the National Air and Space Museum.

  6. Highly efficient targeted mutagenesis in axolotl using Cas9 RNA-guided nuclease

    PubMed Central

    Flowers, G. Parker; Timberlake, Andrew T.; Mclean, Kaitlin C.; Monaghan, James R.; Crews, Craig M.

    2014-01-01

    Among tetrapods, only urodele salamanders, such as the axolotl Ambystoma mexicanum, can completely regenerate limbs as adults. The mystery of why salamanders, but not other animals, possess this ability has for generations captivated scientists seeking to induce this phenomenon in other vertebrates. Although many recent advances in molecular biology have allowed limb regeneration and tissue repair in the axolotl to be investigated in increasing detail, the molecular toolkit for the study of this process has been limited. Here, we report that the CRISPR-Cas9 RNA-guided nuclease system can efficiently create mutations at targeted sites within the axolotl genome. We identify individual animals treated with RNA-guided nucleases that have mutation frequencies close to 100% at targeted sites. We employ this technique to completely functionally ablate EGFP expression in transgenic animals and recapitulate developmental phenotypes produced by loss of the conserved gene brachyury. Thus, this advance allows a reverse genetic approach in the axolotl and will undoubtedly provide invaluable insight into the mechanisms of salamanders' unique regenerative ability. PMID:24764077

  7. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process by which the NAO robot positions itself and moves to a target red ball using its camera system is analyzed and improved with a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested. Since the error of the current NAO robot is not governed by a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Two USB cameras were then used in experiments that applied the Hough circle method to identify the ball, with the HSV color space model used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of the ball-tracking error from 0.68 to 0.20.
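
    The dual camera ranging idea rests on stereo triangulation: with a known baseline between the two cameras, depth follows from the disparity of the ball centroid between the two images, Z = f·B/d. The focal length and baseline below are illustrative, not the paper's calibration values:

```python
def stereo_depth(f_px, baseline_m, u_left, u_right):
    """Depth [m] from the horizontal disparity of the same target
    seen in a rectified left/right camera pair."""
    disparity = u_left - u_right   # pixels; positive for a target in front
    if disparity <= 0:
        raise ValueError("target must have positive disparity")
    return f_px * baseline_m / disparity

# A ball whose centroid falls at u=380 px (left image) and u=340 px
# (right image), with an assumed 600 px focal length and 10 cm baseline:
z = stereo_depth(600.0, 0.10, 380.0, 340.0)
```

    Because disparity is measured rather than inferred from apparent size, the estimate is less sensitive to segmentation noise than single-camera ranging, consistent with the variance reduction reported above.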

  8. Motion compensation for MRI-compatible patient-mounted needle guide device: estimation of targeting accuracy in MRI-guided kidney cryoablations

    NASA Astrophysics Data System (ADS)

    Tokuda, Junichi; Chauvin, Laurent; Ninni, Brian; Kato, Takahisa; King, Franklin; Tuncali, Kemal; Hata, Nobuhiko

    2018-04-01

    Patient-mounted needle guide devices for percutaneous ablation are vulnerable to patient motion. The objective of this study is to develop and evaluate a software system for an MRI-compatible patient-mounted needle guide device that can adaptively compensate for displacement of the device due to patient motion using a novel image-based automatic device-to-image registration technique. We have developed a software system for an MRI-compatible patient-mounted needle guide device for percutaneous ablation. It features fully automated image-based device-to-image registration to track the device position, and a device controller to adjust the needle trajectory to compensate for displacement of the device. We performed: (a) a phantom study using a clinical MR scanner to evaluate registration performance; (b) simulations using intraoperative time-series MR data acquired in 20 clinical cases of MRI-guided renal cryoablations to assess its impact on motion compensation; and (c) a pilot clinical study in three patients to test its feasibility during the clinical procedure. FRE, TRE, and success rate of device-to-image registration were mm, mm, and 98.3% for the phantom images. The simulation study showed that the motion compensation reduced the targeting error for needle placement from 8.2 mm to 5.4 mm (p < 0.0005) in patients under general anesthesia (GA), and from 14.4 mm to 10.0 mm in patients under monitored anesthesia care (MAC). The pilot study showed that the software registered the device successfully in a clinical setting. Our simulation study demonstrated that the software system could significantly improve targeting accuracy in patients treated under both MAC and GA. Intraprocedural image-based device-to-image registration was feasible.

  9. An electrically tunable plenoptic camera using a liquid crystal microlens array

    NASA Astrophysics Data System (ADS)

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  10. An electrically tunable plenoptic camera using a liquid crystal microlens array.

    PubMed

    Lei, Yu; Tong, Qing; Zhang, Xinyu; Sang, Hongshi; Ji, An; Xie, Changsheng

    2015-05-01

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited so as to restrict their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and then, its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototyped LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently through electrically tuning the LCMLA used, which is equivalent to the extension of the DOF.

  11. Soft X-ray streak camera for laser fusion applications

    NASA Astrophysics Data System (ADS)

    Stradling, G. L.

    1981-04-01

    The development and significance of the soft x-ray streak camera (SXRSC) in the context of inertial confinement fusion energy development is reviewed as well as laser fusion and laser fusion diagnostics. The SXRSC design criteria, the requirement for a subkilovolt x-ray transmitting window, and the resulting camera design are explained. Theory and design of reflector-filter pair combinations for three subkilovolt channels centered at 220 eV, 460 eV, and 620 eV are also presented. Calibration experiments are explained and data showing a dynamic range of 1000 and a sweep speed of 134 psec/mm are presented. Sensitivity modifications to the soft x-ray streak camera for a high-power target shot are described. A preliminary investigation, using a stepped cathode, of the thickness dependence of the gold photocathode response is discussed. Data from a typical Argus laser gold-disk target experiment are shown.

  12. X-ray imaging using digital cameras

    NASA Astrophysics Data System (ADS)

    Winch, Nicola M.; Edgar, Andrew

    2012-03-01

    The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by scattering of the light emitted from the storage phosphor rather than by the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio at medical doses, and the restricted range of plate sizes. Representative images taken at medical doses are shown and illustrate the potential use for portable basic radiography.

  13. Absolute colorimetric characterization of a DSLR camera

    NASA Astrophysics Data System (ADS)

    Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo

    2014-03-01

    A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, devoted respectively to absolute luminance estimation and to estimation of the colorimetric characterization matrix. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimate of the XYZ data in cd/m2. The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectroradiometer.
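
    The matrix-estimation module can be illustrated with a short least-squares sketch. This is not the authors' implementation: it assumes a linear camera response, and the patch values and ground-truth matrix below are invented solely to synthesize consistent "measurements".

```python
import numpy as np

# Hypothetical training data: linear camera RGB for a few target patches.
rgb = np.array([
    [0.90, 0.10, 0.05],
    [0.12, 0.85, 0.08],
    [0.05, 0.15, 0.88],
    [0.50, 0.50, 0.50],
    [0.75, 0.60, 0.20],
    [0.20, 0.40, 0.70],
])

# Ground-truth matrix, used here only to synthesize the XYZ values a
# reference spectroradiometer would report for each patch.
M_true = np.array([
    [0.49, 0.31, 0.20],
    [0.18, 0.81, 0.01],
    [0.00, 0.01, 0.99],
])
xyz_measured = rgb @ M_true

def fit_characterization_matrix(rgb, xyz):
    """Least-squares 3x3 matrix M such that rgb @ M approximates xyz."""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M

M = fit_characterization_matrix(rgb, xyz_measured)
```

    With noiseless synthetic data the fit recovers the generating matrix exactly; with real measurements the residual indicates how well a single linear matrix models the camera.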

  14. NEUTRON RADIATION DAMAGE IN CCD CAMERAS AT JOINT EUROPEAN TORUS (JET).

    PubMed

    Milocco, Alberto; Conroy, Sean; Popovichev, Sergey; Sergienko, Gennady; Huber, Alexander

    2017-10-26

    The neutron and gamma radiation in large fusion reactors is responsible for damage to charge-coupled device (CCD) cameras deployed for applied diagnostics. Based on the ASTM guide E722-09, the 'equivalent 1 MeV neutron fluence in silicon' was calculated for a set of CCD cameras at the Joint European Torus. Such evaluations would be useful to good practice in the operation of the video systems. © The Author 2017. Published by Oxford University Press. All rights reserved.

  15. Dual-mode ultrasound arrays for image-guided targeting of atheromatous plaques

    NASA Astrophysics Data System (ADS)

    Ballard, John R.; Casper, Andrew J.; Liu, Dalong; Haritonova, Alyona; Shehata, Islam A.; Troutman, Mitchell; Ebbini, Emad S.

    2012-11-01

    A feasibility study was undertaken to investigate alternative noninvasive treatment options for atherosclerosis. In particular, the aim of this study was to investigate the potential use of Dual-Mode Ultrasound Arrays (DMUAs) for image-guided treatment of atheromatous plaques. DMUAs offer a unique treatment paradigm for image-guided surgery, allowing robust image-based identification of tissue targets for localized application of high-intensity focused ultrasound (HIFU). In this study we present imaging and therapeutic results from a 3.5 MHz, 64-element fenestrated prototype DMUA for targeting lesions in the femoral artery of familial hypercholesterolemic (FH) swine. Before treatment, diagnostic ultrasound was used to verify the presence of plaque in the femoral artery of the swine. Images obtained with the DMUA and a diagnostic (HST 15-8) transducer housed in the fenestration were analyzed and used for guidance in targeting the plaque. Discrete therapeutic shots with an estimated focal intensity of 4000-5600 W/cm2 and 500-2000 msec duration were performed at several planes in the plaque. During therapy, pulsed HIFU was interleaved with single-transmit-focus imaging from the DMUA and M2D imaging from the diagnostic transducer for further analysis of lesion formation. After therapy, the swine were recovered and sacrificed 4 and 7 days later for histological analysis of lesion formation. At sacrifice, the lower half of the swine was perfused, and the femoral artery with adjoining muscle was fixed and stained with H&E to characterize HIFU-induced lesions. Histology confirmed that localized thermal lesion formation within the plaque was achieved according to the planned lesion maps. Furthermore, the damage was confined to the plaque tissue, without damage to the intima. These results offer the promise of a new treatment potentially suited for vulnerable plaques. The results also provide the first real-time demonstration of DMUA technology in targeting fine tissue structures for

  16. A Reconfigurable Real-Time Compressive-Sampling Camera for Biological Applications

    PubMed Central

    Fu, Bo; Pitter, Mark C.; Russell, Noah A.

    2011-01-01

    Many applications in biology, such as long-term functional imaging of neural and cardiac systems, require continuous high-speed imaging. This is typically not possible, however, using commercially available systems. The frame rate and the recording time of high-speed cameras are limited by the digitization rate and the capacity of on-camera memory. Further restrictions are often imposed by the limited bandwidth of the data link to the host computer. Even if the system bandwidth is not a limiting factor, continuous high-speed acquisition results in very large volumes of data that are difficult to handle, particularly when real-time analysis is required. In response to this issue, many cameras allow a predetermined, rectangular region of interest (ROI) to be sampled; however, this approach lacks flexibility and is blind to the image region outside of the ROI. We have addressed this problem by building a camera system using a randomly-addressable CMOS sensor. The camera has a low bandwidth, but is able to capture continuous high-speed images of an arbitrarily defined ROI, using most of the available bandwidth, while simultaneously acquiring low-speed, full frame images using the remaining bandwidth. In addition, the camera is able to use the full-frame information to recalculate the positions of targets and update the high-speed ROIs without interrupting acquisition. In this way the camera is capable of imaging moving targets at high speed while simultaneously imaging the whole frame at a lower speed. We have used this camera system to monitor the heartbeat and blood cell flow of a water flea (Daphnia) at frame rates in excess of 1500 fps. PMID:22028852

  17. Quantitative single-particle digital autoradiography with α-particle emitters for targeted radionuclide therapy using the iQID camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Miller, Brian W., E-mail: brian.miller@pnnl.gov; Frost, Sofia H. L.; Frayo, Shani L.

    2015-07-15

    Purpose: Alpha-emitting radionuclides exhibit a potential advantage for cancer treatments because they release large amounts of ionizing energy over a few cell diameters (50–80 μm), causing localized, irreparable double-strand DNA breaks that lead to cell death. Radioimmunotherapy (RIT) approaches using monoclonal antibodies labeled with α emitters may thus inactivate targeted cells with minimal radiation damage to surrounding tissues. Tools are needed to visualize and quantify the radioactivity distribution and absorbed doses to targeted and nontargeted cells for accurate dosimetry of all treatment regimens utilizing α particles, including RIT and others (e.g., Ra-223), especially for organs and tumors with heterogeneous radionuclide distributions. The aim of this study was to evaluate and characterize a novel single-particle digital autoradiography imager, the ionizing-radiation quantum imaging detector (iQID) camera, for use in α-RIT experiments. Methods: The iQID camera is a scintillator-based radiation detection system that images and identifies charged-particle and gamma-ray/x-ray emissions spatially and temporally on an event-by-event basis. It employs CCD-CMOS cameras and high-performance computing hardware for real-time imaging and activity quantification of tissue sections, approaching cellular resolutions. In this work, the authors evaluated its characteristics for α-particle imaging, including measurements of intrinsic detector spatial resolutions and background count rates at various detector configurations and quantification of activity distributions. The technique was assessed for quantitative imaging of astatine-211 ({sup 211}At) activity distributions in cryosections of murine and canine tissue samples. Results: The highest spatial resolution was measured at ∼20 μm full width at half maximum and the α-particle background was measured at a rate as low as (2.6 ± 0.5) × 10{sup −4} cpm/cm{sup 2} (40 mm diameter detector area

  18. Toward real-time endoscopically-guided robotic navigation based on a 3D virtual surgical field model

    NASA Astrophysics Data System (ADS)

    Gong, Yuanzheng; Hu, Danying; Hannaford, Blake; Seibel, Eric J.

    2015-03-01

    The challenge is to accurately guide the surgical tool within the three-dimensional (3D) surgical field for robotically-assisted operations such as tumor margin removal from a debulked brain tumor cavity. The proposed technique is 3D image-guided surgical navigation based on matching intraoperative video frames to a 3D virtual model of the surgical field. A small laser-scanning endoscopic camera was attached to a mock minimally-invasive surgical tool that was manipulated toward a region of interest (residual tumor) within a phantom of a debulked brain tumor. Video frames from the endoscope provided features that were matched to the 3D virtual model, which was reconstructed earlier by raster scanning over the surgical field. Camera pose (position and orientation) is recovered by implementing a constrained bundle adjustment algorithm. Navigational error during the approach to the fluorescence target (residual tumor) is determined by comparing the calculated camera pose to the camera pose measured using a micro-positioning stage. From these preliminary results, the computational efficiency of the algorithm in MATLAB code is near real-time (2.5 sec for each pose estimate), which can be improved by implementation in C++. Error analysis produced an average distance error of 3 mm and an average orientation error of 2.5 degrees. The sources of these errors are 1) inaccuracy of the 3D virtual model, generated on a calibrated RAVEN robotic platform with stereo tracking; 2) inaccuracy of the endoscope's intrinsic parameters, such as focal length; and 3) any endoscopic image distortion from scanning irregularities. This work demonstrates the feasibility of micro-camera 3D guidance of a robotic surgical tool.
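
    The pose-recovery step can be sketched as a reprojection-error minimization. This is a simplified stand-in for the paper's constrained bundle adjustment: a plain pinhole model with an assumed focal length, synthetic model points, and `scipy.optimize.least_squares` in place of the authors' solver.

```python
import numpy as np
from scipy.optimize import least_squares

def rodrigues(rvec):
    """Rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(rvec)
    if theta < 1e-12:
        return np.eye(3)
    k = rvec / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)

def project(pts, rvec, tvec, f):
    """Pinhole projection of Nx3 model points to Nx2 image points."""
    cam = pts @ rodrigues(rvec).T + tvec
    return f * cam[:, :2] / cam[:, 2:3]

def residuals(pose, pts, obs, f):
    """Stacked reprojection errors for a 6-DOF pose (rvec, tvec)."""
    return (project(pts, pose[:3], pose[3:], f) - obs).ravel()

# Synthetic 3D model points and their observations from a known
# "true" pose (all values illustrative, not from the paper).
pts = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0],
                [1.0, 1.0, 0.0], [0.5, 0.5, 0.3], [0.2, 0.8, 0.1]])
true_pose = np.array([0.05, -0.02, 0.03, 0.1, -0.2, 5.0])
f = 800.0
obs = project(pts, true_pose[:3], true_pose[3:], f)

# Recover the camera pose by minimizing the reprojection error.
fit = least_squares(residuals, x0=np.array([0, 0, 0, 0, 0, 4.0], float),
                    args=(pts, obs, f))
```

    With noiseless correspondences and a reasonable initial guess the optimizer recovers the generating pose; in practice the residual after convergence is the reprojection-error metric the abstract refers to.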

  19. Automatic inference of geometric camera parameters and inter-camera topology in uncalibrated disjoint surveillance cameras

    NASA Astrophysics Data System (ADS)

    den Hollander, Richard J. M.; Bouma, Henri; Baan, Jan; Eendebak, Pieter T.; van Rest, Jeroen H. C.

    2015-10-01

    Person tracking across non-overlapping cameras and other types of video analytics benefit from spatial calibration information that allows an estimation of the distance between cameras and a relation between pixel coordinates and world coordinates within a camera. In a large environment with many cameras, or for frequent ad-hoc deployments of cameras, the cost of this calibration is high. This creates a barrier for the use of video analytics. Automating the calibration allows for a short configuration time, and the use of video analytics in a wider range of scenarios, including ad-hoc crisis situations and large scale surveillance systems. We show an autocalibration method entirely based on pedestrian detections in surveillance video in multiple non-overlapping cameras. In this paper, we show the two main components of automatic calibration. The first shows the intra-camera geometry estimation that leads to an estimate of the tilt angle, focal length and camera height, which is important for the conversion from pixels to meters and vice versa. The second component shows the inter-camera topology inference that leads to an estimate of the distance between cameras, which is important for spatio-temporal analysis of multi-camera tracking. This paper describes each of these methods and provides results on realistic video data.
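
    The intra-camera estimates (tilt angle, focal length, camera height) enable the pixel-to-meter conversion mentioned above. A minimal sketch, assuming a pinhole camera mounted above a flat ground plane; the function and all numeric values are illustrative, not from the paper.

```python
import math

def ground_distance(v, f, cy, cam_height, tilt):
    """Horizontal distance (m) to the ground point imaged at pixel row v.

    Assumes a pinhole camera with focal length f and principal-point
    row cy (both in pixels), mounted cam_height meters above a flat
    ground plane and tilted down by `tilt` radians from horizontal.
    """
    # Angle of the viewing ray below horizontal for this pixel row.
    ray = tilt + math.atan2(v - cy, f)
    if ray <= 0:
        raise ValueError("pixel maps to or above the horizon")
    return cam_height / math.tan(ray)
```

    For example, a pixel at the principal point sees straight down the tilt direction, so its ground distance is simply cam_height / tan(tilt); rows lower in the image map to points closer to the camera.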

  20. An electrically tunable plenoptic camera using a liquid crystal microlens array

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lei, Yu; School of Automation, Huazhong University of Science and Technology, Wuhan 430074; Wuhan National Laboratory for Optoelectronics, Huazhong University of Science and Technology, Wuhan 430074

    2015-05-15

    Plenoptic cameras generally employ a microlens array positioned between the main lens and the image sensor to capture the three-dimensional target radiation in the visible range. Because the focal length of common refractive or diffractive microlenses is fixed, the depth of field (DOF) is limited, which restricts their imaging capability. In this paper, we propose a new plenoptic camera using a liquid crystal microlens array (LCMLA) with electrically tunable focal length. The developed LCMLA is fabricated by traditional photolithography and standard microelectronic techniques, and its focusing performance is experimentally presented. The fabricated LCMLA is directly integrated with an image sensor to construct a prototype LCMLA-based plenoptic camera for acquiring raw radiation of targets. Our experiments demonstrate that the focused region of the LCMLA-based plenoptic camera can be shifted efficiently by electrically tuning the LCMLA, which is equivalent to an extension of the DOF.

  1. Single-photon sensitive fast ebCMOS camera system for multiple-target tracking of single fluorophores: application to nano-biophotonics

    NASA Astrophysics Data System (ADS)

    Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi

    2011-03-01

    Nano-biophotonics applications will benefit from new fluorescence microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes the photon detectors to the single-photon counting regime and the camera acquisition system to real-time dynamic multiple-target tracking. The LUSIPHER prototype presented in this paper takes a different approach from Electron Multiplied CCD (EMCCD) technology and attempts to meet the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photocathode, the potentially ultra-fast frame rate of CMOS sensors, and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.

  2. Double-Targeting Explosible Nanofirework for Tumor Ignition to Guide Tumor-Depth Photothermal Therapy.

    PubMed

    Zhang, Ming-Kang; Wang, Xiao-Gang; Zhu, Jing-Yi; Liu, Miao-Deng; Li, Chu-Xin; Feng, Jun; Zhang, Xian-Zheng

    2018-04-17

    This study reports a double-targeting "nanofirework" for tumor-ignited imaging to guide effective tumor-depth photothermal therapy (PTT). Typically, ≈30 nm upconversion nanoparticles (UCNP) are enveloped with a hybrid corona composed of ≈4 nm CuS tethered hyaluronic acid (CuS-HA). The HA corona provides active tumor-targeted functionality together with excellent stability and improved biocompatibility. The dimension of UCNP@CuS-HA is specifically set within the optimal size window for passive tumor-targeting effect, demonstrating significant contributions to both the in vivo prolonged circulation duration and the enhanced size-dependent tumor accumulation compared with ultrasmall CuS nanoparticles. The tumors featuring hyaluronidase (HAase) overexpression could induce the escape of CuS away from UCNP@CuS-HA due to HAase-catalyzed HA degradation, in turn activating the recovery of initially CuS-quenched luminescence of UCNP and also driving the tumor-depth infiltration of ultrasmall CuS for effective PTT. This in vivo transition has proven to be highly dependent on tumor occurrence like a tumor-ignited explosible firework. Together with the double-targeting functionality, the pathology-selective tumor ignition permits precise tumor detection and imaging-guided spatiotemporal control over PTT operation, leading to complete tumor ablation under near infrared (NIR) irradiation. This study offers a new paradigm of utilizing pathological characteristics to design nanotheranostics for precise detection and personalized therapy of tumors. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  3. Harbour surveillance with cameras calibrated with AIS data

    NASA Astrophysics Data System (ADS)

    Palmieri, F. A. N.; Castaldo, F.; Marino, G.

    The inexpensive availability of surveillance cameras, easily connected in network configurations, suggests the deployment of this additional sensor modality in port surveillance. Vessels appearing within the cameras' fields of view can be recognized and localized, providing fusion centers with information that can be added to data coming from radar, lidar, AIS, etc. Camera systems used as localizers, however, must be properly calibrated in changing scenarios where there is often limited choice of the positions in which they are deployed. Automatic Identification System (AIS) data, which include position, course, and vessel identity and are freely available through inexpensive receivers for some of the vessels appearing within the field of view, provide the opportunity to achieve proper camera calibration to be used for the localization of vessels not equipped with AIS transponders. In this paper we assume a pinhole model for the camera geometry and propose computing the perspective matrices from AIS positional data. Images obtained from the calibrated cameras are then matched, and pixel association is utilized for localization of other vessels. We report preliminary experimental results of calibration and localization using two cameras deployed on the Gulf of Naples coastline. The two cameras overlook a section of the harbour and record short video sequences that are synchronized offline with AIS positional information of easily identified passenger ships. Other small vessels, not equipped with AIS transponders, are localized using the camera matrices and pixel matching. Localization accuracy is experimentally evaluated as a function of target distance from the sensors.

  4. EGFR Targeted Theranostic Nanoemulsion For Image-Guided Ovarian Cancer Therapy

    PubMed Central

    Ganta, Srinivas; Singh, Amit; Kulkarni, Praveen; Keeler, Amanda W.; Piroyan, Aleksandr; Sawant, Rupa R.; Patel, Niravkumar R.; Davis, Barbara; Ferris, Craig; O’Neal, Sara; Zamboni, William; Amiji, Mansoor M.; Coleman, Timothy P.

    2015-01-01

    Purpose Platinum-based therapies are the first-line treatments for most types of cancer, including ovarian cancer. However, their use is associated with dose-limiting toxicities and resistance. We report initial translational studies of a theranostic nanoemulsion loaded with a cisplatin derivative, myrisplatin, and the pro-apoptotic agent C6-ceramide. Methods The surface of the nanoemulsion is annotated with an epidermal growth factor receptor (EGFR) binding peptide to improve targeting ability, and with gadolinium to provide diagnostic capability for image-guided therapy of EGFR-overexpressing ovarian cancers. A high-shear microfluidization process was employed to produce the formulation with particle size below 150 nm. Results A pharmacokinetic study showed prolonged blood platinum and gadolinium levels with nanoemulsions in nu/nu mice. The theranostic nanoemulsions also exhibited less toxicity and enhanced the survival time of mice as compared to an equivalent cisplatin treatment. Conclusions Magnetic resonance imaging (MRI) studies indicate the theranostic nanoemulsions were effective contrast agents and could be used to track accumulation in a tumor. The MRI study additionally indicates that significantly more EGFR-targeted theranostic nanoemulsion accumulated in a tumor than non-targeted nanoemulsion, demonstrating the feasibility of using a targeted theranostic agent in conjunction with MRI to image disease loci and quantify disease progression. PMID:25732960

  5. Intelligent person identification system using stereo camera-based height and stride estimation

    NASA Astrophysics Data System (ADS)

    Ko, Jung-Hwan; Jang, Jae-Hun; Kim, Eun-Soo

    2005-05-01

    In this paper, a stereo camera-based intelligent person identification system is suggested. In the proposed method, the face area of the moving target person is extracted from the left image of the input stereo image pair by thresholding in the YCbCr color model; by correlating this segmented face area with the right input image, the location coordinates of the target face are acquired and used to control the pan/tilt system through a modified PID-based recursive controller. Also, using the geometric parameters between the target face and the stereo camera system, the vertical distance between the target and the stereo camera system can be calculated through triangulation. From this calculated vertical distance and the pan and tilt angles, the target's real position in world space can be acquired, and from it the target's height and stride values can finally be extracted. Experiments with video images of 16 moving persons show that a person can be identified from these extracted height and stride parameters.
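
    The triangulation and height extraction can be sketched in a few lines, assuming a rectified stereo pair so that the standard Z = fB/d relation applies; the parameter values in the example are invented, not from the paper.

```python
def stereo_distance(disparity_px, baseline_m, focal_px):
    """Depth from a rectified stereo pair: Z = f * B / d."""
    return focal_px * baseline_m / disparity_px

def object_height(height_px, depth_m, focal_px):
    """Back-project an image-plane height to meters: H = h_px * Z / f."""
    return height_px * depth_m / focal_px
```

    For example, with an 800-pixel focal length, a 0.12 m baseline, and a 40-pixel disparity, the target is 2.4 m away, and a person spanning 600 pixels vertically at that depth is 1.8 m tall.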

  6. Space-based infrared sensors of space target imaging effect analysis

    NASA Astrophysics Data System (ADS)

    Dai, Huayu; Zhang, Yasheng; Zhou, Haijun; Zhao, Shuang

    2018-02-01

    The target identification problem is one of the core problems of a ballistic missile defense system, and infrared imaging simulation is an important means of target detection and recognition. This paper first establishes a point-source imaging model for space-based infrared sensors viewing ballistic targets above the planet's atmosphere. It then simulates the infrared imaging of such targets from two aspects, the space-based sensor's camera parameters and the target characteristics, and analyzes the effects of camera line-of-sight jitter, camera system noise, and different wavebands on the target image.

  7. Dual cameras acquisition and display system of retina-like sensor camera and rectangular sensor camera

    NASA Astrophysics Data System (ADS)

    Cao, Nan; Cao, Fengmei; Lin, Yabin; Bai, Tingzhu; Song, Shengyu

    2015-04-01

    For a new kind of retina-like sensor camera and a traditional rectangular sensor camera, a dual-camera acquisition and display system needs to be built. We introduce the principle and development of the retina-like sensor. Image coordinate transformation and interpolation based on sub-pixel interpolation need to be realized for the retina-like sensor's special pixel distribution. The hardware platform is composed of the retina-like sensor camera, the rectangular sensor camera, an image grabber, and a PC. Combining the MIL and OpenCV libraries, the software was written in VC++ on VS 2010. Experimental results show that the system realizes acquisition and display for both cameras.

  8. STS-52 CANEX-2 Canadian Target Assembly (CTA) held by RMS over OV-102's PLB

    NASA Image and Video Library

    1992-11-01

    STS052-71-057 (22 Oct-1 Nov 1992) --- This 70mm frame, photographed with a handheld Hasselblad camera aimed through Columbia's aft flight deck windows, captures the operation of the Space Vision System (SVS) experiment above the cargo bay. Target dots have been placed on the Canadian Target Assembly (CTA), a small satellite, in the grasp of the Canadian-built remote manipulator system (RMS) arm. SVS utilized a Shuttle TV camera to monitor and track the dots strategically arranged on the satellite. As the satellite moved via the arm, the SVS computer measured the changing position of the dots and provided a real-time television display of the location and orientation of the CTA. This type of displayed information is expected to help an operator guide the RMS or the Mobile Servicing System (MSS) of the future when berthing or deploying satellites. Also visible in the frame is the U.S. Microgravity Payload (USMP-01).

  9. Laser line scan underwater imaging by complementary metal-oxide-semiconductor camera

    NASA Astrophysics Data System (ADS)

    He, Zhiyi; Luo, Meixing; Song, Xiyu; Wang, Dundong; He, Ning

    2017-12-01

    This work employs the complementary metal-oxide-semiconductor (CMOS) camera to acquire images in a scanning manner for laser line scan (LLS) underwater imaging to alleviate the backscatter impact of seawater. Two operating features of the CMOS camera, namely the region of interest (ROI) and the rolling shutter, can be utilized to perform image scan without the difficulty of translating the receiver above the target as traditional LLS imaging systems have. Using the dynamically reconfigurable ROI of an industrial CMOS camera, we evenly divided the image into five subareas along the pixel rows and then scanned them by changing the ROI region automatically under synchronous illumination by the fan beams of the lasers. Another scanning method was explored using the rolling shutter operation of the CMOS camera. The fan-beam lasers were turned on/off to illuminate the narrow zones on the target in good correspondence to the exposure lines during the rolling procedure of the camera's electronic shutter. Frame synchronization between the image scan and the laser beam sweep may be achieved by either the strobe lighting output pulse or the external triggering pulse of the industrial camera. Comparison between the scanning and nonscanning images shows that the contrast of the underwater image can be improved by our LLS imaging techniques, with higher stability and feasibility than the mechanically controlled scanning method.
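
    The ROI-based scan amounts to partitioning the sensor rows into contiguous strips that are selected one after another. The helper below is a hypothetical illustration of that bookkeeping, not the camera's actual API.

```python
def roi_strips(sensor_rows, n_strips):
    """Yield (start_row, end_row) for n contiguous ROI strips that
    together cover every sensor row exactly once."""
    base, extra = divmod(sensor_rows, n_strips)
    start = 0
    for i in range(n_strips):
        rows = base + (1 if i < extra else 0)  # spread any remainder
        yield (start, start + rows)
        start += rows
```

    Each (start, end) pair would be written to the camera's ROI registers in turn, with the matching laser fan beam fired while that strip is exposed.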

  10. Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras

    NASA Technical Reports Server (NTRS)

    Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellutta, Paolo; Sherwin, Gary W.

    2011-01-01

    The ability to perform off-road autonomous navigation at any time of day or night is a requirement for some unmanned ground vehicle (UGV) programs. Because there are times when it is desirable for military UGVs to operate without emitting strong, detectable electromagnetic signals, a passive-only terrain perception mode of operation is also often a requirement. Thermal infrared (TIR) cameras can be used to provide day and night passive terrain perception. TIR cameras have a detector sensitive to either mid-wave infrared (MWIR) radiation (3-5 μm) or long-wave infrared (LWIR) radiation (8-12 μm). With the recent emergence of high-quality uncooled LWIR cameras, TIR cameras have become viable passive perception options for some UGV programs. The Jet Propulsion Laboratory (JPL) has used a stereo pair of TIR cameras under several UGV programs to perform stereo ranging, terrain mapping, tree-trunk detection, pedestrian detection, negative obstacle detection, and water detection based on object reflections. In addition, we have evaluated stereo range data at a variety of UGV speeds, evaluated dual-band TIR classification of soil, vegetation, and rock terrain types, analyzed 24 hour water and 12 hour mud TIR imagery, and analyzed TIR imagery for hazard detection through smoke. Since TIR cameras do not currently provide the resolution available from megapixel color cameras, a UGV's daytime safe speed is often reduced when using TIR instead of color cameras. In this paper, we summarize the UGV terrain perception work JPL has performed with TIR cameras over the last decade and describe a calibration target developed by General Dynamics Robotic Systems (GDRS) for TIR cameras and other sensors.

  11. Camera Development for the Cherenkov Telescope Array

    NASA Astrophysics Data System (ADS)

    Moncada, Roberto Jose

    2017-01-01

    With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.

  12. A cMUT probe for ultrasound-guided focused ultrasound targeted therapy.

    PubMed

    Gross, Dominique; Coutier, Caroline; Legros, Mathieu; Bouakaz, Ayache; Certon, Dominique

    2015-06-01

    Ultrasound-mediated targeted therapy represents a promising strategy in the arsenal of modern therapy. Capacitive micromachined ultrasonic transducer (cMUT) technology could overcome some difficulties encountered by traditional piezoelectric transducers. In this study, we report on the design, fabrication, and characterization of an ultrasound-guided focused ultrasound (USgFUS) cMUT probe dedicated to preclinical evaluation of targeted therapy (hyperthermia, thermosensitive liposomes activation, and sonoporation) at low frequency (1 MHz) with simultaneous ultrasonic imaging and guidance (15 to 20 MHz). The probe embeds two types of cMUT arrays to perform the modalities of targeted therapy and imaging respectively. The wafer-bonding process flow employed for the manufacturing of the cMUTs is reported. One of its main features is the possibility of implementing two different gap heights on the same wafer. All the design and characterization steps of the devices are described and discussed, starting from the array design up to the first in vitro measurements: optical (microscopy) and electrical (impedance) measurements, arrays' electroacoustic responses, focused pressure field mapping (maximum peak-to-peak pressure = 2.5 MPa), and the first B-scan image of a wire-target phantom.

  13. Automation of the targeting and reflective alignment concept

    NASA Technical Reports Server (NTRS)

    Redfield, Robin C.

    1992-01-01

    The automated alignment system described herein employs a reflective, passive (requiring no power) target and includes a PC-based imaging system and one camera mounted on a six-degree-of-freedom robot manipulator. The system detects and corrects for manipulator misalignment in three translational and three rotational directions by employing the Targeting and Reflective Alignment Concept (TRAC), which simplifies alignment by decoupling translational and rotational alignment control. The concept uses information on the camera's and the target's relative positions based on video feedback from the camera. These relative positions are converted into alignment errors and minimized by motions of the robot. The system is robust to exogenous lighting by virtue of a subtraction algorithm that enables the camera to see only the target. These capabilities are realized with minimal complexity and expense.
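
    The frame-subtraction idea can be sketched as follows. This is a minimal illustration, not the paper's implementation; the lit/unlit frame pair, array sizes, and threshold are assumptions:

```python
import numpy as np

def isolate_target(lit_frame, unlit_frame, threshold=50):
    """Difference a target-illuminated frame against an ambient-only frame;
    only the retroreflective target changes between the two, so it alone
    survives thresholding."""
    diff = lit_frame.astype(np.int16) - unlit_frame.astype(np.int16)
    return diff > threshold

def target_centroid(mask):
    """Centroid of the target pixels; its offset from the image center
    serves as the translational alignment error."""
    ys, xs = np.nonzero(mask)
    return (xs.mean(), ys.mean()) if xs.size else None

# Synthetic scene: ambient clutter plus a 5x5 target centered at (20, 30)
rng = np.random.default_rng(0)
ambient = rng.integers(0, 40, (64, 64)).astype(np.uint8)
lit = ambient.copy()
lit[28:33, 18:23] = 255            # target returns the source light
cx, cy = target_centroid(isolate_target(lit, ambient))
```

    The ambient scene cancels exactly in the difference image, which is what makes the approach robust to exogenous lighting.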

  14. Color image guided depth image super resolution using fusion filter

    NASA Astrophysics Data System (ADS)

    He, Jin; Liang, Bin; He, Ying; Yang, Jun

    2018-04-01

    Depth cameras are currently playing an important role in many areas. However, most of them can only obtain low-resolution (LR) depth images. Color cameras can easily provide high-resolution (HR) color images. Using a color image as a guide is an efficient way to obtain an HR depth image. In this paper, we propose a depth image super resolution (SR) algorithm that uses an HR color image as a guide and an LR depth image as input. We fuse a guided filter with an edge-based joint bilateral filter to obtain the HR depth image. Our experimental results on the Middlebury 2005 datasets show that our method produces higher-quality HR depth images both numerically and visually.
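
    The guided-upsampling idea can be conveyed with a minimal joint bilateral upsampling kernel. This is an illustrative sketch only; the paper's fusion filter combines a guided filter with an edge-based joint bilateral filter, which this simplified version does not reproduce:

```python
import numpy as np

def joint_bilateral_upsample(depth_lr, guide_hr, scale, sigma_s=1.0, sigma_r=10.0):
    """Upsample a low-res depth map with weights that combine spatial
    proximity (measured in low-res coordinates) and range similarity
    in the high-res grayscale guide image."""
    H, W = guide_hr.shape
    h, w = depth_lr.shape
    out = np.zeros((H, W))
    r = 1                                    # low-res neighborhood radius
    for y in range(H):
        for x in range(W):
            yl, xl = y / scale, x / scale    # position in the low-res grid
            num = den = 0.0
            for j in range(int(yl) - r, int(yl) + r + 1):
                for i in range(int(xl) - r, int(xl) + r + 1):
                    if 0 <= j < h and 0 <= i < w:
                        d_spatial = (j - yl) ** 2 + (i - xl) ** 2
                        gj = min(int(j * scale), H - 1)
                        gi = min(int(i * scale), W - 1)
                        d_range = float(guide_hr[y, x] - guide_hr[gj, gi]) ** 2
                        wgt = np.exp(-d_spatial / (2 * sigma_s ** 2)
                                     - d_range / (2 * sigma_r ** 2))
                        num += wgt * depth_lr[j, i]
                        den += wgt
            out[y, x] = num / den
    return out

# 2x upsampling of a constant 4x4 depth map with an 8x8 grayscale guide
depth_hr = joint_bilateral_upsample(np.full((4, 4), 5.0), np.zeros((8, 8)), scale=2)
```

    The range term keeps depth from being averaged across color edges, which is why the guide image sharpens depth discontinuities instead of blurring them.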

  15. Dust deposition on the decks of the Mars Exploration Rovers: 10 years of dust dynamics on the Panoramic Camera calibration targets.

    PubMed

    Kinch, Kjartan M; Bell, James F; Goetz, Walter; Johnson, Jeffrey R; Joseph, Jonathan; Madsen, Morten Bo; Sohl-Dickstein, Jascha

    2015-05-01

    The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of air fall dust. Here we present an improved dust correction procedure based on a two-layer scattering model, and we present a dust reflectance spectrum derived from long-term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal revealing two distinct environments: At the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance-calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history.
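
    The reported deposition rate translates into a simple accumulation model. As an illustration only: removal events are deliberately ignored and a constant atmospheric optical depth is assumed, neither of which holds over a full Martian year:

```python
def settled_dust_tau(atm_tau_per_sol, rate=0.004):
    """Accumulate surface dust optical depth, sol by sol, using the
    derived deposition rate (surface tau per sol per unit atmospheric tau).
    Removal events are deliberately ignored in this sketch."""
    tau, history = 0.0, []
    for tau_atm in atm_tau_per_sol:
        tau += rate * tau_atm
        history.append(tau)
    return history

# 300 sols of a deposition season at a constant atmospheric tau of 0.5
hist = settled_dust_tau([0.5] * 300)
```

    At this rate a 300-sol deposition season under tau = 0.5 skies accumulates a surface optical depth of about 0.6, the same order as the maxima observed on the calibration targets.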

  16. Circulating Magnetic Microbubbles for Localized Real-Time Control of Drug Delivery by Ultrasonography-Guided Magnetic Targeting and Ultrasound

    PubMed Central

    Chertok, Beata; Langer, Robert

    2018-01-01

    Image-guided and target-selective modulation of drug delivery by external physical triggers at the site of pathology has the potential to enable tailored control of drug targeting. Magnetic microbubbles that are responsive to magnetic and acoustic modulation and visible to ultrasonography have been proposed as a means to realize this drug targeting strategy. To comply with this strategy in vivo, magnetic microbubbles must circulate systemically and evade deposition in pulmonary capillaries, while also preserving magnetic and acoustic activities in circulation over time. Unfortunately, challenges in fabricating magnetic microbubbles with such characteristics have limited progress in this field. In this report, we develop magnetic microbubbles (MagMB) that display strong magnetic and acoustic activities, while also preserving the ability to circulate systemically and evade pulmonary entrapment. Methods: We systematically evaluated the characteristics of MagMB including their pharmacokinetics, biodistribution, visibility to ultrasonography and amenability to magneto-acoustic modulation in tumor-bearing mice. We further assessed the applicability of MagMB for ultrasonography-guided control of drug targeting. Results: Following intravenous injection, MagMB exhibited a 17- to 90-fold lower pulmonary entrapment compared to previously reported magnetic microbubbles and mimicked circulation persistence of the clinically utilized Definity microbubbles (>10 min). In addition, MagMB could be accumulated in tumor vasculature by magnetic targeting, monitored by ultrasonography and collapsed by focused ultrasound on demand to activate drug deposition at the target. Furthermore, drug delivery to target tumors could be enhanced by adjusting the magneto-acoustic modulation based on ultrasonographic monitoring of MagMB in real-time. Conclusions: Circulating MagMB in conjunction with ultrasonography-guided magneto-acoustic modulation may provide a strategy for tailored minimally

  17. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Jingjing; Zhang, Bin; Reyes, Juvenal

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two

  18. Systematic study of target localization for bioluminescence tomography guided radiation therapy

    PubMed Central

    Yu, Jingjing; Zhang, Bin; Iordachita, Iulian I.; Reyes, Juvenal; Lu, Zhihao; Brock, Malcolm V.; Patterson, Michael S.; Wong, John W.

    2016-01-01

    Purpose: To overcome the limitation of CT/cone-beam CT (CBCT) in guiding radiation for soft tissue targets, the authors developed a spectrally resolved bioluminescence tomography (BLT) system for the small animal radiation research platform. The authors systematically assessed the performance of the BLT system in terms of target localization and the ability to resolve two neighboring sources in simulations, tissue-mimicking phantom, and in vivo environments. Methods: Multispectral measurements acquired in a single projection were used for the BLT reconstruction. The incomplete variables truncated conjugate gradient algorithm with an iterative permissible region shrinking strategy was employed as the optimization scheme to reconstruct source distributions. Simulation studies were conducted for single spherical sources with sizes from 0.5 to 3 mm radius at depth of 3–12 mm. The same configuration was also applied for the double source simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two self-illuminated sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with single-source at 6 and 9 mm depth, two sources at 3 and 5 mm separation at depth of 5 mm, or three sources in the abdomen were also used to illustrate the localization capability of the BLT system for multiple targets in vivo. Results: For simulation study, approximate 1 mm accuracy can be achieved at localizing center of mass (CoM) for single-source and grouped CoM for double source cases. For the case of 1.5 mm radius source, a common tumor size used in preclinical study, their simulation shows that for all the source separations considered, except for the 3 mm separation at 9 and 12 mm depth, the two neighboring sources can be resolved at depths from 3 to 12 mm. Phantom experiments illustrated that 2D bioluminescence imaging failed to distinguish two sources

  19. Phase and amplitude wave front sensing and reconstruction with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Nelson, William; Davis, Christopher C.

    2014-10-01

    A plenoptic camera is a camera that can retrieve the direction and intensity distribution of light rays collected by the camera and supports multiple reconstruction functions, such as refocusing at different depths and 3D microscopy. Its principle is to add a micro-lens array to a traditional high-resolution camera to form a semi-camera array that preserves redundant intensity distributions of the light field and facilitates back-tracing of rays through geometric knowledge of its optical components. Though designed to process incoherent images, we found that the plenoptic camera shows high potential in coherent illumination cases, such as sensing both the amplitude and phase information of a distorted laser beam. Based on our earlier introduction of a prototype modified plenoptic camera, we have developed the complete algorithm to reconstruct the wavefront of the incident light field. In this paper the algorithm and experimental results are demonstrated, and an improved version of this modified plenoptic camera is discussed. As a result, our modified plenoptic camera can serve as an advanced wavefront sensor compared with traditional Shack-Hartmann sensors in handling complicated cases such as coherent illumination in strong turbulence, where interference and discontinuity of wavefronts are common. Especially in wave propagation through atmospheric turbulence, this camera should provide a much more precise description of the light field, which would guide adaptive optics systems in making intelligent analyses and corrections.
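
    The slopes-to-wavefront step that any Shack-Hartmann-style sensor (plenoptic included) must perform can be illustrated with a very simple zonal reconstructor. This is a sketch under a forward-difference slope convention; the authors' actual reconstruction algorithm is more elaborate:

```python
import numpy as np

def zonal_reconstruct(sx, sy, pitch=1.0):
    """Integrate measured local wavefront slopes sx = dW/dx, sy = dW/dy
    (forward differences on a regular grid) into a wavefront map W,
    defined up to an arbitrary piston term."""
    rows, cols = sx.shape
    W = np.zeros((rows, cols))
    # integrate down the first column, then across each row
    W[1:, 0] = np.cumsum(sy[:-1, 0]) * pitch
    W[:, 1:] = W[:, :1] + np.cumsum(sx[:, :-1], axis=1) * pitch
    return W

# A pure tilt W = 0.3x + 0.2y has constant slopes and is recovered exactly
sx = np.full((6, 6), 0.3)
sy = np.full((6, 6), 0.2)
W = zonal_reconstruct(sx, sy)
```

    Path integration like this assumes a continuous wavefront; handling the interference and branch-point discontinuities mentioned above is precisely where the modified plenoptic approach goes beyond a plain Shack-Hartmann reconstruction.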

  20. Construct and face validity of a virtual reality-based camera navigation curriculum.

    PubMed

    Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J

    2012-10-01

    Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). Participants believe that training improves
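
    The group comparison the authors describe (analysis of variance across the novice, intermediate, and advanced groups) can be sketched with a hand-rolled one-way ANOVA F statistic. The repetition counts below are hypothetical illustration values, not the study's data:

```python
import numpy as np

def one_way_anova_F(*groups):
    """F statistic for a one-way ANOVA: between-group mean square
    divided by within-group mean square."""
    data = np.concatenate([np.asarray(g, dtype=float) for g in groups])
    grand_mean = data.mean()
    k, n = len(groups), data.size
    ss_between = sum(len(g) * (np.mean(g) - grand_mean) ** 2 for g in groups)
    ss_within = sum(((np.asarray(g, dtype=float) - np.mean(g)) ** 2).sum()
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical repetition counts needed to finish the curriculum
novice = [40, 44, 41, 42]
intermediate = [20, 22, 21, 23]
advanced = [11, 12, 12, 13]
F = one_way_anova_F(novice, intermediate, advanced)
```

    Comparing F against the critical value of the F(2, 9) distribution (about 4.26 at alpha = 0.05) indicates a significant group effect; the study additionally used Student's t-tests for pairwise comparisons.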

  1. Vehicular camera pedestrian detection research

    NASA Astrophysics Data System (ADS)

    Liu, Jiahui

    2018-03-01

    With the rapid development of science and technology, highway transportation has become much more convenient. At the same time, however, traffic safety accidents occur more and more frequently in China, so protecting people's personal and property safety while facilitating travel has become a top priority. Real-time, accurate information about pedestrians and the driving environment can be obtained with a vehicular camera and used to detect and track moving targets ahead of the vehicle. This approach is popular in research on intelligent-vehicle safe driving, autonomous navigation, and traffic systems. Based on pedestrian video obtained from a vehicular camera, this paper studies pedestrian detection and tracking and the associated algorithms.

  2. 7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA CAR, DETAIL OF 'FLARE' OR TRAJECTORY CAMERA INSIDE CAMERA CAR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  3. Image quality enhancement method for on-orbit remote sensing cameras using invariable modulation transfer function.

    PubMed

    Li, Jin; Liu, Zilong

    2017-07-24

    Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e., image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself, including its optical system, image sensor, and electronics, limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF depends only on the camera itself and is stable and invariant to changes in ground targets, atmosphere, and the on-orbit or on-ground environment; it is extracted using a pixel optical focal-plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging degradation caused by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient by a factor of 6.5, the edge intensity by a factor of 3.3, and the MTF value by a factor of 1.56 compared to the case when the IMTF is not used. This pushes back the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
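
    The constrained least-squares compensation step can be sketched in the frequency domain. This is an illustrative sketch: a synthetic Gaussian OTF stands in for the measured IMTF, and the Laplacian smoothness constraint with weight gamma follows the textbook CLS formulation, not necessarily the authors' exact filter:

```python
import numpy as np

def cls_restore(blurred, H, gamma=1e-4):
    """Constrained least-squares restoration in the frequency domain:
    F_hat = conj(H) / (|H|^2 + gamma * |P|^2) * G,
    where H is the camera's OTF/MTF and P is the frequency response
    of a discrete Laplacian acting as a smoothness constraint."""
    G = np.fft.fft2(blurred)
    lap = np.zeros_like(blurred, dtype=float)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[-1, 0] = lap[0, -1] = -1.0
    P = np.fft.fft2(lap)
    F_hat = np.conj(H) / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2) * G
    return np.real(np.fft.ifft2(F_hat))

# Synthetic demo: blur an impulse with a Gaussian OTF, then restore it
N = 32
f = np.fft.fftfreq(N)
H = np.exp(-200.0 * (f[None, :] ** 2 + f[:, None] ** 2))  # stand-in for the IMTF
img = np.zeros((N, N)); img[16, 16] = 1.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) * H))
restored = cls_restore(blurred, H)
```

    The gamma term keeps the division stable where the MTF approaches zero, which plain inverse filtering cannot do.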

  4. Caught on Camera.

    ERIC Educational Resources Information Center

    Milshtein, Amy

    2002-01-01

    Describes the benefits of and rules to be followed when using surveillance cameras for school security. Discusses various camera models, including indoor and outdoor fixed position cameras, pan-tilt zoom cameras, and pinhole-lens cameras for covert surveillance. (EV)

  5. 6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    6. VAL CAMERA CAR, DETAIL OF COMMUNICATION EQUIPMENT INSIDE CAMERA CAR WITH CAMERA MOUNT IN FOREGROUND. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  6. The AOTF-Based NO2 Camera

    NASA Astrophysics Data System (ADS)

    Dekemper, E.; Fussen, D.; Vanhellemont, F.; Vanhamel, J.; Pieroux, D.; Berkenbosch, S.

    2017-12-01

    In an urban environment, nitrogen dioxide is emitted by a multitude of static and moving point sources (cars, industry, power plants, heating systems, …). Air quality models generally rely on a limited number of monitoring stations, which neither capture the whole pattern nor allow full validation. So far, there has been a lack of instruments capable of measuring NO2 fields with the necessary spatio-temporal resolution above major point sources (power plants) or more extended ones (cities). We have developed a new type of passive remote sensing instrument aimed at measuring 2-D distributions of NO2 slant column densities (SCDs) with high spatial (meters) and temporal (minutes) resolution. The measurement principle has some similarities with the popular filter-based SO2 camera (used in monitoring volcanic and industrial sulfur emissions), as it relies on spectral images taken at wavelengths where the molecule's absorption cross section differs. But contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. A first prototype was successfully tested on the plume of a coal-fired power plant in Romania, revealing the dynamics of NO2 formation in the early plume. A lighter version of the NO2 camera is now being tested on other targets, such as oil refineries and urban air masses.
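
    In its simplest Beer-Lambert form, the two-wavelength retrieval principle shared with the SO2 camera reduces to the following sketch (cross-section and column values are purely illustrative, and real retrievals must also correct for aerosol scattering and light dilution):

```python
import numpy as np

def slant_column_density(I_on, I_off, sigma_on, sigma_off):
    """Per-pixel slant column density from an on-band/off-band image pair,
    assuming Beer-Lambert absorption:
    I_on / I_off = exp(-SCD * (sigma_on - sigma_off))."""
    return np.log(I_off / I_on) / (sigma_on - sigma_off)

# Synthetic round trip: impose a known NO2 column and recover it
sigma_on, sigma_off = 5e-19, 1e-19   # cm^2/molecule, illustrative only
scd_true = 1e17                      # molecules/cm^2
I0 = 1000.0                          # unabsorbed intensity
I_off = I0 * np.exp(-scd_true * sigma_off)
I_on = I0 * np.exp(-scd_true * sigma_on)
scd = slant_column_density(I_on, I_off, sigma_on, sigma_off)
```

    The AOTF's advantage over fixed filters is that both bands can be tuned onto and off the molecule's actual absorption features, improving the effective cross-section contrast in the denominator.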

  7. Miniaturized Autonomous Extravehicular Robotic Camera (Mini AERCam)

    NASA Technical Reports Server (NTRS)

    Fredrickson, Steven E.

    2001-01-01

    The NASA Johnson Space Center (JSC) Engineering Directorate is developing the Autonomous Extravehicular Robotic Camera (AERCam), a low-volume, low-mass free-flying camera system. AERCam project team personnel recently initiated development of a miniaturized version of AERCam known as Mini AERCam. The Mini AERCam target design is a spherical "nanosatellite" free-flyer 7.5 inches in diameter and weighing 10 pounds. Mini AERCam is building on the success of the AERCam Sprint STS-87 flight experiment by adding new on-board sensing and processing capabilities while simultaneously reducing volume by 80%. Achieving enhanced capability in a smaller package depends on applying miniaturization technology across virtually all subsystems. Technology innovations being incorporated include microelectromechanical system (MEMS) gyros, "camera-on-a-chip" CMOS imagers, a rechargeable xenon gas propulsion system, a rechargeable lithium ion battery, custom avionics based on the PowerPC 740 microprocessor, GPS relative navigation, digital radio frequency communications and tracking, micropatch antennas, digital instrumentation, and dense mechanical packaging. The Mini AERCam free-flyer will initially be integrated into an approximate flight-like configuration for demonstration on an airbearing table. A pilot-in-the-loop and hardware-in-the-loop simulation of on-orbit navigation and dynamics will complement the airbearing table demonstration. The Mini AERCam lab demonstration is intended to form the basis for future development of an AERCam flight system that provides beneficial on-orbit views unobtainable from fixed cameras, cameras on robotic manipulators, or cameras carried by EVA crewmembers.

  8. Orbital docking system centerline color television camera system test

    NASA Technical Reports Server (NTRS)

    Mongan, Philip T.

    1993-01-01

    A series of tests was run to verify that the design of the centerline color television camera (CTVC) system is adequate optically for the STS-71 Space Shuttle Orbiter docking mission with the Mir space station. In each test, a mockup of the Mir consisting of hatch, docking mechanism, and docking target was positioned above the Johnson Space Center's full fuselage trainer, which simulated the Orbiter with a mockup of the external airlock and docking adapter. Test subjects viewed the docking target through the CTVC under 30 different lighting conditions and evaluated target resolution, field of view, light levels, light placement, and methods of target alignment. Test results indicate that the proposed design will provide adequate visibility through the centerline camera for a successful docking, even with a reasonable number of light failures. It is recommended that the flight deck crew have individual switching capability for docking lights to provide maximum shadow management and that centerline lights be retained to deal with light failures and user preferences. Procedures for light management should be developed and target alignment aids should be selected during simulated docking runs.

  9. 2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    2. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING WEST TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  10. Development of tumor-targeted near infrared probes for fluorescence guided surgery.

    PubMed

    Kelderhouse, Lindsay E; Chelvam, Venkatesh; Wayua, Charity; Mahalingam, Sakkarapalayam; Poh, Scott; Kularatne, Sumith A; Low, Philip S

    2013-06-19

    Complete surgical resection of malignant disease is the only reliable method to cure cancer. Unfortunately, quantitative tumor resection is often limited by a surgeon's ability to locate all malignant disease and distinguish it from healthy tissue. Fluorescence-guided surgery has emerged as a tool to aid surgeons in the identification and removal of malignant lesions. While nontargeted fluorescent dyes have been shown to passively accumulate in some tumors, the resulting tumor-to-background ratios are often poor, and the boundaries between malignant and healthy tissues can be difficult to define. To circumvent these problems, our laboratory has developed high affinity tumor targeting ligands that bind to receptors that are overexpressed on cancer cells and deliver attached molecules selectively into these cells. In this study, we explore the use of two tumor-specific targeting ligands (i.e., folic acid that targets the folate receptor (FR) and DUPA that targets prostate specific membrane antigen (PSMA)) to deliver near-infrared (NIR) fluorescent dyes specifically to FR and PSMA expressing cancers, thereby rendering only the malignant cells highly fluorescent. We report here that all FR- and PSMA-targeted NIR probes examined bind cultured cancer cells in the low nanomolar range. Moreover, upon intravenous injection into tumor-bearing mice with metastatic disease, these same ligand-NIR dye conjugates render receptor-expressing tumor tissues fluorescent, enabling their facile resection with minimal contamination from healthy tissues.

  11. User-assisted visual search and tracking across distributed multi-camera networks

    NASA Astrophysics Data System (ADS)

    Raja, Yogesh; Gong, Shaogang; Xiang, Tao

    2011-11-01

    Human CCTV operators face several challenges in their task which can lead to missed events, people or associations, including: (a) data overload in large distributed multi-camera environments; (b) short attention span; (c) limited knowledge of what to look for; and (d) lack of access to non-visual contextual intelligence to aid search. Developing a system to aid human operators and alleviate such burdens requires addressing the problem of automatic re-identification of people across disjoint camera views, a matching task made difficult by factors such as lighting, viewpoint and pose changes and for which absolute scoring approaches are not best suited. Accordingly, we describe a distributed multi-camera tracking (MCT) system to visually aid human operators in associating people and objects effectively over multiple disjoint camera views in a large public space. The system comprises three key novel components: (1) relative measures of ranking rather than absolute scoring to learn the best features for matching; (2) multi-camera behaviour profiling as higher-level knowledge to reduce the search space and increase the chance of finding correct matches; and (3) human-assisted data mining to interactively guide search and in the process recover missing detections and discover previously unknown associations. We provide an extensive evaluation of the greater effectiveness of the system as compared to existing approaches on industry-standard i-LIDS multi-camera data.

  12. Fast and compact internal scanning CMOS-based hyperspectral camera: the Snapscan

    NASA Astrophysics Data System (ADS)

    Pichette, Julien; Charle, Wouter; Lambrechts, Andy

    2017-02-01

    Imec has developed a process for the monolithic integration of optical filters on top of CMOS image sensors, leading to compact, cost-efficient and faster hyperspectral cameras. Linescan cameras are typically used in remote sensing or for conveyor belt applications. Translation of the target is not always possible for large objects or in many medical applications. Therefore, we introduce a novel camera, the Snapscan (patent pending), exploiting internal movement of a linescan sensor enabling fast and convenient acquisition of high-resolution hyperspectral cubes (up to 2048x3652x150 in spectral range 475-925 nm). The Snapscan combines the spectral and spatial resolutions of a linescan system with the convenience of a snapshot camera.

  13. 1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    1. VARIABLE-ANGLE LAUNCHER CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH CAMERA STATION ABOVE LOOKING NORTH TAKEN FROM RESERVOIR. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  14. Scaling-up camera traps: monitoring the planet's biodiversity with networks of remote sensors

    USGS Publications Warehouse

    Steenweg, Robin; Hebblewhite, Mark; Kays, Roland; Ahumada, Jorge A.; Fisher, Jason T.; Burton, Cole; Townsend, Susan E.; Carbone, Chris; Rowcliffe, J. Marcus; Whittington, Jesse; Brodie, Jedediah; Royle, Andy; Switalski, Adam; Clevenger, Anthony P.; Heim, Nicole; Rich, Lindsey N.

    2017-01-01

    Countries committed to implementing the Convention on Biological Diversity's 2011–2020 strategic plan need effective tools to monitor global trends in biodiversity. Remote cameras are a rapidly growing technology that has great potential to transform global monitoring for terrestrial biodiversity and can be an important contributor to the call for measuring Essential Biodiversity Variables. Recent advances in camera technology and methods enable researchers to estimate changes in abundance and distribution for entire communities of animals and to identify global drivers of biodiversity trends. We suggest that interconnected networks of remote cameras will soon monitor biodiversity at a global scale, help answer pressing ecological questions, and guide conservation policy. This global network will require greater collaboration among remote-camera studies and citizen scientists, including standardized metadata, shared protocols, and security measures to protect records about sensitive species. With modest investment in infrastructure, and continued innovation, synthesis, and collaboration, we envision a global network of remote cameras that not only provides real-time biodiversity data but also serves to connect people with nature.

  15. Camera for Quasars in the Early Universe (CQUEAN)

    NASA Astrophysics Data System (ADS)

    Kim, Eunbin; Park, W.; Lim, J.; Jeong, H.; Kim, J.; Oh, H.; Pak, S.; Im, M.; Kuehne, J.

    2010-05-01

    The early universe of z ≳ 7 is where the first stars, galaxies, and quasars formed, starting the re-ionization of the universe. The discovery and study of quasars in the early universe allow us to witness the beginning of the history of astronomical objects. In order to perform a medium-deep, medium-wide imaging survey of quasars, we are developing an optical CCD camera, CQUEAN (Camera for QUasars in EArly uNiverse), which uses a 1024*1024 pixel deep-depletion CCD. It has enhanced QE compared to conventional CCDs in the wavelength band around 1 μm, and thus will be an efficient tool for observing quasars at z > 7. It will be attached to the 2.1 m telescope at McDonald Observatory, USA. A focal reducer is designed to secure a larger field of view at the Cassegrain focus of the 2.1 m telescope. For long stable exposures, an auto-guiding system will be implemented using another CCD camera viewing an off-axis field. All these instruments will be controlled by software written in Python on a Linux platform. CQUEAN is expected to see first light during the summer of 2010.

  16. Research on the electro-optical assistant landing system based on the dual camera photogrammetry algorithm

    NASA Astrophysics Data System (ADS)

    Mi, Yuhe; Huang, Yifan; Li, Lin

    2015-08-01

    Based on the beacon-photogrammetry location technique, the Dual Camera Photogrammetry (DCP) algorithm was used to assist helicopters in landing on a ship. In this paper, ZEMAX was used to simulate two Charge Coupled Device (CCD) cameras imaging four beacons on both sides of the helicopter and to output the images to MATLAB. Target coordinate systems, image pixel coordinate systems, world coordinate systems, and camera coordinate systems were established respectively. According to the ideal pinhole imaging model, the rotation matrix and translation vector between the target and camera coordinate systems could be obtained by using MATLAB to process the image information and solve the linear equations. On this basis, the ambient temperature and the positions of the beacons and cameras were varied in ZEMAX to test the accuracy of the DCP algorithm in complex sea states. The numerical simulation shows that in complex sea states the position measurement accuracy can meet the requirements of the project.
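
    The geometric core of a dual-camera photogrammetry setup can be sketched with linear (DLT) triangulation of a beacon from two pinhole views. This is an illustrative NumPy version in place of the paper's MATLAB pipeline, with made-up intrinsics and camera geometry:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one world point from its pixel
    coordinates x1, x2 in two cameras with 3x4 projection matrices."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                      # null vector = homogeneous world point
    return X[:3] / X[3]

def project(P, X):
    """Ideal pinhole projection of a 3-D point to pixel coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Made-up intrinsics and a 1 m stereo baseline along the x axis
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
beacon = np.array([0.2, -0.1, 5.0])          # a beacon 5 m in front
x1, x2 = project(P1, beacon), project(P2, beacon)
est = triangulate(P1, P2, x1, x2)
```

    With four beacons triangulated this way, the rotation and translation of the target frame follow from a rigid-body fit of the recovered points to the known beacon layout.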

  17. External Guide Sequences Targeting the aac(6′)-Ib mRNA Induce Inhibition of Amikacin Resistance▿

    PubMed Central

    Bistué, Alfonso J. C. Soler; Ha, Hongphuc; Sarno, Renee; Don, Michelle; Zorreguieta, Angeles; Tolmasky, Marcelo E.

    2007-01-01

    The dissemination of AAC(6′)-I-type acetyltransferases has rendered amikacin and other aminoglycosides all but useless in some parts of the world. Antisense technologies could be an alternative to extend the life of these antibiotics. External guide sequences are short antisense oligoribonucleotides that induce RNase P-mediated cleavage of a target RNA by forming a precursor tRNA-like complex. Thirteen-nucleotide external guide sequences complementary to locations within five regions accessible for interaction with antisense oligonucleotides in the mRNA that encodes AAC(6′)-Ib were analyzed. While small variations in the location targeted by different external guide sequences resulted in large changes in the efficiency of binding to native aac(6′)-Ib mRNA, most of them induced high levels of RNase P-mediated cleavage in vitro. Recombinant plasmids coding for selected external guide sequences were introduced into Escherichia coli harboring aac(6′)-Ib, and the transformant strains were tested to determine their resistance to amikacin. The two external guide sequences that showed the strongest binding to the mRNA in vitro, EGSC3 and EGSA2, interfered with expression of the resistance phenotype to different degrees. Growth curve experiments showed that E. coli cells harboring a plasmid coding for EGSC3, the external guide sequence with the highest mRNA-binding affinity in vitro, did not grow for at least 300 min in the presence of 15 μg of amikacin/ml. EGSA2, which had a lower mRNA-binding affinity in vitro than EGSC3, inhibited the expression of amikacin resistance to a lesser extent; growth of E. coli harboring a plasmid coding for EGSA2 in the presence of 15 μg of amikacin/ml was undetectable for 200 min but reached an optical density at 600 nm of 0.5 after 5 h of incubation. Our results indicate that the use of external guide sequences could be a viable strategy to preserve the efficacy of amikacin. PMID:17387154

  18. Pose estimation and tracking of non-cooperative rocket bodies using Time-of-Flight cameras

    NASA Astrophysics Data System (ADS)

    Gómez Martínez, Harvey; Giorgi, Gabriele; Eissfeller, Bernd

    2017-10-01

    This paper presents a methodology for estimating the position and orientation of a rocket body in orbit - the target - undergoing a roto-translational motion, with respect to a chaser spacecraft, whose task is to match the target dynamics for a safe rendezvous. During the rendezvous maneuver the chaser employs a Time-of-Flight camera that acquires a point cloud of 3D coordinates mapping the sensed target surface. Once the system identifies the target, it initializes the chaser-to-target relative position and orientation. After initialization, a tracking procedure enables the system to sense the evolution of the target's pose between frames. The proposed algorithm is evaluated using simulated point clouds, generated with a CAD model of the Cosmos-3M upper stage and the PMD CamCube 3.0 camera specifications.
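Tracking the pose of a sensed point cloud between frames typically reduces to repeated rigid-alignment steps. The abstract does not spell out the estimator used, so the following is a generic least-squares sketch (the Kabsch/SVD solution for a rigid transform), not the paper's exact pipeline:

```python
import numpy as np

def rigid_transform(source, target):
    """Least-squares rigid alignment (Kabsch): find R, t such that
    R @ source_i + t best matches target_i. A building block for
    frame-to-frame pose tracking of a point cloud."""
    src = np.asarray(source, dtype=float)
    tgt = np.asarray(target, dtype=float)
    cs, ct = src.mean(axis=0), tgt.mean(axis=0)
    H = (src - cs).T @ (tgt - ct)              # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = ct - R @ cs
    return R, t
```

In a tracking loop, the transform estimated between consecutive frames is composed with the previous pose to follow the target's roto-translational motion.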

  19. Computer vision camera with embedded FPGA processing

    NASA Astrophysics Data System (ADS)

    Lecerf, Antoine; Ouellet, Denis; Arias-Estrada, Miguel

    2000-03-01

    Traditional computer vision is based on a camera-computer system in which the image understanding algorithms are embedded in the computer. To circumvent the computational load of vision algorithms, low-level processing and imaging hardware can be integrated in a single compact module where a dedicated architecture is implemented. This paper presents a Computer Vision Camera based on an open architecture implemented in an FPGA. The system is targeted at real-time computer vision tasks in which low-level processing and feature extraction can be implemented in the FPGA device. The camera integrates a CMOS image sensor, an FPGA device, two memory banks, and an embedded PC for communication and control tasks. The FPGA is a medium-size device equivalent to 25,000 logic gates. The device is connected to two high-speed memory banks, an IS interface, and an imager interface. The camera can be accessed for architecture programming, data transfer, and control through an Ethernet link from a remote computer. A hardware architecture can be defined in a Hardware Description Language (such as VHDL), simulated, and synthesized into digital structures that can be programmed into the FPGA and tested on the camera. The architecture of a classical multi-scale edge detection algorithm based on a Laplacian of Gaussian convolution has been developed to show the capabilities of the system.
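The Laplacian-of-Gaussian edge detection mentioned above is, at its core, a single 2-D convolution, which is why it maps naturally onto an FPGA pipeline. A software reference sketch (kernel size and sigma are illustrative; a hardware implementation would use fixed-point arithmetic and line buffers):

```python
import numpy as np

def log_kernel(size, sigma):
    """Laplacian-of-Gaussian kernel, the classical edge operator.
    Re-centered to zero mean so flat regions produce zero response."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    r2 = xx**2 + yy**2
    k = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    return k - k.mean()

def convolve2d(img, kernel):
    """Naive 'valid'-region 2-D convolution, the way an FPGA pipeline
    would stream it; for real workloads use scipy.signal.convolve2d."""
    kh, kw = kernel.shape
    out = np.zeros((img.shape[0] - kh + 1, img.shape[1] - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out
```

Zero crossings of the LoG response mark edges; computing the response at several sigmas gives the multi-scale variant.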

  20. Who Goes There? Linking Remote Cameras and Schoolyard Science to Empower Action

    ERIC Educational Resources Information Center

    Tanner, Dawn; Ernst, Julie

    2013-01-01

    Taking Action Opportunities (TAO) is a curriculum that combines guided reflection, a focus on the local environment, and innovative use of wildlife technology to empower student action toward improving the environment. TAO is experientially based and uses remote cameras as a tool for schoolyard exploration. Through TAO, students engage in research…

  1. Calibration and verification of thermographic cameras for geometric measurements

    NASA Astrophysics Data System (ADS)

    Lagüela, S.; González-Jorge, H.; Armesto, J.; Arias, P.

    2011-03-01

    Infrared thermography is a technique with an increasing degree of development and range of applications. Quality assessment of the measurements performed with thermal cameras should be achieved through metrological calibration and verification. Infrared cameras acquire temperature and geometric information, although calibration and verification procedures are usual only for the thermal data; black bodies are used for these purposes. However, the geometric information is important for many fields, such as architecture, civil engineering, and industry. This work presents a calibration procedure that allows photogrammetric restitution, and a portable artefact to verify the geometric accuracy, repeatability, and drift of thermographic cameras. These results allow the incorporation of this information into the quality control processes of companies. A grid based on burning lamps is used for the geometric calibration of the thermographic cameras. The artefact designed for the geometric verification consists of five Delrin spheres and seven cubes of different sizes. Metrological traceability for the artefact is obtained from a coordinate measuring machine. Two sets of targets with different reflectivity are fixed to the spheres and cubes to make data processing and photogrammetric restitution possible. Reflectivity was the chosen material property because both thermographic and visible cameras are able to detect it. Two thermographic cameras, from the manufacturers FLIR and NEC, and one visible camera from JAI are calibrated, verified, and compared using the calibration grids and the standard artefact. The calibration system based on burning lamps shows its capability to perform the internal orientation of the thermal cameras. Verification results show repeatability better than 1 mm in all cases, and better than 0.5 mm for the visible camera. As expected, accuracy is also higher for the visible camera, and the geometric comparison between thermographic cameras shows slightly better

  2. Target Acquisition for Projectile Vision-Based Navigation

    DTIC Science & Technology

    2014-03-01

    Snippets from the report: Future Work; References; Appendix A, Simulation Results; Appendix B, Derivation of Ground Resolution for a Diffraction-Limited Pinhole Camera; simulation results for visual acquisition (left) and target recognition (right); differential object and image areas for the pinhole camera. From the body text: the angle between projectile and target (measured in terms of the angle ) will depend on target heading, in particular because we have aligned the x axis along the

  3. Dust deposition on the decks of the Mars Exploration Rovers: 10 years of dust dynamics on the Panoramic Camera calibration targets

    PubMed Central

    Bell, James F.; Goetz, Walter; Johnson, Jeffrey R.; Joseph, Jonathan; Madsen, Morten Bo; Sohl‐Dickstein, Jascha

    2015-01-01

    The Panoramic Cameras on NASA's Mars Exploration Rovers have each returned more than 17,000 images of their calibration targets. In order to make optimal use of this data set for reflectance calibration, a correction must be made for the presence of airfall dust. Here we present an improved dust correction procedure based on a two-layer scattering model, and we present a dust reflectance spectrum derived from long-term trends in the data set. The dust on the calibration targets appears brighter than dusty areas of the Martian surface. We derive detailed histories of dust deposition and removal, revealing two distinct environments: at the Spirit landing site, half the year is dominated by dust deposition, the other half by dust removal, usually in brief, sharp events. At the Opportunity landing site the Martian year has a semiannual dust cycle, with dust removal happening gradually throughout two removal seasons each year. The highest observed optical depth of settled dust on the calibration target is 1.5 on Spirit and 1.1 on Opportunity (at 601 nm). We derive a general prediction for dust deposition rates of 0.004 ± 0.001 in units of surface optical depth deposited per sol (Martian solar day) per unit atmospheric optical depth. We expect this procedure to lead to improved reflectance calibration of the Panoramic Camera data set. In addition, it is easily adapted to similar data sets from other missions in order to deliver improved reflectance calibration as well as data on dust reflectance properties and deposition and removal history. PMID:27981072
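The quoted deposition rate (0.004 surface optical depth per sol per unit atmospheric optical depth) lends itself to a simple bookkeeping model of dust settling on a target. A toy sketch with purely illustrative removal events (the paper's actual two-layer scattering correction is considerably more involved):

```python
# Toy bookkeeping of dust optical depth settled on a calibration target.
# The deposition rate comes from the abstract; the atmospheric optical
# depths and removal events below are illustrative only.
RATE = 0.004  # surface optical depth deposited per sol per unit atm. tau

def settled_dust(atm_tau_per_sol, removal_fraction_per_sol):
    """Integrate deposition minus removal over a sequence of sols."""
    tau = 0.0
    history = []
    for atm_tau, removal in zip(atm_tau_per_sol, removal_fraction_per_sol):
        tau += RATE * atm_tau          # airfall deposition this sol
        tau *= (1.0 - removal)         # fractional removal (e.g. wind gust)
        history.append(tau)
    return history

# 100 dusty sols (atm. tau = 1.0) with no removal, then one sharp event
atm = [1.0] * 101
removal = [0.0] * 100 + [0.9]
hist = settled_dust(atm, removal)
```

With these inputs the settled optical depth climbs linearly to 0.4 and then drops sharply, qualitatively matching the Spirit-style "brief, sharp" removal events described above.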

  4. Implementation of a sensor guided flight algorithm for target tracking by small UAS

    NASA Astrophysics Data System (ADS)

    Collins, Gaemus E.; Stankevitz, Chris; Liese, Jeffrey

    2011-06-01

    Small fixed-wing UAS (SUAS) such as Raven and Unicorn have limited power, speed, and maneuverability. Their missions can be dramatically hindered by environmental conditions (wind, terrain), obstructions (buildings, trees) blocking clear line of sight to a target, and/or sensor hardware limitations (fixed stare, limited gimbal motion, lack of zoom). Toyon's Sensor Guided Flight (SGF) algorithm was designed to account for SUAS hardware shortcomings and enable long-term tracking of maneuvering targets by maintaining persistent eyes-on-target. SGF was successfully tested in simulation with high-fidelity UAS, sensor, and environment models, but real-world flight testing with 60 Unicorn UAS revealed surprising second-order challenges that were not highlighted by the simulations. This paper describes the SGF algorithm, our first-round simulation results, our second-order discoveries from flight testing, and the subsequent improvements that were made to the algorithm.

  5. Guiding and focusing of fast electron beams produced by ultra-intense laser pulse using a double cone funnel target

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Wen-shuai; Cai, Hong-bo, E-mail: Cai-hongbo@iapcm.ac.cn; HEDPS, Center for Applied Physics and Technology, Peking University, Beijing 100871

    A novel double cone funnel target design, aimed at efficiently guiding and focusing fast electron beams produced in high-intensity (>10^19 W/cm^2) laser-solid interactions, is investigated via two-dimensional particle-in-cell simulations. The forward-going fast electron beams are shown to be directed and focused to a size smaller than the incident laser spot. This plasma funnel, attached to the cone target, guides and focuses electrons in a manner akin to the control of liquid by a plastic funnel. Such a device has the potential to add substantial design flexibility and prevent inefficiencies in important applications such as fast ignition. Two reasons account for the collimation of the fast electron beams. First, the sheath electric fields and quasistatic magnetic fields inside the vacuum gap of the double cone confine the fast electrons in the laser-plasma interaction region. Second, the interface magnetic fields inside the beam collimator further guide and focus the fast electrons during transport. The application of this technique to cone-guided fast ignition is considered, and it is shown that it can enhance the laser energy deposition in the compressed fuel plasma by a factor of 2 in comparison with the single cone target case.

  6. Graphic Arts: Process Camera, Stripping, and Platemaking. Teacher Guide.

    ERIC Educational Resources Information Center

    Feasley, Sue C., Ed.

    This curriculum guide is the second in a three-volume series of instructional materials for competency-based graphic arts instruction. Each publication is designed to include the technical content and tasks necessary for a student to be employed in an entry-level graphic arts occupation. Introductory materials include an instructional/task…

  7. Camera Optics.

    ERIC Educational Resources Information Center

    Ruiz, Michael J.

    1982-01-01

    The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…

  8. Soft tissue navigation for laparoscopic prostatectomy: evaluation of camera pose estimation for enhanced visualization

    NASA Astrophysics Data System (ADS)

    Baumhauer, M.; Simpfendörfer, T.; Schwarz, R.; Seitel, M.; Müller-Stich, B. P.; Gutt, C. N.; Rassweiler, J.; Meinzer, H.-P.; Wolf, I.

    2007-03-01

    We introduce a novel navigation system to support minimally invasive prostate surgery. The system utilizes transrectal ultrasonography (TRUS) and needle-shaped navigation aids to visualize hidden structures via Augmented Reality. During the intervention, the navigation aids are segmented once from a 3D TRUS dataset and subsequently tracked by the endoscope camera. Camera Pose Estimation methods directly determine position and orientation of the camera in relation to the navigation aids. Accordingly, our system does not require any external tracking device for registration of endoscope camera and ultrasonography probe. In addition to a preoperative planning step in which the navigation targets are defined, the procedure consists of two main steps which are carried out during the intervention: First, the preoperatively prepared planning data is registered with an intraoperatively acquired 3D TRUS dataset and the segmented navigation aids. Second, the navigation aids are continuously tracked by the endoscope camera. The camera's pose can thereby be derived and relevant medical structures can be superimposed on the video image. This paper focuses on the latter step. We have implemented several promising real-time algorithms and incorporated them into the Open Source Toolkit MITK (www.mitk.org). Furthermore, we have evaluated them for minimally invasive surgery (MIS) navigation scenarios. For this purpose, a virtual evaluation environment has been developed, which allows for the simulation of navigation targets and navigation aids, including their measurement errors. Besides evaluating the accuracy of the computed pose, we have analyzed the impact of an inaccurate pose and the resulting displacement of navigation targets in Augmented Reality.

  9. View of camera station located northeast of Building 70022, facing ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    View of camera station located northeast of Building 70022, facing northwest - Naval Ordnance Test Station Inyokern, Randsburg Wash Facility Target Test Towers, Tower Road, China Lake, Kern County, CA

  10. System Architecture of the Dark Energy Survey Camera Readout Electronics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shaw, Theresa; /FERMILAB; Ballester, Otger

    2010-05-27

    The Dark Energy Survey makes use of a new camera, the Dark Energy Camera (DECam). DECam will be installed in the Blanco 4 m telescope at Cerro Tololo Inter-American Observatory (CTIO). DECam is presently under construction and is expected to be ready for observations in the fall of 2011. The focal plane will make use of 62 2K×4K fully depleted Charge-Coupled Devices (CCDs), plus 12 2K×2K CCDs for guiding, alignment, and focus. This paper will describe design considerations of the system, including the entire signal path used to read out the CCDs, the development of a custom crate and backplane, the overall grounding scheme, and early results of system tests.

  11. On the development of radiation tolerant surveillance camera from consumer-grade components

    NASA Astrophysics Data System (ADS)

    Klemen, Ambrožič; Luka, Snoj; Lars, Öhlin; Jan, Gunnarsson; Niklas, Barringer

    2017-09-01

    In this paper, an overview of the process of designing a radiation-tolerant surveillance camera from consumer-grade components and commercially available particle-shielding materials is given. This involves utilization of the Monte Carlo particle transport code MCNP6 and the ENDF/B-VII.0 nuclear data libraries, as well as testing the physical electrical systems against γ radiation, utilizing JSI TRIGA Mark II fuel elements as γ-ray sources. A new aluminum 20 cm × 20 cm × 30 cm irradiation facility, with electrical power and a signal-wire guide-tube to the reactor platform, was designed, constructed, and used for irradiation of large electronic and optical component assemblies with activated fuel elements. Electronic components to be used in the camera were tested against γ radiation independently, to determine their radiation tolerance. Several camera designs were proposed and simulated using MCNP to determine incident-particle and dose attenuation factors. Data obtained from the measurements and MCNP simulations will be used to finalize the design of three surveillance camera models with different radiation tolerances.

  12. Miniaturized fundus camera

    NASA Astrophysics Data System (ADS)

    Gliss, Christine; Parel, Jean-Marie A.; Flynn, John T.; Pratisto, Hans S.; Niederer, Peter F.

    2003-07-01

    We present a miniaturized version of a fundus camera. The camera is designed for use in screening for retinopathy of prematurity (ROP). There, as well as in other applications, a small, lightweight digital camera system can be extremely useful. We present a small wide-angle digital camera system. The handpiece is significantly smaller and lighter than in all other systems. The electronics are truly portable, fitting in a standard board case. The camera is designed to be offered at a competitive price. Data from tests on young rabbits' eyes are presented. The development of the camera system is part of a telemedicine project screening for ROP. Telemedical applications are a perfect fit for this camera system, exploiting both of its advantages: portability as well as digital imaging.

  13. A Motionless Camera

    NASA Technical Reports Server (NTRS)

    1994-01-01

    Omniview, a motionless, noiseless, exceptionally versatile camera was developed for NASA as a receiving device for guiding space robots. The system can see in one direction and provide as many as four views simultaneously. Developed by Omniview, Inc. (formerly TRI) under a NASA Small Business Innovation Research (SBIR) grant, the system's image transformation electronics produce a real-time image from anywhere within a hemispherical field. Lens distortion is removed, and a corrected "flat" view appears on a monitor. Key elements are a high resolution charge coupled device (CCD), image correction circuitry and a microcomputer for image processing. The system can be adapted to existing installations. Applications include security and surveillance, teleconferencing, imaging, virtual reality, broadcast video and military operations. Omniview technology is now called IPIX. The company was founded in 1986 as TeleRobotics International, became Omniview in 1995, and changed its name to Interactive Pictures Corporation in 1997.
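The image-transformation electronics described above must map each requested view direction within the hemispherical field to a pixel on the fisheye image before correcting the distortion. The sketch below uses a generic equidistant ("f-theta") fisheye model; it is illustrative and not IPIX's proprietary transform:

```python
import math

def fisheye_pixel(azimuth_deg, elevation_deg, image_radius, cx, cy):
    """Map a viewing direction within a hemispherical field to pixel
    coordinates on an equidistant fisheye image whose optical axis points
    at the zenith. A generic model; all parameters are illustrative."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    theta = math.pi / 2 - el                    # angle from the optical axis
    r = image_radius * theta / (math.pi / 2)    # equidistant (f-theta) mapping
    return cx + r * math.cos(az), cy + r * math.sin(az)
```

Sampling this mapping over a grid of output pixels, with interpolation, yields the corrected "flat" perspective view for any pan/tilt within the hemisphere.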

  14. Target: Communication Skills. K-12 Curriculum Guide.

    ERIC Educational Resources Information Center

    Lincoln Public Schools, NE.

    Intended to help elementary and secondary school teachers model and teach communication skills in all subject matters, this curriculum guide is divided into four sections. The introduction describes the program's goals, explains how to use the guide, and presents grade appropriate profiles of communication skills competence. The second section…

  15. 7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    7. VAL CAMERA STATION, INTERIOR VIEW OF CAMERA MOUNT, COMMUNICATION EQUIPMENT AND STORAGE CABINET. - Variable Angle Launcher Complex, Camera Stations, CA State Highway 39 at Morris Reservior, Azusa, Los Angeles County, CA

  16. In-Situ Cameras for Radiometric Correction of Remotely Sensed Data

    NASA Astrophysics Data System (ADS)

    Kautz, Jess S.

    The atmosphere distorts the spectrum of remotely sensed data, negatively affecting all forms of investigation of Earth's surface. To gather reliable data, it is vital that atmospheric corrections be accurate. The current state of the field of atmospheric correction does not account well for the benefits and costs of different correction algorithms. Ground spectral data are required to evaluate these algorithms better. This dissertation explores using cameras as radiometers as a means of gathering ground spectral data. I introduce techniques for implementing a camera system for atmospheric correction using off-the-shelf parts. To aid the design of future camera systems for radiometric correction, methods for estimating the system error prior to construction are explored, along with calibration and testing of the resulting camera system. Simulations are used to investigate the relationship between the reflectance accuracy of the camera system and the quality of atmospheric correction. In the design phase, read noise and filter choice are found to be the strongest sources of system error. I explain the calibration methods for the camera system, showing the problems of pixel-to-angle calibration and of adapting a web camera for scientific work. The camera system is tested in the field to estimate its ability to recover directional reflectance from BRF data. I estimate the error in the system due to the experimental setup, then explore how the system error changes with different cameras, environmental setups, and inversions. With these experiments, I learn about the importance of the dynamic range of the camera and the input ranges used for the PROSAIL inversion. Evidence that the camera can perform within the specification set for ELM correction in this dissertation is evaluated. The analysis is concluded by simulating an ELM correction of a scene using various numbers of calibration targets and levels of system error, to find the number of cameras needed for a full
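The ELM (empirical line method) correction referred to above fits a per-band linear relation between at-sensor radiance and ground reflectance using calibration targets of known reflectance. A minimal sketch, with hypothetical target values:

```python
import numpy as np

def elm_gain_offset(radiance, reflectance):
    """Empirical Line Method for one band: least-squares fit of
    reflectance = gain * radiance + offset over calibration targets."""
    A = np.stack([radiance, np.ones_like(radiance)], axis=1)
    gain, offset = np.linalg.lstsq(A, reflectance, rcond=None)[0]
    return gain, offset

# Hypothetical targets: dark, mid, and bright panels of known reflectance
rad = np.array([10.0, 50.0, 90.0])   # at-sensor radiance (arbitrary units)
ref = np.array([0.05, 0.45, 0.85])   # ground-measured reflectance
g, o = elm_gain_offset(rad, ref)
corrected = g * 30.0 + o             # correct one scene pixel
```

With more targets (or cameras standing in as ground radiometers), the same least-squares fit becomes overdetermined, which is exactly the trade-off the simulated correction above explores.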

  17. Remote hardware-reconfigurable robotic camera

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel; Torres-Huitzil, Cesar; Maya-Rueda, Selene E.

    2001-10-01

    In this work, a camera with integrated image processing capabilities is discussed. The camera is based on an imager coupled to an FPGA device (Field Programmable Gate Array) which contains an architecture for real-time computer vision low-level processing. The architecture can be reprogrammed remotely for application specific purposes. The system is intended for rapid modification and adaptation for inspection and recognition applications, with the flexibility of hardware and software reprogrammability. FPGA reconfiguration allows the same ease of upgrade in hardware as a software upgrade process. The camera is composed of a digital imager coupled to an FPGA device, two memory banks, and a microcontroller. The microcontroller is used for communication tasks and FPGA programming. The system implements a software architecture to handle multiple FPGA architectures in the device, and the possibility to download a software/hardware object from the host computer into its internal context memory. System advantages are: small size, low power consumption, and a library of hardware/software functionalities that can be exchanged during run time. The system has been validated with an edge detection and a motion processing architecture, which will be presented in the paper. Applications targeted are in robotics, mobile robotics, and vision based quality control.

  18. Sub-Camera Calibration of a Penta-Camera

    NASA Astrophysics Data System (ADS)

    Jacobsen, K.; Gerke, M.

    2016-03-01

    Penta cameras, consisting of a nadir and four inclined cameras, are becoming more and more popular, having the advantage of also imaging facades in built-up areas from four directions. Such system cameras require a boresight calibration of the geometric relation of the cameras to each other, but also a calibration of the sub-cameras. Based on data sets of the ISPRS/EuroSDR benchmark for multi-platform photogrammetry, the inner orientation of the IGI Penta DigiCAM used has been analyzed. The required image coordinates of the blocks Dortmund and Zeche Zollern were determined by Pix4Dmapper and were independently adjusted and analyzed by the program system BLUH. With 4.1 million image points in 314 images and 3.9 million image points in 248 images, respectively, a dense matching was provided by Pix4Dmapper. With up to 19 and 29 images per object point, respectively, the images are well connected; nevertheless, the high numbers of images per object point are concentrated in the block centres, while the inclined images outside the block centre are satisfactorily but not very strongly connected. This leads to very high values of the Student test (T-test) for the finally used additional parameters; in other words, the additional parameters are highly significant. The estimated radial-symmetric distortion of the nadir sub-camera corresponds to the laboratory calibration by IGI, but there are still radial-symmetric distortions for the inclined cameras as well, with a size exceeding 5 μm, even if mentioned as negligible based on the laboratory calibration. Radial and tangential effects at the image corners are limited but still present. Remarkable angular affine systematic image errors can be seen, especially in the block Zeche Zollern. Such deformations are unusual for digital matrix cameras, but they can be caused by the correlation between inner and exterior orientation if only parallel flight lines are used. With the exception of the angular affinity, the systematic image errors for corresponding
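A radial-symmetric distortion of the kind estimated here is commonly modeled as a polynomial in the squared radial distance from the principal point. A minimal sketch using a generic Brown-style radial model with illustrative coefficients (not BLUH's own parameter set):

```python
def radial_distortion(x, y, k1, k2):
    """Apply a radial-symmetric distortion model to image coordinates given
    relative to the principal point (normalized units). A generic two-term
    Brown-style model; coefficients k1, k2 are illustrative."""
    r2 = x * x + y * y
    factor = 1.0 + k1 * r2 + k2 * r2 * r2   # grows with radial distance
    return x * factor, y * factor
```

Calibration inverts this relation: the adjustment estimates k1, k2 (and further additional parameters) so that reprojection residuals like the sub-5 μm effects reported above are absorbed.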

  19. Sniper detection using infrared camera: technical possibilities and limitations

    NASA Astrophysics Data System (ADS)

    Kastek, M.; Dulski, R.; Trzaskawka, P.; Bieszczad, G.

    2010-04-01

    The paper discusses the technical possibilities of building an effective system for sniper detection using infrared cameras. Phenomena that make it possible to detect sniper activity in the infrared spectrum are described, and the physical limitations are analyzed. Cooled and uncooled detectors were considered. Three phases of sniper activity were taken into consideration: before, during, and after the shot. On the basis of experimental data, the parameters defining the target that are essential in assessing the capability of an infrared camera to detect sniper activity were determined. A sniper's body and muzzle flash were analyzed as targets. The simulation of detection ranges was done for an assumed sniper-detection scenario. An infrared sniper detection system capable of fulfilling the requirements is discussed. Finally, the results of the analysis and simulations are discussed.

  20. Automatic visibility retrieval from thermal camera images

    NASA Astrophysics Data System (ADS)

    Dizerens, Céline; Ott, Beat; Wellig, Peter; Wunderle, Stefan

    2017-10-01

    This study presents automatic visibility retrieval from a FLIR A320 stationary thermal imager installed on a measurement tower on the mountain Lagern in the Swiss Jura Mountains. Our visibility retrieval makes use of edges that are automatically detected in the thermal camera images. Predefined target regions, such as mountain silhouettes or buildings with high thermal contrast to their surroundings, are used to derive the maximum visibility distance detectable in the image. To allow stable, automatic processing, our procedure additionally removes noise in the image and includes automatic image alignment to correct small shifts of the camera. We present a detailed analysis of visibility derived from more than 24,000 thermal images from the years 2015 and 2016 by comparing them to (1) visibility derived from a panoramic camera image (VISrange), (2) measurements of a forward-scatter visibility meter (a Vaisala FD12 working in the NIR spectrum), and (3) modeled visibility values using the thermal range model TRM4. Atmospheric conditions, mainly water vapor from the European Centre for Medium-Range Weather Forecasts (ECMWF), were considered to calculate the extinction coefficients using MODTRAN. The automatic visibility retrieval based on FLIR A320 images is often in good agreement with the retrievals from systems working in different spectral ranges. However, some significant differences were detected as well, depending on weather conditions, thermal differences of the monitored landscape, and the defined target size.
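The retrieval idea above, declaring a predefined target "visible" while its region still contains detectable edges and taking the farthest such target as the visibility distance, can be sketched as follows (thresholds, regions, and distances are illustrative; the operational system also performs denoising and alignment):

```python
import numpy as np

def region_has_edges(img, region, threshold):
    """Check whether a predefined target region contains edges stronger
    than a contrast threshold (simple finite-difference gradient)."""
    y0, y1, x0, x1 = region
    patch = img[y0:y1, x0:x1].astype(float)
    gy, gx = np.gradient(patch)
    return float(np.hypot(gx, gy).max()) > threshold

def visibility_from_targets(img, regions_with_distance, threshold=10.0):
    """Return the distance (km) of the farthest predefined target whose
    edges are still detectable; 0.0 if none are visible."""
    visible = [dist for region, dist in regions_with_distance
               if region_has_edges(img, region, threshold)]
    return max(visible, default=0.0)

# Synthetic frame: a strong silhouette edge only in the far target region
img = np.zeros((10, 20)); img[:, 15:] = 100.0
targets = [((0, 10, 0, 10), 5.0), ((0, 10, 10, 20), 12.0)]
vis = visibility_from_targets(img, targets)
```

A real deployment would replace the gradient test with the study's noise-robust edge detector, but the farthest-visible-target logic is the same.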

  1. Ultrahigh sensitivity endoscopic camera using a new CMOS image sensor: providing with clear images under low illumination in addition to fluorescent images.

    PubMed

    Aoki, Hisae; Yamashita, Hiromasa; Mori, Toshiyuki; Fukuyo, Tsuneo; Chiba, Toshio

    2014-11-01

    We developed a new ultrahigh-sensitivity CMOS camera using a specific sensor that has a wide range of spectral sensitivity characteristics. The objective of this study is to present our updated endoscopic technology, which successfully integrates two innovative functions: ultrasensitive imaging and advanced fluorescence viewing. Two different experiments were conducted. One was carried out to evaluate the function of the ultrahigh-sensitivity camera. The other was to test the availability of the newly developed sensor and its performance as a fluorescence endoscope. In both studies, the distance from the endoscopic tip to the target was varied, and endoscopic images in each setting were taken for comparison. In the first experiment, the 3-CCD camera failed to display clear images under low illumination, and the target was hardly visible. In contrast, the CMOS camera was able to display the targets regardless of the camera-target distance under low illumination. Under high illumination, the imaging quality of the two cameras was very similar. In the second experiment, as a fluorescence endoscope, the CMOS camera was capable of clearly showing the fluorescence-activated organs. The ultrahigh-sensitivity CMOS HD endoscopic camera is expected to provide clear images under low illumination, in addition to fluorescence images under high illumination, in the field of laparoscopic surgery.

  2. Targeted Ultrasound-Guided Perineural Hydrodissection of the Sciatic Nerve for the Treatment of Piriformis Syndrome.

    PubMed

    Burke, Christopher J; Walter, William R; Adler, Ronald S

    2018-05-01

    Piriformis syndrome is a common cause of lumbar, gluteal, and thigh pain, frequently associated with sciatic nerve symptoms. Potential etiologies include muscle injury or chronic muscle stretching associated with gait disturbances. There is a common pathological end pathway involving hypertrophy, spasm, contracture, inflammation, and scarring of the piriformis muscle, leading to impingement of the sciatic nerve. Ultrasound-guided piriformis injections are frequently used in the treatment of these pain syndromes, with most of the published literature describing injection of the muscle. We describe a safe, effective ultrasound-guided injection technique for the treatment of piriformis syndrome using targeted sciatic perineural hydrodissection followed by therapeutic corticosteroid injection.

  3. Motion camera based on a custom vision sensor and an FPGA architecture

    NASA Astrophysics Data System (ADS)

    Arias-Estrada, Miguel

    1998-09-01

    A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing. The camera is used with a custom sensor for motion detection to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques. The sensor communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics, and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
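    The time-of-travel computation that the FPGA performs can be illustrated in software: each address-event carries a timestamp, and whenever a pixel fires after its neighbour, the elapsed time between the two events yields one velocity sample. This is a hypothetical one-row, one-direction sketch of the general idea, not the article's FPGA implementation; the event format and function name are assumptions.

    ```python
    def velocities_from_events(events, pixel_pitch=1.0):
        """events: sequence of (t, x, y) moving-edge events in arrival order.
        Whenever a pixel fires after its left-hand neighbour, the elapsed
        time-of-travel yields one velocity sample (pixel_pitch / dt)."""
        last_time = {}                      # (x, y) -> most recent event time
        velocities = []
        for t, x, y in events:
            prev = last_time.get((x - 1, y))
            if prev is not None and t > prev:
                velocities.append(pixel_pitch / (t - prev))
            last_time[(x, y)] = t
        return velocities
    ```

    A hardware version would keep the per-pixel timestamps in registers and evaluate the neighbour comparison in parallel, which is why the FPGA architecture is a natural fit.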

  4. Dynamics of laser-guided alternating current high voltage discharges

    NASA Astrophysics Data System (ADS)

    Daigle, J.-F.; Théberge, F.; Lassonde, P.; Kieffer, J.-C.; Fujii, T.; Fortin, J.; Châteauneuf, M.; Dubois, J.

    2013-10-01

    The dynamics of laser-guided alternating current high voltage discharges are characterized using a streak camera. Laser filaments were used to trigger and guide the discharges produced by a commercial Tesla coil. The streaking images revealed that the dynamics of the guided alternating current high voltage corona are different from that of a direct current source. The measured effective corona velocity and the absence of leader streamers confirmed that it evolves in a pure leader regime.

  5. 3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK ...

    Library of Congress Historic Buildings Survey, Historic Engineering Record, Historic Landscapes Survey

    3. VAL CAMERA CAR, VIEW OF CAMERA CAR AND TRACK WITH THE VAL TO THE RIGHT, LOOKING NORTHEAST. - Variable Angle Launcher Complex, Camera Car & Track, CA State Highway 39 at Morris Reservoir, Azusa, Los Angeles County, CA

  6. Harpicon camera for HDTV

    NASA Astrophysics Data System (ADS)

    Tanada, Jun

    1992-08-01

    Ikegami has been involved in broadcast equipment ever since it was established as a company. In conjunction with NHK it has brought forth countless television cameras, from black-and-white cameras to color cameras, HDTV cameras, and special-purpose cameras. In the early days of HDTV (high-definition television, also known as "High Vision") cameras, the specifications were different from those for the cameras of the present-day system, and cameras using all kinds of components, having different arrangements of components, and having different appearances were developed into products, with time spent on experimentation, design, fabrication, adjustment, and inspection. But recently the know-how built up thus far in components, printed circuit boards, and wiring methods has been incorporated in camera fabrication, making it possible to make HDTV cameras by methods similar to the present system. In addition, more-efficient production, lower costs, and better after-sales service are being achieved by using the same circuits, components, mechanism parts, and software for both HDTV cameras and cameras that operate by the present system.

  7. Guided filter and convolutional network based tracking for infrared dim moving target

    NASA Astrophysics Data System (ADS)

    Qian, Kun; Zhou, Huixin; Qin, Hanlin; Rong, Shenghui; Zhao, Dong; Du, Juan

    2017-09-01

    A dim moving target is usually submerged in strong noise, and its motion observability is degraded by numerous false alarms owing to the low signal-to-noise ratio. A tracking algorithm that integrates the guided image filter (GIF) and a convolutional neural network (CNN) into the particle filter framework is presented to cope with the uncertainty of dim targets. First, the initial target template is treated as a guidance to filter incoming templates depending on similarities between the guidance and candidate templates. The GIF algorithm utilizes the structure in the guidance and acts as an edge-preserving smoothing operator; the guidance therefore helps to preserve the detail of valuable templates and blurs inaccurate ones, effectively alleviating tracking drift. Besides, a two-layer CNN is adopted to obtain a powerful appearance representation, and a Bayesian classifier is trained with these discriminative yet strong features. Moreover, an adaptive learning factor is introduced to prevent the update of the classifier's parameters when a target undergoes severe background clutter. Finally, classifier responses of particles are used to generate particle importance weights, and a re-sampling procedure preserves samples according to the weights. In the prediction stage, a second-order transition model uses the target velocity to estimate the current position. Experimental results demonstrate that the presented algorithm outperforms several related algorithms in accuracy.
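    The guided-filter step can be illustrated with the standard gray-scale guided image filter, which smooths an input while preserving structure present in the guidance image. This is a generic sketch of the filter itself, not the authors' tracker; `scipy.ndimage.uniform_filter` serves as the box filter, and the radius and regularization defaults are arbitrary choices.

    ```python
    import numpy as np
    from scipy.ndimage import uniform_filter

    def guided_filter(guide, src, radius=2, eps=1e-3):
        """Gray-scale guided image filter: smooths `src` while preserving
        edges that are present in the guidance image `guide`."""
        size = 2 * radius + 1
        mean_I = uniform_filter(guide, size)
        mean_p = uniform_filter(src, size)
        corr_I = uniform_filter(guide * guide, size)
        corr_Ip = uniform_filter(guide * src, size)
        var_I = corr_I - mean_I * mean_I       # local variance of the guide
        cov_Ip = corr_Ip - mean_I * mean_p     # local covariance guide/src
        a = cov_Ip / (var_I + eps)
        b = mean_p - a * mean_I
        # averaging the per-window coefficients gives the final output
        return uniform_filter(a, size) * guide + uniform_filter(b, size)
    ```

    In the tracker described above, the initial target template would play the role of `guide` and each candidate template the role of `src`, so that templates resembling the guidance keep their detail while dissimilar ones are blurred.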

  8. Dissociable Frontal Controls during Visible and Memory-guided Eye-Tracking of Moving Targets

    PubMed Central

    Ding, Jinhong; Powell, David; Jiang, Yang

    2009-01-01

    When tracking visible or occluded moving targets, several frontal regions, including the frontal eye fields (FEF), dorsolateral prefrontal cortex (DLPFC), and anterior cingulate cortex (ACC), are involved in smooth pursuit eye movements (SPEM). To investigate how these areas play different roles in predicting future locations of moving targets, twelve healthy college students participated in a smooth pursuit task with visible and occluded targets. Their eye movements and brain responses measured by event-related functional MRI were recorded simultaneously. Our results show that different visual cues resulted in time discrepancies between physical and estimated pursuit time only when the moving dot was occluded. Velocity gain in the visible phase was higher than in the occlusion phase. We found bilateral FEF involvement in eye movement whether moving targets were visible or occluded. However, the DLPFC and ACC showed increased activity when tracking and predicting locations of occluded moving targets, and were suppressed during smooth pursuit of visible targets. When visual cues were increasingly available, less activation in the DLPFC and the ACC was observed. Additionally, there was a significant hemisphere effect in the DLPFC, where the right DLPFC showed significantly greater responses than the left when pursuing occluded moving targets. Correlation results revealed that the DLPFC, the right DLPFC in particular, communicates more with the FEF during tracking of occluded moving targets (from memory), whereas the ACC modulates the FEF more during tracking of visible targets (likely related to visual attention). Our results suggest that the DLPFC and ACC modulate the FEF and cortical networks differentially during visible and memory-guided eye tracking of moving targets. PMID:19434603

  9. Accurate shade image matching by using a smartphone camera.

    PubMed

    Tam, Weng-Kong; Lee, Hsi-Jian

    2017-04-01

    Dental shade matching by using digital images may be feasible when suitable color features are properly manipulated. Separating the color features into feature spaces facilitates favorable matching. We propose using support vector machines (SVM), which are outstanding classifiers, for shade classification. A total of 1300 shade tab images were captured using a smartphone camera with auto-mode settings and no flash. The images were shot at angled distances of 14-20 cm from a shade guide at a clinic equipped with light tubes producing a 4000 K color temperature. The Group 1 samples comprised 1040 tab images, for which the shade guide was randomly positioned in the clinic, and the Group 2 samples comprised 260 tab images, for which the shade guide had a fixed position in the clinic. Rectangular content was cropped manually from each shade tab image and further divided into 10×2 blocks. The color features extracted from the blocks were described using a feature vector. The feature vectors in each group underwent SVM training and classification using the "leave-one-out" strategy. The top-one and top-three accuracies of Group 1 were 0.86 and 0.98, respectively, and those of Group 2 were 0.97 and 1.00, respectively. This study provides a feasible technique for dental shade classification using the camera of a mobile device. The findings reveal that the proposed SVM classification might outperform the shade-matching results of previous studies that performed similarity measurements of ΔE levels or used an S, a*, b* feature set. Copyright © 2016 Japan Prosthodontic Society. Published by Elsevier Ltd. All rights reserved.
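    The classification protocol — train an SVM on per-image colour-feature vectors and score it with the leave-one-out strategy — can be sketched with scikit-learn. The feature extraction itself is omitted; the function name, kernel, and parameter choices below are assumptions for illustration, not details taken from the paper.

    ```python
    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.svm import SVC

    def loo_accuracy(features, labels):
        """Leave-one-out accuracy of an SVM classifier; `features` would be
        the concatenated per-block colour statistics of each shade-tab image."""
        clf = SVC(kernel="rbf", gamma="scale")
        scores = cross_val_score(clf, features, labels, cv=LeaveOneOut())
        return scores.mean()
    ```

    A top-three accuracy, as reported in the abstract, would instead rank the decision-function scores per left-out sample and check whether the true shade appears among the three highest-scoring classes.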

  10. Numerical analysis of wavefront measurement characteristics by using plenoptic camera

    NASA Astrophysics Data System (ADS)

    Lv, Yang; Ma, Haotong; Zhang, Xuanzhe; Ning, Yu; Xu, Xiaojun

    2016-01-01

    To take advantage of large-diameter telescopes for high-resolution imaging of extended targets, the wave-front aberrations induced by atmospheric turbulence must be detected and compensated. Data recorded by plenoptic cameras can be used to extract the wave-front phases associated with atmospheric turbulence in astronomical observations. Recovering the wave-front phase tomographically demands a method that performs large field-of-view (FOV), multi-perspective wave-front detection simultaneously, and the plenoptic camera possesses this unique advantage. Our paper focuses on the capability of the plenoptic camera to extract wave-fronts from different perspectives simultaneously. We built a theoretical model and a simulation system to study the wave-front measurement characteristics of a plenoptic camera used as a wave-front sensor, and we evaluated its performance on different types of wave-front aberration corresponding to various applications. Finally, we performed multi-perspective wave-front sensing with a plenoptic camera as the wave-front sensor in simulation. This study of wave-front measurement characteristics is helpful for selecting and designing the parameters of a plenoptic camera used as a multi-perspective, large-FOV wave-front sensor, which is expected to solve the problem of large-FOV wave-front detection and can be applied to adaptive optics in giant telescopes.
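    Per subaperture, plenoptic wave-front sensing reduces to the same centroid-displacement computation used in Shack-Hartmann processing: the shift of each lenslet's spot from its reference position approximates the local wave-front slope. A minimal sketch of that step follows, with hypothetical names and a simplified geometry (slope = centroid shift × pixel pitch / focal length); the paper's tomographic, multi-perspective reconstruction builds on many such slope maps.

    ```python
    import numpy as np

    def subaperture_slopes(subimages, ref_centroids, pixel_pitch, focal_length):
        """Wave-front slopes from per-subaperture spot displacements: the
        shift of each spot centroid from its reference position, scaled by
        pixel pitch over focal length, approximates the local tilt."""
        slopes = []
        for img, (rx, ry) in zip(subimages, ref_centroids):
            ys, xs = np.indices(img.shape)
            total = img.sum()
            cx = (xs * img).sum() / total     # intensity-weighted centroid
            cy = (ys * img).sum() / total
            slopes.append(((cx - rx) * pixel_pitch / focal_length,
                           (cy - ry) * pixel_pitch / focal_length))
        return slopes
    ```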

  11. Calculation for simulation of archery goal value using a web camera and ultrasonic sensor

    NASA Astrophysics Data System (ADS)

    Rusjdi, Darma; Abdurrasyid, Wulandari, Dewi Arianti

    2017-08-01

    Development of a digital indoor archery simulator based on embedded systems addresses the limited availability of adequate fields or open spaces, especially in big cities. Developing the device requires a simulation that calculates the score achieved on the target, based on a parabolic-motion model parameterized by the arrow's initial velocity and direction. The simulator device is complemented by an initial-velocity measurement using ultrasonic sensors and a direction measurement using a digital camera. The methodology follows research and development of application software with a modeling and simulation approach. The research objective is to create a simulation application that calculates the score achieved by the arrows, as a preliminary stage in developing the archery simulator device. Implementing the score calculation in an application program produces an archery simulation game that can serve as a reference for developing a digital indoor archery simulator with embedded systems, ultrasonic sensors, and web cameras. The simulation was developed by comparing against the outer radius of the target circle captured by a camera at a distance of three meters.
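    The parabolic-motion scoring described above can be sketched as two small functions: one evaluating the ideal (drag-free) trajectory at the target plane, and one mapping the radial miss distance to a ring score. The function names, the ring width, and the scoring convention are assumptions for illustration, not values from the paper.

    ```python
    import math

    def arrow_height_at_target(v0, angle_deg, distance, g=9.81):
        """Height of the arrow relative to launch height when it crosses the
        target plane, from the ideal (drag-free) parabolic-motion model."""
        angle = math.radians(angle_deg)
        t = distance / (v0 * math.cos(angle))  # time to reach the target plane
        return v0 * math.sin(angle) * t - 0.5 * g * t * t

    def ring_score(radial_miss_m, ring_width_m=0.04, max_score=10):
        """Map the radial miss distance on the target face to a ring score."""
        return max(max_score - int(radial_miss_m // ring_width_m), 0)
    ```

    In the device, `v0` would come from the ultrasonic sensor and the launch direction from the camera; the simulated impact point then feeds `ring_score`.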

  12. Imaging characteristics of photogrammetric camera systems

    USGS Publications Warehouse

    Welch, R.; Halliday, J.

    1973-01-01

    In view of the current interest in high-altitude and space photographic systems for photogrammetric mapping, the United States Geological Survey (U.S.G.S.) undertook a comprehensive research project designed to explore the practical aspects of applying the latest image quality evaluation techniques to the analysis of such systems. The project had two direct objectives: (1) to evaluate the imaging characteristics of current U.S.G.S. photogrammetric camera systems; and (2) to develop methodologies for predicting the imaging capabilities of photogrammetric camera systems, comparing conventional systems with new or different types of systems, and analyzing the image quality of photographs. Image quality was judged in terms of a number of evaluation factors including response functions, resolving power, and the detectability and measurability of small detail. The limiting capabilities of the U.S.G.S. 6-inch and 12-inch focal length camera systems were established by analyzing laboratory and aerial photographs in terms of these evaluation factors. In the process, the contributing effects of relevant parameters such as lens aberrations, lens aperture, shutter function, image motion, film type, and target contrast were also determined, resulting in procedures for analyzing image quality and for predicting and comparing performance capabilities. © 1973.

  13. Superficial vessel reconstruction with a multiview camera system

    PubMed Central

    Marreiros, Filipe M. M.; Rossitti, Sandro; Karlsson, Per M.; Wang, Chunliang; Gustafsson, Torbjörn; Carleberg, Per; Smedby, Örjan

    2016-01-01

    We aim at reconstructing superficial vessels of the brain. Ultimately, they will serve to guide deformation methods that compensate for brain shift. A pipeline for three-dimensional (3-D) vessel reconstruction using three monochrome complementary metal-oxide semiconductor cameras has been developed. Vessel centerlines are manually selected in the images. Using the properties of the Hessian matrix, the centerline points are assigned direction information. For correspondence matching, a combination of methods was used. The process starts with epipolar and spatial coherence constraints (geometrical constraints), followed by relaxation labeling and an iterative filtering in which the 3-D points are compared to surfaces obtained using the thin-plate spline with a decreasing relaxation parameter. Finally, the points are shifted to their local centroid position. Evaluation on virtual, phantom, and experimental images, including intraoperative data from patient experiments, shows that, with appropriate camera positions, the error estimates (root-mean-square error and mean error) are ∼1 mm. PMID:26759814

  14. Epitope targeting of tertiary protein structure enables target-guided synthesis of a potent in-cell inhibitor of botulinum neurotoxin.

    PubMed

    Farrow, Blake; Wong, Michelle; Malette, Jacquie; Lai, Bert; Deyle, Kaycie M; Das, Samir; Nag, Arundhati; Agnew, Heather D; Heath, James R

    2015-06-08

    Botulinum neurotoxin (BoNT) serotype A is the most lethal known toxin and has an occluded structure, which prevents direct inhibition of its active site before it enters the cytosol. Target-guided synthesis by in situ click chemistry is combined with synthetic epitope targeting to exploit the tertiary structure of the BoNT protein as a landscape for assembling a competitive inhibitor. A substrate-mimicking peptide macrocycle is used as a direct inhibitor of BoNT. An epitope-targeting in situ click screen is utilized to identify a second peptide macrocycle ligand that binds to an epitope that, in the folded BoNT structure, is active-site-adjacent. A second in situ click screen identifies a molecular bridge between the two macrocycles. The resulting divalent inhibitor exhibits an in vitro inhibition constant of 165 pM against the BoNT/A catalytic chain. The inhibitor is carried into cells by the intact holotoxin, and demonstrates protection and rescue of BoNT intoxication in a human neuron model. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  15. Object recognition through turbulence with a modified plenoptic camera

    NASA Astrophysics Data System (ADS)

    Wu, Chensheng; Ko, Jonathan; Davis, Christopher

    2015-03-01

    Atmospheric turbulence adds accumulated distortion to images obtained by cameras and surveillance systems. When the turbulence grows stronger or the object is farther from the observer, increasing the recording device's resolution does little to improve image quality. Many sophisticated methods to correct distorted images have been devised, such as using a known feature on or near the target object to perform a deconvolution process, or using adaptive optics. However, most of these methods depend heavily on the object's location, and optical ray propagation through the turbulence is not directly considered. Alternatively, selecting a lucky image over many frames provides a feasible solution, but at the cost of time. In our work, we propose an approach to improving image quality through turbulence by making use of a modified plenoptic camera. This type of camera adds a micro-lens array to a traditional high-resolution camera to form a semi-camera array that records duplicate copies of the object, as well as "superimposed" turbulence, at slightly different angles. By performing several steps of image reconstruction, turbulence effects are suppressed to reveal more details of the object independently (without finding references near the object). Meanwhile, the redundant information obtained by the plenoptic camera raises the possibility of performing lucky-image algorithmic analysis with fewer frames, which is more efficient. We introduce the details of our modified plenoptic cameras and image processing algorithms. The proposed method can be applied to coherently as well as incoherently illuminated objects. Our results show that the turbulence effect can be effectively suppressed by the plenoptic camera in the hardware layer, and a reconstructed "lucky image" can help the viewer identify the object even when a "lucky image" from ordinary cameras is not achievable.
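    The lucky-imaging baseline mentioned above is simple to state: score each frame of a stack with a sharpness metric and keep the best one. A minimal sketch, assuming a mean-squared-Laplacian metric (one common choice; the names and the metric are illustrative, not the authors' algorithm):

    ```python
    import numpy as np

    def sharpness(img):
        """Mean squared Laplacian response: a simple image-quality metric."""
        lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0) +
               np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
        return float(np.mean(lap ** 2))

    def lucky_image(frames):
        """Select the least turbulence-degraded ('lucky') frame of a stack."""
        return max(frames, key=sharpness)
    ```

    The paper's point is that the plenoptic camera's redundant sub-views make such selection effective with far fewer frames than a conventional camera would need.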

  16. Guided molecular missiles for tumor-targeting chemotherapy--case studies using the second-generation taxoids as warheads.

    PubMed

    Ojima, Iwao

    2008-01-01

    A long-standing problem in cancer chemotherapy is the lack of tumor-specific treatments. Traditional chemotherapy relies on the premise that rapidly proliferating cancer cells are more likely to be killed by a cytotoxic agent. In reality, however, cytotoxic agents have very little or no specificity, which leads to systemic toxicity, causing undesirable severe side effects. Therefore, the development of innovative and efficacious tumor-specific drug delivery protocols or systems is urgently needed. A rapidly growing tumor requires various nutrients and vitamins. Thus, tumor cells overexpress many tumor-specific receptors, which can be used as targets to deliver cytotoxic agents into tumors. This Account presents our research program on the discovery and development of novel and efficient drug delivery systems, possessing tumor-targeting ability and efficacy against various cancer types, especially multidrug-resistant tumors. In general, a tumor-targeting drug delivery system consists of a tumor recognition moiety and a cytotoxic warhead connected directly or through a suitable linker to form a conjugate. The conjugate, which can be regarded as a "guided molecular missile", should be systemically nontoxic, that is, the linker must be stable in blood circulation, but upon internalization into the cancer cell, the conjugate should be readily cleaved to regenerate the active cytotoxic warhead. These novel "guided molecular missiles" are conjugates of the highly potent second-generation taxoid anticancer agents with tumor-targeting molecules through mechanism-based cleavable linkers. These conjugates are specifically delivered to tumors and internalized into tumor cells, and the potent taxoid anticancer agents are released from the linker into the cytoplasm. We have successfully used omega-3 polyunsaturated fatty acids, in particular DHA, and monoclonal antibodies (for EGFR) as tumor-targeting molecules for the conjugates, which exhibited remarkable efficacy against

  17. A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.

    PubMed

    Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi

    2016-08-30

    This paper proposes a novel infrared camera array guidance system with the capability to track and provide the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during the landing process. The system mainly includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flights demonstrate that our infrared camera array system has the unique ability to guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for UAV automatic accurate landing in Global Positioning System (GPS)-denied environments.
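    The 3D tracking module ultimately amounts to triangulating the laser marker from the calibrated camera array: each camera contributes a bearing ray, and the marker position is the point closest to all rays. A standard least-squares ray-intersection sketch (not the authors' implementation; the function name and interface are assumptions):

    ```python
    import numpy as np

    def triangulate(origins, directions):
        """Least-squares intersection of bearing rays: camera i at origin o_i
        sees the marker along direction d_i; solve for the 3-D point that
        minimizes the summed squared distance to all rays."""
        A = np.zeros((3, 3))
        b = np.zeros(3)
        for o, d in zip(origins, directions):
            d = np.asarray(d, float)
            d = d / np.linalg.norm(d)
            P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
            A += P
            b += P @ np.asarray(o, float)
        return np.linalg.solve(A, b)
    ```

    Differencing the triangulated position over successive frames then yields the UAV's speed, which is what the guidance loop consumes.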

  18. Development of an LYSO based gamma camera for positron and scinti-mammography

    NASA Astrophysics Data System (ADS)

    Liang, H.-C.; Jan, M.-L.; Lin, W.-C.; Yu, S.-F.; Su, J.-L.; Shen, L.-H.

    2009-08-01

    In this research, the characteristics of combining PSPMTs (position-sensitive photomultiplier tubes) to form a larger detection area are studied. A home-made linear divider circuit was built for merging signals and readout. Borosilicate glasses were chosen for scintillation light sharing in the crossover region. The deterioration effect caused by the light guide was understood, and the influences of the light guide and crossover region on the separable crystal size were evaluated. According to the test results, a gamma camera with a crystal block covering an area of 90 × 90 mm2, composed of 2 mm LYSO crystal pixels, was designed and fabricated. Measured performance showed that this camera works well with both 511 keV and lower-energy gammas. The light-loss behaviour within the crossover region was analyzed and understood. Count rate measurements showed that the 176Lu natural background did not severely influence single-photon imaging and amounted to less than 1/3 of all acquired events. These results show that, using light-sharing techniques, multiple PSPMTs can be combined in both the X and Y directions to build a large-area imaging detector. The camera design also retains the capabilities for both positron and single-photon breast imaging applications. In the current configuration, the separable crystal size is 2 mm, with 2-mm-thick glass applied for the light sharing.

  19. The calibration of video cameras for quantitative measurements

    NASA Technical Reports Server (NTRS)

    Snow, Walter L.; Childers, Brooks A.; Shortis, Mark R.

    1993-01-01

    Several different recent applications of velocimetry at Langley Research Center are described in order to show the need for video camera calibration for quantitative measurements. Problems peculiar to video sensing are discussed, including synchronization and timing, targeting, and lighting. The extension of the measurements to include radiometric estimates is addressed.

  20. Tumor Penetrating Theranostic Nanoparticles for Enhancement of Targeted and Image-guided Drug Delivery into Peritoneal Tumors following Intraperitoneal Delivery.

    PubMed

    Gao, Ning; Bozeman, Erica N; Qian, Weiping; Wang, Liya; Chen, Hongyu; Lipowska, Malgorzata; Staley, Charles A; Wang, Y Andrew; Mao, Hui; Yang, Lily

    2017-01-01

    The major obstacles in intraperitoneal (i.p.) chemotherapy of peritoneal tumors are fast absorption of drugs into the blood circulation, local and systemic toxicities, inadequate drug penetration into large tumors, and drug resistance. Targeted theranostic nanoparticles offer an opportunity to enhance the efficacy of i.p. therapy by increasing intratumoral drug delivery to overcome resistance, mediating image-guided drug delivery, and reducing systemic toxicity. Herein we report that i.p. delivery of urokinase plasminogen activator receptor (uPAR) targeted magnetic iron oxide nanoparticles (IONPs) led to intratumoral accumulation of 17% of total injected nanoparticles in an orthotopic mouse pancreatic cancer model, which was three-fold higher compared with intravenous delivery. Targeted delivery of near infrared dye labeled IONPs into orthotopic tumors could be detected by non-invasive optical and magnetic resonance imaging. Histological analysis revealed that a high level of uPAR targeted, PEGylated IONPs efficiently penetrated into both the peripheral and central tumor areas in the primary tumor as well as peritoneal metastatic tumor. Improved theranostic IONP delivery into the tumor center was not mediated by nonspecific macrophage uptake and was independent from tumor blood vessel locations. Importantly, i.p. delivery of uPAR targeted theranostic IONPs carrying chemotherapeutics, cisplatin or doxorubicin, significantly inhibited the growth of pancreatic tumors without apparent systemic toxicity. The levels of proliferating tumor cells and tumor vessels in tumors treated with the above theranostic IONPs were also markedly decreased. The detection of strong optical signals in residual tumors following i.p. therapy suggested the feasibility of image-guided surgery to remove drug-resistant tumors. Therefore, our results support the translational development of i.p. delivery of uPAR-targeted theranostic IONPs for image-guided treatment of peritoneal tumors.

  1. Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications

    NASA Technical Reports Server (NTRS)

    Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)

    1995-01-01

    The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target.
    Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end effector.
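    The processing chain in this patent (frame differencing, spot centroid, disparity from an infinite-range reference, triangulated range) can be sketched as one function. This is an illustrative reconstruction under the usual pinhole triangulation relation range = baseline × focal length / disparity; the names, the 50% threshold, and the calling convention are assumptions, not details from the patent.

    ```python
    import numpy as np

    def laser_spot_range(frame_off, frame_on, ref_x, baseline, focal_px):
        """Range from a single camera plus laser: difference the two frames to
        keep only the laser spot, take its intensity-weighted centroid, and
        convert the disparity from the infinite-range reference column."""
        spot = np.clip(frame_on.astype(float) - frame_off, 0.0, None)
        ys, xs = np.nonzero(spot > 0.5 * spot.max())
        weights = spot[ys, xs]
        cx = (weights * xs).sum() / weights.sum()   # spot centroid (columns)
        disparity = abs(cx - ref_x)                 # pixels
        return baseline * focal_px / disparity
    ```

    Taking the two frames close together in time, as the patent specifies, is what makes the simple subtraction step valid.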

  2. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2016-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness capability necessary to safeguard orbital assets and crew; this poses a major risk of MOD damage to ISS and Exploration vehicles. In 2015 this technology was added to NASA's Office of the Chief Technologist roadmap. For missions flying in, assembled in, or staging from LEO, knowledge of the physical threat to vehicle and crew is needed in order to design the proper level of MOD impact shielding and appropriate mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Use of the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or restricted operations on board a host vehicle as a secondary payload. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around ISS, and some of its technologies are extensible to monitoring of extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in situ) optical tracking of various sized objects versus ground RADAR tracking and small OD models. The cameras are based on flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  3. Small Orbital Stereo Tracking Camera Technology Development

    NASA Technical Reports Server (NTRS)

    Bryan, Tom; MacLeod, Todd; Gagliano, Larry

    2015-01-01

    On-Orbit Small Debris Tracking and Characterization is a technical gap in the current National Space Situational Awareness necessary to safeguard orbital assets and crew. Small debris poses a major risk of MOD damage to the ISS and Exploration vehicles. In 2015 this technology was added to the roadmap of NASA's Office of the Chief Technologist. For missions flying in, assembled in, or staging from LEO, the physical threat to vehicle and crew must be characterized in order to design the appropriate level of MOD impact shielding and set proper mission design restrictions, and the debris flux and size population need to be verified against ground RADAR tracking. Using the ISS for in-situ orbital debris tracking development provides attitude, power, data, and orbital access without a dedicated spacecraft or the restricted operations of a secondary payload on board a host vehicle. The sensor is applicable to in-situ measurement of orbital debris flux and population in other orbits or on other vehicles, could enhance safety on and around the ISS, and some of its technologies are extensible to monitoring extraterrestrial debris as well. To help accomplish this, new technologies must be developed quickly. The Small Orbital Stereo Tracking Camera is one such up-and-coming technology. It consists of flying a pair of intensified megapixel telephoto cameras to evaluate Orbital Debris (OD) monitoring in proximity to the International Space Station. It will demonstrate on-orbit (in-situ) optical tracking of various sized objects against ground RADAR tracking and small OD models. The cameras are based on the flight-proven Advanced Video Guidance Sensor pixel-to-spot algorithms (Orbital Express) and military targeting cameras. Using twin cameras provides stereo images for ranging and mission redundancy. When pointed into the orbital velocity vector (RAM), objects approaching or near the stereo camera set can be differentiated from the stars moving upward in the background.

  4. Diffuse optical tomography for breast cancer imaging guided by computed tomography: A feasibility study.

    PubMed

    Baikejiang, Reheman; Zhang, Wei; Li, Changqing

    2017-01-01

    Diffuse optical tomography (DOT) has attracted attention in the last two decades due to its intrinsic sensitivity in imaging tissue chromophores such as hemoglobin, water, and lipid. However, DOT has not yet been clinically accepted due to its low spatial resolution, caused by strong optical scattering in tissues. Structural guidance provided by an anatomical imaging modality enhances DOT imaging substantially. Here, we propose a computed tomography (CT) guided multispectral DOT imaging system for breast cancer imaging. To validate its feasibility, we have built a prototype DOT imaging system consisting of a laser at a wavelength of 650 nm and an electron multiplying charge coupled device (EMCCD) camera. We have validated the CT guided DOT reconstruction algorithms with numerical simulations and phantom experiments, in which different imaging setup parameters, such as the number of measurement projections and the width of the measurement patch, have been investigated. Our results indicate that an air-cooled EMCCD camera is good enough for transmission-mode DOT imaging. We have also found that measurements at six angular projections are sufficient for DOT to reconstruct optical targets with 2 and 4 times absorption contrast when CT guidance is applied. Finally, we describe our future research plan for integrating a multispectral DOT imaging system into a breast CT scanner.

  5. A Fisheries Application of a Dual-Frequency Identification Sonar Acoustic Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Moursund, Russell A.; Carlson, Thomas J.; Peters, Rock D.

    2003-06-01

    The uses of an acoustic camera in fish passage research at hydropower facilities are being explored by the U.S. Army Corps of Engineers. The Dual-Frequency Identification Sonar (DIDSON) is a high-resolution imaging sonar that obtains near video-quality images for the identification of objects underwater. Developed originally for the Navy by the University of Washington's Applied Physics Laboratory, it bridges the gap between existing fisheries assessment sonar and optical systems. Traditional fisheries assessment sonars detect targets at long ranges but cannot record the shape of targets. The images within 12 m of this acoustic camera are so clear that one can see fish undulating as they swim and can tell the head from the tail in otherwise zero-visibility water. In the 1.8 MHz high-frequency mode, this system is composed of 96 beams over a 29-degree field of view. This high resolution and a fast frame rate allow the acoustic camera to produce near video-quality images of objects through time. This technology redefines many of the traditional limitations of sonar for fisheries and aquatic ecology. Images can be taken of fish in confined spaces, close to structural or surface boundaries, and in the presence of entrained air. The targets themselves can be visualized in real time. The DIDSON can be used where conventional underwater cameras would be limited in sampling range to < 1 m by low light levels and high turbidity, and where traditional sonar would be limited by the confined sample volume. Results of recent testing at The Dalles Dam, on the lower Columbia River in Oregon, USA, are shown.

  6. Imaging-guided preclinical trials of vascular targeting in prostate cancer

    NASA Astrophysics Data System (ADS)

    Kalmuk, James

    Purpose: Prostate cancer is the most common non-cutaneous malignancy in American men and is characterized by dependence on androgens (Testosterone/Dihydrotestosterone) for growth and survival. Although reduction of serum testosterone levels by surgical or chemical castration transiently inhibits neoplastic growth, tumor adaptation to castrate levels of androgens results in the generation of castration-resistant prostate cancer (CRPC). Progression to CRPC following androgen deprivation therapy (ADT) has been associated with changes in vascular morphology and increased angiogenesis. Based on this knowledge, we hypothesized that targeting tumor vasculature in combination with ADT would result in enhanced therapeutic efficacy against prostate cancer. Methods: To test this hypothesis, we examined the therapeutic activity of a tumor-vascular disrupting agent (tumor-VDA), EPC2407 (Crolibulin(TM)), alone and in combination with ADT in a murine model of prostate cancer (Myc-CaP). A non-invasive multimodality imaging approach based on magnetic resonance imaging (MRI), bioluminescence imaging (BLI), and ultrasound (US) was utilized to characterize tumor response to therapy and to guide preclinical trial design. Imaging results were correlated with histopathologic (H&E) and immunohistochemical (CD31) assessment as well as tumor growth inhibition and survival analyses. Results: Our imaging techniques were able to capture an acute reduction (within 24 hours) in tumor perfusion following castration and VDA monotherapy. BLI revealed onset of recurrent disease 5-7 days post castration prior to visible tumor regrowth suggestive of vascular recovery. Administration of VDA beginning 1 week post castration for 3 weeks resulted in sustained vascular suppression, inhibition of tumor regrowth, and conferred a more pronounced survival benefit compared to either monotherapy. Conclusion: The high mortality rate associated with CRPC underscores the need for investigating novel treatment

  7. Projection of controlled repeatable real-time moving targets to test and evaluate motion imagery quality

    NASA Astrophysics Data System (ADS)

    Scopatz, Stephen D.; Mendez, Michael; Trent, Randall

    2015-05-01

    The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation will discuss several implementations of target projectors with moving targets or apparent moving targets creating motion to be captured by the camera under test. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable and programmable way; several short videos will be included in the presentation. Among the technical approaches will be targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, which is an impulse of broadband photons in fewer than 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide corresponding tests of MTF (resolution), SNR, and minimum detectable signal at velocity. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the measure of the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (measurement of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as to compare various systems by presenting exactly the same scenes to the cameras in a repeatable way.

  8. The Panoramic Camera (PanCam) Instrument for the ESA ExoMars Rover

    NASA Astrophysics Data System (ADS)

    Griffiths, A.; Coates, A.; Jaumann, R.; Michaelis, H.; Paar, G.; Barnes, D.; Josset, J.

    The recently approved ExoMars rover is the first element of the ESA Aurora programme and is slated to deliver the Pasteur exobiology payload to Mars by 2013. The 0.7 kg Panoramic Camera will provide multispectral stereo images with 65° field-of-view (1.1 mrad/pixel) and high resolution (85 µrad/pixel) monoscopic "zoom" images with 5° field-of-view. The stereo Wide Angle Cameras (WAC) are based on Beagle 2 Stereo Camera System heritage. The Panoramic Camera instrument is designed to fulfil the digital terrain mapping requirements of the mission as well as providing multispectral geological imaging, colour and stereo panoramic images, solar images for water vapour abundance and dust optical depth measurements, and to observe retrieved subsurface samples before ingestion into the rest of the Pasteur payload. Additionally the High Resolution Camera (HRC) can be used for high resolution imaging of interesting targets detected in the WAC panoramas and of inaccessible locations on crater or valley walls.
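
As a quick consistency check (my own arithmetic, not from the record), the stated fields of view and pixel scales imply detector widths of roughly a kilopixel for both cameras:

```python
import math

# Implied detector widths from the stated FOV and per-pixel angular scale:
# pixels = FOV (rad) / angular scale (rad/pixel)
wac_pixels = math.radians(65) / 1.1e-3   # wide-angle camera
hrc_pixels = math.radians(5) / 85e-6     # high-resolution camera
print(round(wac_pixels), round(hrc_pixels))  # -> 1031 1027
```

Both come out near 1024 pixels, consistent with a standard 1k × 1k sensor format.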

  9. Infrared detectors and test technology of cryogenic camera

    NASA Astrophysics Data System (ADS)

    Yang, Xiaole; Liu, Xingxin; Xing, Mailing; Ling, Long

    2016-10-01

    Cryogenic cameras, which are widely used in deep space detection, cool down the optical system and support structure by cryogenic refrigeration technology, thereby improving sensitivity. The characteristics and design points of the infrared detector are discussed in combination with the camera's characteristics. At the same time, cryogenic-background test systems for the chip and for the detector assembly are established. The chip test system is based on a variable-temperature multilayer Dewar, and the assembly test system is based on a target and background simulator in a thermal vacuum environment. The core of the tests is to establish a cryogenic background. Finally, the non-uniformity, dead-pixel ratio, and noise of the test results are given. The establishment of the test systems supports the design and calculation of infrared systems.

  10. Low power multi-camera system and algorithms for automated threat detection

    NASA Astrophysics Data System (ADS)

    Huber, David J.; Khosla, Deepak; Chen, Yang; Van Buer, Darrel J.; Martin, Kevin

    2013-05-01

    A key to any robust automated surveillance system is continuous, wide field-of-view sensor coverage and high-accuracy target detection algorithms. Newer systems typically employ an array of multiple fixed cameras that provide individual data streams, each of which is managed by its own processor. This array can continuously capture the entire field of view, but collecting all the data and running the back-end detection algorithms consume additional power and increase the size, weight, and power (SWaP) of the package. This is often unacceptable, as many potential surveillance applications have strict system SWaP requirements. This paper describes a wide field-of-view video system that employs multiple fixed cameras and exhibits low SWaP without compromising the target detection rate. We cycle through the sensors, fetch a fixed number of frames, and process them through a modified target detection algorithm. During this time, the other sensors remain powered down, which reduces the required hardware and power consumption of the system. We show that the resulting gaps in coverage and irregular frame rate do not affect the detection accuracy of the underlying algorithms. This reduces the power of an N-camera system by up to approximately N-fold compared to baseline normal operation. This work was applied to Phase 2 of the DARPA Cognitive Technology Threat Warning System (CT2WS) program and used during field testing.
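
The round-robin duty cycle described above can be sketched as follows. The `Camera` class and frame strings are illustrative stand-ins for real sensor control, not the CT2WS implementation:

```python
import itertools

class Camera:
    """Stub sensor; a real system would switch power rails and grab image frames."""
    def __init__(self, cam_id):
        self.cam_id = cam_id
        self.powered = False
    def power_on(self):
        self.powered = True
    def power_off(self):
        self.powered = False
    def grab(self):
        assert self.powered, "grabbed a frame from an unpowered sensor"
        return f"frame-from-{self.cam_id}"

def duty_cycle(cameras, frames_per_burst, bursts):
    """Round-robin: fetch a fixed burst of frames from one camera while all
    other sensors stay powered down, then move on to the next sensor.
    Average power is roughly 1/N of keeping all N cameras on."""
    captured = []
    cams = itertools.cycle(cameras)
    for _ in range(bursts):
        cam = next(cams)
        cam.power_on()
        captured.extend(cam.grab() for _ in range(frames_per_burst))
        cam.power_off()
    return captured

frames = duty_cycle([Camera(i) for i in range(4)], frames_per_burst=2, bursts=4)
print(len(frames))  # -> 8
```

Each burst covers one camera's field of view; the detection algorithm then runs on that burst before the next sensor is powered, which is the source of the coverage gaps and irregular frame rate the paper shows to be tolerable.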

  11. Compact CdZnTe-based gamma camera for prostate cancer imaging

    NASA Astrophysics Data System (ADS)

    Cui, Yonggang; Lall, Terry; Tsui, Benjamin; Yu, Jianhua; Mahler, George; Bolotnikov, Aleksey; Vaska, Paul; De Geronimo, Gianluigi; O'Connor, Paul; Meinken, George; Joyal, John; Barrett, John; Camarda, Giuseppe; Hossain, Anwar; Kim, Ki Hyun; Yang, Ge; Pomper, Marty; Cho, Steve; Weisman, Ken; Seo, Youngho; Babich, John; LaFrance, Norman; James, Ralph B.

    2011-06-01

    In this paper, we discuss the design of a compact gamma camera for high-resolution prostate cancer imaging using Cadmium Zinc Telluride (CdZnTe or CZT) radiation detectors. Prostate cancer is a common disease in men. Nowadays, a blood test measuring the level of prostate specific antigen (PSA) is widely used for screening for the disease in males over 50, followed by (ultrasound) imaging-guided biopsy. However, PSA tests have a high false-positive rate and ultrasound-guided biopsy has a high likelihood of missing small cancerous tissues. Commercial methods of nuclear medical imaging, e.g. PET and SPECT, can functionally image the organs, and potentially find cancer tissues at early stages, but their applications in diagnosing prostate cancer have been limited by the smallness of the prostate gland and the long working distance between the organ and the detectors comprising these imaging systems. CZT is a semiconductor material with wide band-gap and relatively high electron mobility, and thus can operate at room temperature without additional cooling. CZT detectors are photon-electron direct-conversion devices, thus offering high energy-resolution in detecting gamma rays, enabling energy-resolved imaging, and reducing the background of Compton-scattering events. In addition, CZT material has high stopping power for gamma rays; for medical imaging, a few-mm-thick CZT material provides adequate detection efficiency for many SPECT radiotracers. Because of these advantages, CZT detectors are becoming popular for several SPECT medical-imaging applications. Most recently, we designed a compact gamma camera using CZT detectors coupled to an application-specific integrated circuit (ASIC). This camera functions as a trans-rectal probe to image the prostate gland from a distance of only 1-5 cm, thus offering higher detection efficiency and higher spatial resolution. Hence, it potentially can detect prostate cancers at their early stages. The performance tests of this camera

  12. 4D Animation Reconstruction from Multi-Camera Coordinates Transformation

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Chou, C. M.

    2016-06-01

    Reservoir dredging issues are important for extending the life of a reservoir. The most effective and cost-reducing way is to construct a tunnel to desilt the bottom sediment. The conventional technique is to construct a cofferdam to separate the water, construct the intake of the tunnel inside, and remove the cofferdam afterwards. In Taiwan, the ZengWen reservoir dredging project will install an Elephant-trunk Steel Pipe (ETSP) in the water to connect the desilting tunnel without building a cofferdam. Since the installation is critical to the whole project, a 1:20 model was built to simulate the installation steps in a towing tank, i.e. launching, dragging, water injection, and sinking. To increase construction safety, a photogrammetric technique is adopted to record images during the simulation, compute the transformation parameters for dynamic analysis, and reconstruct the 4D animations. In this study, several Australis coded targets are fixed on the surface of the ETSP for automatic recognition and measurement. The cameras' orientations are computed by space resection, where the 3D coordinates of the coded targets are measured. Two approaches for motion parameter computation are proposed, i.e. performing a 3D conformal transformation from the coordinates of the cameras, and relative orientation computation from the orientation of a single camera. Experimental results show the 3D conformal transformation can achieve sub-mm simulation results, and the relative orientation computation shows flexibility for dynamic motion analysis, which is easier and more efficient.
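
The 3D conformal (7-parameter similarity) transformation step can be sketched with the standard SVD-based absolute-orientation solution; this is a generic sketch of the technique, not the authors' code:

```python
import numpy as np

def conformal_3d(src, dst):
    """Recover scale s, rotation R, translation t such that dst ≈ s * R @ src + t
    from matched 3D target coordinates (least-squares, SVD/Kabsch method)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    A, B = src - cs, dst - cd                    # centre both point sets
    U, S, Vt = np.linalg.svd(A.T @ B)            # cross-covariance SVD
    d = np.array([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ np.diag(d) @ U.T                  # proper rotation (det = +1)
    s = (S * d).sum() / (A ** 2).sum()           # least-squares scale
    t = cd - s * R @ cs                          # translation of the centroid
    return s, R, t
```

Given the coded-target coordinates measured at two epochs of the simulation, the recovered (s, R, t) between epochs describes the model's rigid motion, from which velocities and the 4D animation frames can be derived.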

  13. Multiple-camera tracking: UK government requirements

    NASA Astrophysics Data System (ADS)

    Hosmer, Paul

    2007-10-01

    The Imagery Library for Intelligent Detection Systems (i-LIDS) is the UK government's new standard for Video Based Detection Systems (VBDS). The standard was launched in November 2006 and evaluations against it began in July 2007. With the first four i-LIDS scenarios completed, the Home Office Scientific Development Branch (HOSDB) are looking toward the future of intelligent vision in the security surveillance market by adding a fifth scenario to the standard. The fifth i-LIDS scenario will concentrate on the development, testing and evaluation of systems for the tracking of people across multiple cameras. HOSDB and the Centre for the Protection of National Infrastructure (CPNI) identified a requirement to track targets across a network of CCTV cameras using both live and post-event imagery. The Detection and Vision Systems group at HOSDB were asked to determine the current state of the market and develop an in-depth Operational Requirement (OR) based on government end-user requirements. Using this OR the i-LIDS team will develop a full i-LIDS scenario to aid the machine vision community in its development of multi-camera tracking systems. By defining a requirement for multi-camera tracking and building this into the i-LIDS standard, the UK government will provide a widely available tool that developers can use to help them turn theory and conceptual demonstrators into front line application. This paper will briefly describe the i-LIDS project and then detail the work conducted in building the new tracking aspect of the standard.

  14. Volunteers Help Decide Where to Point Mars Camera

    NASA Image and Video Library

    2015-07-22

    This series of images from NASA's Mars Reconnaissance Orbiter successively zooms into "spider" features -- or channels carved in the surface in radial patterns -- in the south polar region of Mars. In a new citizen-science project, volunteers will identify features like these using wide-scale images from the orbiter. Their input will then help mission planners decide where to point the orbiter's high-resolution camera for more detailed views of interesting terrain. Volunteers will start with images from the orbiter's Context Camera (CTX), which provides wide views of the Red Planet. The first two images in this series are from CTX; the top right image zooms into a portion of the image at left. The top right image highlights the geological spider features, which are carved into the terrain in the Martian spring when dry ice turns to gas. By identifying unusual features like these, volunteers will help the mission team choose targets for the orbiter's High Resolution Imaging Science Experiment (HiRISE) camera, which can reveal more detail than any other camera ever put into orbit around Mars. The final image in this series (bottom right) shows a HiRISE close-up of one of the spider features. http://photojournal.jpl.nasa.gov/catalog/PIA19823

  15. 24/7 security system: 60-FPS color EMCCD camera with integral human recognition

    NASA Astrophysics Data System (ADS)

    Vogelsong, T. L.; Boult, T. E.; Gardner, D. W.; Woodworth, R.; Johnson, R. C.; Heflin, B.

    2007-04-01

    An advanced surveillance/security system is being developed for unattended 24/7 image acquisition and automated detection, discrimination, and tracking of humans and vehicles. The low-light video camera incorporates an electron multiplying CCD sensor with a programmable on-chip gain of up to 1000:1, providing effective noise levels of less than 1 electron. The EMCCD camera operates in full color mode under sunlit and moonlit conditions, and monochrome under quarter-moonlight to overcast starlight illumination. Sixty-frame-per-second operation and progressive scanning minimize motion artifacts. The acquired image sequences are processed with FPGA-compatible real-time algorithms to detect, localize, and track targets and reject non-targets due to clutter under a broad range of illumination conditions and viewing angles. The object detectors that are used are trained from actual image data. Detectors have been developed and demonstrated for faces, upright humans, crawling humans, large animals, cars and trucks. Detection and tracking of targets too small for template-based detection is achieved. For face and vehicle targets the results of the detection are passed to secondary processing to extract recognition templates, which are then compared with a database for identification. When combined with pan-tilt-zoom (PTZ) optics, the resulting system provides a reliable wide-area 24/7 surveillance system that avoids the high life-cycle cost of infrared cameras and image intensifiers.

  16. Multimodal imaging guided preclinical trials of vascular targeting in prostate cancer

    PubMed Central

    Kalmuk, James; Folaron, Margaret; Buchinger, Julian; Pili, Roberto; Seshadri, Mukund

    2015-01-01

    The high mortality rate associated with castration-resistant prostate cancer (CRPC) underscores the need for improving therapeutic options for this patient population. The purpose of this study was to examine the potential of vascular targeting in prostate cancer. Experimental studies were carried out in subcutaneous and orthotopic Myc-CaP prostate tumors implanted into male FVB mice to examine the efficacy of a novel microtubule targeted vascular disrupting agent (VDA), EPC2407 (Crolibulin™). A non-invasive multimodality imaging approach based on magnetic resonance imaging (MRI), bioluminescence imaging (BLI), and ultrasound (US) was utilized to guide preclinical trial design and monitor tumor response to therapy. Imaging results were correlated with histopathologic assessment, tumor growth and survival analysis. Contrast-enhanced MRI revealed potent antivascular activity of EPC2407 against subcutaneous and orthotopic Myc-CaP tumors. Longitudinal BLI of Myc-CaP tumors expressing luciferase under the androgen response element (Myc-CaP/ARE-luc) revealed changes in AR signaling and reduction in intratumoral delivery of luciferin substrate following castration suggestive of reduced blood flow. This reduction in blood flow was validated by US and MRI. Combination treatment resulted in sustained vascular suppression, inhibition of tumor regrowth and conferred a survival benefit in both models. These results demonstrate the therapeutic potential of vascular targeting in combination with androgen deprivation against prostate cancer. PMID:26203773

  17. Dual-stimuli responsive and reversibly activatable theranostic nanoprobe for precision tumor-targeting and fluorescence-guided photothermal therapy

    NASA Astrophysics Data System (ADS)

    Zhao, Xu; Yang, Cheng-Xiong; Chen, Li-Gong; Yan, Xiu-Ping

    2017-05-01

    The integrated functions of diagnostics and therapeutics give theranostics great potential for personalized medicine. Stimulus-responsive therapy allows spatial control of the therapeutic effect only in the site of interest, and offers promising opportunities for imaging-guided precision therapy. However, the imaging strategies in previous stimulus-responsive therapies are 'always on' or irreversibly 'turn on' modalities, resulting in poor signal-to-noise ratios or even 'false positive' results. Here we show the design of a dual-stimuli-responsive and reversibly activatable nanoprobe for precision tumour targeting and fluorescence-guided photothermal therapy. We fabricate the nanoprobe from asymmetric cyanine and glycosyl-functionalized gold nanorods (AuNRs) with a matrix metalloproteinases (MMPs)-specific peptide as a linker to achieve MMPs/pH synergistic and pH-reversible activation. The unique activation and glycosyl targetability make the nanoprobe bright only in tumour sites with negligible background, while the AuNRs and asymmetric cyanine give a synergistic photothermal effect. This work paves the way to designing efficient nanoprobes for precision theranostics.

  18. Diagnostic Accuracy of Multiparametric Magnetic Resonance Imaging and Fusion Guided Targeted Biopsy Evaluated by Transperineal Template Saturation Prostate Biopsy for the Detection and Characterization of Prostate Cancer.

    PubMed

    Mortezavi, Ashkan; Märzendorfer, Olivia; Donati, Olivio F; Rizzi, Gianluca; Rupp, Niels J; Wettstein, Marian S; Gross, Oliver; Sulser, Tullio; Hermanns, Thomas; Eberli, Daniel

    2018-02-21

    We evaluated the diagnostic accuracy of multiparametric magnetic resonance imaging and multiparametric magnetic resonance imaging/transrectal ultrasound fusion guided targeted biopsy against that of transperineal template saturation prostate biopsy to detect prostate cancer. We retrospectively analyzed the records of 415 men who consecutively presented for prostate biopsy between November 2014 and September 2016 at our tertiary care center. Multiparametric magnetic resonance imaging was performed using a 3 Tesla device without an endorectal coil, followed by transperineal template saturation prostate biopsy with the BiopSee® fusion system. Additional fusion guided targeted biopsy was done in men with a suspicious lesion on multiparametric magnetic resonance imaging, defined as Likert score 3 to 5. Any Gleason pattern 4 or greater was defined as clinically significant prostate cancer. The detection rates of multiparametric magnetic resonance imaging and fusion guided targeted biopsy were compared with the detection rate of transperineal template saturation prostate biopsy using the McNemar test. We obtained a median of 40 (range 30 to 55) and 3 (range 2 to 4) transperineal template saturation prostate biopsy and fusion guided targeted biopsy cores, respectively. Of the 124 patients (29.9%) without a suspicious lesion on multiparametric magnetic resonance imaging 32 (25.8%) were found to have clinically significant prostate cancer on transperineal template saturation prostate biopsy. Of the 291 patients (70.1%) with a Likert score of 3 to 5 clinically significant prostate cancer was detected in 129 (44.3%) by multiparametric magnetic resonance imaging fusion guided targeted biopsy, in 176 (60.5%) by transperineal template saturation prostate biopsy and in 187 (64.3%) by the combined approach. Overall 58 cases (19.9%) of clinically significant prostate cancer would have been missed if fusion guided targeted biopsy had been performed exclusively. The sensitivity of

  19. Design and realization of an AEC&AGC system for the CCD aerial camera

    NASA Astrophysics Data System (ADS)

    Liu, Hai ying; Feng, Bing; Wang, Peng; Li, Yan; Wei, Hao yun

    2015-08-01

    An AEC and AGC (Automatic Exposure Control and Automatic Gain Control) system was designed for a CCD aerial camera with a fixed aperture and an electronic shutter. A standard AEC and AGC algorithm is not suitable for the aerial camera, since the camera always takes high-resolution photographs while moving at high speed. The AEC and AGC system adjusts the electronic shutter and camera gain automatically according to the target brightness and the moving speed of the aircraft. An automatic Gamma correction is applied before the image is output, so that the image is easier for human eyes to view and analyze. The AEC and AGC system avoids underexposure, overexposure, and image blurring caused by fast movement or environmental vibration. A series of tests proved that the system meets the requirements of the camera system, with fast adjustment speed, high adaptability, and high reliability in severe and complex environments.
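
A minimal sketch of one style of joint shutter/gain control this record describes, assuming a simple proportional law (the control law, target level, and limits below are illustrative, not from the paper): exposure error is absorbed first by the electronic shutter up to a motion-blur cap, and any remainder by analog gain.

```python
def aec_agc_step(mean_brightness, shutter_us, gain, target=128.0,
                 shutter_limits=(10.0, 2000.0), gain_limits=(1.0, 16.0)):
    """One control step: drive the mean image brightness toward `target`.
    The shutter is capped (here at 2 ms) to limit motion blur on a fast-moving
    platform; whatever exposure error remains is taken up by gain."""
    error = target / max(mean_brightness, 1e-6)          # desired exposure ratio
    new_shutter = min(max(shutter_us * error, shutter_limits[0]), shutter_limits[1])
    residual = error * shutter_us / new_shutter          # ratio the shutter missed
    new_gain = min(max(gain * residual, gain_limits[0]), gain_limits[1])
    return new_shutter, new_gain
```

For example, an image at half the target brightness doubles the shutter; at a quarter of the target, the shutter saturates at its cap and the gain doubles to make up the rest.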

  20. Target-Oriented High-Resolution SAR Image Formation via Semantic Information Guided Regularizations

    NASA Astrophysics Data System (ADS)

    Hou, Biao; Wen, Zaidao; Jiao, Licheng; Wu, Qian

    2018-04-01

    The sparsity-regularized synthetic aperture radar (SAR) imaging framework has shown remarkable performance in generating feature-enhanced high-resolution images, in which a sparsity-inducing regularizer is involved by exploiting the sparsity priors of some visual features in the underlying image. However, since simple priors on low-level features are insufficient to describe the different semantic contents of the image, this type of regularizer is incapable of distinguishing between the target of interest and unconcerned background clutter. As a consequence, features belonging to the target and to the clutter are affected simultaneously in the generated image, without regard to their underlying semantic labels. To address this problem, we propose a novel semantic-information-guided framework for target-oriented SAR image formation, which aims at enhancing the target scatterers of interest while suppressing the background clutter. First, we develop a new semantics-specific regularizer for image formation by exploiting the statistical properties of different semantic categories in a target-scene SAR image. To infer the semantic label for each pixel in an unsupervised way, we moreover introduce a novel high-level prior-driven regularizer and some semantic causal rules derived from prior knowledge. Finally, our regularized framework for image formation is derived as a simple iteratively reweighted $\ell_1$ minimization problem which can be conveniently solved by many off-the-shelf solvers. Experimental results demonstrate the effectiveness and superiority of our framework for SAR image formation in terms of target enhancement and clutter suppression, compared with the state of the art. Additionally, the proposed framework opens a new direction of devoting machine learning strategies to image formation, which can benefit subsequent decision-making tasks.
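
The generic iteratively reweighted $\ell_1$ scheme the abstract reduces to can be sketched as below. This is the textbook form (reweighted LASSO solved by ISTA), not the paper's semantics-guided regularizer; the weights here come only from coefficient magnitude.

```python
import numpy as np

def reweighted_l1(A, y, lam=0.01, outer=6, inner=300, eps=1e-3):
    """Iteratively reweighted l1 reconstruction: each outer pass solves a
    weighted LASSO (via ISTA) with weights 1/(|x| + eps) from the previous
    estimate, so strong scatterers are penalized less and weak clutter more."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    for _ in range(outer):
        w = 1.0 / (np.abs(x) + eps)            # reweighting step
        for _ in range(inner):
            z = x - A.T @ (A @ x - y) / L      # gradient step on the data term
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # soft-threshold
    return x
```

In the paper's framework the per-pixel weights would additionally encode the inferred semantic label (target vs. clutter), which is what steers enhancement toward the target scatterers.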

  1. Small nucleolar RNAs that guide modification in trypanosomatids: repertoire, targets, genome organisation, and unique functions.

    PubMed

    Uliel, Shai; Liang, Xue-hai; Unger, Ron; Michaeli, Shulamit

    2004-03-29

    Small nucleolar RNAs constitute a family of newly discovered non-coding small RNAs, most of which function in guiding RNA modifications. Two prevalent types of modifications are 2'-O-methylation and pseudouridylation. The modification is directed by the formation of a canonical small nucleolar RNA-target duplex. Initially, RNA-guided modification was shown to take place on rRNA, but recent studies suggest that small nuclear RNA, mRNA, tRNA, and the trypanosome spliced leader RNA also undergo guided modifications. Trypanosomes contain more modifications and potentially more small nucleolar RNAs than yeast, and the increased number of modifications may help to preserve ribosome function under adverse environmental conditions during the cycling between the insect and mammalian host. The genome organisation in clusters carrying the two types of small nucleolar RNAs, C/D and H/ACA-like RNAs, resembles that in plants. However, the trypanosomatid H/ACA RNAs are similar to those found in Archaea and are composed of a single hairpin that may represent the primordial H/ACA RNA. In this review we summarise this new field of trypanosome small nucleolar RNAs, emphasising the open questions regarding the number of small nucleolar RNAs, the repertoire, genome organisation, and the unique function of guided modifications in these protozoan parasites.

  2. Constrained space camera assembly

    DOEpatents

    Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.

    1999-05-11

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.

  3. Constrained space camera assembly

    DOEpatents

    Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.

    1999-01-01

    A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.

  4. Research into a Single-aperture Light Field Camera System to Obtain Passive Ground-based 3D Imagery of LEO Objects

    NASA Astrophysics Data System (ADS)

    Bechis, K.; Pitruzzello, A.

    2014-09-01

    This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. A key requirement for light field camera
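
    Step (1), refocusing, can be illustrated with a minimal shift-and-add sketch. This is the textbook synthetic-aperture refocusing idea, not the Raytrix/Northrop Grumman processing pipeline; the array geometry and the `alpha` parameter below are illustrative.

    ```python
    import numpy as np

    def refocus(subviews, offsets, alpha):
        # Shift each sub-aperture view in proportion to its (u, v) lenslet
        # offset, then average; sweeping alpha moves the synthetic focal plane.
        acc = np.zeros_like(subviews[0], dtype=float)
        for view, (du, dv) in zip(subviews, offsets):
            acc += np.roll(view, (round(alpha * du), round(alpha * dv)), axis=(0, 1))
        return acc / len(subviews)

    # toy example: a point source with 1 px of disparity per unit of aperture offset
    base = np.zeros((16, 16))
    base[8, 8] = 1.0
    offsets = [(-1, 0), (0, 0), (1, 0)]
    subviews = [np.roll(base, (du, 0), axis=(0, 1)) for du, dv in offsets]
    sharp = refocus(subviews, offsets, alpha=-1.0)   # realigned: sharp point
    blurred = refocus(subviews, offsets, alpha=0.0)  # misfocused: energy spread
    ```

    The alpha that maximises sharpness at a pixel encodes its disparity, which is why range finding (step 3) can follow from a refocusing sweep.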

  5. Structural basis for microRNA targeting

    DOE PAGES

    Schirle, Nicole T.; Sheu-Gruttadauria, Jessica; MacRae, Ian J.

    2014-10-31

    MicroRNAs (miRNAs) control expression of thousands of genes in plants and animals. miRNAs function by guiding Argonaute proteins to complementary sites in messenger RNAs (mRNAs) targeted for repression. In this paper, we determined crystal structures of human Argonaute-2 (Ago2) bound to a defined guide RNA with and without target RNAs representing miRNA recognition sites. These structures suggest a stepwise mechanism, in which Ago2 primarily exposes guide nucleotides (nt) 2 to 5 for initial target pairing. Pairing to nt 2 to 5 promotes conformational changes that expose nt 2 to 8 and 13 to 16 for further target recognition. Interactions with the guide-target minor groove allow Ago2 to interrogate target RNAs in a sequence-independent manner, whereas an adenosine binding-pocket opposite guide nt 1 further facilitates target recognition. Spurious slicing of miRNA targets is avoided through an inhibitory coordination of one catalytic magnesium ion. Finally, these results explain the conserved nucleotide-pairing patterns in animal miRNA target sites first observed over two decades ago.

  6. Microchannel plate streak camera

    DOEpatents

    Wang, Ching L.

    1989-01-01

    An improved streak camera in which a microchannel plate electron multiplier is used in place of or in combination with the photocathode used in prior streak cameras. The improved streak camera is far more sensitive to photons (UV to gamma-rays) than the conventional x-ray streak camera which uses a photocathode. The improved streak camera offers gamma-ray detection with high temporal resolution. It also offers low-energy x-ray detection without attenuation inside the cathode. Using the microchannel plate in the improved camera has resulted in a time resolution of about 150 ps, and has provided a sensitivity sufficient for 1000 keV x-rays.

  7. Digital Pinhole Camera

    ERIC Educational Resources Information Center

    Lancor, Rachael; Lancor, Brian

    2014-01-01

    In this article we describe how the classic pinhole camera demonstration can be adapted for use with digital cameras. Students can easily explore the effects of the size of the pinhole and its distance from the sensor on exposure time, magnification, and image quality. Instructions for constructing a digital pinhole camera and our method for…
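
    For instructors extending the demonstration, the classic Rayleigh rule of thumb for the sharpest pinhole, d ≈ 1.9·sqrt(f·λ), is easy to compute. The 50 mm pinhole-to-sensor distance and 550 nm wavelength below are example values, not figures from the article.

    ```python
    import math

    def optimal_pinhole_diameter(focal_m, wavelength_m=550e-9):
        # Rayleigh's rule of thumb: diffraction blur and geometric blur are
        # roughly balanced when d ~= 1.9 * sqrt(f * lambda).
        return 1.9 * math.sqrt(focal_m * wavelength_m)

    d = optimal_pinhole_diameter(0.050)   # 50 mm pinhole-to-sensor distance
    f_number = 0.050 / d                  # very large, hence the long exposures
    ```

    For these numbers the optimal diameter comes out near a third of a millimetre at roughly f/160, which is why pinhole exposures on a digital sensor run into seconds.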

  8. Vision-guided gripping of a cylinder

    NASA Technical Reports Server (NTRS)

    Nicewarner, Keith E.; Kelley, Robert B.

    1991-01-01

    The motivation for vision-guided servoing is taken from tasks in automated or telerobotic space assembly and construction. Vision-guided servoing requires the ability to perform rapid pose estimates and provide predictive feature tracking. Monocular information from a gripper-mounted camera is used to servo the gripper to grasp a cylinder. The procedure is divided into recognition and servo phases. The recognition stage verifies the presence of a cylinder in the camera field of view. Then an initial pose estimate is computed and uncluttered scan regions are selected. The servo phase processes only the selected scan regions of the image. Given the knowledge, from the recognition phase, that there is a cylinder in the image and knowing the radius of the cylinder, 4 of the 6 pose parameters can be estimated with minimal computation. The relative motion of the cylinder is obtained by using the current pose and prior pose estimates. The motion information is then used to generate a predictive feature-based trajectory for the path of the gripper.
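
    The claim that 4 of the 6 pose parameters follow with minimal computation rests on the pinhole relation between the cylinder's known radius and its apparent width in the image. A minimal sketch of the range part only (the function name and numbers are illustrative, not taken from the paper):

    ```python
    def cylinder_range(focal_px, radius_m, width_px):
        # Pinhole model: apparent width ~= focal_px * 2R / Z,
        # so range Z ~= focal_px * 2R / width.
        return focal_px * 2.0 * radius_m / width_px

    # example: 800 px focal length, 5 cm radius, 40 px apparent width
    z = cylinder_range(focal_px=800.0, radius_m=0.05, width_px=40.0)  # 2.0 m
    ```

    The image position of the cylinder axis then gives the two lateral translation parameters, and its orientation in the image gives one rotation, leaving only the remaining rotations ambiguous from a single view.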

  9. WE-FG-BRA-06: Systematic Study of Target Localization for Bioluminescence Tomography Guided Radiation Therapy for Preclinical Research

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, B; Reyes, J; Wong, J

    Purpose: To overcome the limitation of CT/CBCT in guiding radiation for soft tissue targets, we developed a bioluminescence tomography (BLT) system for preclinical radiation research. We systematically assessed the system performance in target localization and the ability of resolving two sources in simulation, phantom, and in vivo environments. Methods: Multispectral images acquired in single projection were used for the BLT reconstruction. Simulation studies were conducted for single spherical source radius from 0.5 to 3 mm at depth of 3 to 12 mm. The same configuration was also applied for the double sources simulation with source separations varying from 3 to 9 mm. Experiments were performed in a standalone BLT/CBCT system. Two sources with 3 and 4.7 mm separations placed inside a tissue-mimicking phantom were chosen as the test cases. Live mice implanted with a single source at 6 and 9 mm depth, 2 sources with 3 and 5 mm separation at a depth of 5 mm, or 3 sources in the abdomen were also used to illustrate the in vivo localization capability of the BLT system. Results: Simulation and phantom results illustrate that our BLT can provide 3D source localization with approximately 1 mm accuracy. The in vivo results are encouraging that 1 and 1.7 mm accuracy can be attained for the single source case at 6 and 9 mm depth, respectively. For the 2 sources study, both sources can be distinguished at 3 and 5 mm separations at approximately 1 mm accuracy using 3D BLT but not 2D bioluminescence image. Conclusion: Our BLT/CBCT system can be potentially applied to localize and resolve targets at a wide range of target sizes, depths and separations. The information provided in this study can be instructive to devise margins for BLT-guided irradiation and suggests that the BLT could guide radiation for multiple targets, such as metastasis. Drs. John W. Wong and Iulian I. Iordachita receive royalty payment from a licensing agreement between Xstrahl Ltd and Johns Hopkins

  10. On the Complexity of Digital Video Cameras in/as Research: Perspectives and Agencements

    ERIC Educational Resources Information Center

    Bangou, Francis

    2014-01-01

    The goal of this article is to consider the potential for digital video cameras to produce as part of a research agencement. Our reflection will be guided by the current literature on the use of video recordings in research, as well as by the rhizoanalysis of two vignettes. The first of these vignettes is associated with a short video clip shot by…

  11. Investigation of Parallax Issues for Multi-Lens Multispectral Camera Band Co-Registration

    NASA Astrophysics Data System (ADS)

    Jhan, J. P.; Rau, J. Y.; Haala, N.; Cramer, M.

    2017-08-01

    Multi-lens multispectral cameras (MSCs), such as the Micasense Rededge and Parrot Sequoia, record multispectral information through separate lenses. Their light weight and small size make them well suited for mounting on an Unmanned Aerial System (UAS) to collect high-spatial-resolution images for vegetation investigation. However, because the multi-sensor geometry of the multi-lens structure induces significant band misregistration in the original images, band co-registration is necessary to obtain accurate spectral information. A robust and adaptive band-to-band image transform (RABBIT) is proposed to perform band co-registration for multi-lens MSCs. The first step is to obtain the camera rig information from camera system calibration and to use the calibrated results for image transformation and lens distortion correction. Since calibration uncertainty leads to differing amounts of systematic error, the final step optimizes the results to achieve better co-registration accuracy. Because parallax can cause significant band misregistration when images are acquired close to the targets, four datasets acquired with the Rededge and Sequoia, including aerial and close-range imagery, were used to evaluate the performance of RABBIT. The aerial-image results show that RABBIT achieves sub-pixel accuracy, suitable for the band co-registration of any multi-lens MSC. The close-range results show the same performance when band co-registration focuses on a specific target for 3D modelling, or when the target is equidistant from the camera.
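
    As a simplified stand-in for the rigid part of band-to-band co-registration (RABBIT itself also corrects lens distortion and perspective, which this sketch does not), a pure-translation misregistration between two bands can be estimated by phase correlation:

    ```python
    import numpy as np

    def phase_correlation(a, b):
        # Cross-power spectrum of the two bands; its inverse FFT peaks at the
        # translation. Returns the (row, col) shift that np.roll should apply
        # to b to align it with a.
        F = np.fft.fft2(a) * np.conj(np.fft.fft2(b))
        r = np.fft.ifft2(F / (np.abs(F) + 1e-12))
        peak = np.unravel_index(np.argmax(np.abs(r)), r.shape)
        # wrap peak coordinates into signed shifts
        return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, r.shape))

    rng = np.random.default_rng(1)
    band_a = rng.random((32, 32))
    band_b = np.roll(band_a, (2, 3), axis=(0, 1))   # band offset by (2, 3) px
    shift = phase_correlation(band_a, band_b)        # -> (-2, -3)
    aligned = np.roll(band_b, shift, axis=(0, 1))
    ```

    Sub-pixel accuracy, as reported for RABBIT, requires interpolating the correlation peak rather than taking its integer location as done here.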

  12. The AOTF-based NO2 camera

    NASA Astrophysics Data System (ADS)

    Dekemper, Emmanuel; Vanhamel, Jurgen; Van Opstal, Bert; Fussen, Didier

    2016-12-01

    The abundance of NO2 in the boundary layer relates to air quality and pollution source monitoring. Observing the spatiotemporal distribution of NO2 above well-delimited (flue gas stacks, volcanoes, ships) or more extended sources (cities) allows for applications such as monitoring emission fluxes or studying the plume dynamic chemistry and its transport. So far, most attempts to map the NO2 field from the ground have been made with visible-light scanning grating spectrometers. Benefiting from a high retrieval accuracy, they only achieve a relatively low spatiotemporal resolution that hampers the detection of dynamic features. We present a new type of passive remote sensing instrument aiming at the measurement of the 2-D distributions of NO2 slant column densities (SCDs) with a high spatiotemporal resolution. The measurement principle has strong similarities with the popular filter-based SO2 camera as it relies on spectral images taken at wavelengths where the molecule absorption cross section is different. Contrary to the SO2 camera, the spectral selection is performed by an acousto-optical tunable filter (AOTF) capable of resolving the target molecule's spectral features. The NO2 camera capabilities are demonstrated by imaging the NO2 abundance in the plume of a coal-fired power plant. During this experiment, the 2-D distribution of the NO2 SCD was retrieved with a temporal resolution of 3 min and a spatial sampling of 50 cm (over a 250 × 250 m2 area). The detection limit was close to 5 × 1016 molecules cm-2, with a maximum detected SCD of 4 × 1017 molecules cm-2. Illustrating the added value of the NO2 camera measurements, the data reveal the dynamics of the NO to NO2 conversion in the early plume with an unprecedented resolution: from its release in the air, and for 100 m upwards, the observed NO2 plume concentration increased at a rate of 0.75-1.25 g s-1. In joint campaigns with SO2 cameras, the NO2 camera could also help in removing the bias introduced by the

  13. A novel SPECT camera for molecular imaging of the prostate

    NASA Astrophysics Data System (ADS)

    Cebula, Alan; Gilland, David; Su, Li-Ming; Wagenaar, Douglas; Bahadori, Amir

    2011-10-01

    The objective of this work is to develop an improved SPECT camera for dedicated prostate imaging. Complementing the recent advancements in agents for molecular prostate imaging, this device has the potential to assist in distinguishing benign from aggressive cancers, to improve site-specific localization of cancer, to improve accuracy of needle-guided prostate biopsy of cancer sites, and to aid in focal therapy procedures such as cryotherapy and radiation. Theoretical calculations show that the spatial resolution/detection sensitivity of the proposed SPECT camera can rival or exceed 3D PET and further signal-to-noise advantage is attained with the better energy resolution of the CZT modules. Based on photon transport simulation studies, the system has a reconstructed spatial resolution of 4.8 mm with a sensitivity of 0.0001. Reconstruction of a simulated prostate distribution demonstrates the focal imaging capability of the system.

  14. Inexpensive camera systems for detecting martens, fishers, and other animals: guidelines for use and standardization.

    Treesearch

    Lawrence L.C. Jones; Martin G. Raphael

    1993-01-01

    Inexpensive camera systems have been successfully used to detect the occurrence of martens, fishers, and other wildlife species. The use of cameras is becoming widespread, and we give suggestions for standardizing techniques so that comparisons of data can occur across the geographic range of the target species. Details are given on equipment needs, setting up the...

  15. High dynamic range image acquisition based on multiplex cameras

    NASA Astrophysics Data System (ADS)

    Zeng, Hairui; Sun, Huayan; Zhang, Tinghua

    2018-03-01

    High-dynamic-range imaging is an important technology for photoelectric information acquisition, providing a higher dynamic range and more image detail, and better reflecting the real environment's light and color information. Currently, methods of high-dynamic-range image synthesis based on differently exposed image sequences cannot adapt to dynamic scenes: they fail to overcome the effects of moving targets, resulting in ghosting artifacts. Therefore, a new high-dynamic-range image acquisition method based on a multiplex camera system was proposed. Firstly, differently exposed image sequences were captured with the camera array, and a derivative optical flow method based on color gradients was used to estimate the deviation between images and align them. Then, a high-dynamic-range image fusion weighting function was established by combining the inverse camera response function with the deviation between images, and applied to generate a high-dynamic-range image. The experiments show that the proposed method can effectively obtain high-dynamic-range images in dynamic scenes and achieves good results.
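
    The fusion step can be illustrated with a minimal linear-response sketch. This is a generic weighted merge in the radiance domain, assuming pixels have already been linearised by the inverse camera response function and aligned; the triangle weight is one common choice, not the paper's exact weighting function:

    ```python
    import numpy as np

    def fuse_hdr(images, exposure_times):
        # Weighted average of per-frame radiance estimates z / t, with a
        # triangle weight that trusts mid-range pixels most and down-weights
        # under- and over-exposed ones.
        num = np.zeros(np.shape(images[0]), dtype=float)
        den = np.zeros(np.shape(images[0]), dtype=float)
        for img, t in zip(images, exposure_times):
            z = np.asarray(img, dtype=float) / 255.0   # normalised pixel value
            w = 1.0 - np.abs(2.0 * z - 1.0)            # triangle weighting
            num += w * z / t
            den += w
        return num / np.maximum(den, 1e-8)

    # two aligned frames of a uniform patch whose true relative radiance is 0.4
    long_exp = np.full((2, 2), 102.0)    # 0.4 * 1.00 * 255
    short_exp = np.full((2, 2), 25.5)    # 0.4 * 0.25 * 255
    radiance = fuse_hdr([long_exp, short_exp], [1.0, 0.25])   # -> 0.4 everywhere
    ```

    The deviation-aware weighting described in the abstract would additionally down-weight pixels where the optical-flow alignment disagrees between frames, which is what suppresses ghosting.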

  16. Real-time observation of DNA target interrogation and product release by the RNA-guided endonuclease CRISPR Cpf1 (Cas12a).

    PubMed

    Singh, Digvijay; Mallon, John; Poddar, Anustup; Wang, Yanbo; Tippana, Ramreddy; Yang, Olivia; Bailey, Scott; Ha, Taekjip

    2018-05-22

    CRISPR-Cas9, which imparts adaptive immunity against foreign genomic invaders in certain prokaryotes, has been repurposed for genome-engineering applications. More recently, another RNA-guided CRISPR endonuclease called Cpf1 (also known as Cas12a) was identified and is also being repurposed. Little is known about the kinetics and mechanism of Cpf1 DNA interaction and how sequence mismatches between the DNA target and guide-RNA influence this interaction. We used single-molecule fluorescence analysis and biochemical assays to characterize DNA interrogation, cleavage, and product release by three Cpf1 orthologs. Our Cpf1 data are consistent with the DNA interrogation mechanism proposed for Cas9. They both bind any DNA in search of protospacer-adjacent motif (PAM) sequences, verify the target sequence directionally from the PAM-proximal end, and rapidly reject any targets that lack a PAM or that are poorly matched with the guide-RNA. Unlike Cas9, which requires 9 bp for stable binding and ∼16 bp for cleavage, Cpf1 requires an ∼17-bp sequence match for both stable binding and cleavage. Unlike Cas9, which does not release the DNA cleavage products, Cpf1 rapidly releases the PAM-distal cleavage product, but not the PAM-proximal product. Solution pH, reducing conditions, and 5' guanine in guide-RNA differentially affected different Cpf1 orthologs. Our findings have important implications on Cpf1-based genome engineering and manipulation applications.
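
    The PAM-first search order described above can be caricatured in a few lines. This toy scanner uses a simplified 'TTT' PAM (Cpf1 PAMs are T-rich, often written TTTV), perfect matching, and the ~17-nt requirement quoted in the abstract; real interrogation is kinetic and tolerates mismatches in a position-dependent way.

    ```python
    def find_cpf1_sites(dna, guide, pam="TTT", match_len=17):
        # PAM-first target search: locate a PAM, then check the PAM-proximal
        # protospacer for a (here, perfect) match over the ~17 nt needed for
        # stable binding and cleavage. Returns PAM start positions.
        hits = []
        for i in range(len(dna) - len(pam) - match_len + 1):
            if (dna[i:i + len(pam)] == pam
                    and dna[i + len(pam):i + len(pam) + match_len] == guide[:match_len]):
                hits.append(i)
        return hits

    guide = "ACGTACGTACGTACGTA"                  # 17-nt illustrative guide
    dna = "GG" + "TTT" + guide + "AA" + guide    # second copy lacks a PAM
    hits = find_cpf1_sites(dna, guide)           # -> [2]: PAM-less copy rejected
    ```

    The rapid rejection of PAM-less or poorly matched sites reported in the paper corresponds to the two early-exit conditions in the `if` test.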

  17. Use of PET and Other Functional Imaging to Guide Target Delineation in Radiation Oncology.

    PubMed

    Verma, Vivek; Choi, J Isabelle; Sawant, Amit; Gullapalli, Rao P; Chen, Wengen; Alavi, Abass; Simone, Charles B

    2018-06-01

    Molecular and functional imaging is increasingly being used to guide radiotherapy (RT) management and target delineation. This review summarizes existing data in several disease sites of various functional imaging modalities, chiefly positron emission tomography/computed tomography (PET/CT), with respect to RT target definition and management. For gliomas, differentiation between postoperative changes and viable tumor is discussed, as well as focal dose escalation and reirradiation. Head and neck neoplasms may also benefit from precise PET/CT-based target delineation, especially for cancers of unknown primary; focal dose escalation is also described. In lung cancer, PET/CT can influence coverage of tumor volumes, dose escalation, and adaptive management. For cervical cancer, PET/CT as an adjunct to magnetic resonance imaging planning is discussed, as are dose escalation and delineation of avoidance targets such as the bone marrow. The emerging role of choline-based PET for prostate cancer and its impact on dose escalation is also described. Lastly, given the essential role of PET/CT for target definition in lymphoma, phase III trials of PET-directed management are reviewed, along with novel imaging modalities. Taken together, molecular and functional imaging approaches offer a major step to individualize radiotherapeutic care going forward.

  18. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration.

    PubMed

    Nikitichev, Daniil I; Shakir, Dzhoshkun I; Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-02-23

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community.
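
    Since the target was validated with OpenCV's camera calibration module, it may help to recall the quantity that module minimises: the RMS reprojection error of the detected pattern points under a pinhole model. A minimal numpy sketch (the identity pose and the numbers below are illustrative, not from the paper):

    ```python
    import numpy as np

    def project(pts3d, K):
        # Pinhole projection of camera-frame points: divide by depth, apply K.
        uv = (K @ (pts3d.T / pts3d[:, 2])).T
        return uv[:, :2]

    def rms_reproj_error(pts3d, pts2d, K):
        # RMS distance between projected and detected pattern points; this is
        # the figure of merit cv2.calibrateCamera reports after optimisation.
        d = project(pts3d, K) - pts2d
        return float(np.sqrt(np.mean(np.sum(d * d, axis=1))))

    K = np.array([[800.0, 0.0, 320.0],
                  [0.0, 800.0, 240.0],
                  [0.0, 0.0, 1.0]])
    pts3d = np.array([[0.1, 0.2, 2.0], [-0.1, 0.0, 2.0]])
    detected = project(pts3d, K) + np.array([[1.0, 0.0], [0.0, 0.0]])  # 1 px error on one corner
    err = rms_reproj_error(pts3d, detected, K)   # sqrt(1/2) ~ 0.707 px
    ```

    A full calibration additionally estimates the distortion coefficients, which is exactly the output this sterilizable target is designed to provide in a clinical setting.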

  19. Medical-grade Sterilizable Target for Fluid-immersed Fetoscope Optical Distortion Calibration

    PubMed Central

    Chadebecq, François; Tella, Marcel; Deprest, Jan; Stoyanov, Danail; Ourselin, Sébastien; Vercauteren, Tom

    2017-01-01

    We have developed a calibration target for use with fluid-immersed endoscopes within the context of the GIFT-Surg (Guided Instrumentation for Fetal Therapy and Surgery) project. One of the aims of this project is to engineer novel, real-time image processing methods for intra-operative use in the treatment of congenital birth defects, such as spina bifida and the twin-to-twin transfusion syndrome. The developed target allows for the sterility-preserving optical distortion calibration of endoscopes within a few minutes. Good optical distortion calibration and compensation are important for mitigating undesirable effects like radial distortions, which not only hamper accurate imaging using existing endoscopic technology during fetal surgery, but also make acquired images less suitable for potentially very useful image computing applications, like real-time mosaicing. This paper proposes a novel fabrication method to create an affordable, sterilizable calibration target suitable for use in a clinical setup. This method involves etching a calibration pattern by laser cutting a sandblasted stainless steel sheet. This target was validated using the camera calibration module provided by OpenCV, a state-of-the-art software library popular in the computer vision community. PMID:28287588

  20. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid

    PubMed Central

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species. PMID:26698574

  1. Trapping Elusive Cats: Using Intensive Camera Trapping to Estimate the Density of a Rare African Felid.

    PubMed

    Brassine, Eléanor; Parker, Daniel

    2015-01-01

    Camera trapping studies have become increasingly popular to produce population estimates of individually recognisable mammals. Yet, monitoring techniques for rare species which occur at extremely low densities are lacking. Additionally, species which have unpredictable movements may make obtaining reliable population estimates challenging due to low detectability. Our study explores the effectiveness of intensive camera trapping for estimating cheetah (Acinonyx jubatus) numbers. Using both a more traditional, systematic grid approach and pre-determined, targeted sites for camera placement, the cheetah population of the Northern Tuli Game Reserve, Botswana was sampled between December 2012 and October 2013. Placement of cameras in a regular grid pattern yielded very few (n = 9) cheetah images and these were insufficient to estimate cheetah density. However, pre-selected cheetah scent-marking posts provided 53 images of seven adult cheetahs (0.61 ± 0.18 cheetahs/100 km²). While increasing the length of the camera trapping survey from 90 to 130 days increased the total number of cheetah images obtained (from 53 to 200), no new individuals were recorded and the estimated population density remained stable. Thus, our study demonstrates that targeted camera placement (irrespective of survey duration) is necessary for reliably assessing cheetah densities where populations are naturally very low or dominated by transient individuals. Significantly our approach can easily be applied to other rare predator species.

  2. GRACE star camera noise

    NASA Astrophysics Data System (ADS)

    Harvey, Nate

    2016-08-01

    Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens-of-arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on field-of-view.
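
    The inter-camera quaternion analysed here is simply the relative rotation between the two head attitudes; a minimal sketch of how it is formed (scalar-first convention; this is generic quaternion algebra, not the GRACE Level-1B processing code):

    ```python
    import numpy as np

    def qmul(a, b):
        # Hamilton product of quaternions in (w, x, y, z) order.
        w1, x1, y1, z1 = a
        w2, x2, y2, z2 = b
        return np.array([
            w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2,
        ])

    def qconj(q):
        # Conjugate = inverse for unit quaternions.
        return q * np.array([1.0, -1.0, -1.0, -1.0])

    def inter_camera(q_head1, q_head2):
        # Relative attitude between two star camera heads. For rigidly
        # mounted heads this should be constant, so its fluctuations in
        # time expose the attitude measurement noise.
        return qmul(qconj(q_head1), q_head2)

    q_identity = np.array([1.0, 0.0, 0.0, 0.0])
    s = np.sqrt(0.5)
    q_z90 = np.array([s, 0.0, 0.0, s])           # 90 degrees about z
    rel = inter_camera(q_identity, q_z90)        # equals q_z90
    self_rel = inter_camera(q_z90, q_z90)        # identity: heads agree
    ```

    Auto-covariance analysis of this quantity over 2003-2015 is what reveals the twice-per-rev error described above.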

  3. First demonstration of 10 keV-width energy-discrimination K-edge radiography using a cadmium-telluride X-ray camera with a tungsten-target tube

    NASA Astrophysics Data System (ADS)

    Watanabe, Manabu; Sato, Eiichi; Abderyim, Purkhet; Abudurexiti, Abulajiang; Hagiwara, Osahiko; Matsukiyo, Hiroshi; Osawa, Akihiro; Enomoto, Toshiyuki; Nagao, Jiro; Sato, Shigehiro; Ogawa, Akira; Onagawa, Jun

    2011-05-01

    An energy-discrimination X-ray camera is useful for performing monochromatic radiography using polychromatic X-rays. This X-ray camera was developed to carry out K-edge radiography using cerium and gadolinium-based contrast media. In this camera, objects are irradiated by a cone beam from a tungsten-target X-ray generator, and penetrating X-ray photons are detected by a cadmium-telluride detector with amplifiers. Both the optimal photon-energy level and the energy width are selected using a multichannel analyzer, and the photon number is counted by a counter card. Radiography was performed by scanning the detector with an x-y stage driven by a two-stage controller, and radiograms were shown on a personal computer monitor. In radiography, tube voltage and current were 90 kV and 5.8 μA, respectively, and the X-ray intensity was 0.61 μGy/s at 1.0 m from the X-ray source. The K-edge energies of cerium and gadolinium are 40.3 and 50.3 keV, respectively, and 10 keV-width enhanced K-edge radiography was performed using X-ray photons with energies just beyond the K-edge energies of cerium and gadolinium. Thus, cerium K-edge radiography was carried out using X-ray photons with an energy range from 40.3 to 50.3 keV, and gadolinium K-edge radiography was accomplished utilizing photon energies ranging from 50.3 to 60.3 keV.
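
    The multichannel-analyser selection amounts to counting photons inside a chosen energy window, e.g. the cerium window of 40.3-50.3 keV. A minimal sketch (the event list below is made up for illustration):

    ```python
    import numpy as np

    def window_counts(energies_keV, lo_keV, hi_keV):
        # Photon-counting selection: keep events with lo <= E < hi, as a
        # multichannel analyser does for a chosen energy level and width.
        e = np.asarray(energies_keV, dtype=float)
        return int(np.count_nonzero((e >= lo_keV) & (e < hi_keV)))

    events = [35.0, 41.2, 44.9, 50.3, 58.1]        # detected photon energies, keV
    ce_counts = window_counts(events, 40.3, 50.3)  # cerium K-edge window -> 2
    gd_counts = window_counts(events, 50.3, 60.3)  # gadolinium window -> 2
    ```

    Repeating this count at every scan position of the x-y stage builds the energy-discriminated radiogram.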

  4. Ringfield lithographic camera

    DOEpatents

    Sweatt, William C.

    1998-01-01

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry with an increased etendue for the camera system. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors.

  5. Structure-based cleavage mechanism of Thermus thermophilus Argonaute DNA guide strand-mediated DNA target cleavage

    PubMed Central

    Sheng, Gang; Zhao, Hongtu; Wang, Jiuyu; Rao, Yu; Tian, Wenwen; Swarts, Daan C.; van der Oost, John; Patel, Dinshaw J.; Wang, Yanli

    2014-01-01

    We report on crystal structures of ternary Thermus thermophilus Argonaute (TtAgo) complexes with 5′-phosphorylated guide DNA and a series of DNA targets. These ternary complex structures of cleavage-incompatible, cleavage-compatible, and postcleavage states solved at improved resolution up to 2.2 Å have provided molecular insights into the orchestrated positioning of catalytic residues, a pair of Mg2+ cations, and the putative water nucleophile positioned for in-line attack on the cleavable phosphate for TtAgo-mediated target cleavage by a RNase H-type mechanism. In addition, these ternary complex structures have provided insights into protein and DNA conformational changes that facilitate transition between cleavage-incompatible and cleavage-compatible states, including the role of a Glu finger in generating a cleavage-competent catalytic Asp-Glu-Asp-Asp tetrad. Following cleavage, the seed segment forms a stable duplex with the complementary segment of the target strand. PMID:24374628

  6. Students Target

    NASA Image and Video Library

    2005-12-19

    Using the JMars targeting software, eighth-grade students from Charleston Middle School in Charleston, IL, selected the location -8.37N, 276.66E for capture by the THEMIS visible camera during Mars Odyssey's sixth orbit of Mars on Nov. 22, 2005.

  7. Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huen, T.

    1987-07-01

    A solid state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With it, the film containing the streak images carries two time scales exposed simultaneously with the signal. This allows timing and cross timing; the latter is achieved with exposure modulation marking onto the time tick marks. The purpose of using two time scales will be discussed. The design is based on a microcomputer, resulting in a compact and easy-to-use instrument. The light source is a small red light-emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided-down 10 MHz system frequency. The light is guided by two small 100-micron-diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere within the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our laboratory. The microcomputer control section is also being used to provide optical fiducials to mechanical rotor cameras.

  8. Design optimisation of a TOF-based collimated camera prototype for online hadrontherapy monitoring

    NASA Astrophysics Data System (ADS)

    Pinto, M.; Dauvergne, D.; Freud, N.; Krimmer, J.; Letang, J. M.; Ray, C.; Roellinghoff, F.; Testa, E.

    2014-12-01

    Hadrontherapy is an innovative radiation therapy modality whose key advantage is the target conformality allowed by the physical properties of ion species. However, to fully exploit its potential, online monitoring is required to assess treatment quality, typically with devices relying on the detection of secondary radiation. Herein is presented a method based on Monte Carlo simulations to optimise a multi-slit collimated camera employing time-of-flight selection of prompt-gamma rays for use in a clinical scenario. In addition, an analytical tool is developed, based on the Monte Carlo data, to predict the expected precision for a given geometrical configuration. Such a method follows clinical workflow requirements by providing a solution that is both relatively accurate and fast. Two different camera designs are proposed, considering different endpoints based on the trade-off between camera detection efficiency and spatial resolution, for use in a proton therapy treatment with active dose delivery and assuming a homogeneous target.

  9. The research of adaptive-exposure on spot-detecting camera in ATP system

    NASA Astrophysics Data System (ADS)

    Qian, Feng; Jia, Jian-jun; Zhang, Liang; Wang, Jian-Yu

    2013-08-01

    A high-precision acquisition, tracking, and pointing (ATP) system is one of the key techniques of laser communication. The spot-detecting camera detects the direction of the beacon in the laser communication link, providing the position information of the communication terminal to the ATP system. The positioning accuracy of the camera directly determines the capability of the laser communication system, so the spot-detecting camera in satellite-to-earth laser communication ATP systems requires high-precision target detection: positioning accuracy should be better than ±1 μrad. Spot-detecting cameras usually adopt a centroid algorithm to obtain the position of the light spot on the detector. When the beacon intensity is moderate, the centroid calculation is precise, but the intensity changes greatly during communication because of distance, atmospheric scintillation, weather, etc. The detector output is insufficient when the camera underexposes the beacon at low light intensity, and saturated when it overexposes at high light intensity; in either case the centroid accuracy degrades and the positioning accuracy of the camera is reduced markedly. To improve accuracy, space-based cameras should regulate exposure time in real time according to the light intensity. An adaptive-exposure algorithm for a spot-detecting camera based on a complementary metal-oxide-semiconductor (CMOS) detector is analyzed. Based on the analytic results, a CMOS camera for a space-based laser communication system is described, which uses the adaptive-exposure algorithm to adapt its exposure time. Test results from an imaging experiment system verify the design. Experimental results show that this design restrains the loss of positioning accuracy as the beacon intensity changes.
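
    The centroid calculation referred to above can be sketched as an intensity-weighted center of mass over a thresholded spot image. This is a generic illustration, not the flight algorithm; the function name, threshold handling, and Gaussian test spot are all assumptions.

```python
import numpy as np

def spot_centroid(image, threshold=0.0):
    """Intensity-weighted centroid (center of mass) of a spot image.

    Background at or below `threshold` is subtracted before weighting.
    Returns (row, col) in pixel coordinates.
    """
    img = np.asarray(image, dtype=float)
    img = np.clip(img - threshold, 0.0, None)
    total = img.sum()
    if total == 0.0:
        raise ValueError("no signal above threshold")
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Symmetric 2-D Gaussian test spot centered at row 12.0, col 20.0:
r, c = np.meshgrid(np.arange(32), np.arange(48), indexing="ij")
spot = np.exp(-((r - 12.0) ** 2 + (c - 20.0) ** 2) / (2 * 2.5 ** 2))
cy, cx = spot_centroid(spot)
print(round(cy, 2), round(cx, 2))  # → 12.0 20.0
```

    As the abstract notes, the estimate degrades when the spot under- or over-exposes: clipped or buried pixels bias the weighted sum, which is what adaptive exposure control protects against.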

  10. Research on a solid state-streak camera based on an electro-optic crystal

    NASA Astrophysics Data System (ADS)

    Wang, Chen; Liu, Baiyu; Bai, Yonglin; Bai, Xiaohong; Tian, Jinshou; Yang, Wenzheng; Xian, Ouyang

    2006-06-01

    With excellent temporal resolution ranging from nanoseconds to sub-picoseconds, streak cameras are widely used for measuring ultrafast light phenomena, such as detecting synchrotron radiation, examining inertial confinement fusion targets, and measuring laser-induced discharges. In combination with appropriate optics or a spectroscope, the streak camera delivers intensity vs. position (or wavelength) information on the ultrafast process. Current streak cameras are based on a sweep electric pulse and an image-converting tube with a wavelength-sensitive photocathode covering the x-ray to near-infrared region; this kind of streak camera is comparatively costly and complex. This paper describes the design and performance of a new style of streak camera based on an electro-optic crystal with a large electro-optic coefficient. The crystal streak camera achieves time resolution by direct photon-beam deflection using the electro-optic effect, and can replace current streak cameras from the visible to the near-infrared region. After computer-aided simulation, we designed a crystal streak camera with a potential time resolution between 1 ns and 10 ns. Further improvements in the sweep electric circuits, a crystal with a larger electro-optic coefficient, for example LN (γ33 = 33.6×10-12 m/V), and an optimized optical system may lead to a time resolution better than 1 ns.

  11. Towards real-time MRI-guided 3D localization of deforming targets for non-invasive cardiac radiosurgery

    NASA Astrophysics Data System (ADS)

    Ipsen, S.; Blanck, O.; Lowther, N. J.; Liney, G. P.; Rai, R.; Bode, F.; Dunst, J.; Schweikard, A.; Keall, P. J.

    2016-11-01

    Radiosurgery to the pulmonary vein antrum in the left atrium (LA) has recently been proposed for non-invasive treatment of atrial fibrillation (AF). Precise real-time target localization during treatment is necessary due to complex respiratory and cardiac motion and high radiation doses. To determine the 3D position of the LA for motion compensation during radiosurgery, a tracking method based on orthogonal real-time MRI planes was developed for AF treatments with an MRI-guided radiotherapy system. Four healthy volunteers underwent cardiac MRI of the LA. Contractile motion was quantified on 3D LA models derived from 4D scans with 10 phases acquired in end-exhalation. Three localization strategies were developed and tested retrospectively on 2D real-time scans (sagittal, temporal resolution 100 ms, free breathing). The best-performing method was then used to measure 3D target positions in 2D-2D orthogonal planes (sagittal-coronal, temporal resolution 200-252 ms, free breathing) in 20 configurations of a digital phantom and in the volunteer data. The 3D target localization accuracy was quantified in the phantom and qualitatively assessed in the real data. Mean cardiac contraction between maximum dilation and contraction was  ⩽  3.9 mm but anisotropic. A template-matching approach with two distinct template phases and ECG-based selection yielded the highest 2D accuracy, of 1.2 mm. 3D target localization showed a mean error of 3.2 mm in the customized digital phantoms. Our algorithms were successfully applied to the 2D-2D volunteer data, in which we measured a mean 3D LA motion extent of 16.5 mm (SI), 5.8 mm (AP) and 3.1 mm (LR). Real-time target localization on orthogonal MRI planes was successfully implemented for highly deformable targets treated in cardiac radiosurgery. The developed method measures target shifts caused by respiration and cardiac contraction. If the detected motion can be compensated accordingly, an MRI-guided radiotherapy
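
    The 2D template-matching step can be illustrated with plain normalized cross-correlation (NCC). This is a simplified, brute-force sketch on synthetic data; the paper's method additionally uses two distinct template phases with ECG-based selection, which is omitted here, and all names are illustrative.

```python
import numpy as np

def ncc_match(frame, template):
    """Locate `template` in `frame` by normalized cross-correlation.

    Brute-force sliding window; returns the best (row, col) and its NCC score.
    """
    fh, fw = frame.shape
    th, tw = template.shape
    t = template - template.mean()
    tnorm = np.sqrt((t ** 2).sum())
    best, best_pos = -2.0, (0, 0)
    for i in range(fh - th + 1):
        for j in range(fw - tw + 1):
            w = frame[i:i + th, j:j + tw]
            wz = w - w.mean()
            denom = np.sqrt((wz ** 2).sum()) * tnorm
            if denom == 0:
                continue
            score = (wz * t).sum() / denom
            if score > best:
                best, best_pos = score, (i, j)
    return best_pos, best

# Embed a known patch in a noisy frame and recover its position:
rng = np.random.default_rng(0)
frame = rng.normal(0, 0.1, size=(40, 40))
patch = rng.normal(0, 1.0, size=(8, 8))
frame[15:23, 22:30] += patch
pos, score = ncc_match(frame, patch)
print(pos)  # → (15, 22)
```

    In practice this per-frame match runs on each of the two orthogonal planes, and the two in-plane positions are combined into a 3D target estimate.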

  12. Detection of Suspicious Persons using Internet Camera

    NASA Astrophysics Data System (ADS)

    Terada, Kenji; Kamogashira, Daisuke

    Recently, many brutal crimes have shocked us, and the importance of security and self-defense has increased more and more. It is necessary to develop an automatic method of detecting suspicious persons. In this paper, we propose a method of detecting suspicious persons using an internet camera. An image sequence is obtained by the internet camera, and from these images the recognition of suspicious persons is carried out. Our method classifies the condition of the target person into three postures: walking, staying, and sitting. The system employs the subspace method with three features: the amount of movement, the frequency of looking around restlessly, and the rate of stopping and going. Some experimental results using a simple experimental system are also reported, which indicate the effectiveness of the proposed method: in most scenes, suspicious persons can be detected by the proposed method.

  13. Advantages of computer cameras over video cameras/frame grabbers for high-speed vision applications

    NASA Astrophysics Data System (ADS)

    Olson, Gaylord G.; Walker, Jo N.

    1997-09-01

    Cameras designed to work specifically with computers can have certain advantages over cameras loosely defined as 'video' cameras. In recent years the camera-type distinctions have become somewhat blurred, with a proliferation of 'digital cameras' aimed more at the home market; this latter category is not considered here. The term 'computer camera' herein means one which has low-level computer (and software) control of the CCD clocking. These can often satisfy some of the more demanding machine vision tasks, in some cases with a higher rate of measurements than video cameras. Several such specific applications are described here, including some which use recently designed CCDs offering good combinations of parameters such as noise, speed, and resolution. Among the considerations in choosing a camera type for a given application are effects such as 'pixel jitter' and 'anti-aliasing.' Some of these effects are only relevant if there is a mismatch between the number of pixels per line in the camera CCD and the number of analog-to-digital (A/D) sampling points along a video scan line. For a computer camera these numbers are guaranteed to match, which alleviates some measurement inaccuracies and leads to higher effective resolution.

  14. Sensor-guided threat countermeasure system

    DOEpatents

    Stuart, Brent C.; Hackel, Lloyd A.; Hermann, Mark R.; Armstrong, James P.

    2012-12-25

    A countermeasure system for use by a target to protect against an incoming sensor-guided threat. The system includes a laser system for producing a broadband beam and means for directing the broadband beam from the target to the threat. The countermeasure method comprises the steps of producing a broadband beam and directing the broadband beam from the target to blind or confuse the incoming sensor-guided threat.

  15. Guide-bound structures of an RNA-targeting A-cleaving CRISPR-Cas13a enzyme

    PubMed Central

    Knott, Gavin J.; East-Seletsky, Alexandra; Cofsky, Joshua C.; Holton, James M.; Charles, Emeric; O’Connell, Mitchell R.; Doudna, Jennifer A.

    2018-01-01

    CRISPR adaptive immune systems protect bacteria from infections by deploying CRISPR RNA (crRNA)-guided enzymes to recognize and cut foreign nucleic acids. Type VI-A CRISPR-Cas systems include the Cas13a enzyme, an RNA-activated ribonuclease (RNase) capable of crRNA processing and single-stranded RNA degradation upon target transcript binding. Here we present the 2.0 Å resolution crystal structure of a crRNA-bound Lachnospiraceae bacterium Cas13a (LbaCas13a), representing a recently discovered Cas13a enzyme subtype. This structure and accompanying biochemical experiments define for the first time the Cas13a catalytic residues that are directly responsible for crRNA maturation. In addition, the orientation of the foreign-derived target RNA-specifying sequence in the protein interior explains the conformational gating of Cas13a nuclease activation. These results describe how Cas13a enzymes generate functional crRNAs and how catalytic activity is blocked prior to target RNA recognition, with implications for both bacterial immunity and diagnostic applications. PMID:28892041

  16. Guide-bound structures of an RNA-targeting A-cleaving CRISPR–Cas13a enzyme

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Knott, Gavin J.; East-Seletsky, Alexandra; Cofsky, Joshua C.

    CRISPR adaptive immune systems protect bacteria from infections by deploying CRISPR RNA (crRNA)-guided enzymes to recognize and cut foreign nucleic acids. Type VI-A CRISPR–Cas systems include the Cas13a enzyme, an RNA-activated RNase capable of crRNA processing and single-stranded RNA degradation upon target-transcript binding. Here we present the 2.0-Å resolution crystal structure of a crRNA-bound Lachnospiraceae bacterium Cas13a (LbaCas13a), representing a recently discovered Cas13a enzyme subtype. This structure and accompanying biochemical experiments define the Cas13a catalytic residues that are directly responsible for crRNA maturation. In addition, the orientation of the foreign-derived target-RNA-specifying sequence in the protein interior explains the conformational gating of Cas13a nuclease activation. These results describe how Cas13a enzymes generate functional crRNAs and how catalytic activity is blocked before target-RNA recognition, with implications for both bacterial immunity and diagnostic applications.

  17. Guide-bound structures of an RNA-targeting A-cleaving CRISPR–Cas13a enzyme

    DOE PAGES

    Knott, Gavin J.; East-Seletsky, Alexandra; Cofsky, Joshua C.; ...

    2017-09-11

    CRISPR adaptive immune systems protect bacteria from infections by deploying CRISPR RNA (crRNA)-guided enzymes to recognize and cut foreign nucleic acids. Type VI-A CRISPR–Cas systems include the Cas13a enzyme, an RNA-activated RNase capable of crRNA processing and single-stranded RNA degradation upon target-transcript binding. Here we present the 2.0-Å resolution crystal structure of a crRNA-bound Lachnospiraceae bacterium Cas13a (LbaCas13a), representing a recently discovered Cas13a enzyme subtype. This structure and accompanying biochemical experiments define the Cas13a catalytic residues that are directly responsible for crRNA maturation. In addition, the orientation of the foreign-derived target-RNA-specifying sequence in the protein interior explains the conformational gating of Cas13a nuclease activation. These results describe how Cas13a enzymes generate functional crRNAs and how catalytic activity is blocked before target-RNA recognition, with implications for both bacterial immunity and diagnostic applications.

  18. Real-time non-rigid target tracking for ultrasound-guided clinical interventions

    NASA Astrophysics Data System (ADS)

    Zachiu, C.; Ries, M.; Ramaekers, P.; Guey, J.-L.; Moonen, C. T. W.; de Senneville, B. Denis

    2017-10-01

     ~1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  19. Real-time non-rigid target tracking for ultrasound-guided clinical interventions.

    PubMed

    Zachiu, C; Ries, M; Ramaekers, P; Guey, J-L; Moonen, C T W; de Senneville, B Denis

    2017-10-04

     ~1.5 mm and submillimeter precision. This, together with a computational performance of 20 images per second, makes the proposed method an attractive solution for real-time target tracking during US-guided clinical interventions.

  20. Target guided synthesis using DNA nano-templates for selectively assembling a G-quadruplex binding c-MYC inhibitor

    NASA Astrophysics Data System (ADS)

    Panda, Deepanjan; Saha, Puja; Das, Tania; Dash, Jyotirmayee

    2017-07-01

    The development of small molecules is essential to modulate the cellular functions of biological targets in living systems. Target Guided Synthesis (TGS) approaches have been used for the identification of potent small molecules for biological targets. We herein demonstrate an innovative example of TGS using DNA nano-templates that promote Huisgen cycloaddition from an array of azide and alkyne fragments. A G-quadruplex and a control duplex DNA nano-template have been prepared by assembling the DNA structures on gold-coated magnetic nanoparticles. The DNA nano-templates facilitate the regioselective formation of 1,4-substituted triazole products, which are easily isolated by magnetic decantation. The G-quadruplex nano-template can be easily recovered and reused for five reaction cycles. The major triazole product, generated by the G-quadruplex template, inhibits c-MYC expression by directly targeting the c-MYC promoter G-quadruplex. This work highlights that the nano-TGS approach may serve as a valuable strategy to generate target-selective ligands for drug discovery.

  1. Wrist Camera Orientation for Effective Telerobotic Orbital Replaceable Unit (ORU) Changeout

    NASA Technical Reports Server (NTRS)

    Jones, Sharon Monica; Aldridge, Hal A.; Vazquez, Sixto L.

    1997-01-01

    The Hydraulic Manipulator Testbed (HMTB) is the kinematic replica of the Flight Telerobotic Servicer (FTS). One use of the HMTB is to evaluate advanced control techniques for accomplishing robotic maintenance tasks on board the Space Station. Most maintenance tasks involve direct manipulation of the robot by a human operator, for whom high-quality visual feedback is important for precise control. An experiment was conducted in the Systems Integration Branch at the Langley Research Center to compare several configurations of the manipulator wrist camera for providing visual feedback during an Orbital Replaceable Unit changeout task. Several variables were considered, such as wrist camera angle, camera focal length, target location, and lighting. Each study participant performed the maintenance task using eight combinations of the variables based on a Latin square design. The results of this experiment and conclusions based on the collected data are presented.
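
    The Latin square counterbalancing mentioned above can be illustrated with a cyclic square, one standard construction; the abstract does not specify which square was actually used, so this is only a sketch with assumed naming.

```python
def latin_square(n):
    """Cyclic n x n Latin square: each symbol appears exactly once per row
    and per column.  Row i gives the order in which participant i sees the
    n conditions, a common counterbalancing scheme in human-factors studies.
    """
    return [[(i + j) % n for j in range(n)] for i in range(n)]

square = latin_square(8)                      # eight variable combinations
for row in square:
    assert sorted(row) == list(range(8))      # each condition once per row
for col in zip(*square):
    assert sorted(col) == list(range(8))      # each condition once per column
print(square[1])  # → [1, 2, 3, 4, 5, 6, 7, 0]
```

    The design ensures that, across participants, every condition appears equally often in every serial position, so order effects do not confound the camera-configuration comparison.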

  2. Hypervelocity impact studies using a rotating mirror framing laser shadowgraph camera

    NASA Technical Reports Server (NTRS)

    Parker, Vance C.; Crews, Jeanne Lee

    1988-01-01

    The need to study the effects of the impact of micrometeorites and orbital debris on various space-based systems has brought together the technologies of several companies and individuals in order to provide a successful instrumentation package. A light gas gun was employed to accelerate small projectiles to speeds in excess of 7 km/sec. Their impact on various targets is being studied with the help of a specially designed continuous-access rotating-mirror framing camera. The camera provides 80 frames of data at up to 1×10^6 frames/sec with exposure times of 20 nsec.

  3. Onboard calibration igneous targets for the Mars Science Laboratory Curiosity rover and the Chemistry Camera laser induced breakdown spectroscopy instrument

    NASA Astrophysics Data System (ADS)

    Fabre, C.; Maurice, S.; Cousin, A.; Wiens, R. C.; Forni, O.; Sautter, V.; Guillaume, D.

    2011-03-01

    Accurate characterization of the Chemistry Camera (ChemCam) laser-induced breakdown spectroscopy (LIBS) on-board composition targets is of prime importance for the ChemCam instrument. The Mars Science Laboratory (MSL) science and operations teams expect ChemCam to provide the first compositional results at remote distances (1.5-7 m) during the in situ analyses of the Martian surface starting in 2012. Thus, establishing LIBS reference spectra from appropriate calibration standards must be undertaken diligently. Considering the global mineralogy of the Martian surface and the possible landing sites, three specific compositions of igneous targets have been determined. Picritic, noritic, and shergottic glasses have been produced, along with a Macusanite natural glass. A sample of each target will fly on the MSL Curiosity rover deck, 1.56 m from the ChemCam instrument, and duplicates are available on the ground. Duplicates are considered identical, as the relative standard deviation (RSD) of the composition dispersion is around 8%. Electron microprobe and laser ablation inductively coupled plasma mass spectrometry (LA-ICP-MS) analyses show that the chemical composition of the four silicate targets is very homogeneous at microscopic scales larger than the instrument spot size, with RSD < 5% for concentration variations > 0.1 wt.% using the electron microprobe, and < 10% for concentration variations > 0.01 wt.% using LA-ICP-MS. The LIBS campaign on the igneous targets, performed under flight-like Mars conditions, establishes reference spectra for the entire mission. The LIBS spectra between 240 and 900 nm are extremely rich, with hundreds of high signal-to-noise lines and a dynamic range sufficient to identify unambiguously major, minor, and trace elements. For instance, a first LIBS calibration curve has been established for strontium from [Sr] = 284 ppm to [Sr] = 1480 ppm, showing the potential for future calibrations for other major or minor
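
    A univariate calibration curve of the kind mentioned for strontium can be sketched as a least-squares line relating concentration to emission-line intensity, inverted to predict an unknown. The intensity values below are invented for illustration; only the [Sr] range comes from the abstract.

```python
import numpy as np

# Hypothetical (concentration, intensity) pairs for one Sr emission line;
# only the [Sr] = 284-1480 ppm range is from the abstract.
conc = np.array([284.0, 500.0, 800.0, 1100.0, 1480.0])   # ppm
intensity = np.array([1.1, 2.0, 3.1, 4.3, 5.8])          # arbitrary units

# Fit intensity = a*conc + b by least squares, then invert the line to
# predict an unknown concentration from a measured peak intensity.
a, b = np.polyfit(conc, intensity, 1)

def predict_conc(y):
    return (y - b) / a

print(round(predict_conc(3.0)))   # ppm implied by a measured intensity of 3.0
```

    Real LIBS calibration must also handle matrix effects and line interference, which a single-line linear fit like this ignores.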

  4. How long is enough to detect terrestrial animals? Estimating the minimum trapping effort on camera traps

    PubMed Central

    Si, Xingfeng; Kays, Roland

    2014-01-01

    Camera trapping is an important wildlife inventory tool for estimating species diversity at a site. Knowing the minimum trapping effort needed to detect target species is also important for designing efficient studies, considering both the number of camera locations and survey length. Here, we take advantage of a two-year camera-trapping dataset from a small (24-ha) study plot in Gutianshan National Nature Reserve, eastern China, to estimate the minimum trapping effort actually needed to sample the wildlife community. We also evaluated the relative value of adding new camera sites versus running cameras for a longer period at one site. The full dataset includes 1727 independent photographs captured during 13,824 camera-days, documenting 10 resident terrestrial species of birds and mammals. Our rarefaction analysis shows that a minimum of 931 camera-days would be needed to detect the resident species sufficiently in the plot, and c. 8700 camera-days to detect all 10 resident species. In terms of detecting a diversity of species, the optimal sampling period for one camera site was c. 40 days, or long enough to record about 20 independent photographs. Our analysis of adding camera sites shows that rotating cameras to new sites would be more efficient for measuring species richness than leaving cameras at fewer sites for a longer period. PMID:24868493
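
    A sample-based rarefaction (species accumulation) curve of the kind used in the analysis can be sketched as follows on toy data; `accumulation_curve` and the synthetic detection history are assumptions, not the authors' code.

```python
import random

def accumulation_curve(detections, trials=200, seed=1):
    """Mean number of unique species seen vs. number of sampled camera-days.

    `detections` maps camera-day -> set of species photographed that day.
    Repeatedly subsampling without replacement mimics sample-based rarefaction.
    """
    rng = random.Random(seed)
    days = list(detections)
    curve = []
    for k in range(1, len(days) + 1):
        total = 0
        for _ in range(trials):
            seen = set()
            for d in rng.sample(days, k):
                seen |= detections[d]
            total += len(seen)
        curve.append(total / trials)
    return curve

# Toy community: 3 common species (p=0.30/day) and 2 rare ones (p=0.03/day).
rng = random.Random(0)
detections = {d: {s for s in range(5) if rng.random() < (0.30 if s < 3 else 0.03)}
              for d in range(60)}
curve = accumulation_curve(detections)
print(round(curve[9], 2), round(curve[-1], 2))  # richness after 10 vs. all 60 days
```

    The minimum effort is then read off the curve as the number of camera-days at which it approaches its asymptote.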

  5. CHAMP (Camera, Handlens, and Microscope Probe)

    NASA Technical Reports Server (NTRS)

    Mungas, Greg S.; Boynton, John E.; Balzer, Mark A.; Beegle, Luther; Sobel, Harold R.; Fisher, Ted; Klein, Dan; Deans, Matthew; Lee, Pascal; Sepulveda, Cesar A.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As a robotic arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision rangefinding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image-filtering process called z-stacking. CHAMP was originally developed through the Mars Instrument Development Program (MIDP) in support of robotic field investigations, but may also find application in new areas such as robotic in-orbit servicing and maintenance operations associated with spacecraft and human operations. We overview CHAMP's instrument performance and basic design considerations below.
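
    Z-stacking (focus stacking) can be sketched as picking, per pixel, the frame of the focal stack with the strongest local contrast. The abstract does not detail CHAMP's actual filter, so the Laplacian-based selection below is a common stand-in, and the synthetic two-frame stack is purely illustrative.

```python
import numpy as np

def focus_stack(stack):
    """Fuse a focal stack by per-pixel selection of the sharpest frame,
    where sharpness is the absolute 4-neighbour Laplacian response
    (computed with wrap-around borders via np.roll for simplicity).
    """
    stack = np.asarray(stack, dtype=float)
    sharp = np.abs(4 * stack
                   - np.roll(stack, 1, axis=1) - np.roll(stack, -1, axis=1)
                   - np.roll(stack, 1, axis=2) - np.roll(stack, -1, axis=2))
    best = np.argmax(sharp, axis=0)               # sharpest frame index per pixel
    fused = np.take_along_axis(stack, best[None], axis=0)[0]
    return fused, best

# Two synthetic frames: detail is in focus on the left half of frame 0 and on
# the right half of frame 1; flat grey (0.5) elsewhere.
h, w = 8, 8
pattern = (np.indices((h, w)).sum(axis=0) % 2).astype(float)  # checkerboard
x = np.indices((h, w))[1]
f0 = np.where(x < w // 2, pattern, 0.5)
f1 = np.where(x >= w // 2, pattern, 0.5)
fused, best = focus_stack([f0, f1])
print(np.array_equal(fused, pattern))  # → True: full detail recovered
```

    Production implementations typically smooth the per-pixel decision map and blend across seams rather than hard-selecting, but the principle is the same.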

  6. Autocalibrating vision guided navigation of unmanned air vehicles via tactical monocular cameras in GPS denied environments

    NASA Astrophysics Data System (ADS)

    Celik, Koray

    This thesis presents a novel robotic navigation strategy using a conventional tactical monocular camera, proving the feasibility of using a monocular camera as the sole proximity-sensing, object-avoidance, mapping, and path-planning mechanism to fly and navigate small- to medium-scale unmanned rotary-wing aircraft autonomously. The range measurement strategy is scalable, self-calibrating, and indoor-outdoor capable; it is biologically inspired by the key adaptive mechanisms for depth perception and pattern recognition found in humans and intelligent animals (particularly bats), and designed for operation in previously unknown, GPS-denied environments. The thesis proposes novel electronics, aircraft systems, procedures, and algorithms that come together to form airborne systems which measure absolute ranges from a monocular camera via passive photometry, mimicking human-pilot-like judgement. The research is intended to bridge the gap between practical GPS coverage and the precision localization and mapping problem in a small aircraft. In the context of this study, several robotic platforms, airborne and ground alike, have been developed, some of which have been integrated in real-life field trials for experimental validation. Despite the emphasis on miniature robotic aircraft, this research has been tested and found compatible with tactical vests and helmets, and it can be used to augment the reliability of many other types of proximity sensors.

  7. Creating History Documentaries: A Step-by-Step Guide to Video Projects in the Classroom.

    ERIC Educational Resources Information Center

    Escobar, Deborah

    This guide offers an easy introduction to social studies teachers wanting to challenge their students with creative media by bringing the past to life. The 14-step guide shows teachers and students the techniques needed for researching, scripting, and editing a historical documentary. Using a video camera and computer software, students can…

  8. Image Sensors Enhance Camera Technologies

    NASA Technical Reports Server (NTRS)

    2010-01-01

    In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.

  9. An attentive multi-camera system

    NASA Astrophysics Data System (ADS)

    Napoletano, Paolo; Tisato, Francesco

    2014-03-01

    Intelligent multi-camera systems that integrate computer vision algorithms are not error free, so both false positive and false negative detections need to be reviewed by a specialized human operator. Traditional multi-camera systems usually include a control center with a wall of monitors displaying video from each camera in the network. Nevertheless, as the number of cameras increases, switching from one camera to another becomes hard for a human operator. In this work we propose a new method that dynamically selects and displays the content of one video camera from all the available contents in the multi-camera system. The proposed method is based on a computational model of human visual attention that integrates top-down and bottom-up cues. We believe this is the first work that uses a model of human visual attention for dynamic camera-view selection in a multi-camera system. The proposed method has been tested in a given scenario and has demonstrated its effectiveness with respect to other methods and manually generated ground truth. The effectiveness was evaluated in terms of the number of correct best views generated by the method with respect to the camera views manually selected by a human operator.
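
    A much-simplified stand-in for attention-driven view selection: score each camera's current frame with a weighted sum of a bottom-up motion cue and a contrast cue, then display the highest-scoring view. The cues, weights, and synthetic data below are illustrative, not the paper's attention model.

```python
import numpy as np

def best_view(prev_frames, frames, w_motion=0.7, w_contrast=0.3):
    """Pick the index of the most 'salient' camera view.

    Toy saliency: weighted sum of mean absolute frame difference (motion)
    and intensity standard deviation (contrast).  Weights are illustrative.
    """
    scores = []
    for prev, cur in zip(prev_frames, frames):
        motion = np.abs(cur.astype(float) - prev.astype(float)).mean()
        contrast = cur.astype(float).std()
        scores.append(w_motion * motion + w_contrast * contrast)
    return int(np.argmax(scores)), scores

# Three static cameras; an object appears only in camera 2's view.
rng = np.random.default_rng(3)
prev = [rng.uniform(0, 1, (16, 16)) for _ in range(3)]
cur = [p.copy() for p in prev]
cur[2][4:10, 4:10] += 0.8          # large brightness change in view 2
idx, scores = best_view(prev, cur)
print(idx)  # → 2
```

    A full attention model would add top-down cues (e.g. task relevance of detected objects) and temporal smoothing so the displayed view does not flicker between cameras.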

  10. Goal-oriented rectification of camera-based document images.

    PubMed

    Stamatopoulos, Nikolaos; Gatos, Basilis; Pratikakis, Ioannis; Perantonis, Stavros J

    2011-04-01

    Document digitization with either flatbed scanners or camera-based systems results in document images which often suffer from warping and perspective distortions that deteriorate the performance of current OCR approaches. In this paper, we present a goal-oriented rectification methodology to compensate for undesirable document image distortions, aiming to improve the OCR result. Our approach relies upon a coarse-to-fine strategy. First, a coarse rectification is accomplished with the aid of a computationally low-cost transformation which addresses the projection of a curved surface to a 2-D rectangular area. The projection of the curved surface onto the plane is guided only by the appearance of the textual content in the document image, using a transformation which does not depend on specific model primitives or camera setup parameters. Second, pose normalization is applied at the word level, aiming to restore all the local distortions of the document image. Experimental results on various document images with a variety of distortions demonstrate the robustness and effectiveness of the proposed rectification methodology, using a consistent evaluation methodology that considers OCR accuracy and a newly introduced measure based on a semi-automatic procedure.

  11. Endoscopic laser range scanner for minimally invasive, image guided kidney surgery

    NASA Astrophysics Data System (ADS)

    Friets, Eric; Bieszczad, Jerry; Kynor, David; Norris, James; Davis, Brynmor; Allen, Lindsay; Chambers, Robert; Wolf, Jacob; Glisson, Courtenay; Herrell, S. Duke; Galloway, Robert L.

    2013-03-01

    Image guided surgery (IGS) has led to significant advances in surgical procedures and outcomes. Endoscopic IGS is hindered, however, by the lack of suitable intraoperative scanning technology for registration with preoperative tomographic image data. This paper describes implementation of an endoscopic laser range scanner (eLRS) system for accurate intraoperative mapping of the kidney surface, registration of the measured kidney surface with preoperative tomographic images, and interactive image-based surgical guidance for subsurface lesion targeting. The eLRS comprises a standard stereo endoscope coupled to a steerable laser, which scans a laser fan beam across the kidney surface, and a high-speed color camera, which records the laser-illuminated pixel locations on the kidney. Through calibrated triangulation, a dense set of 3-D surface coordinates is determined. At maximum resolution, the eLRS acquires over 300,000 surface points in less than 15 seconds; lower-resolution scans of 27,500 points are acquired in one second. The measurement accuracy of the eLRS, determined through scanning of reference planar and spherical phantoms, is estimated to be 0.38 ± 0.27 mm at a range of 2 to 6 cm. Registration of the scanned kidney surface with preoperative image data is achieved using a modified iterative closest point algorithm. Surgical guidance is provided through graphical overlay of the boundaries of subsurface lesions, vasculature, ducts, and other renal structures labeled in the CT or MR images onto the eLRS camera image. Depth to these subsurface targets is also displayed. Proof of clinical feasibility has been established in an explanted perfused porcine kidney experiment.
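
    The calibrated triangulation step can be sketched as intersecting a camera pixel's back-projection ray with the known laser-sheet plane. The intrinsic matrix and plane parameters below are illustrative numbers, not the eLRS calibration.

```python
import numpy as np

def triangulate(pixel, K, plane_n, plane_d):
    """Intersect the back-projection ray of `pixel` with the laser plane.

    The camera sits at the origin looking down +z with intrinsic matrix K.
    The laser sheet is the plane {x : plane_n . x = plane_d} in camera
    coordinates.  Returns the 3-D surface point hit by the laser.
    """
    u, v = pixel
    ray = np.linalg.solve(K, np.array([u, v, 1.0]))  # direction of pixel ray
    t = plane_d / (plane_n @ ray)                    # ray parameter at plane
    return t * ray

# Illustrative numbers: 1000-px focal length, principal point (320, 240),
# laser sheet x = 0.05 m (a vertical plane offset from the camera center).
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([1.0, 0.0, 0.0]), 0.05

point = triangulate((420.0, 240.0), K, n, d)
print(point.round(3).tolist())  # surface point ≈ [0.05, 0.0, 0.5] m
```

    As the laser is steered, the plane parameters change per scan line; repeating this intersection for every illuminated pixel yields the dense 3-D point set.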

  12. A digital ISO expansion technique for digital cameras

    NASA Astrophysics Data System (ADS)

    Yoo, Youngjin; Lee, Kangeui; Choe, Wonhee; Park, SungChan; Lee, Seong-Deok; Kim, Chang-Yong

    2010-01-01

Market demand for digital cameras with higher sensitivity under low-light conditions is increasing rapidly, and the digital camera market has become a race to provide higher ISO capability. In this paper, we explore an approach for increasing the maximum ISO capability of digital cameras without changing the structure of the image sensor or CFA. Our method is applied directly to the raw Bayer-pattern CFA image to avoid the non-linearities and noise amplification that the ISP (Image Signal Processor) of a digital camera usually introduces. The proposed method fuses multiple short-exposure images, which are noisy but less blurred, and is designed to avoid the ghost artifacts caused by hand shake and object motion. To achieve the desired ISO image quality, both the low-frequency chromatic noise and the fine-grain noise that usually appear in high-ISO images are removed, and we then modify the different layers created by a two-scale non-linear decomposition of the image. Once our approach has been applied to an input Bayer-pattern CFA image, the resulting Bayer image is further processed by the ISP to obtain a fully processed RGB image. The performance of the proposed approach is evaluated by comparing SNR (Signal-to-Noise Ratio), MTF50 (Modulation Transfer Function), color error ΔE*ab, and visual quality with reference images whose exposure times are properly extended to a variety of target sensitivities.
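
    The sensitivity benefit of fusing short exposures can be illustrated with a toy simulation: averaging N aligned frames dominated by shot noise reduces the temporal noise by roughly √N. This sketch ignores the motion handling, ghost suppression, and two-scale noise filtering that the actual method performs:

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.full((64, 64), 100.0)  # hypothetical static scene radiance

# Eight short exposures of the same scene with Poisson-like shot noise.
frames = [rng.poisson(clean).astype(float) for _ in range(8)]

# Simple fusion by averaging (no motion or ghosting in this toy setup).
fused = np.mean(frames, axis=0)

noise_single = frames[0].std()  # temporal noise of one short exposure
noise_fused = fused.std()       # noise after fusing N = 8 frames
# Averaging N frames reduces the noise by roughly sqrt(N) ~ 2.8x here.
```

    That roughly √8 ≈ 2.8× noise reduction is the headroom that a fusion-based method can trade for a higher digital ISO setting.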

  13. A Robust Camera-Based Interface for Mobile Entertainment

    PubMed Central

    Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier

    2016-01-01

Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies to consider the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user’s head by processing the frames provided by the mobile device’s front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different factors to configure, such as the gain or the device’s orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user’s perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288

  14. Sensors for 3D Imaging: Metric Evaluation and Calibration of a CCD/CMOS Time-of-Flight Camera.

    PubMed

    Chiabrando, Filiberto; Chiabrando, Roberto; Piatti, Dario; Rinaudo, Fulvio

    2009-01-01

3D imaging with Time-of-Flight (ToF) cameras is a promising recent technique which allows 3D point clouds to be acquired at video frame rates. However, the distance measurements of these devices are often affected by systematic errors which decrease the quality of the acquired data. In order to evaluate these errors, experimental tests on a CCD/CMOS ToF camera sensor, the SwissRanger (SR)-4000 camera, were performed and are reported in this paper. Two main aspects are treated. The first is the calibration of the distance measurements of the SR-4000 camera, which covers evaluation of the camera warm-up period, evaluation of the distance measurement error, and a study of how the camera's orientation with respect to the observed object influences the distance measurements. The second is the photogrammetric calibration of the amplitude images delivered by the camera, using a purpose-built multi-resolution field of high-contrast targets.

  15. In-Bore 3-T MR-guided Transrectal Targeted Prostate Biopsy: Prostate Imaging Reporting and Data System Version 2–based Diagnostic Performance for Detection of Prostate Cancer

    PubMed Central

    Tan, Nelly; Lin, Wei-Chan; Khoshnoodi, Pooria; Asvadi, Nazanin H.; Yoshida, Jeffrey; Margolis, Daniel J. A.; Lu, David S. K.; Wu, Holden; Lu, David Y.; Huang, Jaioti

    2017-01-01

Purpose To determine the diagnostic yield of in-bore 3-T magnetic resonance (MR) imaging–guided prostate biopsy and stratify performance according to Prostate Imaging Reporting and Data System (PI-RADS) versions 1 and 2. Materials and Methods This study was HIPAA compliant and institutional review board approved. In-bore 3-T MR-guided prostate biopsy was performed in 134 targets in 106 men who (a) had not previously undergone prostate biopsy, (b) had prior negative biopsy findings with increased prostate-specific antigen (PSA) level, or (c) had a prior history of prostate cancer with increasing PSA level. Clinical, diagnostic 3-T MR imaging was performed with in-bore guided prostate biopsy, and pathology data were collected. The diagnostic yields of MR-guided biopsy per patient and target were analyzed, and differences between biopsy targets with negative and positive findings were determined. Results of logistic regression and areas under the curve were compared between PI-RADS versions 1 and 2. Results Prostate cancer was detected in 63 of 106 patients (59.4%) and in 72 of 134 targets (53.7%) with 3-T MR imaging. Forty-nine of 72 targets (68.0%) had clinically significant cancer (Gleason score ≥ 7). One complication occurred (urosepsis, 0.9%). Patients who had positive target findings had lower apparent diffusion coefficient values (875 × 10−6 mm2/sec vs 1111 × 10−6 mm2/sec, respectively; P < .01), smaller prostate volume (47.2 cm3 vs 75.4 cm3, respectively; P < .01), higher PSA density (0.16 vs 0.10, respectively; P < .01), and higher proportion of PI-RADS version 2 category 3–5 scores when compared with patients with negative target findings. MR targets with PI-RADS version 2 category 2, 3, 4, and 5 scores had a positive diagnostic yield of three of 23 (13.0%), six of 31 (19.4%), 39 of 50 (78.0%), and 24 of 29 (82.8%) targets, respectively. No differences were detected in areas under the curve for PI-RADS version 2 versus 1. Conclusion In-bore 3-T MR-guided

  16. Making Ceramic Cameras

    ERIC Educational Resources Information Center

    Squibb, Matt

    2009-01-01

    This article describes how to make a clay camera. This idea of creating functional cameras from clay allows students to experience ceramics, photography, and painting all in one unit. (Contains 1 resource and 3 online resources.)

  17. MRI-guided brachytherapy

    PubMed Central

    Tanderup, Kari; Viswanathan, Akila; Kirisits, Christian; Frank, Steven J.

    2014-01-01

The application of MRI-guided brachytherapy has demonstrated significant growth during the last two decades. Clinical improvements in cervix cancer outcomes have been linked to the application of repeated MRI for identification of residual tumor volumes during radiotherapy. This has changed clinical practice in the direction of individualized dose administration, and there is mounting evidence of improved clinical outcomes with regard to local control, overall survival, and morbidity. MRI-guided prostate HDR and LDR brachytherapy has improved the accuracy of target and organs-at-risk (OAR) delineation, and the potential exists for improved dose prescription and reporting for the prostate gland and organs at risk. Furthermore, MRI-guided prostate brachytherapy has significant potential to identify prostate subvolumes and dominant lesions to allow for dose administration reflecting the differential risk of recurrence. MRI-guided brachytherapy involves advanced imaging, target concepts, and dose planning. The key issue for safe dissemination and implementation of high-quality MRI-guided brachytherapy is the establishment of qualified multidisciplinary teams and strategies for training and education. PMID:24931089

  18. Indoor space 3D visual reconstruction using mobile cart with laser scanner and cameras

    NASA Astrophysics Data System (ADS)

    Gashongore, Prince Dukundane; Kawasue, Kikuhito; Yoshida, Kumiko; Aoki, Ryota

    2017-02-01

Indoor space 3D visual reconstruction has many applications and, once done accurately, it enables people to conduct different indoor activities in an efficient manner. For example, an effective and efficient emergency rescue response can be accomplished in a fire disaster situation by using 3D visual information of a destroyed building. Therefore, an accurate indoor-space 3D visual reconstruction system which can be operated in any given environment without GPS has been developed, using a human-operated mobile cart equipped with a laser scanner, a CCD camera, an omnidirectional camera, and a computer. Using the system, accurate indoor 3D visual data are reconstructed automatically. The obtained 3D data can be used for rescue operations, guiding blind or partially sighted persons, and so forth.

  19. Mars Science Laboratory Engineering Cameras

    NASA Technical Reports Server (NTRS)

    Maki, Justin N.; Thiessen, David L.; Pourangi, Ali M.; Kobzeff, Peter A.; Lee, Steven W.; Dingizian, Arsham; Schwochert, Mark A.

    2012-01-01

NASA's Mars Science Laboratory (MSL) Rover, which launched to Mars in 2011, is equipped with a set of 12 engineering cameras. These cameras are build-to-print copies of the Mars Exploration Rover (MER) cameras, which were sent to Mars in 2003. The engineering cameras weigh less than 300 grams each and use less than 3 W of power. Images returned from the engineering cameras are used to navigate the rover on the Martian surface, deploy the rover robotic arm, and ingest samples into the rover sample processing system. The navigation cameras (Navcams) are mounted to a pan/tilt mast and have a 45-degree square field of view (FOV) with a pixel scale of 0.82 mrad/pixel. The hazard avoidance cameras (Hazcams) are body-mounted to the rover chassis in the front and rear of the vehicle and have a 124-degree square FOV with a pixel scale of 2.1 mrad/pixel. All of the cameras utilize a frame-transfer CCD (charge-coupled device) with a 1024x1024 imaging region and red/near IR bandpass filters centered at 650 nm. The MSL engineering cameras are grouped into two sets of six: one set of cameras is connected to rover computer A and the other set is connected to rover computer B. The MSL rover carries 8 Hazcams and 4 Navcams.

  20. 1970 Supplement to the Guide to Microreproduction Equipment.

    ERIC Educational Resources Information Center

    Ballou, Hubbard W., Ed.

    The time period covered by this guide runs from the end of 1968 to the middle of 1970. Microreproduction cameras, microform readers, reader/printers, processors, contact printers, computer output microfilm equipment, and other special microform equipment and accessories produced during this time span are listed. Most of the equipment is domestic,…

  1. Mechanism of Genome Interrogation: How CRISPR RNA-Guided Cas9 Proteins Locate Specific Targets on DNA.

    PubMed

    Shvets, Alexey A; Kolomeisky, Anatoly B

    2017-10-03

    The ability to precisely edit and modify a genome opens endless opportunities to investigate fundamental properties of living systems as well as to advance various medical techniques and bioengineering applications. This possibility is now close to reality due to a recent discovery of the adaptive bacterial immune system, which is based on clustered regularly interspaced short palindromic repeats (CRISPR)-associated proteins (Cas) that utilize RNA to find and cut the double-stranded DNA molecules at specific locations. Here we develop a quantitative theoretical approach to analyze the mechanism of target search on DNA by CRISPR RNA-guided Cas9 proteins, which is followed by a selective cleavage of nucleic acids. It is based on a discrete-state stochastic model that takes into account the most relevant physical-chemical processes in the system. Using a method of first-passage processes, a full dynamic description of the target search is presented. It is found that the location of specific sites on DNA by CRISPR Cas9 proteins is governed by binding first to protospacer adjacent motif sequences on DNA, which is followed by reversible transitions into DNA interrogation states. In addition, the search dynamics is strongly influenced by the off-target cutting. Our theoretical calculations allow us to explain the experimental observations and to give experimentally testable predictions. Thus, the presented theoretical model clarifies some molecular aspects of the genome interrogation by CRISPR RNA-guided Cas9 proteins. Copyright © 2017 Biophysical Society. Published by Elsevier Inc. All rights reserved.
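
    The predicted first-passage behaviour can be illustrated with a toy discrete-state simulation in which the protein repeatedly binds PAM sites uniformly at random and interrogates each one; under that assumption the mean number of binding events before the target is found grows linearly with the number of sites (function and parameter names are illustrative, not the paper's formalism):

```python
import random

def mean_search_events(n_sites, n_trials=2000, seed=1):
    """Average number of PAM-binding events before the target (site 0)
    is found, with uniform random sampling of the n_sites PAM sites."""
    rng = random.Random(seed)
    total = 0
    for _ in range(n_trials):
        events = 0
        while True:
            events += 1
            if rng.randrange(n_sites) == 0:  # bound and interrogated the target
                break
        total += events
    return total / n_trials

# The mean scales linearly with the number of decoy sites, as expected
# for a memoryless first-passage process (~50 events for 50 sites).
mean_events = mean_search_events(50)
```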

  2. Accurate estimation of camera shot noise in the real-time

    NASA Astrophysics Data System (ADS)

    Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.

    2017-10-01

Nowadays, digital cameras are essential parts of various technological processes and daily tasks. They are widely used in optics and photonics, astronomy, biology, and other fields of science and technology, such as control systems and video-surveillance monitoring. One of the main information limitations of photo and video cameras is the noise of the photosensor pixels. A camera's photosensor noise can be divided into random and pattern components: temporal noise includes the random component, while spatial noise includes the pattern component. Temporal noise can be further divided into signal-dependent shot noise and signal-independent dark temporal noise. For measuring camera noise characteristics, the most widely used methods are standards (for example, EMVA Standard 1288), which allow precise measurement of shot and dark temporal noise but are difficult to implement and time-consuming. Earlier, we proposed a method for measuring the temporal noise of photo and video cameras based on the automatic segmentation of nonuniform targets (ASNT); only two frames are sufficient for noise measurement with the modified method. In this paper, we registered frames and estimated the shot and dark temporal noise of cameras in real time using the modified ASNT method. Estimation was performed for the following cameras: the consumer photocamera Canon EOS 400D (CMOS, 10.1 MP, 12-bit ADC), the scientific camera MegaPlus II ES11000 (CCD, 10.7 MP, 12-bit ADC), the industrial camera PixeLink PL-B781F (CMOS, 6.6 MP, 10-bit ADC), and the video-surveillance camera Watec LCL-902C (CCD, 0.47 MP, external 8-bit ADC). The experimental dependencies of temporal noise on signal value are in good agreement with fitted curves based on a Poisson distribution, excluding areas near saturation. The time required to register and process the frames used for temporal noise estimation was measured: using a standard computer, frames were registered and processed in a fraction of a second to several seconds only. Also the
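
    The two-frame idea behind this style of measurement can be sketched as follows: subtracting two registrations of the same scene cancels any fixed pattern, leaving twice the temporal noise variance. This is a simplified simulation with pure Poisson shot noise, not the authors' exact ASNT procedure:

```python
import numpy as np

rng = np.random.default_rng(2)
signal = np.full((256, 256), 400.0)  # uniform target, arbitrary signal in DN

# Two registrations of the same scene differ only in their temporal noise;
# here the temporal noise is modeled as pure Poisson shot noise.
frame_a = rng.poisson(signal).astype(float)
frame_b = rng.poisson(signal).astype(float)

# Any fixed pattern cancels in the difference, so the temporal noise
# variance is half the variance of (frame_a - frame_b).
temporal_var = np.var(frame_a - frame_b) / 2.0
# For shot noise, variance ~ mean signal (Poisson statistics), i.e. ~400 here.
```

    Repeating this per signal level and fitting variance against mean is what produces the Poisson-law dependence of temporal noise on signal reported above.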

  3. Depth perception camera for autonomous vehicle applications

    NASA Astrophysics Data System (ADS)

    Kornreich, Philipp

    2013-05-01

An imager that can measure the distance from each pixel to the point on the object that is in focus at that pixel is described. Since it provides numeric distance information from the camera to all points in its field of view, it is ideally suited for autonomous vehicle navigation and robotic vision, and it eliminates the LIDAR conventionally used for range measurements. The light arriving at a pixel through a convex lens adds constructively only if it comes from the object point in focus at this pixel; the light from all other object points cancels. Thus, the lens selects the point on the object whose range is to be determined. The range measurement is accomplished by short light guides at each pixel. The light guides contain a p-n junction and a pair of contacts along their length, as well as light-sensing elements. The device uses ambient light that is only coherent in spherical-shell-shaped light packets one coherence length thick. Each frequency component of the broadband light arriving at a pixel has a phase proportional to the distance from an object point to its image pixel.

  4. Prototype of a single probe Compton camera for laparoscopic surgery

    NASA Astrophysics Data System (ADS)

    Koyama, A.; Nakamura, Y.; Shimazoe, K.; Takahashi, H.; Sakuma, I.

    2017-02-01

Image-guided surgery (IGS) is performed using a real-time surgery navigation system with three-dimensional (3D) position tracking of surgical tools. IGS is fast becoming an important technology for high-precision laparoscopic surgeries, in which the field of view is limited. In particular, recent developments in intraoperative imaging using radioactive biomarkers may enable advanced IGS for supporting malignant tumor removal surgery. In this light, we develop a novel intraoperative probe with a Compton camera and a position tracking system for performing real-time radiation-guided surgery. A prototype probe consisting of Ce:Gd3Al2Ga3O12 (GAGG) crystals and silicon photomultipliers was fabricated, and its reconstruction algorithm was optimized to enable real-time position tracking. The results demonstrated the visualization capability of the radiation source with ARM ≈ 22.1° and the effectiveness of the proposed system.

  5. Collaborative real-time scheduling of multiple PTZ cameras for multiple object tracking in video surveillance

    NASA Astrophysics Data System (ADS)

    Liu, Yu-Che; Huang, Chung-Lin

    2013-03-01

This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human objects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human object's face for biometric purposes, (2) optimal video quality of the human objects, and (3) minimum hand-off time. Here, we define an objective function based on the expected capture conditions, such as the camera-subject distance, pan/tilt angles at capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
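
    A scoring function of this general shape, trading capture distance, pan error, and face visibility against one another, can be sketched as follows (the weights, terms, and field names are hypothetical, not the paper's objective function):

```python
import math

def capture_score(cam, subj, w_dist=1.0, w_face=2.0, w_angle=0.5):
    """Toy objective for choosing which PTZ camera should capture a subject.

    Rewards a visible face, penalizes distance and the pan correction needed.
    All weights and terms are illustrative only.
    """
    dx, dy = subj["x"] - cam["x"], subj["y"] - cam["y"]
    dist = math.hypot(dx, dy)
    pan_err = abs(math.atan2(dy, dx) - cam["pan"])  # radians of pan correction
    face = 1.0 if subj["facing_camera"] else 0.2
    return w_face * face - w_dist * dist / 10.0 - w_angle * pan_err

# Two cameras facing each other; the subject is close to the first one.
cams = [{"x": 0.0, "y": 0.0, "pan": 0.0},
        {"x": 20.0, "y": 0.0, "pan": math.pi}]
subj = {"x": 5.0, "y": 0.0, "facing_camera": True}
best = max(cams, key=lambda c: capture_score(c, subj))  # -> cams[0]
```

    A scheduler can evaluate such a score for every camera-subject pair each cycle and assign the highest-scoring camera, which is the balancing act the objective function above is meant to perform.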

  6. SPARTAN Near-IR Camera | SOAR

    Science.gov Websites

System Overview: The Spartan Infrared Camera is a high spatial resolution near-IR imager. Spartan has a focal plane consisting of four…

  7. Evaluation of multispectral plenoptic camera

    NASA Astrophysics Data System (ADS)

    Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin

    2013-01-01

Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements as well as the microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype, developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.

  8. VUV testing of science cameras at MSFC: QE measurement of the CLASP flight cameras

    NASA Astrophysics Data System (ADS)

    Champey, P.; Kobayashi, K.; Winebarger, A.; Cirtain, J.; Hyde, D.; Robertson, B.; Beabout, B.; Beabout, D.; Stewart, M.

    2015-08-01

The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint MSFC, National Astronomical Observatory of Japan (NAOJ), Instituto de Astrofisica de Canarias (IAC) and Institut D'Astrophysique Spatiale (IAS) sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual channel analog readout and an internally mounted cold block. At the flight CCD temperature of -20C, the CLASP cameras exceeded the low-noise performance requirements (<= 25 e- read noise and <= 10 e-/sec/pixel dark current), in addition to maintaining a stable gain of ≈ 2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST-calibrated photodiode were employed to measure the QE of each camera. Three flight cameras and one engineering camera were tested in a high-vacuum chamber, which was configured for several tests intended to verify the QE, gain, read noise and dark current of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV, EUV and X-ray science cameras at MSFC.

  9. SFR test fixture for hemispherical and hyperhemispherical camera systems

    NASA Astrophysics Data System (ADS)

    Tamkin, John M.

    2017-08-01

    Optical testing of camera systems in volume production environments can often require expensive tooling and test fixturing. Wide field (fish-eye, hemispheric and hyperhemispheric) optical systems create unique challenges because of the inherent distortion, and difficulty in controlling reflections from front-lit high resolution test targets over the hemisphere. We present a unique design for a test fixture that uses low-cost manufacturing methods and equipment such as 3D printing and an Arduino processor to control back-lit multi-color (VIS/NIR) targets and sources. Special care with LED drive electronics is required to accommodate both global and rolling shutter sensors.

  10. Visual Target Tracking in the Presence of Unknown Observer Motion

    NASA Technical Reports Server (NTRS)

    Williams, Stephen; Lu, Thomas

    2009-01-01

    Much attention has been given to the visual tracking problem due to its obvious uses in military surveillance. However, visual tracking is complicated by the presence of motion of the observer in addition to the target motion, especially when the image changes caused by the observer motion are large compared to those caused by the target motion. Techniques for estimating the motion of the observer based on image registration techniques and Kalman filtering are presented and simulated. With the effects of the observer motion removed, an additional phase is implemented to track individual targets. This tracking method is demonstrated on an image stream from a buoy-mounted or periscope-mounted camera, where large inter-frame displacements are present due to the wave action on the camera. This system has been shown to be effective at tracking and predicting the global position of a planar vehicle (boat) being observed from a single, out-of-plane camera. Finally, the tracking system has been extended to a multi-target scenario.
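
    The observer-motion estimate can be smoothed with a standard constant-velocity Kalman filter run over the per-frame registration offsets; a minimal one-dimensional sketch (the noise covariances and the measurement values are illustrative, not the paper's configuration):

```python
import numpy as np

# Constant-velocity Kalman filter for one coordinate of the observer's
# image shift: state = (position, velocity), measured position only.
F = np.array([[1.0, 1.0], [0.0, 1.0]])  # state transition per frame
H = np.array([[1.0, 0.0]])              # measurement picks out position
Q = np.eye(2) * 1e-3                    # process noise covariance
R = np.array([[0.5]])                   # measurement noise covariance

x = np.zeros(2)   # initial state estimate
P = np.eye(2)     # initial state covariance

# Noisy registration offsets drifting ~1 px/frame (e.g. from wave motion).
measurements = [1.0, 2.1, 2.9, 4.2, 5.0]
for z in measurements:
    # Predict step: propagate state and covariance one frame forward.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step: fold in the measured registration offset.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([z]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

# x[1] now approximates the inter-frame drift of ~1 px/frame.
```

    Subtracting the filtered observer shift from each frame leaves the residual image motion attributable to the targets, which the second tracking phase then follows.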

  11. Attentional Control via Parallel Target-Templates in Dual-Target Search

    PubMed Central

    Barrett, Doug J. K.; Zobay, Oliver

    2014-01-01

Simultaneous search for two targets has been shown to be slower and less accurate than independent searches for the same two targets. Recent research suggests this ‘dual-target cost’ may be attributable to a limit in the number of target-templates that can guide search at any one time. The current study investigated this possibility by comparing behavioural responses during single- and dual-target searches for targets defined by their orientation. The results revealed an increase in reaction times for dual- compared to single-target searches that was largely independent of the number of items in the display. Response accuracy also decreased on dual- compared to single-target searches: dual-target accuracy was higher than predicted by a model restricting search guidance to a single target-template and lower than predicted by a model simulating two independent single-target searches. These results are consistent with a parallel model of dual-target search in which attentional control is exerted by more than one target-template at a time. The requirement to maintain two target-templates simultaneously, however, appears to impose a reduction in the specificity of the memory representation that guides search for each target. PMID:24489793

  12. Low, slow, small target recognition based on spatial vision network

    NASA Astrophysics Data System (ADS)

    Cheng, Zhao; Guo, Pei; Qi, Xin

    2018-03-01

Traditional photoelectric monitoring uses a large number of identical cameras. To ensure full coverage of the monitoring area, this method requires many cameras, which leads to large overlaps between monitored regions and to higher costs and waste. To reduce the monitoring cost and address the difficult problem of finding, identifying, and tracking low-altitude, slow-speed, small targets, this paper presents a spatial vision network for low-slow-small target recognition. Based on the camera imaging principle and a monitoring model, the spatial vision network is modeled and optimized. Simulation results demonstrate that the proposed method achieves good performance.

  13. [Research Award providing funds for a tracking video camera

    NASA Technical Reports Server (NTRS)

    Collett, Thomas

    2000-01-01

    The award provided funds for a tracking video camera. The camera has been installed and the system calibrated. It has enabled us to follow in real time the tracks of individual wood ants (Formica rufa) within a 3m square arena as they navigate singly in-doors guided by visual cues. To date we have been using the system on two projects. The first is an analysis of the navigational strategies that ants use when guided by an extended landmark (a low wall) to a feeding site. After a brief training period, ants are able to keep a defined distance and angle from the wall, using their memory of the wall's height on the retina as a controlling parameter. By training with walls of one height and length and testing with walls of different heights and lengths, we can show that ants adjust their distance from the wall so as to keep the wall at the height that they learned during training. Thus, their distance from the base of a tall wall is further than it is from the training wall, and the distance is shorter when the wall is low. The stopping point of the trajectory is defined precisely by the angle that the far end of the wall makes with the trajectory. Thus, ants walk further if the wall is extended in length and not so far if the wall is shortened. These experiments represent the first case in which the controlling parameters of an extended trajectory can be defined with some certainty. It raises many questions for future research that we are now pursuing.

  14. Easily Accessible Camera Mount

    NASA Technical Reports Server (NTRS)

    Chalson, H. E.

    1986-01-01

Modified mount enables fast alignment of movie cameras in explosion-proof housings. Screw on side is readily reached through side door of housing. Mount includes right-angle drive mechanism containing two miter gears that turn threaded shaft. Shaft drives movable dovetail clamping jaw that engages fixed dovetail plate on camera. Mechanism aligns camera in housing and secures it. Reduces installation time by 80 percent.

  15. Electronic still camera

    NASA Astrophysics Data System (ADS)

    Holland, S. Douglas

    1992-09-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  16. Electronic Still Camera

    NASA Technical Reports Server (NTRS)

    Holland, S. Douglas (Inventor)

    1992-01-01

    A handheld, programmable, digital camera is disclosed that supports a variety of sensors and has program control over the system components to provide versatility. The camera uses a high performance design which produces near film quality images from an electronic system. The optical system of the camera incorporates a conventional camera body that was slightly modified, thus permitting the use of conventional camera accessories, such as telephoto lenses, wide-angle lenses, auto-focusing circuitry, auto-exposure circuitry, flash units, and the like. An image sensor, such as a charge coupled device ('CCD') collects the photons that pass through the camera aperture when the shutter is opened, and produces an analog electrical signal indicative of the image. The analog image signal is read out of the CCD and is processed by preamplifier circuitry, a correlated double sampler, and a sample and hold circuit before it is converted to a digital signal. The analog-to-digital converter has an accuracy of eight bits to insure accuracy during the conversion. Two types of data ports are included for two different data transfer needs. One data port comprises a general purpose industrial standard port and the other a high speed/high performance application specific port. The system uses removable hard disks as its permanent storage media. The hard disk receives the digital image signal from the memory buffer and correlates the image signal with other sensed parameters, such as longitudinal or other information. When the storage capacity of the hard disk has been filled, the disk can be replaced with a new disk.

  17. Mars Exploration Rover engineering cameras

    USGS Publications Warehouse

    Maki, J.N.; Bell, J.F.; Herkenhoff, K. E.; Squyres, S. W.; Kiely, A.; Klimesh, M.; Schwochert, M.; Litwin, T.; Willson, R.; Johnson, Aaron H.; Maimone, M.; Baumgartner, E.; Collins, A.; Wadsworth, M.; Elliot, S.T.; Dingizian, A.; Brown, D.; Hagerott, E.C.; Scherr, L.; Deen, R.; Alexander, D.; Lorre, J.

    2003-01-01

    NASA's Mars Exploration Rover (MER) Mission will place a total of 20 cameras (10 per rover) onto the surface of Mars in early 2004. Fourteen of the 20 cameras are designated as engineering cameras and will support the operation of the vehicles on the Martian surface. Images returned from the engineering cameras will also be of significant importance to the scientific community for investigative studies of rock and soil morphology. The Navigation cameras (Navcams, two per rover) are a mast-mounted stereo pair, each with a 45° square field of view (FOV) and an angular resolution of 0.82 milliradians per pixel (mrad/pixel). The Hazard Avoidance cameras (Hazcams, four per rover) are a body-mounted, front- and rear-facing set of stereo pairs, each with a 124° square FOV and an angular resolution of 2.1 mrad/pixel. The Descent camera (one per rover), mounted to the lander, has a 45° square FOV and will return images with spatial resolutions of ≈4 m/pixel. All of the engineering cameras utilize broadband visible filters and 1024 × 1024 pixel detectors. Copyright 2003 by the American Geophysical Union.

  18. Clickable and imageable multiblock polymer micelles with magnetically guided and PEG-switched targeting and release property for precise tumor theranosis.

    PubMed

    Wei, Jing; Shuai, Xiaoyu; Wang, Rui; He, Xueling; Li, Yiwen; Ding, Mingming; Li, Jiehua; Tan, Hong; Fu, Qiang

    2017-11-01

    Targeted delivery of therapeutics and diagnostics using nanotechnology holds great promise to minimize the side effects of conventional chemotherapy and enable specific and real-time detection of diseases. To realize this goal, we report a clickable and imageable nanovehicle assembled from multiblock polyurethanes (MPUs). The soft segments of the polymers are based on detachable poly(ethylene glycol) (PEG) and degradable poly(ε-caprolactone) (PCL), and the hard segments are constructed from lysine- and cystine-derivatives bearing reduction-responsive disulfide linkages and click-active alkynyl moieties, allowing for post-conjugation of targeting ligands via click chemistry. It was found that the cleavage of the PEG corona bearing a pH-sensitive benzoic-imine linkage (BPEG) could act as an on-off switch, which is capable of activating the clicked targeting ligands under extracellular acidic conditions, followed by triggering the core degradation and payload release within tumor cells. In combination with superparamagnetic iron oxide nanoparticles (SPION) clustered within the micellar core, the MPUs exhibit excellent magnetic resonance imaging (MRI) contrast effects and T2 relaxation in vitro, as well as magnetically guided MR imaging and precise multimodal targeting of therapeutics to tumors, leading to significant inhibition of cancer with minimal side effects. This work provides a safe and versatile platform for the further development of smart theranostic systems for potential magnetically-targeted and imaging-guided personalized medicine. Copyright © 2017 Elsevier Ltd. All rights reserved.

  19. Mixel camera--a new push-broom camera concept for high spatial resolution keystone-free hyperspectral imaging.

    PubMed

    Høye, Gudrun; Fridman, Andrei

    2013-05-06

    Current high-resolution push-broom hyperspectral cameras introduce keystone errors to the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component--an array of light mixing chambers--with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data recorded on the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited--even in bright light--with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.

  20. Advanced High-Definition Video Cameras

    NASA Technical Reports Server (NTRS)

    Glenn, William

    2007-01-01

    A product line of high-definition color video cameras, now under development, offers a superior combination of desirable characteristics, including high frame rates, high resolutions, low power consumption, and compactness. Several of the cameras feature a 3,840 × 2,160-pixel format with progressive scanning at 30 frames per second. The power consumption of one of these cameras is about 25 W. The size of the camera, excluding the lens assembly, is 2 by 5 by 7 in. (about 5.1 by 12.7 by 17.8 cm). The aforementioned desirable characteristics are attained at relatively low cost, largely by utilizing digital processing in advanced field-programmable gate arrays (FPGAs) to perform all of the many functions (for example, color balance and contrast adjustments) of a professional color video camera. The processing is programmed in VHDL so that application-specific integrated circuits (ASICs) can be fabricated directly from the program. ["VHDL" signifies VHSIC Hardware Description Language, a computing language used by the United States Department of Defense for describing, designing, and simulating very-high-speed integrated circuits (VHSICs).] The image-sensor and FPGA clock frequencies in these cameras have generally been much higher than those used in video cameras designed and manufactured elsewhere. Frequently, the outputs of these cameras are converted to other video-camera formats by use of pre- and post-filters.

  1. Ringfield lithographic camera

    DOEpatents

    Sweatt, W.C.

    1998-09-08

    A projection lithography camera is presented with a wide ringfield optimized so as to make efficient use of extreme ultraviolet radiation from a large area radiation source (e.g., D_source ≈ 0.5 mm). The camera comprises four aspheric mirrors optically arranged on a common axis of symmetry. The camera includes an aperture stop that is accessible through a plurality of partial aperture stops to synthesize the theoretical aperture stop. Radiation from a mask is focused to form a reduced image on a wafer, relative to the mask, by reflection from the four aspheric mirrors. 11 figs.

  2. Development of the radial neutron camera system for the HL-2A tokamak

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Y. P., E-mail: zhangyp@swip.ac.cn; Yang, J. W.; Liu, Yi

    2016-06-15

    A new radial neutron camera system has been developed and operated recently in the HL-2A tokamak to measure the spatially and temporally resolved 2.5 MeV D-D fusion neutrons, enhancing the understanding of energetic-ion physics. The camera mainly consists of a multichannel collimator, liquid-scintillation detectors, shielding systems, and a data acquisition system. Measurements of the D-D fusion neutrons using the camera were successfully performed during the 2015 HL-2A experiment campaign. The measurements show that the distribution of the fusion neutrons in the HL-2A plasma has a peaked profile, suggesting that the neutral beam injection beam ions in the plasma have a peaked distribution. It also suggests that the neutrons are primarily produced by beam-target reactions in the plasma core region. The measurement results from the neutron camera agree well with the results of both a standard ²³⁵U fission chamber and NUBEAM neutron calculations. In this paper, the new radial neutron camera system on HL-2A and the first experimental results are described.

  3. Near infra-red astronomy with adaptive optics and laser guide stars at the Keck Observatory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Max, C.E.; Gavel, D.T.; Olivier, S.S.

    1995-08-03

    A laser guide star adaptive optics system is being built for the W. M. Keck Observatory's 10-meter Keck II telescope. Two new near infra-red instruments will be used with this system: a high-resolution camera (NIRC 2) and an echelle spectrometer (NIRSPEC). The authors describe the expected capabilities of these instruments for high-resolution astronomy, using adaptive optics with either a natural star or a sodium-layer laser guide star as a reference. They compare the expected performance of these planned Keck adaptive optics instruments with that predicted for the NICMOS near infra-red camera, which is scheduled to be installed on the Hubble Space Telescope in 1997.

  4. LSST Camera Optics Design

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Riot, V J; Olivier, S; Bauman, B

    2012-05-24

    The Large Synoptic Survey Telescope (LSST) uses a novel three-mirror telescope design feeding a camera system that includes a set of broad-band filters and three refractive corrector lenses to produce a flat field at the focal plane with a wide field of view. The optical design of the camera lenses and filters is integrated with the optical design of the telescope mirrors to optimize performance. We discuss the rationale for the LSST camera optics design, describe the methodology for fabricating, coating, mounting and testing the lenses and filters, and present the results of detailed analyses demonstrating that the camera optics will meet their performance goals.

  5. Hyperspectral imaging using a color camera and its application for pathogen detection

    USDA-ARS?s Scientific Manuscript database

    This paper reports the results of a feasibility study for the development of a hyperspectral image recovery (reconstruction) technique using a RGB color camera and regression analysis in order to detect and classify colonies of foodborne pathogens. The target bacterial pathogens were the six represe...

  6. The MKID Camera

    NASA Astrophysics Data System (ADS)

    Maloney, P. R.; Czakon, N. G.; Day, P. K.; Duan, R.; Gao, J.; Glenn, J.; Golwala, S.; Hollister, M.; LeDuc, H. G.; Mazin, B.; Noroozian, O.; Nguyen, H. T.; Sayers, J.; Schlaerth, J.; Vaillancourt, J. E.; Vayonakis, A.; Wilson, P.; Zmuidzinas, J.

    2009-12-01

    The MKID Camera project is a collaborative effort of Caltech, JPL, the University of Colorado, and UC Santa Barbara to develop a large-format, multi-color millimeter and submillimeter-wavelength camera for astronomy using microwave kinetic inductance detectors (MKIDs). These are superconducting micro-resonators fabricated from thin aluminum and niobium films. We couple the MKIDs to multi-slot antennas and measure the change in surface impedance produced by photon-induced breaking of Cooper pairs. The readout is almost entirely at room temperature and can be highly multiplexed; in principle hundreds or even thousands of resonators could be read out on a single feedline. The camera will have 576 spatial pixels that image simultaneously in four bands at 750, 850, 1100 and 1300 microns. It is scheduled for deployment at the Caltech Submillimeter Observatory in the summer of 2010. We present an overview of the camera design and readout and describe the current status of testing and fabrication.

  7. Development of a real time multiple target, multi camera tracker for civil security applications

    NASA Astrophysics Data System (ADS)

    Åkerlund, Hans

    2009-09-01

    A surveillance system has been developed that can use multiple TV cameras to detect and track personnel and objects in real time in public areas. This document describes the development and the system setup. The system is called NIVS (Networked Intelligent Video Surveillance). Persons in the images are tracked and displayed on a 3D map of the surveyed area.

  8. Vibration extraction based on fast NCC algorithm and high-speed camera.

    PubMed

    Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an

    2015-09-20

    In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the mass loading introduced by conventional contact sensors. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images in the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
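
    The NCC criterion at the heart of this tracker is easy to sketch. The following Python sketch is illustrative only (an exhaustive integer-pixel search, not the paper's accelerated local search or subpixel refinement): it scores a mean-removed template against every window of an image and returns the best match position.

```python
import numpy as np

def ncc(template, window):
    """Normalized cross-correlation of a template with an equally
    sized image window; returns a score in [-1, 1]."""
    t = template - template.mean()
    w = window - window.mean()
    denom = np.sqrt((t ** 2).sum() * (w ** 2).sum())
    return 0.0 if denom == 0 else float((t * w).sum() / denom)

def match(image, template):
    """Exhaustive NCC search returning the best (row, col).
    The paper's modified local search would instead scan only a
    small neighbourhood around the previous frame's match."""
    th, tw = template.shape
    best, best_rc = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            score = ncc(template, image[r:r + th, c:c + tw])
            if score > best:
                best, best_rc = score, (r, c)
    return best_rc

rng = np.random.default_rng(0)
img = rng.random((40, 40))
tmpl = img[12:20, 25:33].copy()  # template cut from location (12, 25)
print(match(img, tmpl))          # recovers (12, 25)
```

    Restricting the double loop to a small window around the previous match is what yields the roughly tenfold speedup the abstract reports.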

  9. Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission

    NASA Astrophysics Data System (ADS)

    Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.

    2018-02-01

    NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
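
    The quoted pixel scale can be cross-checked against the FOV and detector format. A back-of-the-envelope sketch, assuming the FOV is spread uniformly across the array (ignoring lens distortion, so it only approximates the stated 0.28 mrad/pixel):

```python
import math

def pixel_scale_mrad(fov_deg, pixels):
    """Approximate angular pixel scale (mrad/pixel), assuming the
    field of view is spread evenly across the detector and ignoring
    lens distortion."""
    return math.radians(fov_deg) / pixels * 1000.0

# TAGCAMS: roughly 44 x 32 degree FOV on a 2592 x 1944 pixel array
print(round(pixel_scale_mrad(44, 2592), 3))  # ~0.296
print(round(pixel_scale_mrad(32, 1944), 3))  # ~0.287
```

    Both values land within a few percent of the stated 0.28 mrad/pixel, consistent with a distortion-free approximation.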

  10. A versatile photogrammetric camera automatic calibration suite for multispectral fusion and optical helmet tracking

    NASA Astrophysics Data System (ADS)

    de Villiers, Jason; Jermy, Robert; Nicolls, Fred

    2014-06-01

    This paper presents a system to determine the photogrammetric parameters of a camera. The lens distortion, focal length and camera six-degree-of-freedom (DOF) position are calculated. The system caters for cameras of different sensitivity spectra and fields of view without any mechanical modifications. The distortion characterization, a variant of Brown's classic plumb line method, allows many radial and tangential distortion coefficients and finds the optimal principal point. Typical values are 5 radial and 3 tangential coefficients. These parameters are determined stably and demonstrably produce superior results to low-order models, despite popular and prevalent misconceptions to the contrary. The system produces coefficients to model both the distorted-to-undistorted pixel coordinate transformation (e.g. for target designation) and the inverse transformation (e.g. for image stitching and fusion), allowing deterministic rates far exceeding real time. The focal length is determined to minimise the error in absolute photogrammetric positional measurement for both multi-camera and monocular (e.g. helmet tracker) systems. The system determines the 6 DOF position of the camera in a chosen coordinate system. It can also determine the 6 DOF offset of the camera relative to its mechanical mount. This allows faulty cameras to be replaced without requiring a recalibration of the entire system (such as an aircraft cockpit). Results from two simple applications of the calibration results are presented: stitching and fusion of the images from a dual-band visual/LWIR camera array, and a simple laboratory optical helmet tracker.
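
    The plumb-line distortion characterization described above fits a Brown-style radial-plus-tangential model. A minimal sketch of the forward (undistorted-to-distorted) mapping on normalised image coordinates; the coefficient counts here are illustrative, not the paper's 5-radial/3-tangential configuration:

```python
def distort(x, y, k, p):
    """Brown lens-distortion model (sketch): map an undistorted,
    normalised point (x, y) to its distorted position using radial
    coefficients k[i] and tangential coefficients p[0], p[1]."""
    r2 = x * x + y * y
    radial = 1.0
    for i, ki in enumerate(k):            # 1 + k1*r^2 + k2*r^4 + ...
        radial += ki * r2 ** (i + 1)
    x_tan = 2 * p[0] * x * y + p[1] * (r2 + 2 * x * x)
    y_tan = p[0] * (r2 + 2 * y * y) + 2 * p[1] * x * y
    return x * radial + x_tan, y * radial + y_tan

# with all coefficients zero the mapping is the identity
print(distort(0.3, -0.2, [0.0, 0.0], [0.0, 0.0]))
```

    As the abstract notes, the inverse (distorted-to-undistorted) direction gets its own fitted coefficient set, since inverting this polynomial in closed form is impractical.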

  11. Localization accuracy from automatic and semi-automatic rigid registration of locally-advanced lung cancer targets during image-guided radiation therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Robertson, Scott P.; Weiss, Elisabeth; Hugo, Geoffrey D.

    2012-01-15

    Purpose: To evaluate localization accuracy resulting from rigid registration of locally-advanced lung cancer targets using fully automatic and semi-automatic protocols for image-guided radiation therapy. Methods: Seventeen lung cancer patients, fourteen also presenting with involved lymph nodes, received computed tomography (CT) scans once per week throughout treatment under active breathing control. A physician contoured both lung and lymph node targets for all weekly scans. Various automatic and semi-automatic rigid registration techniques were then performed for both individual and simultaneous alignments of the primary gross tumor volume (GTV_P) and involved lymph nodes (GTV_LN) to simulate the localization process in image-guided radiation therapy. Techniques included "standard" (direct registration of weekly images to a planning CT), "seeded" (manual prealignment of targets to guide standard registration), "transitive-based" (alignment of pretreatment and planning CTs through one or more intermediate images), and "rereferenced" (designation of a new reference image for registration). Localization error (LE) was assessed as the residual centroid and border distances between targets from planning and weekly CTs after registration. Results: Initial bony alignment resulted in centroid LE of 7.3 ± 5.4 mm and 5.4 ± 3.4 mm for the GTV_P and GTV_LN, respectively. Compared to bony alignment, transitive-based and seeded registrations significantly reduced GTV_P centroid LE to 4.7 ± 3.7 mm (p = 0.011) and 4.3 ± 2.5 mm (p < 1 × 10⁻³), respectively, but the smallest GTV_P LE of 2.4 ± 2.1 mm was provided by rereferenced registration (p < 1 × 10⁻⁶). Standard registration significantly reduced GTV_LN centroid LE to 3.2 ± 2.5 mm (p < 1 × 10⁻³) compared to bony alignment, with little additional gain offered by the other registration techniques. For simultaneous target alignment, centroid LE
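
    The centroid component of the localization error defined above is simply the distance between target centroids after applying the registration result. A hypothetical, translation-only sketch (the study's registrations are full rigid transforms):

```python
import numpy as np

def centroid_le(weekly_pts, planning_pts, shift):
    """Centroid localization error in mm: distance between the
    weekly-CT target centroid, after applying the rigid translation
    'shift' found by registration, and the planning-CT centroid.
    Inputs are (N, 3) point sets in mm."""
    c_weekly = np.asarray(weekly_pts, float).mean(axis=0) + np.asarray(shift, float)
    c_plan = np.asarray(planning_pts, float).mean(axis=0)
    return float(np.linalg.norm(c_weekly - c_plan))

weekly = [[1.0, 2.0, 3.0], [3.0, 2.0, 1.0]]   # centroid (2, 2, 2)
plan = [[2.0, 2.0, 2.0], [2.0, 2.0, 2.0]]     # centroid (2, 2, 2)
print(centroid_le(weekly, plan, shift=[0.0, 0.0, 0.0]))  # 0.0
```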

  12. Recent developments for the Large Binocular Telescope Guiding Control Subsystem

    NASA Astrophysics Data System (ADS)

    Golota, T.; De La Peña, M. D.; Biddick, C.; Lesser, M.; Leibold, T.; Miller, D.; Meeks, R.; Hahn, T.; Storm, J.; Sargent, T.; Summers, D.; Hill, J.; Kraus, J.; Hooper, S.; Fisher, D.

    2014-07-01

    The Large Binocular Telescope (LBT) has eight Acquisition, Guiding, and wavefront Sensing Units (AGw units). They provide guiding and wavefront sensing capability at eight different locations at both direct and bent Gregorian focal stations. The recent addition of focal stations for the PEPSI and MODS instruments doubled the number of focal stations in use, along with the corresponding motion and camera-controller server computers and the software infrastructure communicating with the Guiding Control Subsystem (GCS). This paper describes the improvements made to the LBT GCS and explains how these changes have led to better maintainability and contributed to increased reliability. It also discusses the current GCS status and reviews potential upgrades to further improve its performance.

  13. Researches on hazard avoidance cameras calibration of Lunar Rover

    NASA Astrophysics Data System (ADS)

    Li, Chunyan; Wang, Li; Lu, Xin; Chen, Jihua; Fan, Shenghong

    2017-11-01

    China's Lunar Lander and Rover will be launched in 2013 to carry out the mission objectives of lunar soft landing and rover exploration. The Lunar Rover has a forward-facing stereo camera pair (Hazcams) for hazard avoidance, and Hazcam calibration is essential for stereo vision. The Hazcam optics are f-theta fish-eye lenses with a 120°×120° horizontal/vertical field of view (FOV) and a 170° diagonal FOV. They introduce significant distortion in images and the acquired images are quite warped, which prevents conventional camera calibration algorithms from working well. A photogrammetric calibration method for the geometric model of this type of fish-eye optics is investigated in this paper. In the method, the Hazcam model is represented by collinearity equations with interior orientation and exterior orientation parameters [1] [2]. For high-precision applications, the accurate calibration model is formulated with the radial symmetric distortion and the decentering distortion, as well as parameters to model affinity and shear, based on the fisheye deformation model [3] [4]. The proposed method has been applied to the stereo camera calibration system for the Lunar Rover.
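
    The f-theta ("equidistant") projection named above maps field angle linearly to image height, r = f·θ, which is what lets a single lens cover a 170° diagonal FOV; the pinhole model, r = f·tan θ, diverges long before that. A small illustrative sketch (the unit focal length is an assumption):

```python
import math

def ftheta_radius(theta_deg, f=1.0):
    """Image height under the f-theta (equidistant) fisheye model:
    r = f * theta, with theta in radians. Stays finite even as the
    field angle approaches 90 degrees."""
    return f * math.radians(theta_deg)

def pinhole_radius(theta_deg, f=1.0):
    """Image height under the pinhole model: r = f * tan(theta)."""
    return f * math.tan(math.radians(theta_deg))

# the two models agree for small angles and diverge sharply off-axis
for theta in (10, 45, 85):
    print(theta, round(ftheta_radius(theta), 3), round(pinhole_radius(theta), 3))
```

    This linear angle-to-height mapping is also why the standard polynomial pinhole-distortion calibration breaks down and a dedicated fisheye deformation model is needed.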

  14. Expansion of the CRISPR-Cas9 genome targeting space through the use of H1 promoter-expressed guide RNAs.

    PubMed

    Ranganathan, Vinod; Wahlin, Karl; Maruotti, Julien; Zack, Donald J

    2014-08-08

    The repurposed CRISPR-Cas9 system has recently emerged as a revolutionary genome-editing tool. Here we report a modification in the expression of the guide RNA (gRNA) required for targeting that greatly expands the targetable genome. gRNA expression through the commonly used U6 promoter requires a guanosine nucleotide to initiate transcription, thus constraining genomic-targeting sites to GN19NGG. We demonstrate the ability to modify endogenous genes using H1 promoter-expressed gRNAs, which can be used to target both AN19NGG and GN19NGG genomic sites. AN19NGG sites occur ~15% more frequently than GN19NGG sites in the human genome and the increase in targeting space is also enriched at human genes and disease loci. Together, our results enhance the versatility of the CRISPR technology by more than doubling the number of targetable sites within the human genome and other eukaryotic species.
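
    The enlarged targeting space can be illustrated with a toy site counter. A sketch that scans the forward strand only and ignores the reverse complement and any specificity filtering; the sequence is made up for illustration:

```python
import re

def target_sites(seq, first_base):
    """Count candidate Cas9 sites of the form <first_base>N19-NGG:
    a 20-nt protospacer starting with first_base ('G' for U6-driven
    gRNAs; 'A' also becomes usable with the H1 promoter) followed
    by an NGG PAM. The lookahead lets overlapping sites count."""
    pattern = re.compile(r'(?=(%s[ACGT]{19}[ACGT]GG))' % first_base)
    return len(pattern.findall(seq.upper()))

# toy sequence containing one AN19NGG site and one GN19NGG site
seq = "A" + "C" * 19 + "TGG" + "G" + "C" * 19 + "AGG"
print(target_sites(seq, "G"), target_sites(seq, "A"))  # 1 1
```

    Summing the counts for both first bases, rather than requiring an initial G, is the roughly twofold expansion of targetable sites the abstract describes.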

  15. VUV Testing of Science Cameras at MSFC: QE Measurement of the CLASP Flight Cameras

    NASA Technical Reports Server (NTRS)

    Champey, Patrick; Kobayashi, Ken; Winebarger, Amy; Cirtain, Jonathan; Hyde, David; Robertson, Bryan; Beabout, Brent; Beabout, Dyana; Stewart, Mike

    2015-01-01

    The NASA Marshall Space Flight Center (MSFC) has developed a science camera suitable for sub-orbital missions for observations in the UV, EUV and soft X-ray. Six cameras were built and tested for the Chromospheric Lyman-Alpha Spectro-Polarimeter (CLASP), a joint National Astronomical Observatory of Japan (NAOJ) and MSFC sounding rocket mission. The CLASP camera design includes a frame-transfer e2v CCD57-10 512 × 512 detector, dual-channel analog readout electronics and an internally mounted cold block. At the flight operating temperature of -20 °C, the CLASP cameras achieved the low-noise performance requirements (≤25 e- read noise and ≤10 e-/sec/pix dark current), in addition to maintaining a stable gain of ≈2.0 e-/DN. The e2v CCD57-10 detectors were coated with Lumogen-E to improve quantum efficiency (QE) at the Lyman-α wavelength. A vacuum ultra-violet (VUV) monochromator and a NIST calibrated photodiode were employed to measure the QE of each camera. Four flight-like cameras were tested in a high-vacuum chamber, which was configured to operate several tests intended to verify the QE, gain, read noise, dark current and residual non-linearity of the CCD. We present and discuss the QE measurements performed on the CLASP cameras. We also discuss the high-vacuum system outfitted for testing of UV and EUV science cameras at MSFC.

  16. Cameras and settings for optimal image capture from UAVs

    NASA Astrophysics Data System (ADS)

    Smith, Mike; O'Connor, James; James, Mike R.

    2017-04-01

    Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion, which can make experiments difficult to reproduce accurately. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well-exposed and suitable imagery are derived. This then leads to a discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.

  17. Behavioral patterns and in-situ target strength of the hairtail ( Trichiurus lepturus) via coupling of scientific echosounder and acoustic camera data

    NASA Astrophysics Data System (ADS)

    Hwang, Kangseok; Yoon, Eun-A.; Kang, Sukyung; Cha, Hyungkee; Lee, Kyounghoon

    2017-12-01

    The present study focuses on the influence of swimming angle on the target strength (TS) of the hairtail (Trichiurus lepturus). We measured in-situ TS at 38 and 120 kHz with luring lamps at a fishing ground for jigging boats near the coastal waters of Jeju-do in Korea. Swimming angle and size of hairtails were measured using an acoustic camera. Results showed that mean preanal length was estimated to be 13.5 cm (SD = 2.7 cm) and mean swimming tilt angle was estimated to be 43.9° (SD = 17.6°). The mean TS values were -35.7 and -41.2 dB at 38 and 120 kHz, respectively. The results will assist in understanding the influence of swimming angle on the TS of hairtails and, thus, improve the accuracy of biomass estimates.
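
    Because TS is a logarithmic quantity, mean values like those reported are conventionally obtained by averaging backscattering cross-sections in the linear domain and converting back to decibels, rather than averaging the dB values directly. A minimal sketch; combining the paper's two frequency means here is purely an arithmetic illustration:

```python
import math

def mean_ts_db(ts_values_db):
    """Average target-strength values in the linear domain
    (backscattering cross-section), then convert back to dB."""
    linear = [10.0 ** (ts / 10.0) for ts in ts_values_db]
    return 10.0 * math.log10(sum(linear) / len(linear))

print(round(mean_ts_db([-35.7, -41.2]), 2))  # ~ -37.63, not the dB midpoint -38.45
```

    The linear-domain mean is pulled toward the stronger echo, which is why it differs from the simple dB average.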

  18. Guided exploration in virtual environments

    NASA Astrophysics Data System (ADS)

    Beckhaus, Steffi; Eckel, Gerhard; Strothotte, Thomas

    2001-06-01

    We describe an application supporting alternating interaction and animation for the purpose of exploration in a surround-screen projection-based virtual reality system. The exploration of an environment is a highly interactive and dynamic process in which the presentation of objects of interest can give the user guidance while exploring the scene. Previous systems for automatic presentation of models or scenes need either cinematographic rules, direct human interaction, framesets, or precalculation (e.g. precalculation of paths to a predefined goal). We report on the development of a system that can deal with rapidly changing user interest in objects of a scene or model, as well as with dynamic models and changes of the camera position introduced interactively by the user. It is implemented as a potential-field-based system that generates camera data. In this paper we describe the implementation of our approach in a virtual art museum on the CyberStage, our surround-screen projection-based stereoscopic display. The paradigm of guided exploration is introduced, describing the freedom of the user to explore the museum autonomously. At the same time, if requested by the user, guided exploration provides just-in-time navigational support. The user controls this support by specifying the current field of interest in high-level search criteria. We also present an informal user study evaluating this approach.
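
    A potential-field camera guide of this kind can be sketched as gradient descent on a field with an attractive term toward the current object of interest and repulsive terms around obstacles; a shift of user interest simply swaps the goal. All names, gains and geometry below are illustrative assumptions, not the CyberStage implementation:

```python
def step(pos, goal, obstacles, k_att=1.0, k_rep=0.5, lr=0.1):
    """One gradient step of a 2D potential-field camera guide:
    linear attraction toward the goal plus 1/d repulsion from
    each obstacle."""
    gx = k_att * (goal[0] - pos[0])
    gy = k_att * (goal[1] - pos[1])
    for ox, oy in obstacles:
        dx, dy = pos[0] - ox, pos[1] - oy
        d2 = dx * dx + dy * dy + 1e-9      # avoid division by zero
        gx += k_rep * dx / d2
        gy += k_rep * dy / d2
    return pos[0] + lr * gx, pos[1] + lr * gy

pos = (0.0, 0.0)
for _ in range(200):                        # camera drifts toward the exhibit
    pos = step(pos, goal=(5.0, 3.0), obstacles=[(2.0, 1.5)])
print(round(pos[0], 2), round(pos[1], 2))   # settles near the goal
```

    Because the field is re-evaluated every step, both a moving goal (changed interest) and moving obstacles (dynamic models) are handled without precalculated paths.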

  19. Comparing scat detection dogs, cameras, and hair snares for surveying carnivores

    USGS Publications Warehouse

    Long, Robert A.; Donovan, T.M.; MacKay, Paula; Zielinski, William J.; Buzas, Jeffrey S.

    2007-01-01

    Carnivores typically require large areas of habitat, exist at low natural densities, and exhibit elusive behavior - characteristics that render them difficult to study. Noninvasive survey methods increasingly provide means to collect extensive data on carnivore occupancy, distribution, and abundance. During the summers of 2003-2004, we compared the abilities of scat detection dogs, remote cameras, and hair snares to detect black bears (Ursus americanus), fishers (Martes pennanti), and bobcats (Lynx rufus) at 168 sites throughout Vermont. All 3 methods detected black bears; neither fishers nor bobcats were detected by hair snares. Scat detection dogs yielded the highest raw detection rate and probability of detection (given presence) for each of the target species, as well as the greatest number of unique detections (i.e., occasions when only one method detected the target species). We estimated that the mean probability of detecting the target species during a single visit to a site with a detection dog was 0.87 for black bears, 0.84 for fishers, and 0.27 for bobcats. Although the cost of surveying with detection dogs was higher than that of remote cameras or hair snares, the efficiency of this method rendered it the most cost-effective survey method.

  20. A Semi-Automatic Image-Based Close Range 3D Modeling Pipeline Using a Multi-Camera Configuration

    PubMed Central

    Rau, Jiann-Yeou; Yeh, Po-Chia

    2012-01-01

    The generation of photo-realistic 3D models is an important task for digital recording of cultural heritage objects. This study proposes an image-based 3D modeling pipeline which takes advantage of a multi-camera configuration and a multi-image matching technique that does not require any markers on or around the object. Multiple digital single lens reflex (DSLR) cameras are adopted and fixed with invariant relative orientations. Instead of photo-triangulation after image acquisition, calibration is performed to estimate the exterior orientation parameters of the multi-camera configuration, which can be processed fully automatically using coded targets. The calibrated orientation parameters of all cameras are applied to images taken using the same camera configuration. This means that when performing multi-image matching for surface point cloud generation, the orientation parameters will remain the same as the calibrated results, even when the target has changed. Based on this invariant characteristic, the whole 3D modeling pipeline can be performed completely automatically once the whole system has been calibrated and the software seamlessly integrated. Several experiments were conducted to demonstrate the feasibility of the proposed system. The objects imaged include a human being, eight Buddhist statues, and a stone sculpture. The results for the stone sculpture, obtained with several multi-camera configurations, were compared with a reference model acquired by an ATOS-I 2M active scanner. The best result has an absolute accuracy of 0.26 mm and a relative accuracy of 1:17,333. It demonstrates the feasibility of the proposed low-cost image-based 3D modeling pipeline and its applicability to a large quantity of antiques stored in a museum. PMID:23112656

  2. Streak camera receiver definition study

    NASA Technical Reports Server (NTRS)

    Johnson, C. B.; Hunkler, L. T., Sr.; Letzring, S. A.; Jaanimagi, P.

    1990-01-01

    Detailed streak camera definition studies were made as a first step toward full flight qualification of a dual channel picosecond resolution streak camera receiver for the Geoscience Laser Altimeter and Ranging System (GLRS). The streak camera receiver requirements are discussed as they pertain specifically to the GLRS system, and estimates of the characteristics of the streak camera are given, based upon existing and near-term technological capabilities. Important problem areas are highlighted, and possible corresponding solutions are discussed.

  3. In Vivo Tumor Targeting and Image-Guided Drug Delivery with Antibody-Conjugated, Radiolabeled Mesoporous Silica Nanoparticles

    PubMed Central

    Chen, Feng; Hong, Hao; Zhang, Yin; Valdovinos, Hector F.; Shi, Sixiang; Kwon, Glen S.; Theuer, Charles P.; Barnhart, Todd E.; Cai, Weibo

    2013-01-01

    Since the first use of biocompatible mesoporous silica (mSiO2) nanoparticles as drug delivery vehicles, in vivo tumor-targeted imaging and enhanced anti-cancer drug delivery have remained a major challenge. In this work, we describe the development of functionalized mSiO2 nanoparticles for actively targeted positron emission tomography (PET) imaging and drug delivery in 4T1 murine breast tumor-bearing mice. Our structural design involves the synthesis, surface functionalization with thiol groups, PEGylation, TRC105 antibody (specific for CD105/endoglin) conjugation, and 64Cu-labeling of uniform 80 nm sized mSiO2 nanoparticles. Systematic in vivo tumor targeting studies clearly demonstrated that 64Cu-NOTA-mSiO2-PEG-TRC105 could accumulate prominently at the 4T1 tumor site via both the enhanced permeability and retention effect and TRC105-mediated binding to tumor vasculature CD105. As a proof-of-concept, we also demonstrated successful enhanced tumor-targeted delivery of doxorubicin (DOX) in 4T1 tumor-bearing mice after intravenous injection of DOX-loaded NOTA-mSiO2-PEG-TRC105, which holds great potential for future image-guided drug delivery and targeted cancer therapy. PMID:24083623

  4. Supermarket Special Departments. [Student Manual] and Answer Book/Teacher's Guide.

    ERIC Educational Resources Information Center

    Gaskill, Melissa Lynn; Summerall, Mary

    This document on food marketing for supermarket special departments contains both a student's manual and an answer book/teacher's guide. The student's manual contains the following 11 assignments: (1) supermarkets of today; (2) merchandising; (3) pharmacy and cosmetics department; (4) housewares and home hardware; (5) video/camera/electronics…

  5. Automatic calibration method for plenoptic camera

    NASA Astrophysics Data System (ADS)

    Luan, Yinsen; He, Xing; Xu, Bing; Yang, Ping; Tang, Guomao

    2016-04-01

    An automatic calibration method is proposed for microlens-based plenoptic cameras. First, all microlens images on the white image are searched and recognized automatically based on digital morphology. Then, the center points of the microlens images are rearranged according to their relative position relationships. Consequently, the microlens images are located, i.e., the plenoptic camera is calibrated, without prior knowledge of the camera parameters. Furthermore, this method is appropriate for all types of microlens-based plenoptic cameras, including the multifocus plenoptic camera and plenoptic cameras with arbitrarily arranged or differently sized microlenses. Finally, we verify our method on raw data from a Lytro camera. The experiments show that our method requires less manual intervention than previously published methods.
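The morphology-based search for microlens images described above can be illustrated with a small sketch: threshold the white image, label connected bright spots, and take intensity-weighted centroids. The synthetic white image and all parameter values below are illustrative stand-ins, not the paper's actual data or algorithm.

```python
import numpy as np
from scipy import ndimage

def find_microlens_centers(white_img, threshold=0.5):
    """Locate bright microlens spots via thresholding + connected components."""
    mask = white_img > threshold * white_img.max()
    labels, n = ndimage.label(mask)
    # intensity-weighted centroid of each labeled spot
    centers = ndimage.center_of_mass(white_img, labels, range(1, n + 1))
    return np.array(centers)  # one (row, col) pair per microlens

# Synthetic "white image": Gaussian spots on a 3x3 grid with a 20 px pitch
yy, xx = np.mgrid[0:60, 0:60]
img = np.zeros((60, 60))
for cy in (10, 30, 50):
    for cx in (10, 30, 50):
        img += np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / 8.0)

centers = find_microlens_centers(img)
print(len(centers))  # 9
```

A real implementation would additionally sort the recovered centers into the hexagonal or rectangular grid structure, which is the "rearranging by relative position" step the abstract mentions.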

  6. Apollo 8 Mission image, Target of Opportunity (T/O) 10

    NASA Image and Video Library

    1968-12-21

    Apollo 8, Moon, Target of Opportunity (T/O) 10, various targets. Latitude 18 degrees South, Longitude 163.50 degrees West. Camera Tilt Mode: High Oblique. Direction: South. Sun Angle 12 degrees. Original Film Magazine was labeled E. Camera Data: 70mm Hasselblad; F-Stop: F-5.6; Shutter Speed: 1/250 second. Film Type: Kodak SO-3400 Black and White, ASA 40. Other Photographic Coverage: Lunar Orbiter 1 (LO I) S-3. Flight Date: December 21-27, 1968.

  7. Enhancement of low light level images using color-plus-mono dual camera.

    PubMed

    Jung, Yong Ju

    2017-05-15

    In digital photography, improving image quality in low-light shooting is a key user need. Unfortunately, conventional smartphone cameras that use a single, small image sensor cannot provide satisfactory quality in low light level images. A color-plus-mono dual camera that consists of two horizontally separated image sensors, which simultaneously capture a color and mono image pair of the same scene, could be useful for improving the quality of low light level images. However, an incorrect image fusion between the color and mono image pair could also have negative effects, such as the introduction of severe visual artifacts in the fused images. This paper proposes a selective image fusion technique that applies adaptive guided filter-based denoising and selective detail transfer only to those pixels deemed reliable with respect to binocular image fusion. We employ a dissimilarity measure and binocular just-noticeable-difference (BJND) analysis to identify unreliable pixels that are likely to cause visual artifacts during image fusion via joint color image denoising and detail transfer from the mono image. Using an experimental color-plus-mono camera system, we demonstrate that the BJND-aware denoising and selective detail transfer improve image quality in low light shooting.
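The idea of restricting detail transfer to reliable pixels can be sketched as follows. The simple absolute-difference test and box-filter decomposition below are crude stand-ins for the paper's BJND analysis and adaptive guided filter; the function name and thresholds are illustrative.

```python
import numpy as np

def selective_detail_transfer(lum, mono, thresh=0.1, k=5):
    """Add mono high-frequency detail to lum only where the pair agrees."""
    pad = k // 2
    padded = np.pad(mono, pad, mode='edge')
    base = np.zeros_like(mono)
    for i in range(mono.shape[0]):          # box blur = low-frequency base
        for j in range(mono.shape[1]):
            base[i, j] = padded[i:i + k, j:j + k].mean()
    detail = mono - base                    # high-frequency detail layer
    reliable = np.abs(lum - mono) < thresh  # crude per-pixel dissimilarity test
    return np.where(reliable, lum + detail, lum)
```

Where the color luminance and mono image disagree strongly (e.g. from occlusion between the two viewpoints), the pixel keeps its original color value, which is exactly the artifact-avoidance behavior the abstract describes.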

  8. IMAX camera (12-IML-1)

    NASA Technical Reports Server (NTRS)

    1992-01-01

    The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.

  9. Multiple Sensor Camera for Enhanced Video Capturing

    NASA Astrophysics Data System (ADS)

    Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko

    Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras now offer several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera body to capture high-resolution and high-frame-rate information separately. We built a prototype camera that captures high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we propose a calibration method for it. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
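The fusion of a low-frame-rate high-resolution stream with a high-frame-rate low-resolution stream can be sketched roughly as below, assuming perfectly registered sensors and an integer resolution ratio (both assumptions; the paper's calibration and reconstruction are considerably more involved).

```python
import numpy as np

def upsample_nn(frame, factor):
    """Nearest-neighbor upsampling by an integer factor."""
    return np.repeat(np.repeat(frame, factor, axis=0), factor, axis=1)

def enhance(lowres_frame, keyframe, factor):
    """Combine low-res frame structure with keyframe high-frequency detail."""
    up = upsample_nn(lowres_frame, factor)
    base = upsample_nn(keyframe[::factor, ::factor], factor)  # keyframe low band
    detail = keyframe - base                                  # keyframe detail
    return up + detail
```

For a static scene the enhanced frame reproduces the high-resolution keyframe exactly; in moving regions the low-resolution, high-frame-rate term carries the temporal information.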

  10. Electronic camera-management system for 35-mm and 70-mm film cameras

    NASA Astrophysics Data System (ADS)

    Nielsen, Allan

    1993-01-01

    Military and commercial test facilities have been tasked with the need for increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high speed 35 mm and 70 mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase locking capability to IRIG-B. In fact, IRIG-B phase lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
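The triangulation step mentioned above can be sketched as follows: each tracking mount contributes a sight ray from its azimuth/elevation data, and the object position is taken as the midpoint of the common perpendicular between the two rays. This is the standard closest-point-of-approach formulation, not Photo-Sonics' specific algorithm; the frame convention and values are illustrative.

```python
import numpy as np

def azel_to_dir(az, el):
    """Unit sight vector in an east-north-up frame (azimuth from north, radians)."""
    return np.array([np.sin(az) * np.cos(el),
                     np.cos(az) * np.cos(el),
                     np.sin(el)])

def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p1 + t*d1 and p2 + s*d2."""
    w = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w, d2 @ w
    denom = a * c - b * b                  # zero only for parallel sight rays
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return ((p1 + t * d1) + (p2 + s * d2)) / 2.0

# Two mounts 100 m apart, both sighting an object 100 m north of mount 1
p1, p2 = np.zeros(3), np.array([100.0, 0.0, 0.0])
est = triangulate(p1, azel_to_dir(0.0, 0.0), p2, azel_to_dir(-np.pi / 4, 0.0))
print(np.round(est, 3))  # close to (0, 100, 0)
```

With time-synchronized (IRIG-B phase-locked) cameras, the two sight rays correspond to the same instant, which is what keeps the ray-to-ray miss distance, and hence the TSPI error, small for fast targets.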

  11. Using DSLR cameras in digital holography

    NASA Astrophysics Data System (ADS)

    Hincapié-Zuluaga, Diego; Herrera-Ramírez, Jorge; García-Sucerquia, Jorge

    2017-08-01

    In Digital Holography (DH), the size of the two-dimensional image sensor used to record the digital hologram plays a key role in the performance of this imaging technique; the larger the camera sensor, the better the quality of the final reconstructed image. Scientific cameras with large formats are offered on the market, but their cost and availability limit their use as a first option when implementing DH. Nowadays, DSLR cameras provide an easily accessible alternative that is worthwhile to explore. DSLR cameras are a widely available commercial option that, in comparison with traditional scientific cameras, offers a much lower cost per effective pixel over a large sensing area. However, in DSLR cameras, with their RGB pixel distribution, the sampling of information differs from the sampling in the monochrome cameras usually employed in DH. This fact has implications for their performance. In this work, we discuss why DSLR cameras are not extensively used for DH, taking into account the problem of object replication reported by different authors. Simulations of DH using monochromatic and DSLR cameras are presented, and a theoretical explanation of the replication problem using Fourier theory is also shown. Experimental results of a DH implementation using a DSLR camera exhibit the replication problem.

  12. Intravascular ultrasound guided directional atherectomy versus directional atherectomy guided by angiography for the treatment of femoropopliteal in-stent restenosis.

    PubMed

    Krishnan, Prakash; Tarricone, Arthur; K-Raman, Purushothaman; Majeed, Farhan; Kapur, Vishal; Gujja, Karthik; Wiley, Jose; Vasquez, Miguel; Lascano, Rheoneil A; Quiles, Katherine G; Distin, Tashanne; Fontenelle, Ran; Atallah-Lajam, Farah; Kini, Annapoorna; Sharma, Samin

    2018-01-01

    The aim of this study was to compare 1-year outcomes for patients with femoropopliteal in-stent restenosis using directional atherectomy guided by intravascular ultrasound (IVUS) versus directional atherectomy guided by angiography. This was a retrospective analysis for patients with femoropopliteal in-stent restenosis treated with IVUS-guided directional atherectomy versus directional atherectomy guided by angiography from a single center between March 2012 and February 2016. Clinically driven target lesion revascularization was the primary endpoint and was evaluated through medical chart review as well as phone call follow up. Directional atherectomy guided by IVUS reduces clinically driven target lesion revascularization for patients with femoropopliteal in-stent restenosis.

  13. X-ray and optical stereo-based 3D sensor fusion system for image-guided neurosurgery.

    PubMed

    Kim, Duk Nyeon; Chae, You Seong; Kim, Min Young

    2016-04-01

    In neurosurgery, an image-guided operation is performed to confirm that the surgical instruments reach the exact lesion position. Among the multiple imaging modalities, an X-ray fluoroscope mounted on a C- or O-arm is widely used for monitoring the position of surgical instruments and the target position of the patient. However, frequently used fluoroscopy can result in relatively high radiation doses, particularly for complex interventional procedures. The proposed system can reduce radiation exposure and provide accurate three-dimensional (3D) position information for both the surgical instruments and the target. X-ray and optical stereo vision systems have been proposed for the C- or O-arm. The two subsystems share the same optical axis and are calibrated simultaneously. This provides easy augmentation of the camera image and the X-ray image. Further, the 3D measurements of both systems can be defined in a common coordinate space. The proposed dual stereoscopic imaging system is designed and implemented for mounting on an O-arm. The calibration error of the 3D coordinates of the optical stereo and X-ray stereo is within 0.1 mm in terms of the mean and the standard deviation. Further, image augmentation with the camera image and the X-ray image using an artificial skull phantom is achieved. As the developed dual stereoscopic imaging system provides 3D coordinates of the point of interest in both optical images and fluoroscopic images, it can be used by surgeons to confirm the position of surgical instruments in a 3D space with minimum radiation exposure and to verify whether the instruments reach the surgical target observed in fluoroscopic images.

  14. Depth estimation and camera calibration of a focused plenoptic camera for visual odometry

    NASA Astrophysics Data System (ADS)

    Zeller, Niclas; Quint, Franz; Stilla, Uwe

    2016-08-01

    This paper presents new and improved methods of depth estimation and camera calibration for visual odometry with a focused plenoptic camera. For depth estimation we adapt an algorithm previously used in structure-from-motion approaches to work with images of a focused plenoptic camera. In the raw image of a plenoptic camera, scene patches are recorded in several micro-images under slightly different angles. This leads to a multi-view stereo problem. To reduce the complexity, we divide this into multiple binocular stereo problems. For each pixel with sufficient gradient we estimate a virtual (uncalibrated) depth based on local intensity error minimization. The estimated depth is characterized by the variance of the estimate and is subsequently updated with the estimates from other micro-images. Updating is performed in a Kalman-like fashion. The result of depth estimation in a single image of the plenoptic camera is a probabilistic depth map, where each depth pixel consists of an estimated virtual depth and a corresponding variance. Since the resulting image of the plenoptic camera contains two planes, the optical image and the depth map, camera calibration is divided into two separate sub-problems. The optical path is calibrated based on a traditional calibration method. For calibrating the depth map we introduce two novel model-based methods, which define the relation between the virtual depth, estimated from the light-field image, and the metric object distance. These two methods are compared to a well-known curve fitting approach, and both show significant advantages over it. For visual odometry we fuse the probabilistic depth map gained from one shot of the plenoptic camera with the depth data gained by finding stereo correspondences between subsequent synthesized intensity images of the plenoptic camera. These images can be synthesized totally focused, which makes finding stereo correspondences easier.
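The Kalman-like update of the virtual depth estimate described above amounts to inverse-variance weighting of the running estimate and each new micro-image observation. A minimal sketch (the function name and values are illustrative):

```python
def fuse_depth(mu, var, obs_mu, obs_var):
    """Merge a depth estimate (mu, var) with one new observation."""
    k = var / (var + obs_var)        # gain: trust the less uncertain source
    return mu + k * (obs_mu - mu), (1.0 - k) * var

# Two equally uncertain observations average out; the variance halves
mu, var = fuse_depth(2.0, 1.0, 4.0, 1.0)
print(mu, var)  # 3.0 0.5
```

Repeating this for every micro-image that observes the same scene patch yields exactly the probabilistic depth map the abstract describes: a per-pixel mean virtual depth plus a shrinking variance.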

  15. Selecting a digital camera for telemedicine.

    PubMed

    Patricoski, Chris; Ferguson, A Stewart

    2009-06-01

    The digital camera is an essential component of store-and-forward telemedicine (electronic consultation). There are numerous makes and models of digital cameras on the market, and selecting a suitable consumer-grade camera can be complicated. Evaluation of digital cameras includes investigating the features and analyzing image quality. Important features include the camera settings, ease of use, macro capabilities, method of image transfer, and power recharging. Consideration needs to be given to image quality, especially as it relates to color (skin tones) and detail. It is important to know the level of the photographer and the intended application. The goal is to match the characteristics of the camera with the telemedicine program requirements. In the end, selecting a digital camera is a combination of qualitative (subjective) and quantitative (objective) analysis. For the telemedicine program in Alaska in 2008, the camera evaluation and decision process resulted in a specific selection based on the criteria developed for our environment.

  16. On the accuracy potential of focused plenoptic camera range determination in long distance operation

    NASA Astrophysics Data System (ADS)

    Sardemann, Hannes; Maas, Hans-Gerd

    2016-04-01

    Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, the development in digital photography, micro-lens fabrication technology and computer hardware has boosted the development and led to several commercially available ready-to-use cameras. Beyond their popular option of a posteriori image focusing or total focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where the accuracy requirements decrease with distance.

  17. SPLASSH: Open source software for camera-based high-speed, multispectral in-vivo optical image acquisition

    PubMed Central

    Sun, Ryan; Bouchard, Matthew B.; Hillman, Elizabeth M. C.

    2010-01-01

    Camera-based in-vivo optical imaging can provide detailed images of living tissue that reveal structure, function, and disease. High-speed, high resolution imaging can reveal dynamic events such as changes in blood flow and responses to stimulation. Despite these benefits, commercially available scientific cameras rarely include software that is suitable for in-vivo imaging applications, making this highly versatile form of optical imaging challenging and time-consuming to implement. To address this issue, we have developed a novel, open-source software package to control high-speed, multispectral optical imaging systems. The software integrates a number of modular functions through a custom graphical user interface (GUI) and provides extensive control over a wide range of inexpensive IEEE 1394 Firewire cameras. Multispectral illumination can be incorporated through the use of off-the-shelf light emitting diodes which the software synchronizes to image acquisition via a programmed microcontroller, allowing arbitrary high-speed illumination sequences. The complete software suite is available for free download. Here we describe the software’s framework and provide details to guide users with development of this and similar software. PMID:21258475

  18. Augmented reality system for CT-guided interventions: system description and initial phantom trials

    NASA Astrophysics Data System (ADS)

    Sauer, Frank; Schoepf, Uwe J.; Khamene, Ali; Vogt, Sebastian; Das, Marco; Silverman, Stuart G.

    2003-05-01

    We are developing an augmented reality (AR) image guidance system, in which information derived from medical images is overlaid onto a video view of the patient. The interventionalist wears a head-mounted display (HMD) that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture the stereo view of the scene. A third video camera, operating in the near IR, is also attached to the HMD and is used for head tracking. The system achieves real-time performance of 30 frames per second. The graphics appears firmly anchored in the scene, without any noticeable swimming, jitter, or time lag. For the application of CT-guided interventions, we extended our original prototype system to include tracking of a biopsy needle to which we attached a set of optical markers. The AR visualization provides very intuitive guidance for planning and placement of the needle and reduces radiation to patient and radiologist. We used an interventional abdominal phantom with simulated liver lesions to perform an initial set of experiments. The users were consistently able to locate the target lesion with the first needle pass. These results provide encouragement to move the system towards clinical trials.

  19. Guiding of relativistic electron beams in solid targets by resistively controlled magnetic fields.

    PubMed

    Kar, S; Robinson, A P L; Carroll, D C; Lundh, O; Markey, K; McKenna, P; Norreys, P; Zepf, M

    2009-02-06

    Guided transport of a relativistic electron beam in a solid is achieved experimentally by exploiting the strong magnetic fields created at the interface of two metals of different electrical resistivities. This is of substantial relevance to the Fast Ignitor approach to fusion energy production [M. Tabak, Phys. Plasmas 12, 057305 (2005); 10.1063/1.1871246], since it allows the electron deposition to be spatially tailored, adding substantial design flexibility and preventing inefficiencies due to electron beam spreading. In the experiment, optical transition radiation and thermal emission from the target rear surface provide a clear signature of the electron confinement within a high-resistivity tin layer sandwiched transversely between two low-resistivity aluminum slabs. The experimental data are found to agree well with numerical simulations.

  20. Spacecraft camera image registration

    NASA Technical Reports Server (NTRS)

    Kamel, Ahmed A. (Inventor); Graul, Donald W. (Inventor); Chan, Fred N. T. (Inventor); Gamble, Donald W. (Inventor)

    1987-01-01

    A system for achieving spacecraft camera (1, 2) image registration comprises a portion external to the spacecraft and an image motion compensation system (IMCS) portion onboard the spacecraft. Within the IMCS, a computer (38) calculates an image registration compensation signal (60) which is sent to the scan control loops (84, 88, 94, 98) of the onboard cameras (1, 2). At the location external to the spacecraft, the long-term orbital and attitude perturbations on the spacecraft are modeled. Coefficients (K, A) from this model are periodically sent to the onboard computer (38) by means of a command unit (39). The coefficients (K, A) take into account observations of stars and landmarks made by the spacecraft cameras (1, 2) themselves. The computer (38) takes as inputs the updated coefficients (K, A) plus synchronization information indicating the mirror position (AZ, EL) of each of the spacecraft cameras (1, 2), operating mode, and starting and stopping status of the scan lines generated by these cameras (1, 2), and generates in response thereto the image registration compensation signal (60). The sources of periodic thermal errors on the spacecraft are discussed. The system is checked by calculating measurement residuals, the difference between the landmark and star locations predicted at the external location and the landmark and star locations as measured by the spacecraft cameras (1, 2).

  1. PET with the HIDAC camera?

    NASA Astrophysics Data System (ADS)

    Townsend, D. W.

    1988-06-01

    In 1982 the first prototype high density avalanche chamber (HIDAC) positron camera became operational in the Division of Nuclear Medicine of Geneva University Hospital. The camera consisted of dual 20 cm × 20 cm HIDAC detectors mounted on a rotating gantry. In 1984, these detectors were replaced by 30 cm × 30 cm detectors with improved performance and reliability. Since then, the larger detectors have undergone clinical evaluation. This article discusses certain aspects of the evaluation program and the conclusions that can be drawn from the results. The potential of the HIDAC camera for quantitative positron emission tomography (PET) is critically examined, and its performance compared with a state-of-the-art, commercial ring camera. Guidelines for the design of a future HIDAC camera are suggested.

  2. Mobile in vivo camera robots provide sole visual feedback for abdominal exploration and cholecystectomy.

    PubMed

    Rentschler, M E; Dumpert, J; Platt, S R; Ahmed, S I; Farritor, S M; Oleynikov, D

    2006-01-01

    The use of small incisions in laparoscopy reduces patient trauma, but also limits the surgeon's ability to view and touch the surgical environment directly. These limitations generally restrict the application of laparoscopy to procedures less complex than those performed during open surgery. Although current robot-assisted laparoscopy improves the surgeon's ability to manipulate and visualize the target organs, the instruments and cameras remain fundamentally constrained by the entry incisions. This limits tool tip orientation and optimal camera placement. The current work focuses on developing a new miniature mobile in vivo adjustable-focus camera robot to provide sole visual feedback to surgeons during laparoscopic surgery. A miniature mobile camera robot was inserted through a trocar into the insufflated abdominal cavity of an anesthetized pig. The mobile robot allowed the surgeon to explore the abdominal cavity remotely and view trocar and tool insertion and placement without entry incision constraints. The surgeon then performed a cholecystectomy using the robot camera alone for visual feedback. This successful trial has demonstrated that miniature in vivo mobile robots can provide surgeons with sufficient visual feedback to perform common procedures while reducing patient trauma.

  3. Uncooled radiometric camera performance

    NASA Astrophysics Data System (ADS)

    Meyer, Bill; Hoelter, T.

    1998-07-01

    Thermal imaging equipment utilizing microbolometer detectors operating at room temperature has found widespread acceptance in both military and commercial applications. Uncooled camera products are becoming effective solutions to applications currently using traditional, photonic infrared sensors. The reduced power consumption and decreased mechanical complexity offered by uncooled cameras have enabled highly reliable, low-cost, hand-held instruments. Initially, these instruments displayed only relative temperature differences, which limited their usefulness in applications such as thermography. Radiometrically calibrated microbolometer instruments are now available. The ExplorIR Thermography camera leverages the technology developed for Raytheon Systems Company's first production microbolometer imaging camera, the Sentinel. The ExplorIR camera has a demonstrated temperature measurement accuracy of 4 degrees Celsius or 4% of the measured value (whichever is greater) over scene temperature ranges of minus 20 degrees Celsius to 300 degrees Celsius (minus 20 degrees Celsius to 900 degrees Celsius for extended range models) and camera environmental temperatures of minus 10 degrees Celsius to 40 degrees Celsius. Direct temperature measurement with high resolution video imaging creates some unique challenges when using uncooled detectors. A temperature controlled, field-of-view limiting aperture (cold shield) is not typically included in the small volume dewars used for uncooled detector packages. The lack of a field-of-view shield allows a significant amount of extraneous radiation from the dewar walls and lens body to affect the sensor operation. In addition, the transmission of the Germanium lens elements is a function of ambient temperature. The ExplorIR camera design compensates for these environmental effects while maintaining the accuracy and dynamic range required by today's predictive maintenance and condition monitoring markets.
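The quoted accuracy specification ("4 degrees Celsius or 4% of the measured value, whichever is greater") reduces to a simple expression; the figures below come directly from the text, while the function name is our own.

```python
def explorir_accuracy(reading_c):
    """Stated measurement uncertainty in degrees Celsius for a scene reading."""
    return max(4.0, 0.04 * abs(reading_c))

print(explorir_accuracy(50))   # 4.0  (4% of 50 is only 2 degrees)
print(explorir_accuracy(300))  # 12.0 (the 4% term dominates above 100 degrees)
```

The crossover sits at 100 degrees Celsius: below it the fixed 4-degree floor applies, above it the percentage term takes over.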

  4. Martian Terrain Near Curiosity Precipice Target

    NASA Image and Video Library

    2016-12-06

    This view from the Navigation Camera (Navcam) on the mast of NASA's Curiosity Mars rover shows rocky ground within view while the rover was working at an intended drilling site called "Precipice" on lower Mount Sharp. The right-eye camera of the stereo Navcam took this image on Dec. 2, 2016, during the 1,537th Martian day, or sol, of Curiosity's work on Mars. On the previous sol, an attempt to collect a rock-powder sample with the rover's drill ended before drilling began. This led to several days of diagnostic work while the rover remained in place, during which it continued to use cameras and a spectrometer on its mast, plus environmental monitoring instruments. In this view, hardware visible at lower right includes the sundial-theme calibration target for Curiosity's Mast Camera. http://photojournal.jpl.nasa.gov/catalog/PIA21140

  5. Deployable Wireless Camera Penetrators

    NASA Technical Reports Server (NTRS)

    Badescu, Mircea; Jones, Jack; Sherrit, Stewart; Wu, Jiunn Jeng

    2008-01-01

    A lightweight, low-power camera dart has been designed and tested for context imaging of sampling sites and ground surveys from an aerobot or an orbiting spacecraft in a microgravity environment. The camera penetrators can also be used to image any line-of-sight surface, such as cliff walls, that is difficult to access. Tethered cameras used to inspect the surfaces of planetary bodies require both power and signal transmission lines to operate. A tether adds the possibility of inadvertently anchoring the aerobot, and requires some form of station-keeping capability of the aerobot if extended examination time is required. The new camera penetrators are deployed without a tether, weigh less than 30 grams, and are disposable. They are designed to drop from any altitude, with the boosted transmitting power currently demonstrated at approximately 100-m line-of-sight. The penetrators can also be deployed to monitor lander or rover operations from a distance, and can be used for surface surveys or for context information gathering from a touch-and-go sampling site. Thanks to wireless operation, the complexity of the sampling or survey mechanisms may be reduced. The penetrators may be battery powered for short-duration missions, or have solar panels for longer or intermittent-duration missions. The imaging device is embedded in the penetrator, which is dropped or projected at the surface of a study site at 90° to the surface. Mirrors can be used in the design to image the ground or the horizon. Some of the camera features were tested using commercial "nanny" or "spy" camera components with the charge-coupled device (CCD) looking in a direction parallel to the ground. Figure 1 shows components of one camera that weighs less than 8 g and occupies a volume of 11 cm³. This camera could transmit a standard television signal, including sound, up to 100 m. Figure 2 shows the CAD models of a version of the penetrator.
A low-volume array of such penetrator cameras could be deployed from an

  6. Video Guidance Sensors Using Remotely Activated Targets

    NASA Technical Reports Server (NTRS)

    Bryan, Thomas C.; Howard, Richard T.; Book, Michael L.

    2004-01-01

    Four updated video guidance sensor (VGS) systems have been proposed. As described in a previous NASA Tech Briefs article, a VGS system is an optoelectronic system that provides guidance for automated docking of two vehicles. The VGS provides relative position and attitude (6-DOF) information between the VGS and its target. In the original intended application, the two vehicles would be spacecraft, but the basic principles of design and operation of the system are applicable to aircraft, robots, objects maneuvered by cranes, or other objects that may be required to be aligned and brought together automatically or under remote control. In the first two of the four VGS systems as now proposed, the tracked vehicle would include active targets that would light up on command from the tracking vehicle, and a video camera on the tracking vehicle would be synchronized with, and would acquire images of, the active targets. The video camera would also acquire background images during the periods between target illuminations. The images would be digitized and the background images would be subtracted from the illuminated-target images. Then the position and orientation of the tracked vehicle relative to the tracking vehicle would be computed from the known geometric relationships among the positions of the targets in the image, the positions of the targets relative to each other and to the rest of the tracked vehicle, and the position and orientation of the video camera relative to the rest of the tracking vehicle. The major difference between the first two proposed systems and prior active-target VGS systems lies in the techniques for synchronizing the flashing of the active targets with the digitization and processing of image data. In the prior active-target VGS systems, synchronization was effected, variously, by use of either a wire connection or the Global Positioning System (GPS). 
In three of the proposed VGS systems, the synchronizing signal would be generated on, and

  7. Research on target tracking algorithm based on spatio-temporal context

    NASA Astrophysics Data System (ADS)

    Li, Baiping; Xu, Sanmei; Kang, Hongjuan

    2017-07-01

    In this paper, a novel target tracking algorithm based on spatio-temporal context is proposed. During tracking, camera shake or occlusion may cause the tracker to fail; the proposed algorithm solves this problem effectively. The method uses the spatio-temporal context algorithm as the main tracker. The target region in the first frame is selected with the mouse, and the spatio-temporal context algorithm then tracks the target through the sequence of frames. During this process, a similarity measure function based on a perceptual hash algorithm is used to judge the tracking results. If tracking fails, the initial value of the Mean Shift algorithm is reset for subsequent target tracking. Experimental results show that the proposed algorithm achieves real-time and stable tracking under camera shake or target occlusion.
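
    A similarity check of this kind can be sketched with a simple average hash (aHash); the hash size, threshold behavior, and patch handling below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def average_hash(patch, size=8):
    """Downsample a grayscale patch to size x size block means and threshold
    at the overall mean, giving a binary perceptual hash (aHash)."""
    h, w = patch.shape
    ys = np.linspace(0, h, size + 1, dtype=int)
    xs = np.linspace(0, w, size + 1, dtype=int)
    small = np.array([[patch[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                       for j in range(size)] for i in range(size)])
    return (small > small.mean()).flatten()

def hash_similarity(a, b):
    """Fraction of matching hash bits: 1.0 for identical patches; a low
    value flags a likely tracking failure."""
    return float(np.mean(a == b))
```

    In such a scheme, when `hash_similarity` between the template patch and the currently tracked patch drops below a chosen threshold, the tracker would reset the Mean Shift initialization as the abstract describes.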

  8. Optical coherence tomography guided microinjections in live mouse embryos: high-resolution targeted manipulation for mouse embryonic research

    PubMed Central

    Syed, Saba H.; Coughlin, Andrew J.; Garcia, Monica D.; Wang, Shang; West, Jennifer L.; Larin, Kirill V.; Larina, Irina V.

    2015-01-01

    Abstract. The ability to conduct highly localized delivery of contrast agents, viral vectors, therapeutic or pharmacological agents, and signaling molecules or dyes to live mammalian embryos is greatly desired to enable a variety of studies in the field of developmental biology, such as investigating the molecular regulation of cardiovascular morphogenesis. To meet such a demand, we introduce, for the first time, the concept of employing optical coherence tomography (OCT)-guided microinjections in live mouse embryos, which provides precisely targeted manipulation with spatial resolution at the micrometer scale. The feasibility demonstration is performed with experimental studies on cultured live mouse embryos at E8.5 and E9.5. Additionally, we investigate the OCT-guided microinjection of gold–silica nanoshells to the yolk sac vasculature of live cultured mouse embryos at the stage when the heart just starts to beat, as a potential approach for dynamic assessment of cardiovascular form and function before the onset of blood cell circulation. Also, the capability of OCT to quantitatively monitor and measure injection volume is presented. Our results indicate that OCT-guided microinjection could be a useful tool for mouse embryonic research. PMID:25581495

  9. Optical coherence tomography guided microinjections in live mouse embryos: high-resolution targeted manipulation for mouse embryonic research.

    PubMed

    Syed, Saba H; Coughlin, Andrew J; Garcia, Monica D; Wang, Shang; West, Jennifer L; Larin, Kirill V; Larina, Irina V

    2015-05-01

    The ability to conduct highly localized delivery of contrast agents, viral vectors, therapeutic or pharmacological agents, and signaling molecules or dyes to live mammalian embryos is greatly desired to enable a variety of studies in the field of developmental biology, such as investigating the molecular regulation of cardiovascular morphogenesis. To meet such a demand, we introduce, for the first time, the concept of employing optical coherence tomography (OCT)-guided microinjections in live mouse embryos, which provides precisely targeted manipulation with spatial resolution at the micrometer scale. The feasibility demonstration is performed with experimental studies on cultured live mouse embryos at E8.5 and E9.5. Additionally, we investigate the OCT-guided microinjection of gold–silica nanoshells to the yolk sac vasculature of live cultured mouse embryos at the stage when the heart just starts to beat, as a potential approach for dynamic assessment of cardiovascular form and function before the onset of blood cell circulation. Also, the capability of OCT to quantitatively monitor and measure injection volume is presented. Our results indicate that OCT-guided microinjection could be a useful tool for mouse embryonic research.

  10. OPSO - The OpenGL based Field Acquisition and Telescope Guiding System

    NASA Astrophysics Data System (ADS)

    Škoda, P.; Fuchs, J.; Honsa, J.

    2006-07-01

    We present OPSO, a modular pointing and auto-guiding system for the coudé spectrograph of the Ondřejov observatory 2m telescope. The current field and slit viewing CCD cameras with image intensifiers provide only standard TV video output. To allow the acquisition and guiding of very faint targets, we have designed an image enhancing system working in real time on TV frames grabbed by a BT878-based video capture card. Its basic capabilities include sliding averaging of hundreds of frames with bad-pixel masking and removal of outliers, display of the median of a set of frames, quick zooming, contrast and brightness adjustment, plotting of horizontal and vertical cross-cuts of the seeing disk within a given intensity range, and many more. From the programmer's point of view, the system consists of three tasks running in parallel on a Linux PC. One C task controls the video capturing over the Video for Linux (v4l2) interface and feeds the frames into a large block of shared memory, where the core image processing is done by another C program calling the OpenGL library. The GUI, however, is dynamically built in Python from an XML description of widgets prepared in Glade. All tasks exchange information by IPC calls using the shared memory segments.
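
    The frame-enhancement step described above (sliding averaging with bad-pixel masking and outlier removal) can be sketched as follows; the sigma threshold and the median-based bad-pixel fill are illustrative assumptions, not OPSO's actual implementation:

```python
import numpy as np

def enhance_frames(frames, bad_pixel_mask=None, sigma=3.0):
    """Average a stack of noisy video frames (N, H, W), rejecting per-pixel
    samples that deviate from the stack median by more than `sigma` standard
    deviations; masked bad pixels are filled with the stack median (a
    simplification for illustration)."""
    stack = np.asarray(frames, dtype=float)
    med = np.median(stack, axis=0)
    std = stack.std(axis=0) + 1e-12          # avoid divide-by-zero
    good = np.abs(stack - med) <= sigma * std
    avg = np.where(good, stack, 0.0).sum(axis=0) / good.sum(axis=0).clip(min=1)
    if bad_pixel_mask is not None:
        avg[bad_pixel_mask] = med[bad_pixel_mask]
    return avg
```

    A real-time system would maintain this as a running (sliding) average over the most recent frames rather than recomputing over the full stack each time.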

  11. Multi-MGy Radiation Hardened Camera for Nuclear Facilities

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Girard, Sylvain; Boukenter, Aziz; Ouerdane, Youcef

    There is increasing interest in developing cameras for surveillance systems to monitor nuclear facilities or nuclear waste storage sites. In particular, for today's and the next generation of nuclear facilities, increased safety requirements following the Fukushima Daiichi disaster have to be considered. For some applications, radiation tolerance needs to reach doses in the MGy(SiO₂) range, whereas the most tolerant commercial or prototype products based on solid-state image sensors withstand doses of only a few kGy. The objective of this work is to present the radiation-hardening strategy developed by our research groups to enhance the tolerance to ionizing radiation of the various subparts of these imaging systems by working simultaneously at the component and system design levels. Developing a radiation-hardened camera requires combining several radiation-hardening strategies. In our case, we decided not to use the simplest one, the shielding approach. This approach is efficient but limits camera miniaturization and is not compatible with future integration in remote-handling or robotic systems. The hardening-by-component strategy therefore appears mandatory to avoid the failure of one of the camera subparts at doses lower than the MGy. Concerning the image sensor itself, the chosen technology is a CMOS Image Sensor (CIS) designed by the ISAE team, with custom pixel designs used to mitigate the total ionizing dose (TID) effects that occur well below the MGy range in classical image sensors (e.g., Charge Coupled Devices (CCD), Charge Injection Devices (CID), and classical Active Pixel Sensors (APS)), such as complete loss of functionality, dark current increase, and gain drop. We will present at the conference a comparative study of the radiation responses of these radiation-hardened pixels with respect to conventional ones, demonstrating the efficiency of the choices made. The targeted strategy to develop the complete radiation hard camera

  12. New Galaxy-hunting Sky Camera Sees Redder Better | Berkeley Lab

    Science.gov Websites

    Mosaic-3 is now one of the best cameras on the planet for studying outer space at red wavelengths. Mosaic-3's primary mission is to carry out a survey of roughly one-eighth of the sky (5,500 square degrees). This survey is just one layer in the galaxy survey that is locating targets for DESI.

  13. COBRA ATD multispectral camera response model

    NASA Astrophysics Data System (ADS)

    Holmes, V. Todd; Kenton, Arthur C.; Hilton, Russell J.; Witherspoon, Ned H.; Holloway, John H., Jr.

    2000-08-01

    A new multispectral camera response model has been developed in support of the US Marine Corps (USMC) Coastal Battlefield Reconnaissance and Analysis (COBRA) Advanced Technology Demonstration (ATD) Program. This analytical model accurately estimates the response of five Xybion intensified IMC 201 multispectral cameras used for COBRA ATD airborne minefield detection. The camera model design is based on a series of camera response curves generated through optical laboratory tests performed by the Naval Surface Warfare Center, Dahlgren Division, Coastal Systems Station (CSS). Data-fitting techniques were applied to these measured response curves to obtain nonlinear expressions that estimate digitized camera output as a function of irradiance, intensifier gain, and exposure. The COBRA camera response model proved to be very accurate, stable over a wide range of parameters, analytically invertible, and relatively simple. This practical camera model was subsequently incorporated into the COBRA sensor performance evaluation and research analysis modeling toolbox to enhance COBRA modeling and simulation capabilities. Details of the camera model design and comparisons of modeled response to measured experimental data are presented.
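
    As a hedged illustration of such a data-fitting approach, the sketch below fits an analytically invertible power-law response DN = a·(E·G·t)^b; the functional form, symbols, and parameter names are assumptions, since the abstract does not give the COBRA model's actual expressions:

```python
import numpy as np

def fit_response(irradiance, gain, exposure, dn):
    """Fit a power-law response DN = a * (E * G * t)**b by linear least
    squares in log-log space. Returns (a, b)."""
    x = np.log(irradiance * gain * exposure)
    y = np.log(dn)
    b, log_a = np.polyfit(x, y, 1)
    return np.exp(log_a), b

def invert_response(dn, gain, exposure, a, b):
    """Analytic inversion: recover irradiance from digitized camera output,
    mirroring the model's 'analytically invertible' property."""
    return (dn / a) ** (1.0 / b) / (gain * exposure)
```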

  14. Automated Meteor Fluxes with a Wide-Field Meteor Camera Network

    NASA Technical Reports Server (NTRS)

    Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.

    2013-01-01

    Within NASA, the Meteoroid Environment Office (MEO) is charged with monitoring the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter-size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of meteors per square kilometer per hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux. As ASGARD also
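
    The height-interval flux summation described above can be sketched as follows; the interface (per-interval meteor counts, per-interval collecting areas, observing time) is an illustrative assumption, not the MEO routines' actual API:

```python
import numpy as np

def total_flux(counts, areas_km2, hours):
    """Total meteor flux in meteors / km^2 / hour: the per-height-interval
    fluxes (count / (collecting area * observing time)) are summed.
    counts[i]: meteors detected in height interval i;
    areas_km2[i]: collecting area of interval i (radiant-dependent)."""
    counts = np.asarray(counts, dtype=float)
    areas = np.asarray(areas_km2, dtype=float)
    return float(np.sum(counts / (areas * hours)))
```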

  15. Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST

    NASA Image and Video Library

    1993-12-06

    S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.

  16. Testing and Validation of Timing Properties for High Speed Digital Cameras - A Best Practices Guide

    DTIC Science & Technology

    2016-07-27

    a five year plan to begin replacing its inventory of antiquated film and video systems with more modern and capable digital systems. As evidenced in...installation, testing, and documentation of DITCS. If shop support can be accelerated due to shifting mission priorities, this schedule can likely...assistance from the machine shop, welding shop, paint shop, and carpenter shop. Testing the DITCS system will require a KTM with digital cameras and

  17. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras.

    PubMed

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-06-24

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem in the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and assigning corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with accuracy higher than the manufacturer's calibration.
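
    A weighted joint cost of the kind described might look like the following sketch; the data layout and weight values are placeholders, as the abstract does not specify the paper's actual cost function or weighting scheme:

```python
import numpy as np

def joint_cost(reproj_errors, weights):
    """Weighted sum-of-squared reprojection residuals over all devices.
    reproj_errors[k]: (N_k, 2) array of pixel residuals for device k;
    weights[k]: per-device weight, up- or down-weighting external cameras
    according to their location (illustrative placeholder values)."""
    return sum(w * float(np.sum(e ** 2))
               for w, e in zip(weights, reproj_errors))
```

    A joint optimizer would minimize this cost over the stacked relative-pose parameters of the Kinect and the external cameras.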

  18. Simultaneous Calibration: A Joint Optimization Approach for Multiple Kinect and External Cameras

    PubMed Central

    Liao, Yajie; Sun, Ying; Li, Gongfa; Kong, Jianyi; Jiang, Guozhang; Jiang, Du; Cai, Haibin; Ju, Zhaojie; Yu, Hui; Liu, Honghai

    2017-01-01

    Camera calibration is a crucial problem in many applications, such as 3D reconstruction, structure from motion, object tracking and face alignment. Numerous methods with good performance have been proposed to solve this problem in the last few decades. However, few methods target the joint calibration of multiple sensors (more than four devices), which is a practical issue in real-time systems. In this paper, we propose a novel method and a corresponding workflow framework to simultaneously calibrate the relative poses of a Kinect and three external cameras. By optimizing the final cost function and assigning corresponding weights to the external cameras in different locations, an effective joint calibration of multiple devices is constructed. Furthermore, the method is tested on a practical platform, and experimental results show that the proposed joint calibration method achieves satisfactory performance in a real-time system, with accuracy higher than the manufacturer's calibration. PMID:28672823

  19. Intravascular ultrasound guided directional atherectomy versus directional atherectomy guided by angiography for the treatment of femoropopliteal in-stent restenosis

    PubMed Central

    Krishnan, Prakash; Tarricone, Arthur; K-Raman, Purushothaman; Majeed, Farhan; Kapur, Vishal; Gujja, Karthik; Wiley, Jose; Vasquez, Miguel; Lascano, Rheoneil A.; Quiles, Katherine G.; Distin, Tashanne; Fontenelle, Ran; Atallah-Lajam, Farah; Kini, Annapoorna; Sharma, Samin

    2017-01-01

    Background: The aim of this study was to compare 1-year outcomes for patients with femoropopliteal in-stent restenosis using directional atherectomy guided by intravascular ultrasound (IVUS) versus directional atherectomy guided by angiography. Methods and results: This was a retrospective analysis for patients with femoropopliteal in-stent restenosis treated with IVUS-guided directional atherectomy versus directional atherectomy guided by angiography from a single center between March 2012 and February 2016. Clinically driven target lesion revascularization was the primary endpoint and was evaluated through medical chart review as well as phone call follow up. Conclusions: Directional atherectomy guided by IVUS reduces clinically driven target lesion revascularization for patients with femoropopliteal in-stent restenosis. PMID:29265002

  20. CHAMP - Camera, Handlens, and Microscope Probe

    NASA Technical Reports Server (NTRS)

    Mungas, G. S.; Beegle, L. W.; Boynton, J.; Sepulveda, C. A.; Balzer, M. A.; Sobel, H. R.; Fisher, T. A.; Deans, M.; Lee, P.

    2005-01-01

    CHAMP (Camera, Handlens And Microscope Probe) is a novel field microscope capable of color imaging with continuously variable spatial resolution, from infinity imaging down to diffraction-limited microscopy (3 micron/pixel). As an arm-mounted imager, CHAMP supports stereo imaging with variable baselines, can continuously image targets at increasing magnification during an arm approach, can provide precision range-finding estimates to targets, and can accommodate microscopic imaging of rough surfaces through an image filtering process called z-stacking. The current design uses a filter wheel with four different filters, so that color and black-and-white images can be obtained over the entire field of view; future designs will increase the number of filter positions to include eight different filters. Finally, CHAMP incorporates controlled white and UV illumination so that images can be obtained regardless of sun position, and any potentially fluorescent species can be identified so that the most astrobiologically interesting samples can be selected.

  1. Target tracking system based on preliminary and precise two-stage compound cameras

    NASA Astrophysics Data System (ADS)

    Shen, Yiyan; Hu, Ruolan; She, Jun; Luo, Yiming; Zhou, Jie

    2018-02-01

    Early detection of targets and high-precision target tracking are two important performance indicators that must be balanced in a practical target search and tracking system. This paper proposes a target tracking system with a preliminary and precise two-stage compound design. The system uses a large field of view to perform the target search; after the target is found and confirmed, it switches to a small field of view for precise target tracking. In this system, an appropriate field-switching strategy is the key to achieving tracking. At the same time, two groups of PID parameters are added to the system to reduce tracking error. This preliminary-plus-precise two-stage compound approach extends the capture range and improves target tracking accuracy, and the method has practical value.
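
    The field-switching strategy with per-stage PID gain groups might be sketched as follows; all gains, the switch threshold, and the time step are illustrative assumptions, not values from the paper:

```python
class TwoStageTracker:
    """Sketch of the preliminary/precise two-stage scheme: search in the
    wide field of view (FOV), then switch to the narrow FOV once the target
    is confirmed close to the optical axis, using a separate PID gain group
    for each stage."""

    def __init__(self):
        # (kp, ki, kd) per stage -- placeholder values
        self.gains = {"wide": (0.5, 0.01, 0.05), "narrow": (1.2, 0.05, 0.1)}
        self.mode = "wide"
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, err_deg, dt=0.02, switch_at=0.5):
        """One control update: returns the gimbal command for the current
        pointing error (degrees), switching FOV and PID group when the
        error falls below the switch threshold."""
        if self.mode == "wide" and abs(err_deg) < switch_at:
            self.mode = "narrow"
            self.integral = 0.0        # reset to avoid integral kick
        kp, ki, kd = self.gains[self.mode]
        self.integral += err_deg * dt
        deriv = (err_deg - self.prev_err) / dt
        self.prev_err = err_deg
        return kp * err_deg + ki * self.integral + kd * deriv
```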

  2. Guided filter-based fusion method for multiexposure images

    NASA Astrophysics Data System (ADS)

    Hou, Xinglin; Luo, Haibo; Qi, Feng; Zhou, Peipei

    2016-11-01

    It is challenging to capture a high-dynamic range (HDR) scene using a low-dynamic range camera. A weighted sum-based image fusion (IF) algorithm is proposed so as to express an HDR scene with a high-quality image. This method mainly includes three parts. First, two image features, i.e., gradients and well-exposedness are measured to estimate the initial weight maps. Second, the initial weight maps are refined by a guided filter, in which the source image is considered as the guidance image. This process could reduce the noise in initial weight maps and preserve more texture consistent with the original images. Finally, the fused image is constructed by a weighted sum of source images in the spatial domain. The main contributions of this method are the estimation of the initial weight maps and the appropriate use of the guided filter-based weight maps refinement. It provides accurate weight maps for IF. Compared to traditional IF methods, this algorithm avoids image segmentation, combination, and the camera response curve calibration. Furthermore, experimental results demonstrate the superiority of the proposed method in both subjective and objective evaluations.
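
    A minimal sketch of the three-part pipeline follows; for brevity a box filter stands in for the full guided filter, and the gradient/well-exposedness weighting constants are assumptions, not the paper's values:

```python
import numpy as np

def box_filter(img, r):
    """Mean filter over a (2r+1)^2 window with edge padding (a simple
    stand-in for the guided filter's smoothing of the weight maps)."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    out = np.zeros((h, w), dtype=float)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            out += pad[dy:dy + h, dx:dx + w]
    return out / (2 * r + 1) ** 2

def fuse(exposures, r=4, eps=1e-4):
    """Fuse grayscale exposures (HxW arrays in [0, 1]): initial weights from
    gradient strength and well-exposedness, smoothed weight maps, then a
    per-pixel weighted sum of the source images."""
    ws = []
    for im in exposures:
        gy, gx = np.gradient(im)
        grad = np.abs(gx) + np.abs(gy)                     # gradient feature
        wexp = np.exp(-((im - 0.5) ** 2) / (2 * 0.2 ** 2))  # well-exposedness
        ws.append(box_filter(grad * wexp + eps, r))        # refined weights
    wsum = np.sum(ws, axis=0)
    return np.sum([w * im for w, im in zip(ws, exposures)], axis=0) / wsum
```

    Because the weights are positive and normalized, each fused pixel is a convex combination of the corresponding source pixels, so no out-of-range values are introduced.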

  3. Range Finding with a Plenoptic Camera

    DTIC Science & Technology

    2014-03-27

    (Excerpt from the report's table of contents: Experimental Results; Simulated Camera Analysis: Varying Lens Diameter; Simulated Camera Analysis: Varying Detector Size; Matching Framework; Simulated Camera Performance with SIFT.)

  4. Development of high-speed video cameras

    NASA Astrophysics Data System (ADS)

    Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk

    2001-04-01

    Presented in this paper is an outline of the R&D activities on high-speed video cameras that have been conducted at Kinki University for more than ten years and are currently proceeding as an international cooperative project with the University of Applied Sciences Osnabruck and other organizations. Extensive market research has been done, (1) on users' requirements for high-speed multi-framing and video cameras, by questionnaires and hearings, and (2) on the current availability of cameras of this sort, by searching journals and websites. Both support the necessity of developing a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996; the sensor is the same one developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. The idea of a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was first proposed in 1993 and has been continuously improved. A test sensor was developed in early 2000 and successfully captured images at 62,500 fps. Currently, the design of a prototype ISIS is under way and will hopefully be fabricated in the near future. Epoch-making cameras in the history of high-speed video camera development by other researchers are also briefly reviewed.

  5. A high-sensitivity EM-CCD camera for the open port telescope cavity of SOFIA

    NASA Astrophysics Data System (ADS)

    Wiedemann, Manuel; Wolf, Jürgen; McGrotty, Paul; Edwards, Chris; Krabbe, Alfred

    2016-08-01

    The Stratospheric Observatory for Infrared Astronomy (SOFIA) has three target acquisition and tracking cameras. All three imagers originally used the same cameras, which did not meet the sensitivity requirements due to low quantum efficiency and high dark current. The Focal Plane Imager (FPI) suffered the most from high dark current, since it operated in the aircraft cabin at room temperature without active cooling. In early 2013 the FPI was upgraded with an iXon3 888 from Andor Technology. Compared to the original cameras, the iXon3 has a factor of five higher QE, thanks to its back-illuminated sensor, and orders of magnitude lower dark current, due to a thermo-electric cooler and "inverted mode operation." This leads to an increase in sensitivity of about five stellar magnitudes. The Wide Field Imager (WFI) and Fine Field Imager (FFI) shall now be upgraded with equally sensitive cameras. However, they are exposed to stratospheric conditions in flight (typical conditions: T ≈ -40 °C, p ≈ 0.1 atm) and there are no off-the-shelf CCD cameras with the performance of an iXon3 suited for these conditions. Therefore, Andor Technology and the Deutsches SOFIA Institut (DSI) are jointly developing and qualifying a camera for these conditions, based on the iXon3 888. The modifications include replacement of electrical components with MIL-SPEC or industrial grade components and various system optimizations, a new data interface that allows image data transmission over 30 m of cable from the camera to the controller, a new power converter in the camera to generate all necessary operating voltages locally, and a new housing that fulfills airworthiness requirements. A prototype of this camera has been built and tested in an environmental test chamber at temperatures down to T = -62 °C and pressure equivalent to 50,000 ft altitude. In this paper, we report on the development of the camera and present results from the environmental testing.

  6. Kitt Peak speckle camera

    NASA Technical Reports Server (NTRS)

    Breckinridge, J. B.; Mcalister, H. A.; Robinson, W. G.

    1979-01-01

    The speckle camera in regular use at Kitt Peak National Observatory since 1974 is described in detail. The design of the atmospheric dispersion compensation prisms, the use of film as a recording medium, the accuracy of double star measurements, and the next generation speckle camera are discussed. Photographs of double star speckle patterns with separations from 1.4 sec of arc to 4.7 sec of arc are shown to illustrate the quality of image formation with this camera, the effects of seeing on the patterns, and to illustrate the isoplanatic patch of the atmosphere.

  7. Lytro camera technology: theory, algorithms, performance analysis

    NASA Astrophysics Data System (ADS)

    Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio

    2013-03-01

    The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, treating the Lytro camera as a black box and using our interpretation of the image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.

  8. Comparison of two ultrasound-guided injection techniques targeting the sacroiliac joint region in equine cadavers.

    PubMed

    Stack, John David; Bergamino, Chiara; Sanders, Ruth; Fogarty, Ursula; Puggioni, Antonella; Kearney, Clodagh; David, Florent

    2016-09-20

    To compare the accuracy and distribution of injectate for cranial (CR) and caudomedial (CM) ultrasound-guided injections of equine sacroiliac joints. Both sacroiliac joints from 10 lumbosacropelvic specimens were injected using cranial parasagittal (CR; curved 18 gauge, 25 cm spinal needles) and caudomedial (CM; straight 18 gauge, 15 cm spinal needles) ultrasound-guided approaches. Injectate consisted of 4 ml iodinated contrast and 2 ml methylene blue. Computed tomography (CT) scans were performed before and after injections. Time for needle guidance and repositioning attempts were recorded. The CT sequences were analysed for accuracy and distribution of contrast. Intra-articular contrast was detected in sacroiliac joints following 15/40 injections. The CR and CM approaches deposited injectate ≤2 cm from sacroiliac joint margins following 17/20 and 20/20 injections, respectively. Median distance of closest contrast to the sacroiliac joint was 0.4 cm (interquartile range [IQR]: 1.5 cm) for CR approaches and 0.6 cm (IQR: 0.95 cm) for CM approaches. Cranial injections resulted in injectate contacting lumbosacral intertransverse joints 15/20 times. Caudomedial injections were perivascular 16/20 times. Safety and efficacy could not be established. Cranial and CM ultrasound-guided injections targeting sacroiliac joints were very accurate for periarticular injection, but accuracy was poor for intra-articular injection. Injectate was frequently found in contact with interosseous sacroiliac ligaments, as well as neurovascular and synovial structures in close vicinity of sacroiliac joints.

  9. Human tracking over camera networks: a review

    NASA Astrophysics Data System (ADS)

    Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang

    2017-12-01

    In recent years, automated human tracking over camera networks has become essential for video surveillance. The task of tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules for human tracking over camera networks are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed based on two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed based on the aspects of human re-identification, camera-link model-based tracking and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on the analyses of the current progress made toward human tracking techniques over camera networks.

  10. Microprocessor-controlled wide-range streak camera

    NASA Astrophysics Data System (ADS)

    Lewis, Amy E.; Hollabaugh, Craig

    2006-08-01

    Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.

  11. Omnidirectional Underwater Camera Design and Calibration

    PubMed Central

    Bosch, Josep; Gracias, Nuno; Ridao, Pere; Ribas, David

    2015-01-01

    This paper presents the development of an underwater omnidirectional multi-camera system (OMS) based on a commercially available six-camera system, originally designed for land applications. A full calibration method is presented for the estimation of both the intrinsic and extrinsic parameters, which is able to cope with wide-angle lenses and non-overlapping cameras simultaneously. This method is valid for any OMS in both land and water applications. For underwater use, a customized housing is required, which often leads to strong image distortion due to refraction among the different media. This phenomenon makes the basic pinhole camera model invalid for underwater cameras, especially when using wide-angle lenses, and requires the explicit modeling of the individual optical rays. To address this problem, a ray tracing approach has been adopted to create a field-of-view (FOV) simulator for underwater cameras. The simulator allows for the testing of different housing geometries and optics for the cameras to ensure complete hemisphere coverage in underwater operation. This paper describes the design and testing of a compact custom housing for a commercial off-the-shelf OMS camera (Ladybug 3) and presents the first results of its use. A proposed three-stage calibration process allows for the estimation of all of the relevant camera parameters. Experimental results are presented, which illustrate the performance of the calibration method and validate the approach. PMID:25774707
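    The per-ray refraction such a FOV simulator must model at every air/glass/water interface can be sketched with the vector form of Snell's law. This is a generic illustration, not the authors' simulator code; the 30-degree ray and flat-port geometry are assumed for the example:

```python
import numpy as np

def refract(d, n, n1, n2):
    """Refract unit direction d at a surface with unit normal n (pointing
    toward the incoming ray), going from refractive index n1 into n2.
    Returns the refracted unit direction, or None on total internal reflection."""
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    cos_i = -np.dot(n, d)                    # cosine of the incidence angle
    r = n1 / n2
    k = 1.0 - r * r * (1.0 - cos_i * cos_i)  # negative => total internal refl.
    if k < 0.0:
        return None
    return r * d + (r * cos_i - np.sqrt(k)) * n

# Air-to-water through a flat port: a ray 30 degrees off the port normal.
theta = np.radians(30.0)
ray = refract(np.array([0.0, np.sin(theta), -np.cos(theta)]),
              np.array([0.0, 0.0, 1.0]), 1.0, 1.33)
```

The refracted direction satisfies Snell's law componentwise: its transverse component equals sin(30°)/1.33, which is why a flat port bends wide-angle rays strongly enough to break the pinhole model.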

  12. An efficient and robust MRI-guided radiotherapy planning approach for targeting abdominal organs and tumours in the mouse

    PubMed Central

    Bird, Luke; Tullis, Iain D. C.; Newman, Robert G.; Corroyer-Dulmont, Aurelien; Falzone, Nadia; Azad, Abul; Vallis, Katherine A.; Sansom, Owen J.; Muschel, Ruth J.; Vojnovic, Borivoj; Hill, Mark A.; Fokas, Emmanouil; Smart, Sean C.

    2017-01-01

    Introduction Preclinical CT-guided radiotherapy platforms are increasingly used but the CT images are characterized by poor soft tissue contrast. The aim of this study was to develop a robust and accurate method of MRI-guided radiotherapy (MR-IGRT) delivery to abdominal targets in the mouse. Methods A multimodality cradle was developed for providing subject immobilisation and its performance was evaluated. Whilst CT was still used for dose calculations, target identification was based on MRI. Each step of the radiotherapy planning procedure was validated initially in vitro using BANG gel dosimeters. Subsequently, MR-IGRT of normal adrenal glands with a size-matched collimated beam was performed. Additionally, the SK-N-SH neuroblastoma xenograft model and the transgenic KPC model of pancreatic ductal adenocarcinoma were used to demonstrate the applicability of our methods for the accurate delivery of radiation to CT-invisible abdominal tumours. Results The BANG gel phantoms demonstrated a targeting efficiency error of 0.56 ± 0.18 mm. The in vivo stability tests of body motion during MR-IGRT and the associated cradle transfer showed that the residual body movements are within this MR-IGRT targeting error. Accurate MR-IGRT of the normal adrenal glands with a size-matched collimated beam was confirmed by γH2AX staining. Regression in tumour volume was observed almost immediately post MR-IGRT in the neuroblastoma model, further demonstrating accuracy of x-ray delivery. Finally, MR-IGRT in the KPC model facilitated precise contouring and comparison of different treatment plans and radiotherapy dose distributions not only to the intra-abdominal tumour but also to the organs at risk. Conclusion This is, to our knowledge, the first study to demonstrate preclinical MR-IGRT in intra-abdominal organs. 
The proposed MR-IGRT method presents a state-of-the-art solution to enabling robust, accurate and efficient targeting of extracranial organs in the mouse and can operate with a

  13. Calibration Target as Seen by Mars Hand Lens Imager

    NASA Image and Video Library

    2012-02-07

    During pre-flight testing, the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars rover Curiosity took this image of the MAHLI calibration target from a distance of 3.94 inches (10 centimeters).

  14. Designing manufacturable filters for a 16-band plenoptic camera using differential evolution

    NASA Astrophysics Data System (ADS)

    Doster, Timothy; Olson, Colin C.; Fleet, Erin; Yetzbacher, Michael; Kanaev, Andrey; Lebow, Paul; Leathers, Robert

    2017-05-01

    A 16-band plenoptic camera allows for the rapid exchange of filter sets via a 4x4 filter array on the lens's front aperture. This ability to change out filters allows an operator to quickly adapt to different locales or threat intelligence. Typically, such a system incorporates a default set of 16 equally spaced flat-topped filters. Knowing the operating theater or the likely targets of interest, it becomes advantageous to tune the filters. We propose using a modified beta distribution to parameterize the different possible filters and differential evolution (DE) to search over the space of possible filter designs. The modified beta distribution allows us to jointly optimize the width, taper and wavelength center of each single- or multi-pass filter in the set over a number of evolutionary steps. Further, by constraining the function parameters we can develop solutions which are not just theoretical but manufacturable. We examine two independent tasks: general spectral sensing and target detection. In the general spectral sensing task we utilize the theory of compressive sensing (CS) and find filters that generate codings which minimize the CS reconstruction error based on a fixed spectral dictionary of endmembers. For the target detection task and a set of known targets, we train the filters to optimize the separation of the background and target signature. We compare our results to the default 16 flat-topped non-overlapping filter set which comes with the plenoptic camera and full hyperspectral resolution data which was previously acquired.

  15. A reaction-diffusion-based coding rate control mechanism for camera sensor networks.

    PubMed

    Yamamoto, Hiroshi; Hyodo, Katsuya; Wakamiya, Naoki; Murata, Masayuki

    2010-01-01

    A wireless camera sensor network is useful for surveillance and monitoring because of its visibility and easy deployment. However, it suffers from the limited capacity of wireless communication, and a network is easily overwhelmed by a considerable amount of video traffic. In this paper, we propose an autonomous video coding rate control mechanism in which each camera sensor node can autonomously determine its coding rate in accordance with the location and velocity of target objects. For this purpose, we adopted a biological model, i.e., the reaction-diffusion model, inspired by the similarity between biological spatial patterns and the spatial distribution of video coding rates. Through simulation and practical experiments, we verify the effectiveness of our proposal.
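    As a rough illustration of the idea (a stand-in diffusion-plus-source model, not the paper's actual reaction-diffusion equations), each node's coding rate can follow a morphogen-like value that is replenished where the target is observed and diffuses to neighbouring nodes:

```python
import numpy as np

# Illustrative stand-in (not the paper's exact model): each of 32 camera
# nodes holds a morphogen-like value that diffuses to its neighbours, decays,
# and is boosted where the target is observed; the node's video coding rate
# then follows that value, peaking near the target and falling off smoothly
# with distance, mimicking a reaction-diffusion spatial pattern.
def step(m, target_idx, D=0.2, decay=0.05, source=1.0):
    lap = np.roll(m, 1) + np.roll(m, -1) - 2 * m   # 1-D periodic Laplacian
    m = m + D * lap - decay * m                    # diffusion + decay
    m[target_idx] += source                        # "reaction" at the target
    return m

m = np.zeros(32)
for _ in range(200):
    m = step(m, target_idx=10)
rates = 0.1 + 0.9 * m / m.max()    # normalized coding rates in [0.1, 1.0]
```

After relaxation, the node watching the target encodes at the highest rate and its neighbours taper off, which is the spatial bitrate pattern the mechanism aims for.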

  16. Composite Wavelet Filters for Enhanced Automated Target Recognition

    NASA Technical Reports Server (NTRS)

    Chiang, Jeffrey N.; Zhang, Yuhan; Lu, Thomas T.; Chao, Tien-Hsin

    2012-01-01

    Automated Target Recognition (ATR) systems aim to automate target detection, recognition, and tracking. The current project applies a JPL ATR system to low-resolution sonar and camera videos taken from unmanned vehicles. These sonar images are inherently noisy and difficult to interpret, and pictures taken underwater are unreliable due to murkiness and inconsistent lighting. The ATR system breaks target recognition into three stages: 1) Videos of both sonar and camera footage are broken into frames and preprocessed to enhance images and detect Regions of Interest (ROIs). 2) Features are extracted from these ROIs in preparation for classification. 3) ROIs are classified as true or false positives using a standard Neural Network based on the extracted features. Several preprocessing, feature extraction, and training methods are tested and discussed in this paper.

  17. Phenology cameras observing boreal ecosystems of Finland

    NASA Astrophysics Data System (ADS)

    Peltoniemi, Mikko; Böttcher, Kristin; Aurela, Mika; Kolari, Pasi; Tanis, Cemal Melih; Linkosalmi, Maiju; Loehr, John; Metsämäki, Sari; Nadir Arslan, Ali

    2016-04-01

    Cameras have become useful tools for monitoring the seasonality of ecosystems. Low-cost cameras facilitate validation of other measurements and allow extracting some key ecological features and moments from image time series. We installed a network of phenology cameras at selected ecosystem research sites in Finland. Cameras were installed above, at, and/or below the canopies. The current network hosts cameras taking time-lapse images in coniferous and deciduous forests as well as at open wetlands, thus offering possibilities to monitor various phenological and time-associated events and elements. In this poster, we present our camera network and give examples of the use of image series for research. We will show results on the stability of camera-derived color signals and, based on that, discuss the applicability of cameras in monitoring time-dependent phenomena. We will also present results from comparisons between camera-derived color signal time series and daily satellite-derived time series (NDVI, NDWI, and fractional snow cover) from the Moderate Resolution Imaging Spectroradiometer (MODIS) at selected spruce and pine forests and in a wetland. We will discuss the applicability of cameras in supporting phenological observations derived from satellites, considering the ability of cameras to monitor both above- and below-canopy phenology and snow.

  18. Stem cells’ guided gene therapy of cancer: New frontier in personalized and targeted therapy

    PubMed Central

    Mavroudi, Maria; Zarogoulidis, Paul; Porpodis, Konstantinos; Kioumis, Ioannis; Lampaki, Sofia; Yarmus, Lonny; Malecki, Raf; Zarogoulidis, Konstantinos; Malecki, Marek

    2014-01-01

    Introduction Diagnosis and therapy of cancer remain to be the greatest challenges for all physicians working in clinical oncology and molecular medicine. The statistics speak for themselves with the grim reports of 1,638,910 men and women diagnosed with cancer and nearly 577,190 patients passed away due to cancer in the USA in 2012. For practicing clinicians, who treat patients suffering from advanced cancers with contemporary systemic therapies, the main challenge is to attain therapeutic efficacy, while minimizing side effects. Unfortunately, all contemporary systemic therapies cause side effects. In treated patients, these side effects may range from nausea to damaged tissues. In cancer survivors, the iatrogenic outcomes of systemic therapies may include genomic mutations and their consequences. Therefore, there is an urgent need for personalized and targeted therapies. Recently, we reviewed the current status of suicide gene therapy for cancer. Herein, we discuss the novel strategy: genetically engineered stem cells’ guided gene therapy. Review of therapeutic strategies in preclinical and clinical trials Stem cells have the unique potential for self renewal and differentiation. This potential is the primary reason for introducing them into medicine to regenerate injured or degenerated organs, as well as to rejuvenate aging tissues. Recent advances in genetic engineering and stem cell research have created the foundations for genetic engineering of stem cells as the vectors for delivery of therapeutic transgenes. Specifically in oncology, the stem cells are genetically engineered to deliver the cell suicide inducing genes selectively to the cancer cells only. Expression of the transgenes kills the cancer cells, while leaving healthy cells unaffected. Herein, we present various strategies to bioengineer suicide inducing genes and stem cell vectors. Moreover, we review results of the main preclinical studies and clinical trials. However, the main risk for

  19. Versatile microsecond movie camera

    NASA Astrophysics Data System (ADS)

    Dreyfus, R. W.

    1980-03-01

    A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.

  20. Pediatric Sarcomas Are Targetable by MR-Guided High Intensity Focused Ultrasound (MR-HIFU): Anatomical Distribution and Radiological Characteristics.

    PubMed

    Shim, Jenny; Staruch, Robert M; Koral, Korgun; Xie, Xian-Jin; Chopra, Rajiv; Laetsch, Theodore W

    2016-10-01

    Despite intensive therapy, children with metastatic and recurrent sarcoma or neuroblastoma have a poor prognosis. Magnetic resonance guided high intensity focused ultrasound (MR-HIFU) is a noninvasive technique allowing the delivery of targeted ultrasound energy under MR imaging guidance. MR-HIFU may be used to ablate tumors without ionizing radiation or target chemotherapy using hyperthermia. Here, we evaluated the anatomic locations of tumors to assess the technical feasibility of MR-HIFU therapy for children with solid tumors. Patients with sarcoma or neuroblastoma with available cross-sectional imaging were studied. Tumors were classified based on the location and surrounding structures within the ultrasound beam path as (i) not targetable, (ii) completely or partially targetable with the currently available MR-HIFU system, and (iii) potentially targetable if a respiratory motion compensation technique was used. Of the 121 patients with sarcoma and 61 patients with neuroblastoma, 64% and 25% of primary tumors were targetable at diagnosis, respectively. Less than 20% of metastases at diagnosis or relapse were targetable for both sarcoma and neuroblastoma. Most targetable lesions were located in extremities or in the pelvis. Respiratory motion compensation may increase the percentage of targetable tumors by 4% for sarcomas and 10% for neuroblastoma. Many pediatric sarcomas are localized at diagnosis and are targetable by current MR-HIFU technology. Some children with neuroblastoma have bony tumors targetable by MR-HIFU at relapse, but few newly diagnosed children with neuroblastoma have tumors amenable to MR-HIFU therapy. Clinical trials of MR-HIFU should focus on patients with anatomically targetable tumors. © 2016 Wiley Periodicals, Inc.

  1. Thermal Imaging of Medical Saw Blades and Guides

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dinwiddie, Ralph Barton; Steffner, Thomas E

    2007-01-01

    Better Than New, LLC, has developed a surface treatment to reduce the friction and wear of orthopedic saw blades and guides. The medical saw blades were thermally imaged while sawing through fresh animal bone, and an IR camera was used to measure the blade temperature as it exited the bone. The thermal performance of as-manufactured saw blades was compared to surface-treated blades, and a freshly used blade was used for temperature calibration purposes in order to account for any emissivity changes due to organic transfer layers. Thermal imaging indicates that the treated saw blades cut faster and cooler than untreated blades. In orthopedic surgery, saw guides are used to perfectly size the bone to accept a prosthesis. However, binding can occur between the blade and guide because of misalignment. This condition increases the saw blade temperature and may result in tissue damage. Both treated and untreated saw guides were also studied. The treated saw guide operated at a significantly lower temperature than the untreated guide. Saw blades and guides that operate at a cooler temperature are expected to reduce the amount of tissue damage (thermal necrosis) and may reduce the number of post-operative complications.

  2. Target position uncertainty during visually guided deep-inspiration breath-hold radiotherapy in locally advanced lung cancer.

    PubMed

    Scherman Rydhög, Jonas; Riisgaard de Blanck, Steen; Josipovic, Mirjana; Irming Jølck, Rasmus; Larsen, Klaus Richter; Clementsen, Paul; Lars Andersen, Thomas; Poulsen, Per Rugaard; Fredberg Persson, Gitte; Munck Af Rosenschold, Per

    2017-04-01

    The purpose of this study was to estimate the uncertainty in voluntary deep-inspiration breath-hold (DIBH) radiotherapy for locally advanced non-small cell lung cancer (NSCLC) patients. Perpendicular fluoroscopic movies were acquired in free breathing (FB) and DIBH during a course of visually guided DIBH radiotherapy of nine patients with NSCLC. Patients had liquid markers injected in mediastinal lymph nodes and primary tumours. Excursion, systematic and random errors, and inter-breath-hold position uncertainty were investigated using an image-based tracking algorithm. A mean reduction of 2-6 mm in marker excursion in DIBH versus FB was seen in the anterior-posterior (AP), left-right (LR) and cranio-caudal (CC) directions. Lymph node motion during DIBH originated from cardiac motion. The systematic (standard deviation (SD) of all the mean marker positions) and random errors (root-mean-square of the intra-BH SD) during DIBH were 0.5 and 0.3 mm (AP), 0.5 and 0.3 mm (LR), and 0.8 and 0.4 mm (CC), respectively. The mean inter-breath-hold shifts were -0.3 mm (AP), -0.2 mm (LR), and -0.2 mm (CC). Intra- and inter-breath-hold uncertainty of tumours and lymph nodes was small in visually guided breath-hold radiotherapy of NSCLC. Target motion could be substantially reduced, but not eliminated, using visually guided DIBH. Copyright © 2017 Elsevier B.V. All rights reserved.
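    The systematic- and random-error definitions used above can be reproduced on synthetic data (the marker tracks below are simulated, not the study's measurements):

```python
import numpy as np

# Synthetic reproduction of the error definitions: the systematic error is
# the SD of the per-breath-hold mean positions, and the random error is the
# root-mean-square of the intra-breath-hold SDs. Eight breath-holds with 100
# tracked frames each are simulated here purely for illustration.
rng = np.random.default_rng(0)
true_means = rng.normal(0.0, 0.5, size=(8, 1))          # offset per breath-hold
positions = rng.normal(true_means, 0.3, size=(8, 100))  # tracked frames (mm)

bh_means = positions.mean(axis=1)                # mean position per breath-hold
systematic = bh_means.std(ddof=1)                # SD of the per-BH means
random_err = np.sqrt((positions.std(axis=1, ddof=1) ** 2).mean())  # RMS of SDs
```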

  3. Evaluation of Dental Shade Guide Variability Using Cross-Polarized Photography.

    PubMed

    Gurrea, Jon; Gurrea, Marta; Bruguera, August; Sampaio, Camila S; Janal, Malvin; Bonfante, Estevam; Coelho, Paulo G; Hirata, Ronaldo

    2016-01-01

    This study evaluated color variability in the A hue between the VITA Classical (VITA Zahnfabrik) shade guide and four other VITA-coded ceramic shade guides using a Canon EOS 60D camera and software (Photoshop CC, Adobe). A total of 125 photographs were taken, 5 per shade tab for each of 5 shades (A1 to A4) from the following shade guides: VITA Classical (control), IPS e.max Ceram (Ivoclar Vivadent), IPS d.SIGN (Ivoclar Vivadent), Initial ZI (GC), and Creation CC (Creation Willi Geller). Photos were processed with Adobe Photoshop CC to allow standardized evaluation of hue, chroma, and value between shade tabs. None of the VITA-coded shade tabs fully matched the VITA Classical shade tab for hue, chroma, or value. The VITA-coded shade guides evaluated herein showed an overall unmatched shade in all tabs when compared with the control, suggesting that shade selection should be made using the guide produced by the manufacturer of the ceramic intended for the final restoration.
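    Illustrative only: the study compared tabs in Photoshop, but the same hue/saturation/value comparison can be scripted. The two RGB triplets below are invented stand-ins for a control tab and a test tab, not values measured from any shade guide:

```python
import colorsys

# Difference two sampled tab colors in HSV space. colorsys returns hue,
# saturation and value each in [0, 1]; the inputs here are hypothetical.
def hsv_diff(rgb_a, rgb_b):
    ha, sa, va = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_a])
    hb, sb, vb = colorsys.rgb_to_hsv(*[c / 255 for c in rgb_b])
    return ha - hb, sa - sb, va - vb

dh, ds, dv = hsv_diff((205, 180, 150), (200, 178, 155))  # control vs. test tab
```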

  4. Coordinating High-Resolution Traffic Cameras : Developing Intelligent, Collaborating Cameras for Transportation Security and Communications

    DOT National Transportation Integrated Search

    2015-08-01

    Cameras are used prolifically to monitor transportation incidents, infrastructure, and congestion. Traditional camera systems often require human monitoring and only offer low-resolution video. Researchers for the Exploratory Advanced Research (EAR) ...

  5. Accurate and cost-effective MTF measurement system for lens modules of digital cameras

    NASA Astrophysics Data System (ADS)

    Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu

    2007-01-01

    For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. However, it is important to measure and enhance the imaging performance of digital cameras compared to that of conventional film cameras. For example, diffraction effects arising from the miniaturization of the optical modules tend to decrease image resolution. As a figure of merit, the modulation transfer function (MTF) has been broadly employed to estimate image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and processed by an algorithm that computes the MTF. Finally, an investigation of the measurement accuracy of various methods, such as bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
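    The arithmetic behind one MTF point can be sketched as follows; the 20 cycles/field frequency and the 0.3 output amplitude are made-up values, not measurements from this system:

```python
import numpy as np

# For a sinusoidal target, modulation M = (Imax - Imin) / (Imax + Imin), and
# the MTF at that spatial frequency is output modulation over input modulation.
def modulation(profile):
    return (profile.max() - profile.min()) / (profile.max() + profile.min())

x = np.linspace(0.0, 1.0, 1000)
f = 20                                               # cycles across the field
target = 0.5 + 0.5 * np.sin(2 * np.pi * f * x)       # pattern shown on the SLM
captured = 0.5 + 0.3 * np.sin(2 * np.pi * f * x)     # attenuated by the optics
mtf_at_f = modulation(captured) / modulation(target) # ~0.6 at this frequency
```

Sweeping the SLM pattern frequency and repeating this ratio yields the full MTF curve of the camera under test.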

  6. High Speed Digital Camera Technology Review

    NASA Technical Reports Server (NTRS)

    Clements, Sandra D.

    2009-01-01

    A High Speed Digital Camera Technology Review (HSD Review) is being conducted to evaluate the state-of-the-shelf in this rapidly progressing industry. Five HSD cameras supplied by four camera manufacturers participated in a Field Test during the Space Shuttle Discovery STS-128 launch. Each camera was also subjected to Bench Tests in the ASRC Imaging Development Laboratory. Evaluation of the data from the Field and Bench Tests is underway. Representatives from the imaging communities at NASA / KSC and the Optical Systems Group are participating as reviewers. A High Speed Digital Video Camera Draft Specification was updated to address Shuttle engineering imagery requirements based on findings from this HSD Review. This draft specification will serve as the template for a High Speed Digital Video Camera Specification to be developed for the wider OSG imaging community under OSG Task OS-33.

  7. ARNICA, the Arcetri Near-Infrared Camera

    NASA Astrophysics Data System (ADS)

    Lisi, F.; Baffa, C.; Bilotti, V.; Bonaccini, D.; del Vecchio, C.; Gennari, S.; Hunt, L. K.; Marcucci, G.; Stanga, R.

    1996-04-01

    ARNICA (ARcetri Near-Infrared CAmera) is the imaging camera for the near-infrared bands between 1.0 and 2.5 microns that the Arcetri Observatory has designed and built for the Infrared Telescope TIRGO located at Gornergrat, Switzerland. We describe the mechanical and optical design of the camera, and report on the astronomical performance of ARNICA as measured during the commissioning runs at the TIRGO (December, 1992 to December 1993), and an observing run at the William Herschel Telescope, Canary Islands (December, 1993). System performance is defined in terms of efficiency of the camera+telescope system and camera sensitivity for extended and point-like sources. (SECTION: Astronomical Instrumentation)

  8. HIGH SPEED CAMERA

    DOEpatents

    Rogers, B.T. Jr.; Davis, W.C.

    1957-12-17

    This patent relates to high speed cameras having resolution times of less than one-tenth of a microsecond, suitable for filming distinct sequences of a very fast event such as an explosion. This camera consists of a rotating mirror with reflecting surfaces on both sides, a narrow mirror acting as a slit in a focal plane shutter, various other mirror and lens systems, and an image recording surface. The combination of the rotating mirrors and the slit mirror causes discrete, narrow, separate pictures to fall upon the film plane, thereby forming a moving image increment of the photographed event. Placing a reflecting surface on each side of the rotating mirror cancels the image velocity that one side of the rotating mirror would impart, so a camera with this short a resolution time is possible.

  9. LSST camera control system

    NASA Astrophysics Data System (ADS)

    Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael

    2006-06-01

    The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.

  10. IMAX camera in payload bay

    NASA Image and Video Library

    1995-12-20

    STS074-361-035 (12-20 Nov 1995) --- This medium close-up view centers on the IMAX Cargo Bay Camera (ICBC) and its associated IMAX Camera Container Equipment (ICCE) at its position in the cargo bay of the Earth-orbiting Space Shuttle Atlantis. With its own "space suit," or protective covering, to protect it from the rigors of space, this version of the IMAX was able to record scenes not accessible with the in-cabin cameras. For docking and undocking activities involving Russia's Mir Space Station and the Space Shuttle Atlantis, the camera joined a variety of in-cabin camera hardware in recording the historical events. IMAX's secondary objectives were to film Earth views. The IMAX project is a collaboration between NASA, the Smithsonian Institution's National Air and Space Museum (NASM), IMAX Systems Corporation, and the Lockheed Corporation to document significant space activities and promote NASA's educational goals using the IMAX film medium.

  11. Passive auto-focus for digital still cameras and camera phones: Filter-switching and low-light techniques

    NASA Astrophysics Data System (ADS)

    Gamadia, Mark Noel

    In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve existing features to their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events with undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass-filter-based passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). Performance results using three different prototype cameras
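
    The core loop of a band-pass passive AF scheme, moving the focus actuator to maximize a sharpness measure, can be illustrated with a minimal coarse-to-fine search. The synthetic sharpness curve and the step sizes below are illustrative assumptions, not the dissertation's Filter-Switching parameters; the iteration count stands in for the "speed" metric.

```python
def af_search(sharpness, lo, hi, coarse=8, fine=1):
    """Coarse-to-fine search for the lens position maximizing sharpness."""
    iterations = 0
    # Coarse pass: sample the full focus range with a large step.
    best_pos, best_val = lo, float("-inf")
    for p in range(lo, hi + 1, coarse):
        iterations += 1
        v = sharpness(p)
        if v > best_val:
            best_pos, best_val = p, v
    # Fine pass: refine around the coarse maximum with a small step.
    for p in range(max(lo, best_pos - coarse),
                   min(hi, best_pos + coarse) + 1, fine):
        iterations += 1
        v = sharpness(p)
        if v > best_val:
            best_pos, best_val = p, v
    return best_pos, iterations

# Synthetic unimodal sharpness curve peaking at lens position 137.
sharp = lambda p: -(p - 137) ** 2
pos, n = af_search(sharp, 0, 255)
```

    A real implementation would evaluate sharpness from band-pass-filtered image frames rather than a closed-form curve, and the choice of pass-band and step sizes is exactly what the framework above sets out to optimize.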

  12. Precision guided antiaircraft munition

    DOEpatents

    Hirschfeld, Tomas B.

    1987-01-01

    A small diameter, 20 mm to 50 mm, guided projectile is used in antiaircraft defense. A pulsing laser designator illuminates the target aircraft. Energy reflected from the aircraft is received by the guided projectile. The guided projectile is fired from a standard weapon, but the spinning caused by the rifling is removed before active tracking and guidance occurs. The received energy is focused by immersion optics onto a bridge cell. AC coupling and gating removes background and allows steering signals to move extended vanes by means of piezoelectric actuators in the rear of the guided projectile.

  13. Science, conservation, and camera traps

    USGS Publications Warehouse

    Nichols, James D.; Karanth, K. Ullas; O'Connell, Allan F.

    2011-01-01

    Biologists commonly perceive camera traps as a new tool that enables them to enter the hitherto secret world of wild animals. Camera traps are being used in a wide range of studies dealing with animal ecology, behavior, and conservation. Our intention in this volume is not to simply present the various uses of camera traps, but to focus on their use in the conduct of science and conservation. In this chapter, we provide an overview of these two broad classes of endeavor and sketch the manner in which camera traps are likely to be able to contribute to them. Our main point here is that neither photographs of individual animals, nor detection history data, nor parameter estimates generated from detection histories are the ultimate objective of a camera trap study directed at either science or management. Instead, the ultimate objectives are best viewed as either gaining an understanding of how ecological systems work (science) or trying to make wise decisions that move systems from less desirable to more desirable states (conservation, management). Therefore, we briefly describe here basic approaches to science and management, emphasizing the role of field data and associated analyses in these processes. We provide examples of ways in which camera trap data can inform science and management.

  14. Electronic cameras for low-light microscopy.

    PubMed

    Rasnik, Ivan; French, Todd; Jacobson, Ken; Berland, Keith

    2013-01-01

    This chapter introduces electronic cameras, discusses the various parameters considered in evaluating their performance, and describes some of the key features of different camera formats. The chapter also explains the basic functioning of electronic cameras and how their properties can be exploited to optimize image quality under low-light conditions. Although there are many types of cameras available for microscopy, the most reliable type is the charge-coupled device (CCD) camera, which remains preferred for high-performance systems. If time resolution and frame rate are of no concern, slow-scan CCDs certainly offer the best available performance, both in terms of the signal-to-noise ratio and their spatial resolution. Slow-scan cameras are thus the first choice for experiments using fixed specimens, such as measurements using immunofluorescence and fluorescence in situ hybridization. However, if video rate imaging is required, one need not evaluate slow-scan CCD cameras. A very basic video CCD may suffice if samples are heavily labeled or are not perturbed by high intensity illumination. When video rate imaging is required for very dim specimens, the electron-multiplying CCD camera is probably the most appropriate at this technological stage. Intensified CCDs provide a unique tool for applications in which high-speed gating is required. Variable-integration-time video cameras are very attractive options if one needs to acquire images at video rate as well as with longer integration times for less bright samples. This flexibility can facilitate many diverse applications with highly varied light levels. Copyright © 2007 Elsevier Inc. All rights reserved.
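
    The low-light trade-off between a conventional slow-scan CCD and an electron-multiplying CCD can be made concrete with the standard shot-noise/read-noise SNR model. The gain, the excess-noise factor (F ≈ √2), and the example electron counts below are typical textbook assumptions, not values from the chapter; dark current is neglected for brevity.

```python
import math

def snr_ccd(signal_e, read_noise_e):
    """SNR of a conventional CCD: shot noise plus read noise, in electrons."""
    return signal_e / math.sqrt(signal_e + read_noise_e ** 2)

def snr_emccd(signal_e, read_noise_e, gain=300.0, excess=math.sqrt(2)):
    """SNR of an EMCCD: the EM gain suppresses effective read noise,
    at the cost of a multiplicative excess-noise factor F ~ 1.41."""
    return signal_e / math.sqrt(excess ** 2 * signal_e
                                + (read_noise_e / gain) ** 2)

dim = 5.0        # electrons per pixel: very dim specimen
bright = 1.0e5   # electrons per pixel: heavily labeled sample
```

    At 5 e-/pixel the EMCCD wins clearly (its effective read noise is negligible), while at 10^5 e-/pixel the conventional CCD overtakes it because the excess-noise factor halves the shot-noise-limited SNR squared, which matches the chapter's advice on matching camera type to light level.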

  15. Indoor calibration for stereoscopic camera STC: a new method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2017-11-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of a planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the Mercury surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This design made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e., in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other side, it allows replicating different viewing angles for the considered targets. Neglecting, for the sake of simplicity, the Mercury curvature, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir
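
    For a flat-terrain approximation of the convergent geometry above (two channels looking ±20° from nadir), surface height follows from the opposite-signed ground displacements seen by the two channels. This is an illustrative sketch of that relation only, not STC's actual DTM pipeline.

```python
import math

def height_from_disparity(disparity_m, theta_deg=20.0):
    """Height of a surface point from the stereo disparity measured on the
    ground plane, for two channels looking +/- theta_deg from nadir.
    A point of height h shifts by h*tan(theta) in each view, in opposite
    directions, so the total disparity is 2*h*tan(theta)."""
    return disparity_m / (2.0 * math.tan(math.radians(theta_deg)))

# Round trip: a 10 m high feature produces ~7.28 m of ground disparity.
d = 2.0 * 10.0 * math.tan(math.radians(20.0))
h = height_from_disparity(d)
```

    The indoor setup rescales this same geometry by a factor of roughly 4 x 10^5 (400 km down to about 1 m), leaving the angular relations, and hence this height-from-disparity relation, unchanged.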

  16. Indoor Calibration for Stereoscopic Camera STC, A New Method

    NASA Astrophysics Data System (ADS)

    Simioni, E.; Re, C.; Da Deppo, V.; Naletto, G.; Borrelli, D.; Dami, M.; Ficai Veltroni, I.; Cremonese, G.

    2014-10-01

    In the framework of the ESA-JAXA BepiColombo mission to Mercury, the global mapping of the planet will be performed by the on-board Stereo Camera (STC), part of the SIMBIO-SYS suite [1]. In this paper we propose a new technique for the validation of the 3D reconstruction of a planetary surface from images acquired with a stereo camera. STC will provide a three-dimensional reconstruction of the Mercury surface. The generation of a DTM of the observed features is based on the processing of the acquired images and on the knowledge of the intrinsic and extrinsic parameters of the optical system. The new stereo concept developed for STC needs a pre-flight verification of its actual capability to obtain elevation information from stereo couples: for this, a stereo validation setup providing an indoor reproduction of the in-flight observing conditions of the instrument gives much greater confidence in the developed instrument design. STC is the first stereo satellite camera with two optical channels converging on a unique sensor. Its optical model is based on a brand-new concept to minimize mass and volume and to allow push-frame imaging. This design made it necessary to define a new calibration pipeline to test the reconstruction method in a controlled environment. An ad hoc indoor setup has been realized for validating an instrument designed to operate in deep space, i.e., in flight STC will have to deal with a source/target essentially placed at infinity. This auxiliary indoor setup permits, on one side, rescaling the stereo reconstruction problem from the in-flight operative distance of 400 km to almost 1 meter in the lab; on the other side, it allows replicating different viewing angles for the considered targets. Neglecting, for the sake of simplicity, the Mercury curvature, the STC observing geometry of the same portion of the planet surface at periherm corresponds to a rotation of the spacecraft (SC) around the observed target by twice the 20° separation of each channel with respect to nadir

  17. Multimodality Non-Rigid Image Registration for Planning, Targeting and Monitoring during CT-guided Percutaneous Liver Tumor Cryoablation

    PubMed Central

    Elhawary, Haytham; Oguro, Sota; Tuncali, Kemal; Morrison, Paul R.; Tatli, Servet; Shyn, Paul B.; Silverman, Stuart G.; Hata, Nobuhiko

    2010-01-01

    Rationale and Objectives To develop non-rigid image registration between pre-procedure contrast enhanced MR images and intra-procedure unenhanced CT images, to enhance tumor visualization and localization during CT-guided liver tumor cryoablation procedures. Materials and Methods After IRB approval, a non-rigid registration (NRR) technique was evaluated with different pre-processing steps and algorithm parameters and compared to a standard rigid registration (RR) approach. The Dice Similarity Coefficient (DSC), Target Registration Error (TRE), 95% Hausdorff distance (HD) and total registration time (minutes) were compared using a two-sided Student’s t-test. The entire registration method was then applied during five CT-guided liver cryoablation cases with the intra-procedural CT data transmitted directly from the CT scanner, with both accuracy and registration time evaluated. Results Selected optimal parameters for registration were section thickness of 5mm, cropping the field of view to 66% of its original size, manual segmentation of the liver, B-spline control grid of 5×5×5 and spatial sampling of 50,000 pixels. Mean 95% HD of 3.3mm (2.5x improvement compared to RR, p<0.05); mean DSC metric of 0.97 (13% increase); and mean TRE of 4.1mm (2.7x reduction) were measured. During the cryoablation procedure registration between the pre-procedure MR and the planning intra-procedure CT took a mean time of 10.6 minutes, the MR to targeting CT image took 4 minutes and MR to monitoring CT took 4.3 minutes. Mean registration accuracy was under 3.4mm. Conclusion Non-rigid registration allowed improved visualization of the tumor during interventional planning, targeting and evaluation of tumor coverage by the ice ball. Future work is focused on reducing segmentation time to make the method more clinically acceptable. PMID:20817574

  18. The Last Meter: Blind Visual Guidance to a Target.

    PubMed

    Manduchi, Roberto; Coughlan, James M

    2014-01-01

    Smartphone apps can use object recognition software to provide information to blind or low vision users about objects in the visual environment. A crucial challenge for these users is aiming the camera properly to take a well-framed picture of the desired target object. We investigate the effects of two fundamental constraints of object recognition - frame rate and camera field of view - on a blind person's ability to use an object recognition smartphone app. The app was used by 18 blind participants to find visual targets beyond arm's reach and approach them to within 30 cm. While we expected that a faster frame rate or wider camera field of view should always improve search performance, our experimental results show that in many cases increasing the field of view does not help, and may even hurt, performance. These results have important implications for the design of object recognition systems for blind users.

  19. Solid state television camera

    NASA Technical Reports Server (NTRS)

    1976-01-01

    The design, fabrication, and tests of a solid state television camera using a new charge-coupled imaging device are reported. An RCA charge-coupled device arranged in a 512 by 320 format and directly compatible with EIA format standards was the sensor selected. This is a three-phase, sealed surface-channel array that has 163,840 sensor elements, which employs a vertical frame transfer system for image readout. Included are test results of the complete camera system, circuit description and changes to such circuits as a result of integration and test, maintenance and operation section, recommendations to improve the camera system, and a complete set of electrical and mechanical drawing sketches.

  20. An Automatic Portable Telecine Camera.

    DTIC Science & Technology

    1978-08-01

    five television frames to achieve synchronous operation, that is about 0.2 second. 6.3 Video recorder noise immunity The synchronisation pulse separator...display is filmed by a modified 16 mm cine camera driven by a control unit in which the camera supply voltage is derived from the field synchronisation...pulses of the video signal. Automatic synchronisation of the camera mechanism is achieved over a wide range of television field frequencies and the

  1. Towards next generation 3D cameras

    NASA Astrophysics Data System (ADS)

    Gupta, Mohit

    2017-03-01

    We are in the midst of a 3D revolution. Robots enabled by 3D cameras are beginning to autonomously drive cars, perform surgeries, and manage factories. However, when deployed in the real-world, these cameras face several challenges that prevent them from measuring 3D shape reliably. These challenges include large lighting variations (bright sunlight to dark night), presence of scattering media (fog, body tissue), and optically complex materials (metal, plastic). Due to these factors, 3D imaging is often the bottleneck in widespread adoption of several key robotics technologies. I will talk about our work on developing 3D cameras based on time-of-flight and active triangulation that addresses these long-standing problems. This includes designing `all-weather' cameras that can perform high-speed 3D scanning in harsh outdoor environments, as well as cameras that recover shape of objects with challenging material properties. These cameras are, for the first time, capable of measuring detailed (<100 microns resolution) scans in extremely demanding scenarios with low-cost components. Several of these cameras are making a practical impact in industrial automation, being adopted in robotic inspection and assembly systems.
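
    For the time-of-flight cameras mentioned above, depth follows from the phase shift of an amplitude-modulated illumination signal. A minimal sketch of the standard phase-to-depth relation, d = c·Δφ / (4π·f); the 30 MHz modulation frequency is an example value, not one from the talk.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, mod_freq_hz):
    """Depth from the measured phase shift of an amplitude-modulated
    time-of-flight signal. The light travels out and back, hence the
    factor of 2 folded into 4*pi. Unambiguous only within half the
    modulation wavelength (c / (2 * f))."""
    return C * phase_rad / (4.0 * math.pi * mod_freq_hz)

# A phase shift of pi at 30 MHz corresponds to ~2.5 m.
d = tof_depth(math.pi, 30e6)
```

    The challenges the abstract lists (ambient sunlight, scattering media, multipath from shiny materials) all corrupt the measured phase; the cited work modifies the coding and optics around this basic relation rather than the relation itself.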

  2. Non-Invasive Targeted Peripheral Nerve Ablation Using 3D MR Neurography and MRI-Guided High-Intensity Focused Ultrasound (MR-HIFU): Pilot Study in a Swine Model.

    PubMed

    Huisman, Merel; Staruch, Robert M; Ladouceur-Wodzak, Michelle; van den Bosch, Maurice A; Burns, Dennis K; Chhabra, Avneesh; Chopra, Rajiv

    2015-01-01

    Ultrasound (US)-guided high intensity focused ultrasound (HIFU) has been proposed for noninvasive treatment of neuropathic pain and has been investigated in in-vivo studies. However, ultrasound has important limitations regarding treatment guidance and temperature monitoring. Magnetic resonance (MR)-imaging guidance may overcome these limitations and MR-guided HIFU (MR-HIFU) has been used successfully for other clinical indications. The primary purpose of this study was to evaluate the feasibility of utilizing 3D MR neurography to identify and guide ablation of peripheral nerves using a clinical MR-HIFU system. Volumetric MR-HIFU was used to induce lesions in the peripheral nerves of the lower limbs in three pigs. Diffusion-prep MR neurography and T1-weighted images were utilized to identify the target, plan treatment and immediate post-treatment evaluation. For each treatment, one 8 or 12 mm diameter treatment cell was used (sonication duration 20 s and 36 s, power 160-300 W). Peripheral nerves were extracted < 3 hours after treatment. Ablation dimensions were calculated from thermal maps, post-contrast MRI and macroscopy. Histological analysis included standard H&E staining, Masson's trichrome and toluidine blue staining. All targeted peripheral nerves were identifiable on MR neurography and T1-weighted images and could be accurately ablated with a single exposure of focused ultrasound, with peak temperatures of 60.3 to 85.7°C. The lesion dimensions as measured on MR neurography were similar to the lesion dimensions as measured on CE-T1, thermal dose maps, and macroscopy. Histology indicated major hyperacute peripheral nerve damage, mostly confined to the location targeted for ablation. Our preliminary results indicate that targeted peripheral nerve ablation is feasible with MR-HIFU. Diffusion-prep 3D MR neurography has potential for guiding therapy procedures where either nerve targeting or avoidance is desired, and may also have potential for post
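
    Thermal dose maps like those mentioned above are conventionally expressed in cumulative equivalent minutes at 43 °C (CEM43). Below is a minimal sketch of the standard Sapareto-Dewey model with the usual R = 0.5 / 0.25 constants; the study's exact dose computation and sampling are not specified here, so this is illustrative only.

```python
def cem43(samples):
    """Cumulative equivalent minutes at 43 C from a temperature history.

    samples: iterable of (temperature_C, duration_minutes) pairs.
    Standard constants: R = 0.5 for T >= 43 C, R = 0.25 below.
    """
    dose = 0.0
    for temp_c, minutes in samples:
        r = 0.5 if temp_c >= 43.0 else 0.25
        dose += minutes * r ** (43.0 - temp_c)
    return dose

# 20 s (1/3 min) at the study's peak temperatures accumulates a huge dose,
# consistent with the observed hyperacute nerve damage.
peak_dose = cem43([(60.3, 20.0 / 60.0)])
```

    In practice the MR thermometry time series per voxel is fed through this sum to produce the thermal dose maps used to delineate the ablated volume.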

  3. Non-Invasive Targeted Peripheral Nerve Ablation Using 3D MR Neurography and MRI-Guided High-Intensity Focused Ultrasound (MR-HIFU): Pilot Study in a Swine Model

    PubMed Central

    Huisman, Merel; Staruch, Robert M.; Ladouceur-Wodzak, Michelle; van den Bosch, Maurice A.; Burns, Dennis K.; Chhabra, Avneesh; Chopra, Rajiv

    2015-01-01

    Purpose Ultrasound (US)-guided high intensity focused ultrasound (HIFU) has been proposed for noninvasive treatment of neuropathic pain and has been investigated in in-vivo studies. However, ultrasound has important limitations regarding treatment guidance and temperature monitoring. Magnetic resonance (MR)-imaging guidance may overcome these limitations and MR-guided HIFU (MR-HIFU) has been used successfully for other clinical indications. The primary purpose of this study was to evaluate the feasibility of utilizing 3D MR neurography to identify and guide ablation of peripheral nerves using a clinical MR-HIFU system. Methods Volumetric MR-HIFU was used to induce lesions in the peripheral nerves of the lower limbs in three pigs. Diffusion-prep MR neurography and T1-weighted images were utilized to identify the target, plan treatment and immediate post-treatment evaluation. For each treatment, one 8 or 12 mm diameter treatment cell was used (sonication duration 20 s and 36 s, power 160–300 W). Peripheral nerves were extracted < 3 hours after treatment. Ablation dimensions were calculated from thermal maps, post-contrast MRI and macroscopy. Histological analysis included standard H&E staining, Masson’s trichrome and toluidine blue staining. Results All targeted peripheral nerves were identifiable on MR neurography and T1-weighted images and could be accurately ablated with a single exposure of focused ultrasound, with peak temperatures of 60.3 to 85.7°C. The lesion dimensions as measured on MR neurography were similar to the lesion dimensions as measured on CE-T1, thermal dose maps, and macroscopy. Histology indicated major hyperacute peripheral nerve damage, mostly confined to the location targeted for ablation. Conclusion Our preliminary results indicate that targeted peripheral nerve ablation is feasible with MR-HIFU. Diffusion-prep 3D MR neurography has potential for guiding therapy procedures where either nerve targeting or avoidance is desired, and may

  4. Localizing people in crosswalks with a moving handheld camera: proof of concept

    NASA Astrophysics Data System (ADS)

    Lalonde, Marc; Chapdelaine, Claude; Foucher, Samuel

    2015-02-01

    Although tracking of people or objects in uncontrolled environments has been widely addressed in the literature, the accurate localization of a subject with respect to a reference ground plane remains a major issue. This study describes an early prototype for the tracking and localization of pedestrians with a handheld camera. One application envisioned here is to analyze the trajectories of blind people going across long crosswalks when following different audio signals as a guide. This kind of study is generally conducted manually, with an observer following a subject and logging his/her current position at regular time intervals with respect to a white grid painted on the ground. This study aims at automating the manual logging activity: with a marker attached to the subject's foot, a video of the crossing is recorded by a person following the subject, and a semi-automatic tool analyzes the video and estimates the trajectory of the marker with respect to the painted markings. Challenges include robustness to variations in lighting conditions (shadows, etc.), occlusions, and changes in camera viewpoint. Results are promising when compared to GNSS measurements.
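
    Once the camera's view of the painted grid is known for a frame, mapping the marker's pixel position to ground coordinates reduces to applying a plane-to-plane homography. The sketch below shows only the projective division involved; the 3x3 matrix H is made up for illustration, and the prototype's actual homography-estimation pipeline is not described in the abstract.

```python
def apply_homography(H, u, v):
    """Map pixel (u, v) to ground-plane coordinates via a 3x3 homography H
    (a list of three rows). The division by w is the projective step."""
    x = H[0][0] * u + H[0][1] * v + H[0][2]
    y = H[1][0] * u + H[1][1] * v + H[1][2]
    w = H[2][0] * u + H[2][1] * v + H[2][2]
    return x / w, y / w

# Hypothetical homography: scale by 2 and shift x by 1 ground unit.
H = [[2.0, 0.0, 1.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]
ground = apply_homography(H, 3.0, 4.0)
```

    Because the camera is handheld and moving, H must be re-estimated per frame (e.g., from detected grid corners), which is where the robustness challenges listed above come in.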

  5. Moving target feature phenomenology data collection at China Lake

    NASA Astrophysics Data System (ADS)

    Gross, David C.; Hill, Jeff; Schmitz, James L.

    2002-08-01

    This paper describes the DARPA Moving Target Feature Phenomenology (MTFP) data collection conducted at the China Lake Naval Weapons Center's Junction Ranch in July 2001. The collection featured both X-band and Ku-band radars positioned on top of Junction Ranch's Parrot Peak. The test included seven targets used in eleven configurations with vehicle motion consisting of circular, straight-line, and 90-degree turning motion. Data was collected at 10-degree and 17-degree depression angles. Key parameters in the collection were polarization, vehicle speed, and road roughness. The collection also included a canonical target positioned at Junction Ranch's tilt-deck turntable. The canonical target included rotating wheels (military truck tire and civilian pick-up truck tire) and a flat plate with variable positioned corner reflectors. The canonical target was also used to simulate a rotating antenna and a vibrating plate. The target vehicles were instrumented with ARDS pods for differential GPS and roll, pitch and yaw measurements. Target motion was also documented using a video camera slaved to the X-band radar antenna and by a video camera operated near the target site.

  6. Using the Dual-Target Cost to Explore the Nature of Search Target Representations

    ERIC Educational Resources Information Center

    Stroud, Michael J.; Menneer, Tamaryn; Cave, Kyle R.; Donnelly, Nick

    2012-01-01

    Eye movements were monitored to examine search efficiency and infer how color is mentally represented to guide search for multiple targets. Observers located a single color target very efficiently by fixating colors similar to the target. However, simultaneous search for 2 colors produced a dual-target cost. In addition, as the similarity between…

  7. Automatic multi-camera calibration for deployable positioning systems

    NASA Astrophysics Data System (ADS)

    Axelsson, Maria; Karlsson, Mikael; Rudner, Staffan

    2012-06-01

    Surveillance with automated positioning and tracking of subjects and vehicles in 3D is desired in many defence and security applications. Camera systems with stereo or multiple cameras are often used for 3D positioning. In such systems, accurate camera calibration is needed to obtain a reliable 3D position estimate. There is also a need for automated camera calibration to facilitate fast deployment of semi-mobile multi-camera 3D positioning systems. In this paper we investigate a method for automatic calibration of the extrinsic camera parameters (relative camera pose and orientation) of a multi-camera positioning system. It is based on estimation of the essential matrix between each camera pair using the 5-point method for intrinsically calibrated cameras. The method is compared to a manual calibration method using real HD video data from a field trial with a multi-camera positioning system. The method is also evaluated on simulated data from a stereo camera model. The results show that the reprojection error of the automated camera calibration method is close to or smaller than the error for the manual calibration method and that the automated calibration method can replace the manual calibration.
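
    The reprojection error used above to compare the automatic and manual calibrations is simply the RMS image-plane distance between observed points and the points reprojected through the estimated calibration. A minimal sketch of that metric (the paper's point sets and calibration model are not reproduced here):

```python
import math

def rms_reprojection_error(observed, reprojected):
    """RMS distance in pixels between observed image points and the
    corresponding points reprojected through an estimated calibration.
    Both arguments are equal-length sequences of (u, v) pairs."""
    assert len(observed) == len(reprojected) and observed
    total = sum((u - x) ** 2 + (v - y) ** 2
                for (u, v), (x, y) in zip(observed, reprojected))
    return math.sqrt(total / len(observed))

# Toy example: every point is off by exactly one pixel.
err = rms_reprojection_error([(0.0, 0.0), (1.0, 1.0)],
                             [(0.0, 1.0), (1.0, 0.0)])
```

    A lower value for the 5-point-based automatic calibration than for the manual one is exactly the evidence the paper uses to argue the manual step can be replaced.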

  8. Calibration Procedures on Oblique Camera Setups

    NASA Astrophysics Data System (ADS)

    Kemper, G.; Melykuti, B.; Yu, C.

    2016-06-01

    Besides the creation of virtual animated 3D city models and analysis for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and will present examples of the calibration flight with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single camera calibrations and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix and is equipped with a 50 mm lens, while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro-mount which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. In a first step with the help of

  9. Use of a plastic insulin dosage guide to correct blood glucose levels out of the target range and for carbohydrate counting in subjects with type 1 diabetes.

    PubMed

    Kaufman, F R; Halvorson, M; Carpenter, S

    1999-08-01

    To improve glycemic control, a hand-held plastic Insulin Dosage Guide was developed to correct blood glucose levels outside of the target range. Protocol 1: Some 40 children (mean age 10.6+/-4.6 years) were randomly assigned for 3 months to use a written-on-paper algorithm or the Insulin Dosage Guide to correct abnormal blood glucose levels. Mean HbA1c and blood glucose levels and time to teach insulin dosage correction were compared. Protocol 2: The Insulin Dosage Guide was used by 83 subjects (mean age 11.4+/-4.3 years) for 1 year, and mean HbA1c levels, blood glucose levels, and number of consecutive high blood glucose values taken before and after the year were compared. Protocol 3: Some 20 patients (mean age 10.1+/-3.7 years) using rapid-acting insulin and 64 patients (mean age 15.9+/-3.6 years) using an insulin pump and rapid-acting insulin used the Insulin Dosage Guide and had mean blood glucose levels, HbA1c, and percentage of blood glucose levels outside of the target range determined. Protocol 1: There was a significant reduction in mean HbA1c (P = 0.04) and blood glucose levels (P = 0.05) and in the time needed to teach how to correct blood glucose values using the Insulin Dosage Guide compared with the paper algorithm. Protocol 2: There was a decrease in mean HbA1c levels (P = 0.0001) and a decrease in the mean number of consecutive blood glucose levels (P = 0.001) over the 1-year time period. Protocol 3: With rapid-acting insulin, there was a significant increase in the percentage of blood glucose levels within the target range (1 month, P = 0.04; at 3 months, P = 0.03). With the insulin pump, there was a high rate (90%) of blood glucose levels in the target range during pump initiation when the Insulin Dosage Guide was used. This inexpensive hand-held plastic card, which is portable and easy to use, may help patients improve glycemia and successfully manage diabetes.
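
    A plastic dosage card of this kind typically encodes simple correction-factor and carbohydrate-ratio arithmetic. The sketch below uses the common sensitivity-factor formulation purely as an illustration: the function names and the numbers are hypothetical, not the card's actual tables, and real dosing must follow a clinician's prescription.

```python
def correction_units(bg_mg_dl, target_mg_dl, sensitivity_mg_dl_per_unit):
    """Units of rapid-acting insulin to bring a high blood glucose reading
    (mg/dL) back to target; sensitivity is the patient-specific drop in
    mg/dL expected per unit. Never negative: lows are not 'corrected up'
    with insulin."""
    return max(0.0, (bg_mg_dl - target_mg_dl) / sensitivity_mg_dl_per_unit)

def carb_units(carb_grams, grams_per_unit):
    """Units to cover a meal, from grams of carbohydrate and the
    patient-specific insulin-to-carbohydrate ratio."""
    return carb_grams / grams_per_unit

# Hypothetical patient: sensitivity 50 mg/dL per unit, ratio 15 g per unit.
dose = correction_units(250, 100, 50) + carb_units(60, 15)
```

    The card's value, as the study shows, is that it puts exactly this lookup within reach of a child or caregiver without the longer teaching time a written algorithm required.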

  10. Relative and Absolute Calibration of a Multihead Camera System with Oblique and Nadir Looking Cameras for a Uas

    NASA Astrophysics Data System (ADS)

    Niemeyer, F.; Schima, R.; Grenzdörffer, G.

    2013-08-01

    Numerous unmanned aerial systems (UAS) are currently flooding the market. UAVs are specially designed and used for the most diverse applications. Micro and mini UAS (maximum take-off weight up to 5 kg) are of particular interest, because the legal restrictions are still manageable and the payload capacities are sufficient for many imaging sensors. Currently a camera system with four oblique and one nadir-looking camera is under development at the Chair for Geodesy and Geoinformatics. The so-called "Four Vision" camera system was successfully built and tested in the air. An MD4-1000 UAS from microdrones is used as the carrier system. Lightweight industrial cameras are used and controlled by a central computer. For further photogrammetric image processing, each individual camera, as well as all the cameras together, has to be calibrated. This paper focuses on the determination of the relative orientation between the cameras with the "Australis" software and gives an overview of the results and experiences of the test flights.

  11. MAHLI Calibration Target in Ultraviolet Light

    NASA Image and Video Library

    2012-02-07

    During pre-flight testing in March 2011, the Mars Hand Lens Imager (MAHLI) camera on NASA's Mars rover Curiosity took this image of the MAHLI calibration target under illumination from MAHLI's two ultraviolet light-emitting diodes (LEDs).

  12. Restoration planning to guide Aichi targets in a megadiverse country.

    PubMed

    Tobón, Wolke; Urquiza-Haas, Tania; Koleff, Patricia; Schröter, Matthias; Ortega-Álvarez, Rubén; Campo, Julio; Lindig-Cisneros, Roberto; Sarukhán, José; Bonn, Aletta

    2017-10-01

    Ecological restoration has become an important strategy to conserve biodiversity and ecosystems services. To restore 15% of degraded ecosystems as stipulated by the Convention on Biological Diversity Aichi target 15, we developed a prioritization framework to identify potential priority sites for restoration in Mexico, a megadiverse country. We used the most current biological and environmental data on Mexico to assess areas of biological importance and restoration feasibility at national scale and engaged stakeholders and experts throughout the process. We integrated 8 criteria into 2 components (i.e., biological importance and restoration feasibility) in a spatial multicriteria analysis and generated 11 scenarios to test the effect of assigning different component weights. The priority restoration sites were distributed across all terrestrial ecosystems of Mexico; 64.1% were in degraded natural vegetation and 6% were in protected areas. Our results provide a spatial guide to where restoration could enhance the persistence of species of conservation concern and vulnerable ecosystems while maximizing the likelihood of restoration success. Such spatial prioritization is a first step in informing policy makers and restoration planners where to focus local and large-scale restoration efforts, which should additionally incorporate social and monetary cost-benefit considerations. © 2017 The Authors. Conservation Biology published by Wiley Periodicals, Inc. on behalf of Society for Conservation Biology.

  13. Camera Operator and Videographer

    ERIC Educational Resources Information Center

    Moore, Pam

    2007-01-01

    Television, video, and motion picture camera operators produce images that tell a story, inform or entertain an audience, or record an event. They use various cameras to shoot a wide range of material, including television series, news and sporting events, music videos, motion pictures, documentaries, and training sessions. Those who film or…

  14. System Synchronizes Recordings from Separated Video Cameras

    NASA Technical Reports Server (NTRS)

    Nail, William; Nail, William L.; Nail, Jasper M.; Le, Doung T.

    2009-01-01

    A system of electronic hardware and software for synchronizing recordings from multiple, physically separated video cameras is being developed, primarily for use in multiple-look-angle video production. The system, the time code used in the system, and the underlying method of synchronization upon which the design of the system is based are denoted generally by the term "Geo-TimeCode(TradeMark)." The system is embodied mostly in compact, lightweight, portable units (see figure) denoted video time-code units (VTUs) - one VTU for each video camera. The system is scalable in that any number of camera recordings can be synchronized. The estimated retail price per unit would be about $350 (in 2006 dollars). The need for this or another synchronization system external to video cameras arises because most video cameras do not include internal means for maintaining synchronization with other video cameras. Unlike prior video-camera-synchronization systems, this system does not depend on continuous cable or radio links between cameras (however, it does depend on occasional cable links lasting a few seconds). Also, whereas the time codes used in prior video-camera-synchronization systems typically repeat after 24 hours, the time code used in this system does not repeat for slightly more than 136 years; hence, this system is much better suited for long-term deployment of multiple cameras.
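    The article gives only the repeat period, not the encoding; a free-running 32-bit seconds counter (an assumption, not the Geo-TimeCode specification) is one scheme that reproduces the quoted "slightly more than 136 years" figure:

    ```python
    # Sketch of the quoted repeat period.  Assumption: the record does not state
    # the time-code encoding; a free-running 32-bit seconds counter is one scheme
    # that repeats after "slightly more than 136 years".
    SECONDS_PER_YEAR = 365.25 * 24 * 3600.0   # Julian year in seconds

    def rollover_years(counter_bits: int) -> float:
        """Years until a seconds counter of the given bit width wraps around."""
        return (2 ** counter_bits) / SECONDS_PER_YEAR

    print(f"{rollover_years(32):.1f} years")  # about 136.1 years
    ```

    A 24-hour SMPTE-style time code, by contrast, wraps daily, which is why it is unsuitable for long-term multi-camera deployments.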

  15. Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System

    NASA Astrophysics Data System (ADS)

    Bethmann, F.; Luhmann, T.

    2012-07-01

The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: first, the system will be miniaturized and integrated into an operating microscope for neurosurgery; second, due to miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements are especially sensitive to weak geometric configurations between camera and object. In addition, the achievable accuracy depends significantly on the geometric properties of the tracked objects (locators): besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated with the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
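    The Monte-Carlo idea can be sketched on a toy measurement model (assumed stereo depth-from-disparity geometry, not the paper's single-camera 6DOF locator model): perturb the image observations with Gaussian noise and read the measurement deviation off the resulting sample spread.

    ```python
    import numpy as np

    # Minimal Monte-Carlo sketch of measurement-deviation estimation.  The
    # geometry is an assumed toy model (depth from stereo disparity), chosen
    # only to show the perturb-and-propagate pattern.
    rng = np.random.default_rng(0)

    f_px = 2000.0      # focal length in pixels (assumed)
    baseline_m = 0.1   # stereo baseline in metres (assumed)
    depth_m = 1.0      # true target depth
    sigma_px = 0.05    # 1-sigma image localization noise (assumed)

    d_true = f_px * baseline_m / depth_m               # true disparity, pixels
    d_noisy = d_true + rng.normal(0.0, sigma_px, 100_000)
    depth_samples = f_px * baseline_m / d_noisy        # back-project each trial

    print(f"estimated depth std: {depth_samples.std():.2e} m")
    ```

    The same loop generalizes to 6DOF pose estimation: perturb all target image points, re-solve the pose per trial, and collect the spread of each pose parameter.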

  16. Optoelectronic System Measures Distances to Multiple Targets

    NASA Technical Reports Server (NTRS)

    Liebe, Carl Christian; Abramovici, Alexander; Bartman, Randall; Chapsky, Jacob; Schmalz, John; Coste, Keith; Litty, Edward; Lam, Raymond; Jerebets, Sergei

    2007-01-01

An optoelectronic metrology apparatus now at the laboratory-prototype stage of development is intended to repeatedly determine distances of as much as several hundred meters, at submillimeter accuracy, to multiple targets in rapid succession. The underlying concept of optoelectronic apparatuses that can measure distances to targets is not new; such apparatuses are commonly used in general surveying and machining. However, until now such apparatuses have been constrained to either (1) a single target or (2) multiple targets with a low update rate and a requirement for some a priori knowledge of target geometry. When fully developed, the present apparatus would enable measurement of distances to more than 50 targets at an update rate greater than 10 Hz, without a requirement for a priori knowledge of target geometry. The apparatus (see figure) includes a laser ranging unit (LRU) that contains an electronic camera (photodetector), the field of view of which contains all relevant targets. Each target, mounted at a fiducial position on an object of interest, consists of a small lens at the output end of an optical fiber that extends from the object of interest back to the LRU. For each target and its optical fiber, there is a dedicated laser that illuminates the target via the optical fiber. The targets are illuminated, one at a time, with laser light that is modulated at a frequency of 10.01 MHz. The modulated light is emitted by the target and returns to the camera (photodetector), where it is detected. Both the outgoing and incoming 10.01-MHz laser signals are mixed with a 10-MHz local-oscillator signal to obtain beat notes at 10 kHz, and the difference between the phases of the beat notes is measured by a phase meter. This phase difference serves as a measure of the total length of the path traveled by light going out through the optical fiber and returning to the camera (photodetector) through free space. Because the portion of the path
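    The phase-to-distance conversion follows directly from the modulation frequency. This sketch uses only the 10.01 MHz figure quoted in the record; the function name is illustrative:

    ```python
    import math

    C = 299_792_458.0        # speed of light, m/s
    F_MOD = 10.01e6          # modulation frequency quoted in the record, Hz

    def path_length_from_phase(delta_phi_rad: float) -> float:
        """Total optical path corresponding to a measured beat-note phase
        difference.  Unambiguous only within one modulation wavelength
        (c / F_MOD, about 30 m); longer paths need ambiguity resolution."""
        return (delta_phi_rad / (2.0 * math.pi)) * (C / F_MOD)

    print(f"{path_length_from_phase(math.pi):.3f} m")
    ```

    A half-cycle phase shift thus corresponds to roughly 15 m of total path, which is why the system can resolve submillimeter changes only once the integer number of modulation wavelengths is known.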

  17. Camera/Photometer Results

    NASA Astrophysics Data System (ADS)

    Clifton, K. S.; Owens, J. K.

    1983-04-01

Efforts continue regarding the analysis of particulate contamination recorded by the Camera/Photometers on STS-2. These systems were constructed by Epsilon Laboratories, Inc. and consisted of two 16-mm photographic cameras, using Kodak Double X film, Type 7222, to make stereoscopic observations of contaminant particles and background. Each was housed within a pressurized canister and operated automatically throughout the mission, making simultaneous exposures on a continuous basis every 150 sec. The cameras were equipped with 18-mm f/0.9 lenses and subtended overlapping 20° fields of view. An integrating photometer was used to inhibit the exposure sequences during periods of excessive illumination and to terminate the exposures at preset light levels. During the exposures, a camera shutter operated in a chopping mode in order to isolate the movement of particles for velocity determinations. Calculations based on the preflight film calibration indicate that particles as small as 25 μm can be detected under ideal observing conditions. Current emphasis is placed on the digitization of the photographic data frames and the determination of particle distances, sizes, and velocities. It has been concluded that background brightness measurements cannot be established with any reliability on the STS-2 mission, due to the preponderance of Earth-directed attitudes and the incidence of light reflected from nearby surfaces.

  18. A near-Infrared SETI Experiment: Alignment and Astrometric precision

    NASA Astrophysics Data System (ADS)

    Duenas, Andres; Maire, Jerome; Wright, Shelley; Drake, Frank D.; Marcy, Geoffrey W.; Siemion, Andrew; Stone, Remington P. S.; Tallis, Melisa; Treffers, Richard R.; Werthimer, Dan

    2016-06-01

Beginning in March 2015, a Near-InfraRed Optical SETI (NIROSETI) instrument, which searches for fast nanosecond laser pulses, has been commissioned on the Nickel 1-m telescope at Lick Observatory. The NIROSETI instrument uses an optical guide camera, a SONY ICX694 CCD from PointGrey, to align the selected sources onto two 200 µm near-infrared Avalanche Photo Diodes (APDs), each with a field of view of 2.5" x 2.5". These APD detectors operate at very fast bandwidths and are able to detect pulse widths extending down into the nanosecond range. Aligning sources onto these relatively small detectors requires characterizing the guide camera plate scale, the static optical distortion solution, and the relative orientation with respect to the APD detectors. We determined the guide camera plate scale to be 55.9 ± 2.7 milliarcseconds/pixel and the magnitude limit to be 18.15 mag (+1.07/−0.58) in V-band. We will present the full distortion solution of the guide camera, its orientation, and our alignment method between the camera and the two APDs, and will discuss target selection within the NIROSETI observational campaign, including coordination with Breakthrough Listen.

  19. Transmission electron microscope CCD camera

    DOEpatents

    Downing, Kenneth H.

    1999-01-01

    In order to improve the performance of a CCD camera on a high voltage electron microscope, an electron decelerator is inserted between the microscope column and the CCD. This arrangement optimizes the interaction of the electron beam with the scintillator of the CCD camera while retaining optimization of the microscope optics and of the interaction of the beam with the specimen. Changing the electron beam energy between the specimen and camera allows both to be optimized.

  20. Differentiating Biological Colours with Few and Many Sensors: Spectral Reconstruction with RGB and Hyperspectral Cameras

    PubMed Central

    Garcia, Jair E.; Girard, Madeline B.; Kasumovic, Michael; Petersen, Phred; Wilksch, Philip A.; Dyer, Adrian G.

    2015-01-01

Background The ability to discriminate between two similar or progressively dissimilar colours is important for many animals, as it allows for accurately interpreting visual signals produced by key target stimuli or distractor information. Spectrophotometry objectively measures the spectral characteristics of these signals, but is often limited to point samples that could underestimate spectral variability within a single sample. Algorithms for RGB images, and digital imaging devices with many more than three channels (hyperspectral cameras), have recently been developed to produce image spectrophotometers that recover reflectance spectra at individual pixel locations. We compare a linearised RGB camera and a hyperspectral camera in terms of their individual capacities to discriminate between colour targets of varying perceptual similarity for a human observer. Main Findings (1) The colour discrimination power of the RGB device depends on the colour similarity between the samples, whilst the hyperspectral device enables the reconstruction of a unique spectrum for each sampled pixel location independently of chromatic appearance. (2) Uncertainty associated with spectral reconstruction from RGB responses results from the joint effect of metamerism and spectral variability within a single sample. Conclusion (1) RGB devices give a valuable insight into the limitations of colour discrimination with a low number of photoreceptors, as the principles involved in the interpretation of photoreceptor signals in trichromatic animals also apply to RGB camera responses. (2) The hyperspectral camera architecture provides the means to explore other important aspects of colour vision, like the perception of certain types of camouflage and colour constancy, where multiple narrow-band sensors increase resolution. PMID:25965264
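    The metamerism limitation described in the findings can be demonstrated numerically. In this sketch the three sensor sensitivity curves are assumed Gaussians, not any particular camera's responses: three numbers per pixel cannot pin down a 31-sample spectrum, so a pseudo-inverse reconstruction reproduces the RGB values exactly while missing the true spectrum.

    ```python
    import numpy as np

    # Sketch of why RGB spectral reconstruction is underdetermined (metamerism).
    # The sensitivity curves are assumed Gaussians, not measured camera responses.
    wl = np.linspace(400.0, 700.0, 31)                  # wavelengths, nm
    peaks = np.array([450.0, 550.0, 600.0])             # assumed B, G, R peaks
    S = np.exp(-((wl[None, :] - peaks[:, None]) / 40.0) ** 2)  # 3 x 31 matrix

    reflectance = 0.5 + 0.4 * np.sin(wl / 40.0)         # arbitrary test spectrum
    rgb = S @ reflectance                               # camera response

    recon = np.linalg.pinv(S) @ rgb                     # least-norm reconstruction

    print("RGB reproduced exactly:", np.allclose(S @ recon, rgb))
    print("max spectral error:", np.max(np.abs(recon - reflectance)))
    ```

    The reconstruction lies in the 3-dimensional row space of the sensitivity matrix, so infinitely many spectra (metamers) map to the same RGB triple; a hyperspectral camera sidesteps this by sampling many narrow bands directly.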

  1. IGF-1 receptor targeted nanoparticles for image-guided therapy of stroma-rich and drug resistant human cancer

    NASA Astrophysics Data System (ADS)

    Zhou, Hongyu; Qian, Weiping; Uckun, Fatih M.; Zhou, Zhiyang; Wang, Liya; Wang, Andrew; Mao, Hui; Yang, Lily

    2016-05-01

Low drug delivery efficiency and drug resistance arising from highly heterogeneous cancer cells and the tumor microenvironment represent major challenges in clinical oncology. The growth factor receptor IGF-1R is overexpressed in both human tumor cells and tumor-associated stromal cells, and its expression is further up-regulated in drug-resistant tumor cells. We have developed IGF-1R targeted magnetic iron oxide nanoparticles (IONPs) that carry multiple anticancer drugs into human tumors. This IGF-1R targeted theranostic nanoparticle delivery system has an iron core for non-invasive MR imaging and an amphiphilic polymer coating that ensures biocompatibility, carries the drug load, and bears conjugated recombinant human IGF-1 as the targeting molecule. The chemotherapy drug doxorubicin (Dox) was encapsulated into the polymer coating and/or conjugated to the IONP surface by coupling with the carboxyl groups. The ability of the IGF-1R targeted theranostic nanoparticles to penetrate the tumor stromal barrier and enhance tumor cell killing has been demonstrated in human pancreatic cancer patient tissue derived xenograft (PDX) models. Repeated systemic administration of these IGF-1R targeted theranostic IONPs carrying Dox broke down the tumor stromal barrier and improved the therapeutic effect. Near-infrared (NIR) optical and MR imaging enabled noninvasive monitoring of nanoparticle drug delivery and therapeutic response. Our results demonstrate that IGF-1R targeted nanoparticles carrying multiple drugs are a promising combination-therapy approach for image-guided therapy of stroma-rich and drug-resistant human cancers, such as pancreatic cancer.

  2. Slant path range gated imaging of static and moving targets

    NASA Astrophysics Data System (ADS)

    Steinvall, Ove; Elmqvist, Magnus; Karlsson, Kjell; Gustafsson, Ove; Chevalier, Tomas

    2012-06-01

This paper reports experiments and analysis of slant-path imaging using 1.5 μm and 0.8 μm gated imaging. The investigation is a follow-up to the measurements reported last year at the laser radar conference at SPIE Orlando. The sensor, a SWIR camera, collected both passive and active images along a 2 km path over an airfield, and was elevated by a lift in steps from 1.6 to 13.5 meters. Targets were resolution charts as well as human targets; the human target held various items and performed certain tasks, some of high relevance in defence and security. One of the main purposes of this investigation was to compare the recognition of these human targets and their activities with the resolution information obtained from conventional resolution charts. Data on human targets was also collected from our rooftop laboratory at about 13 m above ground. The turbulence was measured along the path with anemometers and scintillometers. We also included the Obzerv camera working at 0.8 μm in some tests. The paper presents images for both passive and active modes obtained at different elevations and discusses the results from both technical and system perspectives.

  3. A Method to Solve Interior and Exterior Camera Calibration Parameters for Image Resection

    NASA Technical Reports Server (NTRS)

    Samtaney, Ravi

    1999-01-01

    An iterative method is presented to solve the internal and external camera calibration parameters, given model target points and their images from one or more camera locations. The direct linear transform formulation was used to obtain a guess for the iterative method, and herein lies one of the strengths of the present method. In all test cases, the method converged to the correct solution. In general, an overdetermined system of nonlinear equations is solved in the least-squares sense. The iterative method presented is based on Newton-Raphson for solving systems of nonlinear algebraic equations. The Jacobian is analytically derived and the pseudo-inverse of the Jacobian is obtained by singular value decomposition.
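    The numerical recipe in the abstract (Newton-Raphson on an overdetermined system of nonlinear equations, with the Jacobian pseudo-inverse obtained via SVD) can be illustrated on a toy problem. This sketch uses 2D trilateration rather than camera resection, and all names are illustrative:

    ```python
    import numpy as np

    # Toy illustration of the abstract's numerical recipe: Newton-Raphson on an
    # overdetermined residual, stepping with the SVD pseudo-inverse of an
    # analytic Jacobian.  The problem here is 2D trilateration (assumed example,
    # not camera resection).
    beacons = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
    truth = np.array([3.0, 4.0])
    ranges = np.linalg.norm(beacons - truth, axis=1)   # noise-free measurements

    x = np.array([5.0, 5.0])       # initial guess, e.g. from a linear estimate
    for _ in range(20):
        diff = x - beacons
        dist = np.linalg.norm(diff, axis=1)
        r = dist - ranges                      # residual vector (4 equations)
        J = diff / dist[:, None]               # analytic Jacobian d(dist)/dx
        x = x - np.linalg.pinv(J) @ r          # SVD pseudo-inverse Newton step

    print(x)
    ```

    As in the abstract's method, the overdetermined system is solved in the least-squares sense: `np.linalg.pinv` computes the Moore-Penrose pseudo-inverse via singular value decomposition.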

  4. Solid state replacement of rotating mirror cameras

    NASA Astrophysics Data System (ADS)

    Frank, Alan M.; Bartolick, Joseph M.

    2007-01-01

Rotating mirror cameras have been the mainstay of mega-frame-per-second imaging for decades. There is still no electronic camera that can match a film-based rotary mirror camera for the combination of frame count, speed, resolution, and dynamic range. Rotary mirror cameras are predominantly used in the range of 0.1 to 100 microseconds per frame, for 25 to more than a hundred frames. Electron-tube gated cameras dominate the sub-microsecond regime but are frame-count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all-solid-state architecture, dubbed the 'In-situ Storage Image Sensor' (ISIS) by Prof. Goji Etoh, has made its first appearance on the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluating the presently available technologies and on exploring the capabilities of the ISIS architecture. Although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.

  5. Optical design of portable nonmydriatic fundus camera

    NASA Astrophysics Data System (ADS)

    Chen, Weilin; Chang, Jun; Lv, Fengxian; He, Yifan; Liu, Xin; Wang, Dajiang

    2016-03-01

The fundus camera is widely used in the screening and diagnosis of retinal disease; it is a simple and widespread piece of medical equipment. Early fundus cameras dilated the pupil with a mydriatic to increase the amount of incoming light, which left patients with vertigo and blurred vision. Nonmydriatic operation is therefore the trend in fundus cameras. A desktop fundus camera is not easy to carry and is only suitable for use in the hospital, whereas a portable nonmydriatic retinal camera is convenient for patient self-examination or for medical staff visiting a patient at home. This paper presents a portable nonmydriatic fundus camera with a field of view (FOV) of 40°. Two kinds of light source are used: 590 nm light for imaging, and 808 nm light for observing the fundus at high resolving power. Ring lights and a hollow mirror are employed to suppress stray light from the center of the cornea. The camera is focused by repositioning the CCD along the optical axis, covering a diopter range between −20 m⁻¹ and +20 m⁻¹.

  6. Computing camera heading: A study

    NASA Astrophysics Data System (ADS)

    Zhang, John Jiaxiang

    2000-08-01

    An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even in internet commerce. From image sequences of a real world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard. This is because rotations and translations can have similar effects on the images, and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows identifying all computation trouble spots beforehand, and to design reliable and accurate computational optimization methods. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
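    The key invariance, that visual angles between projection rays are unaffected by rotation, is easy to check numerically (arbitrary toy rays and rotation, not the dissertation's data):

    ```python
    import numpy as np

    # Numerical check of the visual-angle invariance: rotating the camera
    # rotates both projection rays rigidly, leaving the angle between them
    # unchanged.  Rays and rotation below are arbitrary illustrative values.
    def angle(u, v):
        u = u / np.linalg.norm(u)
        v = v / np.linalg.norm(v)
        return np.arccos(np.clip(u @ v, -1.0, 1.0))

    def rot_z(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

    def rot_x(t):
        c, s = np.cos(t), np.sin(t)
        return np.array([[1.0, 0.0, 0.0], [0.0, c, -s], [0.0, s, c]])

    ray1 = np.array([0.1, 0.2, 1.0])     # projection rays of two scene points
    ray2 = np.array([-0.3, 0.05, 1.0])
    R = rot_z(0.7) @ rot_x(0.3)          # an arbitrary camera rotation

    print(np.isclose(angle(ray1, ray2), angle(R @ ray1, R @ ray2)))  # True
    ```

    A camera translation, by contrast, changes these angles over time, which is what makes the visual angle disparity usable for recovering the heading.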

  7. Voss with video camera in Service Module

    NASA Image and Video Library

    2001-04-08

    ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.

  8. Plate refractive camera model and its applications

    NASA Astrophysics Data System (ADS)

    Huang, Longxiang; Zhao, Xu; Cai, Shen; Liu, Yuncai

    2017-03-01

In real applications, a pinhole camera capturing objects through a planar parallel transparent plate is frequently employed. Due to the refractive effects of the plate, such an imaging system does not comply with the conventional pinhole camera model. Although the system is ubiquitous, it has not been thoroughly studied. This paper presents a simple virtual camera model, called a plate refractive camera model, which has a form similar to a pinhole camera model and can efficiently model refractions through a plate. The key idea is to employ a pixel-wise viewpoint concept to encode the refraction effects into a pixel-wise pinhole camera model. The proposed camera model realizes an efficient forward projection computation method and has several advantages in applications. First, the model can help to compute the caustic surface that represents the changes of the camera viewpoints. Second, the model has strengths in analyzing and rectifying the image caustic distortion caused by the plate refraction effects. Third, the model can be used to calibrate the camera's intrinsic parameters without removing the plate. Finally, the model leads to plate refractive triangulation methods that solve the plate refractive triangulation problem easily in multiple views. We verify our theory in both synthetic and real experiments.
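    For intuition on why the plate breaks the pinhole model, the classic thick-plate ray offset from Snell's law (a textbook result; symbols are generic, not the paper's notation) can be computed directly:

    ```python
    import math

    # Standard thick-plate lateral ray offset from Snell's law, used here only
    # to illustrate why a plate in front of a pinhole camera displaces rays by
    # an angle-dependent amount (and so violates the single-viewpoint model).
    def lateral_shift(thickness: float, n: float, theta_rad: float) -> float:
        """Sideways displacement of a ray crossing a parallel plate of
        refractive index n at incidence angle theta (units of thickness)."""
        s, c = math.sin(theta_rad), math.cos(theta_rad)
        return thickness * s * (1.0 - c / math.sqrt(n * n - s * s))

    print(lateral_shift(10.0, 1.5, 0.0))               # normal incidence: 0.0
    print(f"{lateral_shift(10.0, 1.5, math.radians(30)):.3f} mm")
    ```

    Because the shift varies with the incidence angle, each pixel effectively sees the world from a slightly different viewpoint, which is exactly what the paper's pixel-wise viewpoint concept encodes.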

  9. Low Noise Camera for Suborbital Science Applications

    NASA Technical Reports Server (NTRS)

    Hyde, David; Robertson, Bryan; Holloway, Todd

    2015-01-01

Low-cost, commercial-off-the-shelf- (COTS-) based science cameras are intended for lab use only and are not suitable for flight deployment, as they are difficult to ruggedize and repackage into instruments. COTS implementation may also be unsuitable because mission science objectives are tied to specific measurement requirements that often demand performance beyond that required by the commercial market. Custom camera development for each application is cost prohibitive for International Space Station (ISS) or mid-range science payloads due to the nonrecurring expenses ($2,000 K) of ground-up camera electronics design. While each new science mission has a different suite of requirements for camera performance (detector noise, speed of image acquisition, charge-coupled device (CCD) size, operating temperature, packaging, etc.), the analog-to-digital conversion, power supply, and communications can be standardized to accommodate many different applications. The low noise camera for suborbital applications is a rugged standard camera platform that can accommodate a range of detector types and science requirements for use in inexpensive to mid-range payloads supporting Earth science, solar physics, robotic vision, or astronomy experiments. Cameras developed on this platform have demonstrated the performance of custom flight cameras at a price per camera more than an order of magnitude lower.

  10. Selective-imaging camera

    NASA Astrophysics Data System (ADS)

    Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.

    2015-05-01

How can we design cameras that image selectively across the full electromagnetic (FEM) spectrum? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that create a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in the appropriate FEM spectra for sensing through the VDE, and displaying in color spectra for the Human Visual System (HVS). We sense, within the spectra, the largest entropy values of obscurants such as fire and smoke. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting Planck spectra at each pixel and time.

  11. Microprocessor-controlled, wide-range streak camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Amy E. Lewis, Craig Hollabaugh

Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.

  12. Autocalibration of a projector-camera system.

    PubMed

    Okatani, Takayuki; Deguchi, Koichiro

    2005-12-01

    This paper presents a method for calibrating a projector-camera system that consists of multiple projectors (or multiple poses of a single projector), a camera, and a planar screen. We consider the problem of estimating the homography between the screen and the image plane of the camera or the screen-camera homography, in the case where there is no prior knowledge regarding the screen surface that enables the direct computation of the homography. It is assumed that the pose of each projector is unknown while its internal geometry is known. Subsequently, it is shown that the screen-camera homography can be determined from only the images projected by the projectors and then obtained by the camera, up to a transformation with four degrees of freedom. This transformation corresponds to arbitrariness in choosing a two-dimensional coordinate system on the screen surface and when this coordinate system is chosen in some manner, the screen-camera homography as well as the unknown poses of the projectors can be uniquely determined. A noniterative algorithm is presented, which computes the homography from three or more images. Several experimental results on synthetic as well as real images are shown to demonstrate the effectiveness of the method.
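    A minimal homography-from-correspondences sketch (the standard DLT method, not necessarily the paper's noniterative algorithm) shows the kind of computation involved in recovering a plane-to-image mapping:

    ```python
    import numpy as np

    # Minimal DLT sketch: recover a 3x3 homography from four point
    # correspondences as the SVD null-space vector, then verify it up to
    # scale.  This is the textbook method, not the paper's algorithm.
    def homography_dlt(src, dst):
        rows = []
        for (x, y), (u, v) in zip(src, dst):
            rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
            rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
        _, _, vt = np.linalg.svd(np.array(rows))
        return vt[-1].reshape(3, 3)        # null-space vector -> 3x3 H

    H_true = np.array([[1.2, 0.1, 5.0],    # a synthetic ground-truth homography
                       [-0.2, 0.9, 3.0],
                       [1e-3, 2e-3, 1.0]])
    src = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
    p = (H_true @ np.column_stack([src, np.ones(4)]).T).T
    dst = p[:, :2] / p[:, 2:]              # projected image points

    H = homography_dlt(src, dst)
    print(np.allclose(H / H[2, 2], H_true))  # True
    ```

    Since a homography is defined only up to scale, the result is normalized before comparison; this scale ambiguity is the one-parameter part of the four-degree-of-freedom freedom the paper describes.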

  13. GuideLiner™ as guide catheter extension for the unreachable mammary bypass graft.

    PubMed

    Vishnevsky, Alec; Savage, Michael P; Fischman, David L

    2018-03-09

    Percutaneous coronary intervention (PCI) of mammary artery bypass grafts through a trans-radial (TR) approach can present unique challenges, including coaxial vessel engagement of the guiding catheter, adequate visualization of the target lesion, sufficient backup support for equipment delivery, and the ability to reach very distal lesions. The GuideLiner catheter, a rapid exchange monorail mother-in-daughter system, facilitates successful interventions in such challenging anatomy. We present a case of a patient undergoing PCI of a right internal mammary artery (RIMA) graft via TR access in whom the graft could not be engaged with any guiding catheter. Using a balloon tracking technique over a guidewire, a GuideLiner was placed as an extension of the guiding catheter and facilitated TR-PCI by overcoming technical challenges associated with difficult anatomy. © 2018 Wiley Periodicals, Inc.

  14. New generation of meteorology cameras

    NASA Astrophysics Data System (ADS)

    Janout, Petr; Blažek, Martin; Páta, Petr

    2017-12-01

A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of this new generation of weather-monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric pressure sensors. In this paper, we present the architecture and image data processing algorithms of this monitoring camera, along with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
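    The aberration model mentioned at the end is built on Zernike polynomials; their radial part has a standard closed form (shown here with the usual normalization, which may differ from the paper's):

    ```python
    from math import factorial

    # Standard Zernike radial polynomial R_n^m(rho), the building block of
    # wavefront-aberration models like the one the paper describes.  This is
    # the textbook definition; the paper's indexing/normalization may differ.
    def zernike_radial(n: int, m: int, rho: float) -> float:
        m = abs(m)
        if (n - m) % 2:                    # R_n^m vanishes when n - m is odd
            return 0.0
        return sum(
            (-1) ** k * factorial(n - k)
            / (factorial(k)
               * factorial((n + m) // 2 - k)
               * factorial((n - m) // 2 - k))
            * rho ** (n - 2 * k)
            for k in range((n - m) // 2 + 1)
        )

    print(zernike_radial(2, 0, 0.5))  # defocus term 2*rho^2 - 1 -> -0.5
    ```

    A spatially variant model fits a separate set of Zernike coefficients per image region, so the point-spread function can change across the wide all-sky field.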

  15. Autonomous Selection of a Rover Laser Target on Mars

    NASA Image and Video Library

    2016-07-21

    NASA's Curiosity Mars rover autonomously selects some of the targets for the laser and telescopic camera of the rover's Chemistry and Camera (ChemCam) instrument. For example, on-board software analyzed the image on the left, chose the target highlighted with the yellow dot, and pointed ChemCam to acquire laser analysis and the image on the right. Most ChemCam targets are still selected by scientists discussing rocks or soil seen in images the rover has sent to Earth, but the autonomous targeting provides an added capability. It can offer a head start on acquiring composition information at a location just reached by a drive. The software for target selection and instrument pointing is called AEGIS, for Autonomous Exploration for Gathering Increased Science. The image on the left was taken by the left eye of Curiosity's stereo Navigation Camera (Navcam) a few minutes after the rover completed a drive of about 43 feet (13 meters) on July 14, 2016, during the 1,400th Martian day, or sol, of the rover's work on Mars. Using AEGIS for target selection and pointing based on the Navcam imagery, Curiosity's ChemCam zapped a grid of nine points on a rock chosen for meeting criteria set by the science team. In this run, parameters were set to find bright-toned outcrop rock rather than darker rocks, which in this area tend to be loose on the surface. Within less than 30 minutes after the Navcam image was taken, ChemCam had used its laser on all nine points and had taken before-and-after images of the target area with its remote micro-imager (RMI) camera. The image at right combines those two RMI exposures. The nine laser targets are marked in red at the center. On the Navcam image at left, the yellow dot identifies the selected target area, which is about 2.2 inches (5.6 centimeters) in diameter. An unannotated version of this Sol 1400 Navcam image is available. ChemCam records spectra of glowing plasma generated when the laser hits a target point. These spectra provide

  16. Neutron counting with cameras

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Van Esch, Patrick; Crisanti, Marta; Mutti, Paolo

    2015-07-01

    A research project is presented in which we aim at counting individual neutrons with CCD-like cameras. We explore theoretically a technique that allows us to use imaging detectors as counting detectors at lower counting rates, and transits smoothly to continuous imaging at higher counting rates. As such, the hope is to combine the good background rejection properties of standard neutron counting detectors with the absence of dead time of integrating neutron imaging cameras as well as their very good spatial resolution. Compared to Xray detection, the essence of thermal neutron detection is the nuclear conversion reaction. The released energies involvedmore » are of the order of a few MeV, while X-ray detection releases energies of the order of the photon energy, which is in the 10 KeV range. Thanks to advances in camera technology which have resulted in increased quantum efficiency, lower noise, as well as increased frame rate up to 100 fps for CMOS-type cameras, this more than 100-fold higher available detection energy implies that the individual neutron detection light signal can be significantly above the noise level, as such allowing for discrimination and individual counting, which is hard to achieve with X-rays. The time scale of CMOS-type cameras doesn't allow one to consider time-of-flight measurements, but kinetic experiments in the 10 ms range are possible. The theory is next confronted to the first experimental results. (authors)« less

  17. Minimum Requirements for Taxicab Security Cameras*

    PubMed Central

    Zeng, Shengke; Amandus, Harlan E.; Amendola, Alfred A.; Newbraugh, Bradley H.; Cantis, Douglas M.; Weaver, Darlene

    2015-01-01

    Problem The homicide rate of taxicab-industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab-security cameras experienced significant reduction in taxicab driver homicides. Methods Minimum technical requirements and a standard test protocol for taxicab-security cameras for effective taxicab-facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated-taxicab with various photographic resolutions, dynamic ranges, lens-distortions, and motion-blurs in various light and cab-seat conditions. Thirteen volunteer photograph-evaluators evaluated these face photographs and voted for the minimum technical requirements for taxicab-security cameras. Results Five worst-case scenario photographic image quality thresholds were suggested: the resolution of XGA-format, highlight-dynamic-range of 1 EV, twilight-dynamic-range of 3.3 EV, lens-distortion of 30%, and shutter-speed of 1/30 second. Practical Applications These minimum requirements will help taxicab regulators and fleets to identify effective taxicab-security cameras, and help taxicab-security camera manufacturers to improve the camera facial identification capability. PMID:26823992

  18. Circuit design of an EMCCD camera

    NASA Astrophysics Data System (ADS)

    Li, Binhua; Song, Qian; Jin, Jianhui; He, Chun

    2012-07-01

    EMCCDs have been used in the astronomical observations in many ways. Recently we develop a camera using an EMCCD TX285. The CCD chip is cooled to -100°C in an LN2 dewar. The camera controller consists of a driving board, a control board and a temperature control board. Power supplies and driving clocks of the CCD are provided by the driving board, the timing generator is located in the control board. The timing generator and an embedded Nios II CPU are implemented in an FPGA. Moreover the ADC and the data transfer circuit are also in the control board, and controlled by the FPGA. The data transfer between the image workstation and the camera is done through a Camera Link frame grabber. The software of image acquisition is built using VC++ and Sapera LT. This paper describes the camera structure, the main components and circuit design for video signal processing channel, clock driver, FPGA and Camera Link interfaces, temperature metering and control system. Some testing results are presented.

  19. Multi-Angle Snowflake Camera Instrument Handbook

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Stuefer, Martin; Bailey, J.

    2016-07-01

    The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometers from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the camera’s field of view within a 10-angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASCmore » cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.« less

  20. Minimum Requirements for Taxicab Security Cameras.

    PubMed

    Zeng, Shengke; Amandus, Harlan E; Amendola, Alfred A; Newbraugh, Bradley H; Cantis, Douglas M; Weaver, Darlene

    2014-07-01

    The homicide rate of taxicab-industry is 20 times greater than that of all workers. A NIOSH study showed that cities with taxicab-security cameras experienced significant reduction in taxicab driver homicides. Minimum technical requirements and a standard test protocol for taxicab-security cameras for effective taxicab-facial identification were determined. The study took more than 10,000 photographs of human-face charts in a simulated-taxicab with various photographic resolutions, dynamic ranges, lens-distortions, and motion-blurs in various light and cab-seat conditions. Thirteen volunteer photograph-evaluators evaluated these face photographs and voted for the minimum technical requirements for taxicab-security cameras. Five worst-case scenario photographic image quality thresholds were suggested: the resolution of XGA-format, highlight-dynamic-range of 1 EV, twilight-dynamic-range of 3.3 EV, lens-distortion of 30%, and shutter-speed of 1/30 second. These minimum requirements will help taxicab regulators and fleets to identify effective taxicab-security cameras, and help taxicab-security camera manufacturers to improve the camera facial identification capability.

  1. Superconducting millimetre-wave cameras

    NASA Astrophysics Data System (ADS)

    Monfardini, Alessandro

    2017-05-01

    I present a review of the developments in kinetic inductance detectors (KID) for mm-wave and THz imaging-polarimetry in the framework of the Grenoble collaboration. The main application that we have targeted so far is large field-of-view astronomy. I focus in particular on our own experiment: NIKA2 (Néel IRAM KID Arrays). NIKA2 is today the largest millimetre camera available to the astronomical community for general purpose observations. It consists of a dual-band, dual-polarisation, multi-thousands pixels system installed at the IRAM 30-m telescope at Pico Veleta (Spain). I start with a general introduction covering the underlying physics and the KID working principle. Then I describe briefly the instrument and the detectors, to conclude with examples of pictures taken on the Sky by NIKA2 and its predecessor, NIKA. Thanks to these results, together with the relative simplicity and low cost of the KID fabrication, industrial applications requiring passive millimetre-THz imaging have now become possible.

  2. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 16 Commercial Practices 1 2014-01-01 2014-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  3. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 16 Commercial Practices 1 2011-01-01 2011-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  4. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 16 Commercial Practices 1 2012-01-01 2012-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  5. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 16 Commercial Practices 1 2013-01-01 2013-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  6. 16 CFR 501.1 - Camera film.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 16 Commercial Practices 1 2010-01-01 2010-01-01 false Camera film. 501.1 Section 501.1 Commercial... 500 § 501.1 Camera film. Camera film packaged and labeled for retail sale is exempt from the net... should be expressed, provided: (a) The net quantity of contents on packages of movie film and bulk still...

  7. Smart Cameras for Remote Science Survey

    NASA Technical Reports Server (NTRS)

    Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.

    2012-01-01

    Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for followup measurements [3]. We describe a generic approach to classify geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels - distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.

  8. Methods for multiple-telescope beam imaging and guiding in the near-infrared

    NASA Astrophysics Data System (ADS)

    Anugu, N.; Amorim, A.; Gordo, P.; Eisenhauer, F.; Pfuhl, O.; Haug, M.; Wieprecht, E.; Wiezorrek, E.; Lima, J.; Perrin, G.; Brandner, W.; Straubmeier, C.; Le Bouquin, J.-B.; Garcia, P. J. V.

    2018-05-01

    Atmospheric turbulence and precise measurement of the astrometric baseline vector between any two telescopes are two major challenges in implementing phase-referenced interferometric astrometry and imaging. They limit the performance of a fibre-fed interferometer by degrading the instrument sensitivity and the precision of astrometric measurements and by introducing image reconstruction errors due to inaccurate phases. A multiple-beam acquisition and guiding camera was built to meet these challenges for a recently commissioned four-beam combiner instrument, GRAVITY, at the European Southern Observatory Very Large Telescope Interferometer. For each telescope beam, it measures (a) field tip-tilts by imaging stars in the sky, (b) telescope pupil shifts by imaging pupil reference laser beacons installed on each telescope using a 2 × 2 lenslet and (c) higher-order aberrations using a 9 × 9 Shack-Hartmann. The telescope pupils are imaged to provide visual monitoring while observing. These measurements enable active field and pupil guiding by actuating a train of tip-tilt mirrors placed in the pupil and field planes, respectively. The Shack-Hartmann measured quasi-static aberrations are used to focus the auxiliary telescopes and allow the possibility of correcting the non-common path errors between the adaptive optics systems of the unit telescopes and GRAVITY. The guiding stabilizes the light injection into single-mode fibres, increasing sensitivity and reducing the astrometric and image reconstruction errors. The beam guiding enables us to achieve an astrometric error of less than 50 μas. Here, we report on the data reduction methods and laboratory tests of the multiple-beam acquisition and guiding camera and its performance on-sky.

  9. Camera-on-a-Chip

    NASA Technical Reports Server (NTRS)

    1999-01-01

    Jet Propulsion Laboratory's research on a second generation, solid-state image sensor technology has resulted in the Complementary Metal- Oxide Semiconductor Active Pixel Sensor (CMOS), establishing an alternative to the Charged Coupled Device (CCD). Photobit Corporation, the leading supplier of CMOS image sensors, has commercialized two products of their own based on this technology: the PB-100 and PB-300. These devices are cameras on a chip, combining all camera functions. CMOS "active-pixel" digital image sensors offer several advantages over CCDs, a technology used in video and still-camera applications for 30 years. The CMOS sensors draw less energy, they use the same manufacturing platform as most microprocessors and memory chips, and they allow on-chip programming of frame size, exposure, and other parameters.

  10. PredGuid+A: Orion Entry Guidance Modified for Aerocapture

    NASA Technical Reports Server (NTRS)

    Lafleur, Jarret

    2013-01-01

    PredGuid+A software was developed to enable a unique numerical predictor-corrector aerocapture guidance capability that builds on heritage Orion entry guidance algorithms. The software can be used for both planetary entry and aerocapture applications. Furthermore, PredGuid+A implements a new Delta-V minimization guidance option that can take the place of traditional targeting guidance and can result in substantial propellant savings. PredGuid+A allows the user to set a mode flag and input a target orbit's apoapsis and periapsis. Using bank angle control, the guidance will then guide the vehicle to the appropriate post-aerocapture orbit using one of two algorithms: Apoapsis Targeting or Delta-V Minimization (as chosen by the user). Recently, the PredGuid guidance algorithm was adapted for use in skip-entry scenarios for NASA's Orion multi-purpose crew vehicle (MPCV). To leverage flight heritage, most of Orion's entry guidance routines are adapted from the Apollo program.

  11. The guidance methodology of a new automatic guided laser theodolite system

    NASA Astrophysics Data System (ADS)

    Zhang, Zili; Zhu, Jigui; Zhou, Hu; Ye, Shenghua

    2008-12-01

    Spatial coordinate measurement systems such as theodolites, laser trackers and total stations have wide application in manufacturing and certification processes. The traditional operation of theodolites is manual and time-consuming which does not meet the need of online industrial measurement, also laser trackers and total stations need reflective targets which can not realize noncontact and automatic measurement. A new automatic guided laser theodolite system is presented to achieve automatic and noncontact measurement with high precision and efficiency which is comprised of two sub-systems: the basic measurement system and the control and guidance system. The former system is formed by two laser motorized theodolites to accomplish the fundamental measurement tasks while the latter one consists of a camera and vision system unit mounted on a mechanical displacement unit to provide azimuth information of the measured points. The mechanical displacement unit can rotate horizontally and vertically to direct the camera to the desired orientation so that the camera can scan every measured point in the measuring field, then the azimuth of the corresponding point is calculated for the laser motorized theodolites to move accordingly to aim at it. In this paper the whole system composition and measuring principle are analyzed, and then the emphasis is laid on the guidance methodology for the laser points from the theodolites to move towards the measured points. The guidance process is implemented based on the coordinate transformation between the basic measurement system and the control and guidance system. With the view field angle of the vision system unit and the world coordinate of the control and guidance system through coordinate transformation, the azimuth information of the measurement area that the camera points at can be attained. The momentary horizontal and vertical changes of the mechanical displacement movement are also considered and calculated to provide

  12. The Concise Guide to Pharmacology 2013/14: Enzymes

    PubMed Central

    Alexander, Stephen PH; Benson, Helen E; Faccenda, Elena; Pawson, Adam J; Sharman, Joanna L; Spedding, Michael; Peters, John A; Harmar, Anthony J

    2013-01-01

    The Concise Guide to PHARMACOLOGY 2013/14 provides concise overviews of the key properties of over 2000 human drug targets with their pharmacology, plus links to an open access knowledgebase of drug targets and their ligands (www.guidetopharmacology.org), which provides more detailed views of target and ligand properties. The full contents can be found at http://onlinelibrary.wiley.com/doi/10.1111/bph.12444/full. Enzymes are one of the seven major pharmacological targets into which the Guide is divided, with the others being G protein-coupled receptors, ligand-gated ion channels, ion channels, nuclear hormone receptors, catalytic receptors and transporters. These are presented with nomenclature guidance and summary information on the best available pharmacological tools, alongside key references and suggestions for further reading. A new landscape format has easy to use tables comparing related targets. It is a condensed version of material contemporary to late 2013, which is presented in greater detail and constantly updated on the website www.guidetopharmacology.org, superseding data presented in previous Guides to Receptors and Channels. It is produced in conjunction with NC-IUPHAR and provides the official IUPHAR classification and nomenclature for human drug targets, where appropriate. It consolidates information previously curated and displayed separately in IUPHAR-DB and the Guide to Receptors and Channels, providing a permanent, citable, point-in-time record that will survive database updates. PMID:24528243

  13. The Concise Guide to Pharmacology 2013/14: Transporters

    PubMed Central

    Alexander, Stephen PH; Benson, Helen E; Faccenda, Elena; Pawson, Adam J; Sharman, Joanna L; Spedding, Michael; Peters, John A; Harmar, Anthony J

    2013-01-01

    The Concise Guide to PHARMACOLOGY 2013/14 provides concise overviews of the key properties of over 2000 human drug targets with their pharmacology, plus links to an open access knowledgebase of drug targets and their ligands (www.guidetopharmacology.org), which provides more detailed views of target and ligand properties. The full contents can be found at http://onlinelibrary.wiley.com/doi/10.1111/bph.12444/full. Transporters are one of the seven major pharmacological targets into which the Guide is divided, with the others being G protein-coupled receptors, ligand-gated ion channels, ion channels, catalytic receptors, nuclear hormone receptors and enzymes. These are presented with nomenclature guidance and summary information on the best available pharmacological tools, alongside key references and suggestions for further reading. A new landscape format has easy to use tables comparing related targets. It is a condensed version of material contemporary to late 2013, which is presented in greater detail and constantly updated on the website www.guidetopharmacology.org, superseding data presented in previous Guides to Receptors and Channels. It is produced in conjunction with NC-IUPHAR and provides the official IUPHAR classification and nomenclature for human drug targets, where appropriate. It consolidates information previously curated and displayed separately in IUPHAR-DB and the Guide to Receptors and Channels, providing a permanent, citable, point-in-time record that will survive database updates. PMID:24528242

  14. Motorcycle detection and counting using stereo camera, IR camera, and microphone array

    NASA Astrophysics Data System (ADS)

    Ling, Bo; Gibson, David R. P.; Middleton, Dan

    2013-03-01

    Detection, classification, and characterization are the key to enhancing motorcycle safety, motorcycle operations and motorcycle travel estimation. Average motorcycle fatalities per Vehicle Mile Traveled (VMT) are currently estimated at 30 times those of auto fatalities. Although it has been an active research area for many years, motorcycle detection still remains a challenging task. Working with FHWA, we have developed a hybrid motorcycle detection and counting system using a suite of sensors including stereo camera, thermal IR camera and unidirectional microphone array. The IR thermal camera can capture the unique thermal signatures associated with the motorcycle's exhaust pipes that often show bright elongated blobs in IR images. The stereo camera in the system is used to detect the motorcyclist who can be easily windowed out in the stereo disparity map. If the motorcyclist is detected through his or her 3D body recognition, motorcycle is detected. Microphones are used to detect motorcycles that often produce low frequency acoustic signals. All three microphones in the microphone array are placed in strategic locations on the sensor platform to minimize the interferences of background noises from sources such as rain and wind. Field test results show that this hybrid motorcycle detection and counting system has an excellent performance.

  15. SLR digital camera for forensic photography

    NASA Astrophysics Data System (ADS)

    Har, Donghwan; Son, Youngho; Lee, Sungwon

    2004-06-01

    Forensic photography, which was systematically established in the late 19th century by Alphonse Bertillon of France, has developed a lot for about 100 years. The development will be more accelerated with the development of high technologies, in particular the digital technology. This paper reviews three studies to answer the question: Can the SLR digital camera replace the traditional silver halide type ultraviolet photography and infrared photography? 1. Comparison of relative ultraviolet and infrared sensitivity of SLR digital camera to silver halide photography. 2. How much ultraviolet or infrared sensitivity is improved when removing the UV/IR cutoff filter built in the SLR digital camera? 3. Comparison of relative sensitivity of CCD and CMOS for ultraviolet and infrared. The test result showed that the SLR digital camera has a very low sensitivity for ultraviolet and infrared. The cause was found to be the UV/IR cutoff filter mounted in front of the image sensor. Removing the UV/IR cutoff filter significantly improved the sensitivity for ultraviolet and infrared. Particularly for infrared, the sensitivity of the SLR digital camera was better than that of the silver halide film. This shows the possibility of replacing the silver halide type ultraviolet photography and infrared photography with the SLR digital camera. Thus, the SLR digital camera seems to be useful for forensic photography, which deals with a lot of ultraviolet and infrared photographs.

  16. Pre-hibernation performances of the OSIRIS cameras onboard the Rosetta spacecraft

    NASA Astrophysics Data System (ADS)

    Magrin, S.; La Forgia, F.; Da Deppo, V.; Lazzarin, M.; Bertini, I.; Ferri, F.; Pajola, M.; Barbieri, M.; Naletto, G.; Barbieri, C.; Tubiana, C.; Küppers, M.; Fornasier, S.; Jorda, L.; Sierks, H.

    2015-02-01

    Context. The ESA cometary mission Rosetta was launched in 2004. In the past years and until the spacecraft hibernation in June 2011, the two cameras of the OSIRIS imaging system (Narrow Angle and Wide Angle Camera, NAC and WAC) observed many different sources. On 20 January 2014 the spacecraft successfully exited hibernation to start observing the primary scientific target of the mission, comet 67P/Churyumov-Gerasimenko. Aims: A study of the past performances of the cameras is now mandatory to be able to determine whether the system has been stable through the time and to derive, if necessary, additional analysis methods for the future precise calibration of the cometary data. Methods: The instrumental responses and filter passbands were used to estimate the efficiency of the system. A comparison with acquired images of specific calibration stars was made, and a refined photometric calibration was computed, both for the absolute flux and for the reflectivity of small bodies of the solar system. Results: We found a stability of the instrumental performances within ±1.5% from 2007 to 2010, with no evidence of an aging effect on the optics or detectors. The efficiency of the instrumentation is found to be as expected in the visible range, but lower than expected in the UV and IR range. A photometric calibration implementation was discussed for the two cameras. Conclusions: The calibration derived from pre-hibernation phases of the mission will be checked as soon as possible after the awakening of OSIRIS and will be continuously monitored until the end of the mission in December 2015. A list of additional calibration sources has been determined that are to be observed during the forthcoming phases of the mission to ensure a better coverage across the wavelength range of the cameras and to study the possible dust contamination of the optics.

  17. Inferred UV Fluence Focal-Spot Profiles from Soft X-Ray Pinhole Camera Measurements on OMEGA

    NASA Astrophysics Data System (ADS)

    Theobald, W.; Sorce, C.; Epstein, R.; Keck, R. L.; Kellogg, C.; Kessler, T. J.; Kwiatkowski, J.; Marshall, F. J.; Seka, W.; Shvydky, A.; Stoeckl, C.

    2017-10-01

    The drive uniformity of OMEGA cryogenic implosions is affected by UV beamfluence variations on target, which require careful monitoring at full laser power. This is routinely performed with multiple pinhole cameras equipped with charge-injection devices (CID's) that record the x-ray emission in the 3- to 7-keV photon energy range from an Au-coated target. The technique relies on the knowledge of the relation between x-ray fluence Fx and UV fluence FUV ,Fx FUVγ , with a measured γ = 3.42 for the CID-based diagnostic and 1-ns laser pulse. It is demonstrated here that using a back-thinned charge-coupled-device camera with softer filtration for x-rays with photon energies <2 keV and well calibrated pinhole provides a lower γ 2 and a larger dynamic range in the measured UV fluence. Inferred UV fluence profiles were measured for 100-ps and 1-ns laser pulses and were compared to directly measured profiles from a UV equivalent-target-plane diagnostic. Good agreement between both techniques is reported for selected beams. This material is based upon work supported by the Department of Energy National Nuclear Security Administration under Award Number DE-NA0001944.

  18. Optical designs for the Mars '03 rover cameras

    NASA Astrophysics Data System (ADS)

    Smith, Gregory H.; Hagerott, Edward C.; Scherr, Lawrence M.; Herkenhoff, Kenneth E.; Bell, James F.

    2001-12-01

    In 2003, NASA is planning to send two robotic rover vehicles to explore the surface of Mars. The spacecraft will land on airbags in different, carefully chosen locations. The search for evidence indicating conditions favorable for past or present life will be a high priority. Each rover will carry a total of ten cameras of five various types. There will be a stereo pair of color panoramic cameras, a stereo pair of wide- field navigation cameras, one close-up camera on a movable arm, two stereo pairs of fisheye cameras for hazard avoidance, and one Sun sensor camera. This paper discusses the lenses for these cameras. Included are the specifications, design approaches, expected optical performances, prescriptions, and tolerances.

  19. An Educational PET Camera Model

    ERIC Educational Resources Information Center

    Johansson, K. E.; Nilsson, Ch.; Tegner, P. E.

    2006-01-01

    Positron emission tomography (PET) cameras are now in widespread use in hospitals. A model of a PET camera has been installed in Stockholm House of Science and is used to explain the principles of PET to school pupils as described here.

  20. Cameras on Mars 2020 Rover

    NASA Image and Video Library

    2017-10-31

    This image presents a selection of the 23 cameras on NASA's 2020 Mars rover. Many are improved versions of the cameras on the Curiosity rover, with a few new additions as well. https://photojournal.jpl.nasa.gov/catalog/PIA22103