Automated camera-phone experience with the frequency of imaging necessary to capture diet.
Arab, Lenore; Winter, Ashley
2010-08-01
Camera-enabled cell phones provide an opportunity to strengthen dietary recall through automated imaging of foods eaten during a specified period. To explore the frequency of imaging needed to capture all foods eaten, we examined the number of images of individual foods consumed in a pilot study of automated imaging using camera phones set to an image-capture frequency of one snapshot every 10 seconds. Food images were tallied from 10 young adult subjects who wore the phone continuously during the work day and consented to share their images. Based on the number of images received for each eating experience, the pilot data suggest that automated capturing of images at a frequency of once every 10 seconds is adequate for recording foods consumed during regular meals, whereas a greater frequency of imaging is necessary to capture snacks and beverages eaten quickly. 2010 American Dietetic Association. Published by Elsevier Inc. All rights reserved.
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smart phones have completely changed the way people communicate over the last ten years. However, these devices offer more than communication through different channels; they also provide hardware and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sporting events or other fast processes. The article therefore explores the possibility of making use of this development, and the widespread availability of these cameras, for velocity measurements in industrial or technical applications and in fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment with a free water jet was used to prove the concept, shed some light on the achievable quality, and identify bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
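The core PIV step behind such a system, estimating particle displacement between two frames by cross-correlation, can be sketched in a few lines. This is a deliberately minimal 1D toy with invented data, not the authors' implementation; real PIV correlates 2D interrogation windows, usually via FFT.

```python
# Minimal 1D PIV sketch: find the pixel shift between two frames of an
# interrogation window by brute-force cross-correlation, then convert the
# shift to a velocity. All numbers are illustrative toy data.

def cross_correlate_shift(frame_a, frame_b, max_shift):
    """Return the shift of frame_b relative to frame_a that maximizes
    the (zero-padded) cross-correlation score."""
    best_shift, best_score = 0, float("-inf")
    n = len(frame_a)
    for s in range(-max_shift, max_shift + 1):
        score = sum(frame_a[i] * frame_b[i + s]
                    for i in range(n) if 0 <= i + s < n)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift

def velocity(shift_px, pixel_size_m, dt_s):
    """Convert a pixel displacement to a velocity in m/s."""
    return shift_px * pixel_size_m / dt_s

# A bright particle at index 4 moves to index 7 between frames.
frame_a = [0, 0, 0, 0, 9, 1, 0, 0, 0, 0]
frame_b = [0, 0, 0, 0, 0, 0, 1, 9, 0, 0]
shift = cross_correlate_shift(frame_a, frame_b, max_shift=4)
print(shift)                                   # 3
# At 240 Hz with an assumed 0.1 mm/pixel scale:
print(round(velocity(shift, 1e-4, 1 / 240), 3))  # 0.072
```

The same brute-force search generalizes to 2D by shifting the window in both axes; high frame rate (here 240 Hz) keeps the displacement small enough for a modest search range.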
Megapixel mythology and photospace: estimating photospace for camera phones from large image sets
NASA Astrophysics Data System (ADS)
Hultgren, Bror O.; Hertel, Dirk W.
2008-01-01
It is a myth that more pixels alone result in better images. The marketing of camera phones in particular has focused on their pixel counts. However, their performance varies considerably according to the conditions of image capture. Camera phones are often used in low-light situations where the lack of a flash and limited exposure time will produce underexposed, noisy and blurred images. Camera utilization can be quantitatively described by photospace distributions, a statistical description of the frequency of pictures taken at varying light levels and camera-subject distances. If the photospace distribution is known, the user-experienced distribution of quality can be determined either by direct measurement of subjective quality or by photospace-weighting of objective attributes. Populating a photospace distribution requires examining large numbers of images taken under typical camera phone usage conditions. ImagePhi was developed as a user-friendly software tool to interactively estimate the primary photospace variables, subject illumination and subject distance, from individual images. Additionally, subjective evaluations of image quality and failure modes for low-quality images can be entered into ImagePhi. ImagePhi has been applied to sets of images taken by typical users with a selection of popular camera phones varying in resolution. The estimated photospace distribution of camera phone usage has been correlated with the distributions of failure modes. The subjective and objective data show that photospace conditions have a much bigger impact on the image quality of a camera phone than the pixel count of its imager. The 'megapixel myth' is thus seen to be less a myth than an ill-framed conditional assertion, whose conditions are to a large extent specified by the camera's operational state in photospace.
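The photospace-weighting idea described above reduces to a frequency-weighted average of per-condition quality. The sketch below illustrates the arithmetic with invented numbers; the real distributions are 2D histograms over illuminance and subject distance.

```python
# Hedged sketch of photospace weighting: overall user-experienced quality is
# the quality measured at each (illuminance, distance) condition, weighted by
# how often users actually shoot in that condition. Values are illustrative.

def photospace_weighted_quality(photospace, quality):
    """photospace: {condition: relative frequency},
    quality: {condition: quality score}. Returns the weighted mean."""
    total = sum(photospace.values())
    return sum(freq * quality[cond] for cond, freq in photospace.items()) / total

# Toy distribution: camera phones are used mostly indoors at short range,
# where this hypothetical camera scores poorly despite a high pixel count.
photospace = {("low_lux", "near"): 0.5, ("daylight", "near"): 0.2,
              ("daylight", "far"): 0.3}
quality = {("low_lux", "near"): 2.0, ("daylight", "near"): 7.0,
           ("daylight", "far"): 6.5}
print(round(photospace_weighted_quality(photospace, quality), 2))  # 4.35
```

With half of all shots taken in the worst condition, the experienced quality (4.35) is far below the camera's best-case score (7.0), which is the paper's central point.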
Quantitative Imaging with a Mobile Phone Microscope
Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.
2014-01-01
Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that the automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-09-01
The on-phone software captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is the user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process terminates when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. Each received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. It gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
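The capture/analyze/save/transfer cycle described above can be mimicked in a small sketch. Registry keys, sockets, and the BMP format are replaced with plain Python stand-ins; the structure, not the platform detail, is what is being illustrated.

```python
# Illustrative re-creation of the described pipeline: periodically grab a
# frame, test it against a condition, save accepted frames, then transfer
# saved frames on demand. All names here are stand-ins, not the original
# C++ module interfaces.

class CameraPipeline:
    def __init__(self, condition):
        self.condition = condition   # predicate applied to each frame
        self.saved = []              # stands in for the BMP directory

    def capture_cycle(self, frame):
        """Analyze one frame; save it only if the condition is met."""
        if self.condition(frame):
            self.saved.append(frame)

    def transfer(self, server):
        """Send all saved frames to the server, then clear local storage."""
        server.extend(self.saved)
        sent, self.saved = len(self.saved), []
        return sent

# Example condition: keep frames whose mean brightness exceeds a threshold.
pipeline = CameraPipeline(lambda f: sum(f) / len(f) > 50)
server = []
for frame in ([10, 20, 30], [80, 90, 100], [60, 70, 80]):
    pipeline.capture_cycle(frame)
print(pipeline.transfer(server))  # 2
print(len(server))                # 2
```

The split between a continuous capture loop and an on-demand transfer step mirrors the CamTest/covertSmartDevice division in the original system.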
Toward Dietary Assessment via Mobile Phone Video Cameras.
Chen, Nicholas; Lee, Yun Young; Rabb, Maurice; Schatz, Bruce
2010-11-13
Reliable dietary assessment is a challenging yet essential task for determining general health. Existing efforts are manual, require considerable effort, and are prone to underestimation and misrepresentation of food intake. We propose leveraging mobile phones to make this process faster, easier and automatic. Using mobile phones with built-in video cameras, individuals capture short videos of their meals; our software then automatically analyzes the videos to recognize dishes and estimate calories. Preliminary experiments on 20 typical dishes from a local cafeteria show promising results. Our approach complements existing dietary assessment methods to help individuals better manage their diet to prevent obesity and other diet-related diseases.
The Endockscope Using Next Generation Smartphones: "A Global Opportunity".
Tse, Christina; Patel, Roshan M; Yoon, Renai; Okhunov, Zhamshid; Landman, Jaime; Clayman, Ralph V
2018-06-02
The Endockscope combines a smartphone, a battery-powered flashlight, and a fiberoptic cystoscope, allowing for mobile videocystoscopy. We compared conventional videocystoscopy to the Endockscope paired with next generation smartphones in an ex-vivo porcine bladder model to evaluate its image quality. The Endockscope consists of a three-dimensional (3D) printed attachment that connects a smartphone to a flexible fiberoptic cystoscope plus a 1000 lumen light-emitting diode (LED) cordless light source. Video recordings of porcine cystoscopy with a fiberoptic flexible cystoscope (Storz) were captured for each mobile device (iPhone 6, iPhone 6S, iPhone 7, Samsung S8, and Google Pixel) and for the high-definition H3-Z versatile camera (HD) set-up with both the LED light source and the xenon light (XL) source. Eleven faculty urologists, blinded to the modality used, evaluated each video for image quality/resolution, brightness, color quality, sharpness, overall quality, and acceptability for diagnostic use. When comparing the Endockscope coupled to a Galaxy S8, iPhone 7, or iPhone 6S with the LED portable light source against the HD camera with XL, there were no statistically significant differences in any metric. 82% and 55% of evaluators considered the iPhone 7 + LED light source and iPhone 6S + LED light source, respectively, appropriate for diagnostic purposes, as compared to 100% who considered the HD camera with XL appropriate. The iPhone 6 and Google Pixel coupled with the LED source were both inferior to the HD camera with XL in all metrics. The Endockscope system with a LED light source coupled with either an iPhone 7 or Samsung S8 (total cost: $750) is comparable to conventional videocystoscopy with a standard camera and XL light source (total cost: $45,000).
Applying and extending ISO/TC42 digital camera resolution standards to mobile imaging products
NASA Astrophysics Data System (ADS)
Williams, Don; Burns, Peter D.
2007-01-01
There are no fundamental differences between today's mobile telephone cameras and consumer digital still cameras that would suggest existing ISO imaging performance standards do not apply. To the extent that they have lenses, color filter arrays, detectors, apertures, image processing, and are hand held, there really are no operational or architectural differences. Despite this, there are currently differences in the levels of imaging performance. These are driven by physical and economic constraints, and image-capture conditions. Several ISO standards for resolution, well established for consumer digital cameras, require care when applied to the current generation of cell phone cameras. In particular, accommodation of optical flare, shading non-uniformity and distortion is recommended. We offer proposals for the application of existing ISO imaging resolution performance standards to mobile imaging products, and suggestions for extending performance standards to the characteristic behavior of camera phones.
Visual object recognition for mobile tourist information systems
NASA Astrophysics Data System (ADS)
Paletta, Lucas; Fritz, Gerald; Seifert, Christin; Luley, Patrick; Almer, Alexander
2005-03-01
We describe a mobile vision system that is capable of automated object identification using images captured from a PDA or a camera phone. We present a solution for the enabling technology of outdoor vision-based object recognition that will extend state-of-the-art location and context aware services towards object-based awareness in urban environments. In the proposed application scenario, tourist pedestrians are equipped with GPS, W-LAN and a camera attached to a PDA or a camera phone. They are interested in whether their field of view contains tourist sights that would point to more detailed information. Multimedia-type data about related history, the architecture, or other cultural context of historic or artistic relevance might be explored by a mobile user who intends to learn within the urban environment. Learning from ambient cues is in this way achieved by pointing the device towards the urban sight, capturing an image, and consequently getting information about the object on site and within the focus of attention, i.e., the user's current field of view.
3D image processing architecture for camera phones
NASA Astrophysics Data System (ADS)
Atanassov, Kalin; Ramachandra, Vikas; Goma, Sergio R.; Aleksic, Milivoje
2011-03-01
Putting high quality and easy-to-use 3D technology into the hands of regular consumers has become a recent challenge as interest in 3D technology has grown. Making 3D technology appealing to the average user requires that it be made fully automatic and foolproof. Designing a fully automatic 3D capture and display system requires: 1) identifying critical 3D technology issues like camera positioning, disparity control rationale, and screen geometry dependency, and 2) designing a methodology to automatically control them. Implementing 3D capture functionality on phone cameras necessitates designing algorithms to fit within the processing capabilities of the device. Various constraints like sensor position tolerances, sensor 3A tolerances, post-processing, 3D video resolution and frame rate should be carefully considered for their influence on the 3D experience. Issues with migrating functions such as zoom and pan from the 2D usage model (both during capture and display) to 3D need to be resolved to ensure the highest level of user experience. It is also very important that the 3D usage scenario (including interactions between the user and the capture/display device) is carefully considered. Finally, both the processing power of the device and the practicality of the scheme need to be taken into account while designing the calibration and processing methodology.
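The disparity control mentioned above rests on the standard stereo geometry for parallel cameras: depth Z relates to disparity d by Z = f·B/d, with f the focal length in pixels and B the baseline. A short sketch with illustrative numbers:

```python
# Stereo geometry behind disparity control (parallel-camera model):
# Z = f * B / d. Numbers below are illustrative, not from the paper.

def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Depth (m) of a point given its disparity between the two sensors."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

def disparity_from_depth(focal_px, baseline_m, depth_m):
    """Inverse relation: disparity (px) produced by a point at a given depth."""
    return focal_px * baseline_m / depth_m

# A hypothetical phone with f = 2000 px and a 10 mm baseline sees a point
# with 20 px disparity:
print(depth_from_disparity(2000, 0.01, 20))   # 1.0
# Capping disparity at a comfort limit (say 40 px) bounds how close an
# object can be while remaining comfortable to view:
print(depth_from_disparity(2000, 0.01, 40))   # 0.5
```

This is why sensor position tolerances matter: an error of a few pixels in the rectified disparity translates directly into a depth error through the same relation.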
Applying image quality in cell phone cameras: lens distortion
NASA Astrophysics Data System (ADS)
Baxter, Donald; Goma, Sergio R.; Aleksic, Milivoje
2009-01-01
This paper describes the framework used in one of the pilot studies run under the I3A CPIQ initiative to quantify overall image quality in cell-phone cameras. The framework is based on a multivariate formalism that predicts overall image quality from individual image quality attributes, and was validated in a CPIQ pilot program. The pilot study focuses on image quality distortions introduced in the optical path of a cell-phone camera, which may or may not be corrected in the image processing path. The assumption is that the captured image is JPEG compressed and the cell-phone camera is set to 'auto' mode. Because the framework requires the individual attributes to be relatively perceptually orthogonal, the attributes used in the pilot study are lens geometric distortion (LGD) and lateral chromatic aberrations (LCA). The goal of this paper is to present the framework of this pilot project, from the definition of the individual attributes up to their quantification in JNDs of quality, a requirement of the multivariate formalism; both objective and subjective evaluations were therefore used. A major distinction of the objective part from the 'DSC imaging world' is that the LCA/LGD distortions found in cell-phone cameras rarely exhibit radial behavior, so a radial mapping/model cannot be used in this case.
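Combining per-attribute JND losses into a single quality number is the job of the multivariate formalism. The sketch below uses a fixed-power Minkowski sum purely as an illustration of the idea; the formalism validated in CPIQ uses a more elaborate variable-exponent combination that is not reproduced here.

```python
# Illustration only: combine individually measured attribute degradations
# (each in JNDs of quality loss) into one overall loss with a fixed-power
# Minkowski sum. The exponent is an assumed parameter, not the CPIQ value.

def combined_quality_loss(attribute_jnds, power=2.0):
    """Overall quality loss (JNDs) from per-attribute losses."""
    return sum(j ** power for j in attribute_jnds) ** (1.0 / power)

# If lens geometric distortion costs 3 JNDs and lateral chromatic
# aberration costs 4 JNDs:
print(combined_quality_loss([3.0, 4.0]))  # 5.0
```

The combined loss (5.0) exceeds either attribute alone but is less than their plain sum (7.0), capturing the intuition that the worst attribute dominates perception.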
One-click scanning of large-size documents using mobile phone camera
NASA Astrophysics Data System (ADS)
Liu, Sijiang; Jiang, Bo; Yang, Yuanjie
2016-07-01
Current mobile apps for document scanning do not provide convenient operations for large-size documents. In this paper, we present a one-click scanning approach for large-size documents using a mobile phone camera. After capturing a continuous video of a document, our approach automatically extracts several key frames by optical flow analysis. Based on these key frames, a mobile GPU based image stitching method is then adopted to generate a complete document image with high detail. No extra manual intervention is required in the process, and experimental results show that our app performs well, demonstrating its convenience and practicality for daily use.
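The key-frame step described above can be sketched as a motion-accumulation rule: emit a new key frame whenever enough optical-flow motion has built up since the last one, so consecutive key frames still overlap enough to stitch. The flow magnitudes and threshold below are invented; the paper's actual selection criterion may differ.

```python
# Hedged sketch of key-frame selection: flow_magnitudes[i] is the mean
# optical-flow magnitude (pixels) between frame i and frame i+1, assumed
# precomputed. A new key frame is emitted once accumulated motion crosses
# an assumed tuning threshold.

def select_key_frames(flow_magnitudes, motion_threshold):
    """Return indices of key frames, always starting with frame 0."""
    keys, accumulated = [0], 0.0
    for i, mag in enumerate(flow_magnitudes, start=1):
        accumulated += mag
        if accumulated >= motion_threshold:
            keys.append(i)
            accumulated = 0.0
    return keys

# A slow pan over a document: little motion at first, then a steady sweep.
print(select_key_frames([1, 1, 5, 6, 2, 9, 1], motion_threshold=10))  # [0, 4, 6]
```

Thresholding on accumulated motion rather than a fixed frame stride adapts automatically to how fast the user pans.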
Macias, Elsa; Lloret, Jaime; Suarez, Alvaro; Garcia, Miguel
2012-01-01
Current mobile phones come with several sensors and powerful video cameras. These video cameras can be used to capture good quality scenes, which can be complemented with the information gathered by the sensors also embedded in the phones. For example, the surroundings of a beach recorded by the camera of the mobile phone, jointly with the temperature of the site, can let users know via the Internet whether the weather is nice enough to swim. In this paper, we present a system that tags the frames of video recorded on mobile phones with the data collected by the embedded sensors. The tagged video is uploaded to a video server, which is placed on the Internet and is accessible by any user. The proposed system uses a semantic approach with the stored information in order to make video searches easy and efficient. Our experimental results show that it is possible to tag video frames in real time and send the tagged video to the server with very low packet delay variations. As far as we know, no other application like the one presented in this paper has been developed. PMID:22438753
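The tagging step amounts to aligning two time series: each video frame gets the most recent sensor reading at or before its capture time. The sketch below illustrates this with a temperature log; the tag field names are assumptions, not the authors' actual format.

```python
# Sketch of frame tagging: stamp each frame with the latest sensor reading
# at or before its timestamp. sensor_log must be sorted by timestamp.

import bisect

def tag_frames(frame_times, sensor_log):
    """sensor_log: sorted list of (timestamp, reading).
    Returns one tag dict per frame (hypothetical field names)."""
    times = [t for t, _ in sensor_log]
    tags = []
    for ft in frame_times:
        i = bisect.bisect_right(times, ft) - 1
        tags.append({"frame_t": ft,
                     "sensor": sensor_log[i][1] if i >= 0 else None})
    return tags

# A 2 fps video tagged with a temperature sensor sampled once per second:
sensor_log = [(0.0, 24.1), (1.0, 24.3), (2.0, 24.6)]
frames = [0.0, 0.5, 1.0, 1.5, 2.0]
for tag in tag_frames(frames, sensor_log):
    print(tag)
```

Using binary search keeps the lookup cheap enough to run per frame in real time, consistent with the real-time tagging result the abstract reports.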
Hinck, Glori; Bergmann, Thomas F
2013-01-01
Objective: We evaluated the feasibility of using mobile device technology to allow students to record their own psychomotor skills so that these recordings can be used for self-reflection and formative evaluation. Methods: Students were given the choice of using DVD recorders, zip drive video capture equipment, or their personal mobile phone, device, or digital camera to record specific psychomotor skills. During the last week of the term, they were asked to complete a 9-question survey regarding their recording experience, including details of mobile phone ownership, technology preferences, technical difficulties, and satisfaction with the recording experience and video critique process. Results: Of those completing the survey, 83% currently owned a mobile phone with video capability. Of the mobile phone owners, 62% reported having email capability on their phone and that they could transfer their video recording successfully to their computer, making it available for upload to the learning management system. Viewing the video recording of the psychomotor skill was valuable to 88% of respondents. Conclusions: Our results suggest that mobile phones are a viable technology to use for the video capture and critique of psychomotor skills, as most students own this technology and their satisfaction with this method is high.
Tuijn, Coosje J; Hoefman, Bas J; van Beijma, Hajo; Oskam, Linda; Chevrollier, Nicolas
2011-01-01
The emerging market of mobile phone technology and its use in the health sector is rapidly expanding, connecting even the most remote areas of the world. Distributing diagnostic images over the mobile network for knowledge sharing, feedback or quality control is a logical innovation. To determine the feasibility of using mobile phones for capturing microscopy images and transferring these to a central database for assessment, feedback and educational purposes, a feasibility study was carried out in Uganda. Images of microscopy samples were taken using a prototype connector that could fix a variety of mobile phones to a microscope. An information technology (IT) platform was set up for data transfer from a mobile phone to a website, including feedback by text messaging to the end user. Clear images were captured using mobile phone cameras of 2 megapixels (MP) up to 5 MP. Images were sent by mobile Internet to a website where they were visualized and feedback could be provided to the sender by text message. The process of capturing microscopy images on mobile phones, relaying them to a central review website and feeding back to the sender is feasible and of potential benefit in resource-poor settings. Even though the system needs further optimization, it became evident from discussions with stakeholders that there is a demand for this type of technology.
Mobile phones for retinopathy of prematurity screening in Lagos, Nigeria, sub-Saharan Africa.
Oluleye, Tunji S; Rotimi-Samuel, Adekunle; Adenekan, Adetunji
2016-01-01
Retinopathy of prematurity (ROP), thought to be rare in Nigeria, sub-Saharan Africa, has been reported in recent studies. Developing cost-effective screening is crucial for detecting retinal changes amenable to treatment. This study describes the use of an iPhone combined with a 20-D lens in screening for ROP in Lagos, Nigeria. The ROP screening program was approved by the Lagos University Teaching Hospital Ethical Committee. Preterm infants with a birthweight of less than 1.5 kg or gestational age of less than 32 weeks were screened. In conjunction with the neonatologist, topical tropicamide (0.5%) and phenylephrine (2.5%) were used to dilate the pupils. A pediatric lid speculum was used. Indirect ophthalmoscopy was used to examine the fundus to ensure there were no missed diagnoses. An iPhone 5 with a 20-D lens was used to examine the fundus. The app Filmic Pro was launched in video mode. The camera flash served as the source of illumination; its intensity was controlled by the app. The 20-D lens was used to capture the image of the retina, which was picked up by the camera system of the mobile phone. Another app, Aviary, was used to edit the picture. The images captured by the system were satisfactory for staging and determining the need for treatment. An iPhone combined with a 20-D lens appears to be useful in screening for ROP in resource-poor settings. More studies are needed in this area.
Pahlevan, Niema M; Rinderknecht, Derek G; Tavallali, Peyman; Razavi, Marianne; Tran, Thao T; Fong, Michael W; Kloner, Robert A; Csete, Marie; Gharib, Morteza
2017-07-01
The study is based on a previously reported mathematical analysis of the arterial waveform that extracts hidden oscillations in the waveform, which we call intrinsic frequencies. The goal of this clinical study was to compare the accuracy of left ventricular ejection fraction derived noninvasively from intrinsic frequencies with left ventricular ejection fraction obtained by cardiac MRI, the most accurate method for left ventricular ejection fraction measurement. After informed consent, in one visit, subjects underwent cardiac MRI examination and noninvasive capture of a carotid waveform using an iPhone camera (the waveform is captured by a custom app that constructs it from skin-displacement images during the cardiac cycle). The waveform was analyzed using the intrinsic frequency algorithm. Setting: outpatient MRI facility. Adults able to undergo MRI were referred by local physicians or self-referred in response to local advertisement, and included patients with heart failure with reduced ejection fraction diagnosed by a cardiologist. Standard cardiac MRI sequences were used, with periodic breath holding for image stabilization. To minimize motion artifact, the iPhone camera was held in a cradle over the carotid artery during iPhone measurements. Regardless of neck morphology, carotid waveforms were captured in all subjects within seconds to minutes. Seventy-two patients were studied, ranging in age from 20 to 92 years. The main endpoint of analysis was left ventricular ejection fraction; overall, the correlation between ejection fraction-iPhone and ejection fraction-MRI was 0.74 (r = 0.74; p < 0.0001; ejection fraction-MRI = 0.93 × [ejection fraction-iPhone] + 1.9). Analysis of carotid waveforms using intrinsic frequency methods can be used to document left ventricular ejection fraction with accuracy comparable to that of MRI. The measurements require no training to perform or interpret and no calibration, and can be repeated at the bedside to generate almost continuous analysis of left ventricular ejection fraction without arterial cannulation.
Diffraction experiments with infrared remote controls
NASA Astrophysics Data System (ADS)
Kuhn, Jochen; Vogt, Patrik
2012-02-01
In this paper we describe an experiment in which radiation emitted by an infrared remote control is passed through a diffraction grating. An image of the diffraction pattern is captured using a cell phone camera and then used to determine the wavelength of the radiation.
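The wavelength determination rests on the grating equation m·λ = d·sin θ, where d is the grating spacing, m the diffraction order, and θ the diffraction angle, recoverable from the photographed pattern as atan(x/L), with x the distance of the m-th maximum from the central spot and L the grating-to-screen distance. A worked sketch with illustrative numbers (not the article's measured data):

```python
# Wavelength from the grating equation m*lambda = d*sin(theta), with theta
# recovered from image geometry as atan(x/L). Input values are illustrative.

import math

def wavelength_nm(lines_per_mm, order, offset_m, distance_m):
    """Wavelength (nm) for a transmission grating.
    offset_m: distance of the m-th maximum from the central spot;
    distance_m: grating-to-screen distance."""
    d = 1e-3 / lines_per_mm                   # grating spacing in metres
    theta = math.atan2(offset_m, distance_m)  # diffraction angle
    return d * math.sin(theta) / order * 1e9

# A 1000 lines/mm grating with the first-order maximum 0.20 m off-axis,
# measured 0.20 m from the grating:
print(round(wavelength_nm(1000, 1, 0.20, 0.20)))  # 707
```

A cell phone photo of the pattern supplies x directly (after converting pixels to metres with a reference object), which is exactly what makes the experiment feasible with classroom equipment.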
Using the iPhone as a device for a rapid quantitative analysis of trinitrotoluene in soil.
Choodum, Aree; Kanatharana, Proespichaya; Wongniramaikul, Worawit; Daeid, Niamh Nic
2013-10-15
Mobile 'smart' phones have become almost ubiquitous in society and are typically equipped with a high-resolution digital camera which can be used to produce an image very conveniently. In this study, the built-in digital camera of a smart phone (iPhone) was used to capture the results from a rapid quantitative colorimetric test for trinitrotoluene (TNT) in soil. The results were compared to those from a digital single-lens reflex (DSLR) camera. The colored product from the selective test for TNT was quantified using an innovative application of photography in which the relationships between the Red Green Blue (RGB) values and the concentrations of colorimetric product were exploited. The iPhone proved more convenient to use than the DSLR while providing similar analytical results with increased sensitivity. The wide linear range and low detection limits achieved were comparable with those from spectrophotometric quantification methods. Low relative errors in the range of 0.4 to 6.3% were achieved in the analysis of control samples and 0.4-6.2% for spiked soil extracts, with good precision (2.09-7.43% RSD) for the analysis over 4 days. The results demonstrate that the iPhone has the potential to be used as a novel platform for the development of a rapid on-site semi-quantitative field test for the analysis of explosives. © 2013 Elsevier B.V. All rights reserved.
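The quantification idea is a linear calibration between a colour-channel response and known concentrations, inverted for an unknown sample. The sketch below uses one channel and made-up, perfectly linear standards to stand in for the paper's full RGB treatment:

```python
# Sketch of colorimetric quantification: fit signal = slope*conc + intercept
# on calibration standards, then invert for an unknown. Toy data only; the
# paper's actual calibration uses measured RGB values.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
            sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

def concentration(signal, slope, intercept):
    """Invert the calibration line for an unknown sample's signal."""
    return (signal - intercept) / slope

# Hypothetical red-channel responses for TNT standards (mg/L):
conc = [0.0, 5.0, 10.0, 20.0]
red = [10.0, 35.0, 60.0, 110.0]
slope, intercept = fit_line(conc, red)
print(round(concentration(85.0, slope, intercept), 2))  # 15.0
```

In practice the channel is chosen for maximum sensitivity to the coloured product, and the calibration range defines the linear range quoted in the abstract.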
Mobile app for chemical detection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klunder, Gregory; Cooper, Chadway R.; Satcher, Jr., Joe H.
The present invention uses the camera of a mobile device (phone, iPad, etc.) to capture an image of a chemical test kit and processes the image to provide chemical information. A simple user interface enables automatic evaluation of the image, data entry, and GPS information, and maintains records from previous analyses.
Wide-field fluorescent microscopy on a cell-phone.
Zhu, Hongying; Yaglidere, Oguzhan; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan
2011-01-01
We demonstrate wide-field fluorescent imaging on a cell-phone, using compact and cost-effective optical components that are mechanically attached to the existing camera unit of the cell-phone. Battery powered light-emitting diodes (LEDs) are used to side-pump the sample of interest using butt-coupling. The pump light is guided within the sample cuvette to excite the specimen uniformly. The fluorescent emission from the sample is then imaged with an additional lens that is put in front of the existing lens of the cell-phone camera. Because the excitation occurs through guided waves that propagate perpendicular to the detection path, an inexpensive plastic color filter is sufficient to create the dark-field background needed for fluorescent imaging. The imaging performance of this light-weight platform (~28 grams) is characterized with red and green fluorescent microbeads, achieving an imaging field-of-view of ~81 mm(2) and a spatial resolution of ~10 μm, which is enhanced through digital processing of the captured cell-phone images using compressive sampling based sparse signal recovery. We demonstrate the performance of this cell-phone fluorescent microscope by imaging labeled white-blood cells separated from whole blood samples as well as water-borne pathogenic protozoan parasites such as Giardia lamblia cysts.
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is a next-generation technology for intelligently visualising the 3D real world. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR data points which lie in the viewshed of the mobile camera. A pseudo intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
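Once the pixel-to-point correspondence exists, the dimensioning step reduces to taking the Euclidean distance between the LiDAR points matched to two picked pixels. The sketch below abstracts the SIFT-registered lookup as a plain dictionary with invented coordinates:

```python
# Sketch of the dimensioning step: map two picked pixels to their LiDAR
# points and return the 3D Euclidean distance. The pixel-to-point lookup
# stands in for the SIFT-based registration pipeline described in the paper.

import math

def measured_dimension(pixel_a, pixel_b, pixel_to_point):
    """Distance (in point-cloud units) between two picked image pixels."""
    pa, pb = pixel_to_point[pixel_a], pixel_to_point[pixel_b]
    return math.dist(pa, pb)

# Two pixels on a doorway map to LiDAR points 2.1 m apart vertically:
lookup = {(120, 340): (5.0, 2.0, 0.0),
          (120, 110): (5.0, 2.0, 2.1)}
print(round(measured_dimension((120, 340), (120, 110), lookup), 2))  # 2.1
```

Because the distance is computed in the point cloud's metric frame, no scale calibration of the phone image itself is needed, which is the method's main appeal.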
Smart-Phone Based Magnetic Levitation for Measuring Densities
Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur; Ghiran, Ionita Calin; Tasoglu, Savas
2015-01-01
Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615
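The density-estimation step described above (beads of known density levitate at known heights; an unknown sample's density is read off the calibration line) can be sketched in a few lines. The calibration numbers below are illustrative, not the paper's data; for a fixed paramagnetic medium and magnet geometry the height-density relation is approximately linear.

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for y = a*x + b."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# hypothetical calibration beads: (levitation height in pixels, density in g/mL)
heights = [120.0, 200.0, 280.0]
densities = [1.09, 1.05, 1.01]
a, b = fit_line(heights, densities)

def density_from_height(h):
    """Estimate an unknown sample's density from its levitation height."""
    return a * h + b
```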
Wide-field Fluorescent Microscopy and Fluorescent Imaging Flow Cytometry on a Cell-phone
Zhu, Hongying; Ozcan, Aydogan
2013-01-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical research and clinical diagnosis. However these devices are in general relatively bulky and costly, making them less effective in the resource limited settings. To potentially address these limitations, we have recently demonstrated the integration of wide-field fluorescent microscopy and imaging flow cytometry tools on cell-phones using compact, light-weight, and cost-effective opto-fluidic attachments. In our flow cytometry design, fluorescently labeled cells are flushed through a microfluidic channel that is positioned above the existing cell-phone camera unit. Battery powered light-emitting diodes (LEDs) are butt-coupled to the side of this microfluidic chip, which effectively acts as a multi-mode slab waveguide, where the excitation light is guided to uniformly excite the fluorescent targets. The cell-phone camera records a time lapse movie of the fluorescent cells flowing through the microfluidic channel, where the digital frames of this movie are processed to count the number of the labeled cells within the target solution of interest. Using a similar opto-fluidic design, we can also image these fluorescently labeled cells in static mode by e.g. sandwiching the fluorescent particles between two glass slides and capturing their fluorescent images using the cell-phone camera, which can achieve a spatial resolution of e.g. ~ 10 μm over a very large field-of-view of ~ 81 mm2. This cell-phone based fluorescent imaging flow cytometry and microscopy platform might be useful especially in resource limited settings, for e.g. counting of CD4+ T cells toward monitoring of HIV+ patients or for detection of water-borne parasites in drinking water. PMID:23603893
Cellphones in Classrooms Land Teachers on Online Video Sites
ERIC Educational Resources Information Center
Honawar, Vaishali
2007-01-01
Videos of teachers that students taped in secrecy are all over online sites like YouTube and MySpace. Angry teachers, enthusiastic teachers, teachers clowning around, singing, and even dancing are captured, usually with camera phones, for the whole world to see. Some students go so far as to create elaborately edited videos, shot over several…
2016-01-01
Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
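The ratiometric idea above can be illustrated simply: dividing one color channel by the channel sum makes the readout invariant to a uniform gain change (lighting, exposure), so the same well classifies identically under different cameras. The dye colors and threshold below are invented for illustration, not the paper's calibration.

```python
def hue_ratio(rgb):
    """Green fraction of a well's mean RGB; invariant to overall gain."""
    r, g, b = rgb
    return g / (r + g + b)

def classify_well(rgb, threshold=0.40):
    """True if the well's green fraction indicates amplification
    (hypothetical dye and threshold)."""
    return hue_ratio(rgb) > threshold

# the same two wells imaged under dim lighting and under 2x brighter lighting
dim = [(80, 120, 40), (100, 90, 95)]
bright = [(160, 240, 80), (200, 180, 190)]
```

Because the ratio cancels the gain, both lighting conditions yield the same positive/negative calls, which is the robustness property the abstract emphasizes.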
NASA Astrophysics Data System (ADS)
Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert
2016-02-01
Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for e.g. dark skin types. A small smart-phone based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with user-dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging appears promising for improving the sensitivity and selectivity of allergy testing using a smart-phone based camera.
Low-cost mobile phone microscopy with a reversed mobile phone camera lens.
Switz, Neil A; D'Ambrosio, Michael V; Fletcher, Daniel A
2014-01-01
The increasing capabilities and ubiquity of mobile phones and their associated digital cameras offer the possibility of extending low-cost, portable diagnostic microscopy to underserved and low-resource areas. However, mobile phone microscopes created by adding magnifying optics to the phone's camera module have been unable to make use of the full image sensor due to the specialized design of the embedded camera lens, exacerbating the tradeoff between resolution and field of view inherent to optical systems. This tradeoff is acutely felt for diagnostic applications, where the speed and cost of image-based diagnosis is related to the area of the sample that can be viewed at sufficient resolution. Here we present a simple and low-cost approach to mobile phone microscopy that uses a reversed mobile phone camera lens added to an intact mobile phone to enable high quality imaging over a significantly larger field of view than standard microscopy. We demonstrate use of the reversed lens mobile phone microscope to identify red and white blood cells in blood smears and soil-transmitted helminth eggs in stool samples.
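The field-of-view argument above can be made concrete with back-of-the-envelope arithmetic: a reversed, identical phone camera lens gives a roughly 1:1 system, so the object-plane field of view approaches the full sensor and the sampling-limited resolution is set by pixel pitch. The sensor dimensions and pixel pitch below are assumed typical values, not the paper's specifications.

```python
def fov_mm2(sensor_w_mm, sensor_h_mm, magnification):
    """Object-plane field of view (mm^2) for a given magnification."""
    return (sensor_w_mm / magnification) * (sensor_h_mm / magnification)

def nyquist_resolution_um(pixel_pitch_um, magnification):
    """Smallest well-sampled object-plane period: two pixels per cycle."""
    return 2 * pixel_pitch_um / magnification

# assumed 1/3-inch sensor (4.8 x 3.6 mm), 1.4 um pixels, 1:1 reversed-lens system
fov = fov_mm2(4.8, 3.6, 1.0)
res = nyquist_resolution_um(1.4, 1.0)
```

At higher magnification (e.g. a ball-lens attachment at 5x) the same sensor sees a 25x smaller area, which is the resolution/field-of-view tradeoff the abstract describes.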
Albumin testing in urine using a smart-phone
Coskun, Ahmet F.; Nagi, Richie; Sadeghi, Kayvon; Phillips, Stephen; Ozcan, Aydogan
2013-01-01
We demonstrate a digital sensing platform, termed Albumin Tester, running on a smart-phone that images and automatically analyses fluorescent assays confined within disposable test tubes for sensitive and specific detection of albumin in urine. This light-weight and compact Albumin Tester attachment, weighing approximately 148 grams, is mechanically installed on the existing camera unit of a smart-phone, where test and control tubes are inserted from the side and are excited by a battery powered laser diode. This excitation beam, after probing the sample of interest located within the test tube, interacts with the control tube, and the resulting fluorescent emission is collected perpendicular to the direction of the excitation, where the cellphone camera captures the images of the fluorescent tubes through the use of an external plastic lens that is inserted between the sample and the camera lens. The acquired fluorescent images of the sample and control tubes are digitally processed within one second through an Android application running on the same cellphone for quantification of albumin concentration in urine specimen of interest. Using a simple sample preparation approach which takes ~ 5 minutes per test (including the incubation time), we experimentally confirmed the detection limit of our sensing platform as 5–10 μg/mL (which is more than 3 times lower than clinically accepted normal range) in buffer as well as urine samples. This automated albumin testing tool running on a smart-phone could be useful for early diagnosis of kidney disease or for monitoring of chronic patients, especially those suffering from diabetes, hypertension, and/or cardiovascular diseases. PMID:23995895
Chrominance watermark for mobile applications
NASA Astrophysics Data System (ADS)
Reed, Alastair; Rogers, Eliot; James, Dan
2010-01-01
Creating an imperceptible watermark which can be read by a broad range of cell phone cameras is a difficult problem. The problems are caused by the inherently low resolution and noise levels of typical cell phone cameras. The quality limitations of these devices compared to a typical digital camera are caused by the small size of the cell phone and cost trade-offs made by the manufacturer. In order to achieve this, a low resolution watermark is required which can be resolved by a typical cell phone camera. The visibility of a traditional luminance watermark was too great at this lower resolution, so a chrominance watermark was developed. The chrominance watermark takes advantage of the relatively low sensitivity of the human visual system to chrominance changes. This enables a chrominance watermark to be inserted into an image which is imperceptible to the human eye but can be read using a typical cell phone camera. Sample images will be presented showing images with a very low visibility which can be easily read by a typical cell phone camera.
iPhone 4s and iPhone 5s Imaging of the Eye.
Jalil, Maaz; Ferenczy, Sandor R; Shields, Carol L
2017-01-01
To evaluate the technical feasibility of a consumer-grade cellular iPhone camera as an ocular imaging device compared to existing ophthalmic imaging equipment for documentation purposes. A comparison of iPhone 4s and 5s images was made with external facial images (macrophotography) using Nikon cameras, slit-lamp images (microphotography) using a Zeiss photo slit-lamp camera, and fundus images (fundus photography) using RetCam II. In an analysis of six consecutive patients with ophthalmic conditions, both iPhones achieved documentation of external findings (macrophotography) using standard camera modality, tap to focus, and built-in flash. Both iPhones achieved documentation of anterior segment findings (microphotography) during slit-lamp examination through oculars. Both iPhones achieved fundus imaging using standard video modality with continuous iPhone illumination through an ophthalmic lens. In comparison to standard ophthalmic cameras, macrophotography and microphotography were excellent. In comparison to RetCam fundus photography, iPhone fundus photography revealed a smaller field and was technically more difficult to obtain, but the quality was nearly similar to RetCam. iPhone versions 4s and 5s can provide excellent ophthalmic macrophotography and microphotography and adequate fundus photography. We believe that iPhone imaging could be most useful in settings where expensive, complicated, and cumbersome imaging equipment is unavailable.
Kim, Dong-Keun; Yoo, Sun K; Kim, Sun H
2005-01-01
The instant transmission of radiological images may be important for making rapid clinical decisions about emergency patients. We have examined an instant image transfer system based on a personal digital assistant (PDA) phone with a built-in camera. Images displayed on a picture archiving and communication systems (PACS) monitor can be captured by the camera in the PDA phone directly. Images can then be transmitted from an emergency centre to a remote physician via a wireless high-bandwidth network (CDMA 1 x EVDO). We reviewed the radiological lesions in 10 normal and 10 abnormal cases produced by modalities such as computerized tomography (CT), magnetic resonance (MR) and digital angiography. The images were of 24-bit depth and 1,144 x 880, 1,120 x 840, 1,024 x 768, 800 x 600, 640 x 480 and 320 x 240 pixels. Three neurosurgeons found that for satisfactory remote consultation a minimum size of 640 x 480 pixels was required for CT and MR images and 1,024 x 768 pixels for angiography images. Although higher resolution produced higher clinical satisfaction, it also required more transmission time. At the limited bandwidth employed, higher resolutions could not be justified.
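The resolution/latency tradeoff reported above can be approximated with simple arithmetic: a 24-bit image at each tested resolution, an assumed JPEG compression ratio, and an assumed effective CDMA 1xEV-DO throughput. Both the compression ratio and the throughput figure are illustrative assumptions, not values from the study.

```python
def transfer_seconds(width, height, bits_per_pixel=24,
                     compression=10.0, kbps=600.0):
    """Estimated transmission time for one compressed image.
    compression and kbps are hypothetical, not measured values."""
    bits = width * height * bits_per_pixel / compression
    return bits / (kbps * 1000)

# the study's finding in miniature: higher resolution costs more airtime
times = {(w, h): transfer_seconds(w, h)
         for w, h in [(320, 240), (640, 480), (1024, 768)]}
```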
The electromagnetic interference of mobile phones on the function of a γ-camera.
Javadi, Hamid; Azizmohammadi, Zahra; Mahmoud Pashazadeh, Ali; Neshandar Asli, Isa; Moazzeni, Taleb; Baharfar, Nastaran; Shafiei, Babak; Nabipour, Iraj; Assadi, Majid
2014-03-01
The aim of the present study is to evaluate whether or not the electromagnetic field generated by mobile phones interferes with the function of a SPECT γ-camera during data acquisition. We tested the effects of 7 models of mobile phones on 1 SPECT γ-camera. The mobile phones were tested when making a call, in ringing mode, and in standby mode. The γ-camera function was assessed during data acquisition from a planar source and a point source of Tc with activities of 10 mCi and 3 mCi, respectively. A significant visual decrease in count number was considered to be electromagnetic interference (EMI). The percentage of induced EMI with the γ-camera per mobile phone was in the range of 0% to 100%. The incidence of EMI was mainly observed in the first seconds of ringing and then mitigated in the following frames. Mobile phones are portable sources of electromagnetic radiation, and there is interference potential with the function of SPECT γ-cameras leading to adverse effects on the quality of the acquired images.
MobileFusion: real-time volumetric surface reconstruction and dense tracking on mobile phones.
Ondrúška, Peter; Kohli, Pushmeet; Izadi, Shahram
2015-11-01
We present the first pipeline for real-time volumetric surface reconstruction and dense 6DoF camera tracking running purely on standard, off-the-shelf mobile phones. Using only the embedded RGB camera, our system allows users to scan objects of varying shape, size, and appearance in seconds, with real-time feedback during the capture process. Unlike existing state of the art methods, which produce only point-based 3D models on the phone, or require cloud-based processing, our hybrid GPU/CPU pipeline is unique in that it creates a connected 3D surface model directly on the device at 25Hz. In each frame, we perform dense 6DoF tracking, which continuously registers the RGB input to the incrementally built 3D model, minimizing a noise aware photoconsistency error metric. This is followed by efficient key-frame selection, and dense per-frame stereo matching. These depth maps are fused volumetrically using a method akin to KinectFusion, producing compelling surface models. For each frame, the implicit surface is extracted for live user feedback and pose estimation. We demonstrate scans of a variety of objects, and compare to a Kinect-based baseline, showing on average ∼ 1.5cm error. We qualitatively compare to a state of the art point-based mobile phone method, demonstrating an order of magnitude faster scanning times, and fully connected surface models.
Arabic word recognizer for mobile applications
NASA Astrophysics Data System (ADS)
Khanna, Nitin; Abdollahian, Golnaz; Brame, Ben; Boutin, Mireille; Delp, Edward J.
2011-03-01
When traveling in a region where the local language is not written using a "Roman alphabet," translating written text (e.g., documents, road signs, or placards) is a particularly difficult problem since the text cannot be easily entered into a translation device or searched using a dictionary. To address this problem, we are developing the "Rosetta Phone," a handheld device (e.g., PDA or mobile telephone) capable of acquiring an image of the text, locating the region (word) of interest within the image, and producing both an audio and a visual English interpretation of the text. This paper presents a system targeted for interpreting words written in Arabic script. The goal of this work is to develop an autonomous, segmentation-free Arabic phrase recognizer, with computational complexity low enough to deploy on a mobile device. A prototype of the proposed system has been deployed on an iPhone with a suitable user interface. The system was tested on a number of noisy images, in addition to the images acquired from the iPhone's camera. It identifies Arabic words or phrases by extracting appropriate features and assigning "codewords" to each word or phrase. On a dictionary of 5,000 words, the system uniquely mapped (word-image to codeword) 99.9% of the words. The system has an 82% recognition accuracy on images of words captured using the iPhone's built-in camera.
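The codeword idea above can be illustrated with a toy version: map each word to a coarse shape signature and measure what fraction of a dictionary maps uniquely, which is the 99.9% figure the abstract reports for its real feature set. The Latin-letter ascender/descender signature below is invented for illustration and has nothing to do with the paper's Arabic features.

```python
from collections import Counter

def codeword(word):
    """Toy shape signature: word length plus a per-character class string
    (t = ascender, d = descender, x = neither). Hypothetical feature."""
    classes = {'b': 't', 'd': 't', 'f': 't', 'h': 't', 'k': 't',
               'l': 't', 't': 't',
               'g': 'd', 'j': 'd', 'p': 'd', 'q': 'd', 'y': 'd'}
    return str(len(word)) + ''.join(classes.get(c, 'x') for c in word)

def unique_rate(dictionary):
    """Fraction of dictionary words whose codeword is unambiguous."""
    counts = Counter(codeword(w) for w in dictionary)
    return sum(1 for w in dictionary if counts[codeword(w)] == 1) / len(dictionary)
```

With a weak feature, distinct words collide ("bird" and "fish" share a signature below); richer features drive the unique-mapping rate toward 100%.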
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many Computer Vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. This paper addresses the issues of acquisition, detection of moving objects, dynamic camera registration and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration using a disparity space parameterisation and the single-cluster PHD filter.
Stegmayr, Armin; Fessl, Benjamin; Hörtnagl, Richard; Marcadella, Michael; Perkhofer, Susanne
2013-08-01
The aim of the study was to assess the potential negative impact of cellular phones and digitally enhanced cordless telecommunication (DECT) devices on the quality of static and dynamic scintigraphy to avoid repeated testing in infant and teenage patients to protect them from unnecessary radiation exposure. The assessment was conducted by performing phantom measurements under real conditions. A functional renal-phantom acting as a pair of kidneys in dynamic scans was created. Data were collected using the setup of cellular phones and DECT phones placed in different positions in relation to a camera head to test the potential interference of cellular phones and DECT phones with the cameras. Cellular phones reproducibly interfered with the oldest type of gamma camera, which, because of its single-head specification, is the device most often used for renal examinations. Curves indicating the renal function were considerably disrupted; cellular phones as well as DECT phones showed a disturbance concerning static acquisition. Variable electromagnetic tolerance in different types of γ-cameras could be identified. Moreover, a straightforward, low-cost method of testing the susceptibility of equipment to interference caused by cellular phones and DECT phones was generated. Even though some departments use newer models of γ-cameras, which are less susceptible to electromagnetic interference, we recommend testing examination rooms to avoid any interference caused by cellular phones. The potential electromagnetic interference should be taken into account when the purchase of new sensitive medical equipment is being considered, not least because the technology of mobile communication is developing fast, which also means that different standards of wave bands will be issued in the future.
Camera/Video Phones in Schools: Law and Practice
ERIC Educational Resources Information Center
Parry, Gareth
2005-01-01
The emergence of mobile phones with built-in digital cameras is creating legal and ethical concerns for school systems throughout the world. Users of such phones can instantly email, print or post pictures to other MMS1 phones or websites. Local authorities and schools in Britain, Europe, USA, Canada, Australia and elsewhere have introduced…
Text recognition and correction for automated data collection by mobile devices
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
Participatory sensing is an approach which allows mobile devices such as mobile phones to be used for data collection, analysis and sharing processes by individuals. Data collection is the first and most important part of a participatory sensing system, but it is time consuming for the participants. In this paper, we discuss automatic data collection approaches for reducing the time required for collection and increasing the amount of collected data. In this context, we explore automated text recognition on images of store receipts captured by mobile phone cameras, and the correction of the recognized text. Accordingly, our first goal is to evaluate the performance of the Optical Character Recognition (OCR) method with respect to data collection from store receipt images. Images captured by mobile phones exhibit some typical problems, and common image processing methods cannot handle some of them. Consequently, the second goal is to address these types of problems through our proposed Knowledge Based Correction (KBC) method used in support of the OCR, and also to evaluate the KBC method with respect to the improvement in the accurate recognition rate. Results of the experiments show that the KBC method improves the accurate data recognition rate noticeably.
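One common form of knowledge-based correction is to snap each OCR token to the closest entry in a domain lexicon when the edit distance is small, which can be sketched as below. The lexicon and tokens are illustrative; the paper's KBC method is not specified here and may differ.

```python
def edit_distance(a, b):
    """Classic Levenshtein distance via dynamic programming (two rows)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def correct(token, lexicon, max_dist=2):
    """Replace an OCR token with its nearest lexicon entry if close enough;
    otherwise keep the token unchanged."""
    best = min(lexicon, key=lambda w: edit_distance(token, w))
    return best if edit_distance(token, best) <= max_dist else token

# hypothetical receipt vocabulary
lexicon = ["TOTAL", "MILK", "BREAD", "BUTTER"]
```

Typical camera-OCR confusions (O/0, I/1) then fall out as one-edit corrections, e.g. "T0TAL" snaps to "TOTAL".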
Smartphone adapters for digital photomicrography.
Roy, Somak; Pantanowitz, Liron; Amin, Milon; Seethala, Raja R; Ishtiaque, Ahmed; Yousem, Samuel A; Parwani, Anil V; Cucoranu, Ioan; Hartman, Douglas J
2014-01-01
Photomicrographs in Anatomic Pathology provide a means of quickly sharing information from a glass slide for consultation, education, documentation and publication. While static image acquisition historically involved the use of a permanently mounted camera unit on a microscope, such cameras may be expensive, need to be connected to a computer, and often require proprietary software to acquire and process images. Another novel approach for capturing digital microscopic images is to use smartphones coupled with the eyepiece of a microscope. Recently, several smartphone adapters have emerged that allow users to attach mobile phones to the microscope. The aim of this study was to test the utility of these various smartphone adapters. We surveyed the market for adapters to attach smartphones to the ocular lens of a conventional light microscope. Three adapters (Magnifi, Skylight and Snapzoom) were tested. We assessed the designs of these adapters and their effectiveness at acquiring static microscopic digital images. All adapters facilitated the acquisition of digital microscopic images with a smartphone. The optimal adapter was dependent on the type of phone used. The Magnifi adapters for iPhone were incompatible when using a protective case. The Snapzoom adapter was easiest to use with iPhones and other smartphones even with protective cases. Smartphone adapters are inexpensive and easy to use for acquiring digital microscopic images. However, they require some adjustment by the user in order to optimize focus and obtain good quality images. Smartphone microscope adapters provide an economically feasible method of acquiring and sharing digital pathology photomicrographs.
How Phoenix Creates Color Images (Animation)
NASA Technical Reports Server (NTRS)
2008-01-01
This simple animation shows how a color image is made from images taken by Phoenix. The Surface Stereo Imager captures the same scene with three different filters. The images are sent to Earth in black and white, and the color is added by mission scientists. By contrast, consumer digital cameras and cell phones have filters built in and do all of the color processing within the camera itself. The Phoenix Mission is led by the University of Arizona, Tucson, on behalf of NASA. Project management of the mission is by NASA's Jet Propulsion Laboratory, Pasadena, Calif. Spacecraft development is by Lockheed Martin Space Systems, Denver.
Augmented reality in laser laboratories
NASA Astrophysics Data System (ADS)
Quercioli, Franco
2018-05-01
Laser safety glasses block visibility of the laser light. This is a big nuisance when a clear view of the beam path is required. A headset made up of a smartphone and a viewer can overcome this problem. The user looks at the image of the real world on the cellphone display, captured by its rear camera. An unimpeded and safe sight of the laser beam is then achieved. If the infrared blocking filter of the smartphone camera is removed, the spectral sensitivity of the CMOS image sensor extends in the near infrared region up to 1100 nm. This substantial improvement widens the usability of the device to many laser systems for industrial and medical applications, which are located in this spectral region. The paper describes this modification of a phone camera to extend its sensitivity beyond the visible and make a true augmented reality laser viewer.
Image Sensors Enhance Camera Technologies
NASA Technical Reports Server (NTRS)
2010-01-01
In the 1990s, a Jet Propulsion Laboratory team led by Eric Fossum researched ways of improving complementary metal-oxide semiconductor (CMOS) image sensors in order to miniaturize cameras on spacecraft while maintaining scientific image quality. Fossum's team founded a company to commercialize the resulting CMOS active pixel sensor. Now called the Aptina Imaging Corporation, based in San Jose, California, the company has shipped over 1 billion sensors for use in applications such as digital cameras, camera phones, Web cameras, and automotive cameras. Today, one of every three cell phone cameras on the planet features Aptina's sensor technology.
Fundus imaging with a mobile phone: a review of techniques.
Shanmugam, Mahesh P; Mishra, Divyansh K C; Madhukumar, R; Ramanjulu, Rajesh; Reddy, Srinivasulu Y; Rodrigues, Gladys
2014-09-01
Fundus imaging with a fundus camera is an essential part of ophthalmic practice. A mobile phone with its in-built camera and flash can be used to obtain fundus images of reasonable quality. The mobile phone can be used as an indirect ophthalmoscope when coupled with a condensing lens. It can be used as a direct ophthalmoscope after minimal modification, wherein the fundus can be viewed without an intervening lens in young patients with dilated pupils. Employing the ubiquitous mobile phone to obtain fundus images has the potential for mass screening, enables ophthalmologists without a fundus camera to document and share findings, is a tool for telemedicine and is rather inexpensive.
Development of Portable Automatic Number Plate Recognition System on Android Mobile Phone
NASA Astrophysics Data System (ADS)
Mutholib, Abdul; Gunawan, Teddy S.; Chebil, Jalel; Kartiwi, Mira
2013-12-01
The Automatic Number Plate Recognition (ANPR) system plays a central role in access control and security applications such as tracking of stolen vehicles, traffic violation enforcement (speed traps) and parking management. In this paper, a portable ANPR system implemented on an Android mobile phone is presented. The main challenges in a mobile application include higher coding efficiency, reduced computational complexity, and improved flexibility. Significant effort has gone into finding a suitable and adaptive algorithm for implementing ANPR on a mobile phone. An ANPR system for a mobile phone needs to be optimized for the phone's limited CPU and memory resources, its ability to geo-tag captured images using GPS coordinates, and its ability to access an online database to store vehicle information. The design of the portable ANPR system is as follows. First, a graphical user interface (GUI) for capturing images with the built-in camera was developed to acquire Malaysian vehicle plate numbers. Second, the raw image was preprocessed using contrast enhancement. Next, character segmentation using a fixed pitch and optical character recognition (OCR) using a neural network were employed to extract text and numbers; both used the Tesseract library from Google Inc. The proposed portable ANPR algorithm was implemented and simulated using the Android SDK on a computer. Based on the experimental results, the proposed system recognizes license plate numbers with 90.86% accuracy, and the required processing time is only 2 seconds on average per plate. This compares well with previous systems running on a desktop PC, which achieved recognition rates of 91.59% to 98% with recognition times of 0.284 to 1.5 seconds.
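The fixed-pitch segmentation step mentioned above can be sketched in a few lines: a binarized plate region is cut into equal-width character cells. The array sizes and character count below are synthetic values for illustration, not taken from the paper.

```python
# Sketch of fixed-pitch character segmentation for an ANPR pipeline.
# The plate image here is a synthetic binary array; a real pipeline would
# binarize a camera crop first, then pass each slice to the OCR stage.
import numpy as np

def segment_fixed_pitch(plate: np.ndarray, n_chars: int) -> list:
    """Split a binarized plate image into n_chars equal-width slices."""
    h, w = plate.shape
    pitch = w // n_chars
    return [plate[:, i * pitch:(i + 1) * pitch] for i in range(n_chars)]

# Synthetic 8x28 "plate" holding 7 character cells of width 4
plate = np.zeros((8, 28), dtype=np.uint8)
chars = segment_fixed_pitch(plate, 7)
print(len(chars), chars[0].shape)  # -> 7 (8, 4)
```

Fixed-pitch slicing is cheap on a phone CPU, which is why it suits the resource constraints the paper emphasizes, at the cost of assuming a known plate format.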
Acceptance and perception of Nigerian patients to medical photography.
Adeyemo, W L; Mofikoya, B O; Akadiri, O A; James, O; Fashina, A A
2013-12-01
The aim of the study was to determine the acceptance and perception of Nigerian patients with regard to medical photography. A self-administered questionnaire was distributed among Nigerian patients attending the oral and maxillofacial surgery and plastic surgery clinics of 3 tertiary health institutions. Information requested included patients' opinions about the consent process, capturing equipment, and the distribution and accessibility of medical photographs. The use of non-identifiable medical photographs was more acceptable than identifiable ones to respondents for all purposes (P = 0.003). Most respondents were favourably disposed to photographs being taken for inclusion in the case note, but opposed to identifiable photographs being used for other purposes, especially on medical websites and in medical journals. Female respondents preferred non-identifiable medical photographs to identifiable ones (P = 0.001). Most respondents (78%) indicated that their consent should be sought for each of the outlined needs for medical photography. Half of the respondents indicated that identifiable photographs might have a negative effect on their persons; the most commonly mentioned effects were social stigmatization, bad publicity and emotional/psychological effects. Most of the respondents preferred the use of a hospital-owned camera to a personal camera or camera-phone for their medical photographs. Most respondents (67.8%) indicated that they would like to be informed about the use of their photographs on every occasion, and 74% indicated that they would like to be informed of the specific journal in which their medical photographs are to be published. In conclusion, non-identifiable rather than identifiable medical photography is acceptable to most patients in the studied Nigerian environment. The use of a personal camera or camera-phone should be discouraged as its acceptance by respondents is very low.
Judicious use of medical photography is therefore advocated to avoid breach of principle of privacy and confidentiality in medical practice. © 2012 John Wiley & Sons Ltd.
Wavefront measurement of plastic lenses for mobile-phone applications
NASA Astrophysics Data System (ADS)
Huang, Li-Ting; Cheng, Yuan-Chieh; Wang, Chung-Yen; Wang, Pei-Jen
2016-08-01
In camera lenses for mobile-phone applications, all lens elements are designed with aspheric surfaces because of the requirement for minimal total track length. Due to the diffraction-limited optical design with precision assembly procedures, element inspection and lens performance measurement have become cumbersome in the production of mobile-phone cameras. Recently, wavefront measurements based on Shack-Hartmann sensors have been successfully applied to injection-molded plastic lenses with aspheric surfaces. However, the application of wavefront measurement to small plastic lenses has yet to be studied both theoretically and experimentally. In this paper, both an in-house-built and a commercial wavefront measurement system, configured on two optical structures, have been investigated by measuring the wavefront aberrations of two lens elements from a mobile-phone camera. First, the wet-cell method was employed to verify aberrations due to residual birefringence in an injection-molded lens. Then, two lens elements of a mobile-phone camera with large positive and negative power were measured, with aberrations expressed in Zernike polynomials, to illustrate the effectiveness of wavefront measurement for troubleshooting defects in optical performance.
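Expressing a measured wavefront in Zernike terms, as done above, amounts to a least-squares fit of the sampled wavefront against the Zernike modes on the unit pupil. The sketch below uses only a few low-order Cartesian-form modes and a synthetic defocus wavefront; the mode set and grid size are illustrative choices, not the paper's.

```python
# Minimal sketch of fitting low-order Zernike coefficients to a sampled
# wavefront by least squares. Modes are in Cartesian form on the unit disk.
import numpy as np

def zernike_basis(x, y):
    """A few low-order Zernike modes evaluated at points (x, y)."""
    r2 = x**2 + y**2
    return np.stack([
        np.ones_like(x),   # piston
        x,                 # tilt x
        y,                 # tilt y
        2 * r2 - 1,        # defocus
        x**2 - y**2,       # astigmatism 0/90
        2 * x * y,         # astigmatism 45
    ], axis=-1)

# Synthetic wavefront: pure defocus with coefficient 0.5
n = 64
xs, ys = np.meshgrid(np.linspace(-1, 1, n), np.linspace(-1, 1, n))
mask = xs**2 + ys**2 <= 1.0
A = zernike_basis(xs[mask], ys[mask])
w = 0.5 * (2 * (xs[mask]**2 + ys[mask]**2) - 1)
coeffs, *_ = np.linalg.lstsq(A, w, rcond=None)
print(np.round(coeffs, 3))  # defocus coefficient ~0.5, all others ~0
```

In practice the wavefront samples come from the Shack-Hartmann slope reconstruction, and many more modes are fitted, but the linear-algebra core is the same.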
Expression transmission using exaggerated animation for Elfoid
Hori, Maiya; Tsuruda, Yu; Yoshimura, Hiroki; Iwai, Yoshio
2015-01-01
We propose an expression transmission system using a cellular-phone-type teleoperated robot called Elfoid. Elfoid has a soft exterior that provides the look and feel of human skin, and is designed to transmit the speaker's presence to their communication partner using a camera and microphone. To transmit the speaker's presence, Elfoid sends not only the voice of the speaker but also the facial expression captured by the camera. In this research, facial expressions are recognized using a machine learning technique. Elfoid cannot, however, display facial expressions because of its compactness and a lack of sufficiently small actuator motors. To overcome this problem, facial expressions are displayed using Elfoid's head-mounted mobile projector. In an experiment, we built a prototype system and experimentally evaluated its subjective usability. PMID:26347686
Embedded processor extensions for image processing
NASA Astrophysics Data System (ADS)
Thevenin, Mathieu; Paindavoine, Michel; Letellier, Laurent; Heyrman, Barthélémy
2008-04-01
The advent of camera phones marks a new phase in embedded camera sales. By late 2009, the total number of camera phones will exceed that of both conventional and digital cameras shipped since the invention of photography. The use in mobile phones of applications like visiophony, matrix code readers and biometrics requires a degree of component flexibility that image processors (IPs) have not, to date, been able to provide. For all these reasons, programmable processor solutions have become essential. This paper presents several techniques geared to speeding up image processors. It demonstrates that a twofold gain is achievable for the complete image acquisition chain and the enhancement pipeline downstream of the video sensor. Such results confirm the potential of these computing systems for supporting future applications.
Estimation of Antenna Pose in the Earth Frame Using Camera and IMU Data from Mobile Phones
Wang, Zhen; Jin, Bingwen; Geng, Weidong
2017-01-01
The poses of base station antennas play an important role in cellular network optimization. Existing methods of pose estimation are based on physical measurements performed either by tower climbers or using additional sensors attached to antennas. In this paper, we present a novel non-contact method of antenna pose measurement based on multi-view images of the antenna and inertial measurement unit (IMU) data captured by a mobile phone. Given a known 3D model of the antenna, we first estimate the antenna pose relative to the phone camera from the multi-view images and then employ the corresponding IMU data to transform the pose from the camera coordinate frame into the Earth coordinate frame. To enhance the resulting accuracy, we improve existing camera-IMU calibration models by introducing additional degrees of freedom between the IMU sensors and defining a new error metric based on both the downtilt and azimuth angles, instead of a unified rotational error metric, to refine the calibration. In comparison with existing camera-IMU calibration methods, our method achieves an improvement in azimuth accuracy of approximately 1.0 degree on average while maintaining the same level of downtilt accuracy. For the pose estimation in the camera coordinate frame, we propose an automatic method of initializing the optimization solver and generating bounding constraints on the resulting pose to achieve better accuracy. With this initialization, state-of-the-art visual pose estimation methods yield satisfactory results in more than 75% of cases when plugged into our pipeline, and our solution, which takes advantage of the constraints, achieves even lower estimation errors on the downtilt and azimuth angles, both on average (0.13 and 0.3 degrees lower, respectively) and in the worst case (0.15 and 7.3 degrees lower, respectively), according to an evaluation conducted on a dataset consisting of 65 groups of data. 
We show that both of our enhancements contribute to the performance improvement offered by the proposed estimation pipeline, which achieves downtilt and azimuth accuracies of respectively 0.47 and 5.6 degrees on average and 1.38 and 12.0 degrees in the worst case, thereby satisfying the accuracy requirements for network optimization in the telecommunication industry. PMID:28397765
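The core frame transformation in the pipeline above, chaining the camera-frame antenna pose with the IMU-derived camera orientation and reading off downtilt and azimuth, can be sketched as follows. The rotation conventions, the boresight axis, and the example tilt are assumptions for illustration; the paper's additional camera-IMU calibration degrees of freedom are omitted.

```python
# Simplified sketch: chain the antenna pose in the camera frame with the
# camera orientation in an ENU Earth frame, then extract downtilt and
# azimuth of the antenna boresight. Conventions here are assumptions.
import numpy as np

def downtilt_azimuth(R_earth_cam: np.ndarray, R_cam_ant: np.ndarray):
    """Return (downtilt, azimuth) in degrees for the antenna boresight."""
    R_earth_ant = R_earth_cam @ R_cam_ant
    # Assume the antenna faces along its +y axis when level
    boresight = R_earth_ant @ np.array([0.0, 1.0, 0.0])
    east, north, up = boresight
    azimuth = np.degrees(np.arctan2(east, north)) % 360.0
    downtilt = np.degrees(np.arcsin(-up))  # boresight is a unit vector
    return downtilt, azimuth

# Identity camera orientation; antenna tilted 10 degrees down, facing north
t = np.radians(10.0)
R_cam_ant = np.array([[1.0, 0.0, 0.0],
                      [0.0,  np.cos(t), np.sin(t)],
                      [0.0, -np.sin(t), np.cos(t)]])
dt, az = downtilt_azimuth(np.eye(3), R_cam_ant)
print(round(dt, 1), round(az, 1))  # -> 10.0 0.0
```

The reported accuracy gap between downtilt and azimuth is consistent with this geometry: azimuth depends on the horizontal components of the boresight, which degrade quickly as errors accumulate through the chained rotations.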
Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan
2015-03-07
Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field-of-view: ~0.8 cm(2)) is captured and wirelessly transmitted via the mobile-phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile-phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. 
We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture efficiency of ~79% on our filter membrane along with a machine learning based cyst counting sensitivity of ~84%, yielding a limit-of-detection of ~12 cysts per 10 mL. Providing rapid detection and quantification of microorganisms, this field-portable imaging and sensing platform running on a mobile-phone could be useful for water quality monitoring in field and resource-limited settings.
Waran, Vicknes; Selladurai, Benedict M; Bahuri, Nor Faizal Ahmad; George, George John K Thomas; Lim, Grace P S; Khine, Myo
2008-02-01
We present our initial experience with a simple and relatively cost-effective system that uses existing mobile phone network services and conventional handphones with built-in cameras to capture carefully selected images from hard copies of scans, and to transfer these images from a hospital without neurosurgical services to a university hospital with a tertiary neurosurgical service for consultation and management planning. A total of 14 patients with acute neurosurgical problems admitted to a general hospital over a 6-month period had their images photographed and transferred in JPEG format to a university neurosurgical unit. This was accompanied by a phone conference between the neurosurgeon and the referring physician to discuss the scan and the patient's condition. All images were also reviewed by a second independent neurosurgeon on a separate occasion to assess agreement on the diagnosis and the management plan. There were nine patients with acute head injury and five patients with acute non-traumatic neurosurgical problems. In all cases both neurosurgeons agreed that a diagnosis could be made on the basis of the transferred images. With respect to the management advice, there were differences of opinion on three of the patients, but these were considered minor. Accurate diagnoses can be made from images of acute neurosurgical problems transferred using a conventional camera phone, and meaningful decisions can be made on these images. This method of consultation also proved highly convenient and cost-effective.
Image registration for multi-exposed HDRI and motion deblurring
NASA Astrophysics Data System (ADS)
Lee, Seok; Wey, Ho-Cheon; Lee, Seong-Deok
2009-02-01
In multi-exposure image fusion tasks, alignment is an essential prerequisite to prevent ghost artifacts after blending. Compared to the usual matching problem, registration is more difficult when the images are captured under different photographing conditions. In HDR imaging, we use long- and short-exposure images, which differ in brightness and contain over- or under-saturated regions. In the motion deblurring problem, we use a blurred and noisy image pair, and the amount of motion blur varies from one image to another due to the different exposure times. The main difficulty is that the luminance levels of the two images are not linearly related; we cannot perfectly equalize or normalize the brightness of each image, which leads to unstable and inaccurate alignment. To solve this problem, we apply a probabilistic measure, mutual information, to represent the similarity between images after alignment. In this paper, we describe the characteristics of multi-exposed input images from the registration viewpoint and analyze the magnitude of camera hand shake. By exploiting the luminance independence of mutual information, we propose a fast and practically useful image registration technique for multiple capturing. Our algorithm can be applied to extreme HDR scenes and motion-blurred scenes with over a 90% success rate, and its simplicity enables it to be embedded in digital cameras and mobile camera phones. The effectiveness of our registration algorithm is examined through various experiments on real HDR and motion deblurring cases using a hand-held camera.
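The mutual-information score that makes this registration robust to exposure differences can be computed from the joint histogram of the two images. The sketch below is a generic implementation of that similarity measure, not the paper's exact estimator; the bin count and image sizes are illustrative choices.

```python
# Sketch of the mutual-information similarity score between two images.
# MI stays high when one image is a monotone (even clipped) remapping of
# the other, which is exactly the multi-exposure situation.
import numpy as np

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 32) -> float:
    """Mutual information in bits between two equally sized images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)   # marginal of image b
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64)).astype(float)
bright = np.clip(img * 1.8 + 20, 0, 255)              # same scene, longer exposure
noise = rng.integers(0, 256, size=(64, 64)).astype(float)  # unrelated content
# The exposure pair scores far higher than the unrelated pair
print(mutual_information(img, bright), mutual_information(img, noise))
```

Because the score depends only on the joint statistics of intensities, no brightness equalization between the differently exposed inputs is needed before comparing candidate alignments.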
New technology in dietary assessment: a review of digital methods in improving food record accuracy.
Stumbo, Phyllis J
2013-02-01
Methods for conducting dietary assessment in the United States date back to the early twentieth century. Methods of assessment encompassed dietary records, written and spoken dietary recalls, FFQ using pencil and paper and more recently computer and internet applications. Emerging innovations involve camera and mobile telephone technology to capture food and meal images. This paper describes six projects sponsored by the United States National Institutes of Health that use digital methods to improve food records and two mobile phone applications using crowdsourcing. The techniques under development show promise for improving accuracy of food records.
Double biprism arrays design using for stereo-photography of mobile phone camera
NASA Astrophysics Data System (ADS)
Sun, Wen-Shing; Chu, Pu-Yi; Chao, Yu-Hao; Pan, Jui-Wen; Tien, Chuen-Lin
2016-11-01
Generally, a mobile phone uses a single camera to capture images, which makes it hard to obtain a stereo image pair. Adding a biprism array makes capturing an image pair easy: users can attach the biprism array to take stereo images anywhere with their mobile phone, and simply remove it to take normal images. However, biprism arrays introduce chromatic aberration. We therefore design double biprism arrays to reduce this chromatic aberration.
Image-based mobile service: automatic text extraction and translation
NASA Astrophysics Data System (ADS)
Berclaz, Jérôme; Bhatti, Nina; Simske, Steven J.; Schettino, John C.
2010-01-01
We present a new mobile service for the translation of text from images taken by consumer-grade cell-phone cameras. Such capability represents a new paradigm for users where a simple image provides the basis for a service. The ubiquity and ease of use of cell-phone cameras enables acquisition and transmission of images anywhere and at any time a user wishes, delivering rapid and accurate translation over the phone's MMS and SMS facilities. Target text is extracted completely automatically, requiring no bounding box delineation or related user intervention. The service uses localization, binarization, text deskewing, and optical character recognition (OCR) in its analysis. Once the text is translated, an SMS message is sent to the user with the result. Further novelties include that no software installation is required on the handset, any service provider or camera phone can be used, and the entire service is implemented on the server side.
Near-infrared fluorescence imaging with a mobile phone (Conference Presentation)
NASA Astrophysics Data System (ADS)
Ghassemi, Pejhman; Wang, Bohan; Wang, Jianting; Wang, Quanzeng; Chen, Yu; Pfefer, T. Joshua
2017-03-01
Mobile phone cameras employ sensors with near-infrared (NIR) sensitivity, yet this capability has not been exploited for biomedical purposes. Removing the IR-blocking filter from a phone-based camera opens the door to a wide range of techniques and applications for inexpensive, point-of-care biophotonic imaging and sensing. This study provides proof of principle for one of these modalities: phone-based NIR fluorescence imaging. An imaging system was assembled using a 780 nm light source along with excitation and emission filters with 800 nm and 825 nm cut-off wavelengths, respectively. Indocyanine green (ICG) was used as an NIR fluorescence contrast agent in an ex vivo rodent model, a resolution test target and a 3D-printed, tissue-simulating vascular phantom. Raw and processed images for the red, green and blue pixel channels were analyzed for quantitative evaluation of fundamental performance characteristics including spectral sensitivity, detection linearity and spatial resolution. Mobile phone results were compared with those of a scientific CCD. The spatial resolution of the CCD system was consistently superior to that of the phone, and the green phone camera pixels showed better resolution than the blue or red channels. The CCD exhibited sensitivity similar to that of the processed red and blue pixel channels, yet a greater degree of detection linearity. Raw phone pixel data showed lower sensitivity but greater linearity than processed data. Overall, both qualitative and quantitative results provided strong evidence of the potential of phone-based NIR imaging, which may lead to a wide range of applications from cancer detection to glucose sensing.
Coughlan, James; Manduchi, Roberto
2009-01-01
We describe a wayfinding system for blind and visually impaired persons that uses a camera phone to determine the user's location with respect to color markers, posted at locations of interest (such as offices), which are automatically detected by the phone. The color marker signs are specially designed to be detected in real time in cluttered environments using computer vision software running on the phone; a novel segmentation algorithm quickly locates the borders of the color marker in each image, which allows the system to calculate how far the marker is from the phone. We present a model of how the user's scanning strategy (i.e. how he/she pans the phone left and right to find color markers) affects the system's ability to detect color markers given the limitations imposed by motion blur, which is always a possibility whenever a camera is in motion. Finally, we describe experiments with our system tested by blind and visually impaired volunteers, demonstrating their ability to reliably use the system to find locations designated by color markers in a variety of indoor and outdoor environments, and elucidating which search strategies were most effective for users. PMID:19960101
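Once the marker's borders are located in the image, the distance estimate described above reduces to a pinhole-camera relation between the marker's known physical size and its apparent size in pixels. The focal length and marker dimensions below are example values, not taken from the paper.

```python
# Hedged sketch of distance-from-marker-size under a pinhole camera model.
# focal_px is the focal length in pixels; marker_width_m is the marker's
# known physical width; width_px is its measured width in the image.
def marker_distance(focal_px: float, marker_width_m: float, width_px: float) -> float:
    """Distance (meters) to a marker of known width under a pinhole model."""
    return focal_px * marker_width_m / width_px

# Example: 600 px focal length, 15 cm marker appearing 30 px wide -> 3 m
print(marker_distance(600.0, 0.15, 30.0))  # -> 3.0
```

This is also why motion blur matters so much in the system: blur smears the marker's borders, corrupting the measured pixel width and hence the distance estimate.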
Assessment of soil health and fertility indicators with mobile phone imagery
NASA Astrophysics Data System (ADS)
Aitkenhead, Matt; Gwatkin, Richard; Coull, Malcolm; Donnelly, David
2015-04-01
Work on rapid soil assessment in the field has led to many hand-held sensors for soil monitoring (e.g. NIR, FTIR, XRF). Recent work by a research team at the James Hutton Institute has led to an integrated framework of mobile phones, apps and server-side processing. One example of this is the SOCIT app for estimating soil organic matter and carbon using geolocated mobile phone camera imagery. The SOCIT app is only applicable for agricultural soils in Scotland, and our intention is to expand this work both geographically and in functional ability. Ongoing work for the development of a prototype app for estimating soil characteristics across Europe using mobile phone imagery and the JRC LUCAS dataset will be described. Additionally, we will demonstrate recent work in estimating a number of soil health indicators from more detailed analysis of soil photographs. Accuracy levels achieved for estimating soil organic matter and organic carbon content, pH, structure, cation exchange capacity and texture vary and are not as good as those achieved with laboratory analysis, but are suitable for rapid field-based assessment. Issues relating to this work include colour stabilisation and calibration, integration with data on site characteristics, data processing, model development and the ethical use of data captured by others, and each of these topics will also be discussed.
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.
Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-10-21
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
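The shift-and-add fusion step at the heart of this reconstruction can be sketched as follows: each captured frame is registered by undoing its shift, and the aligned frames are averaged. In the sketch the shifts are given directly and are integer pixel offsets; in the real system they follow from the taper rotation, and the Android application estimates them from the frames themselves.

```python
# Sketch of multi-frame shift-and-add fusion: undo each frame's (dy, dx)
# shift, then average. Integer shifts and a synthetic point source are
# used here for illustration.
import numpy as np

def shift_and_add(frames, shifts):
    """Average frames after undoing each frame's (dy, dx) shift."""
    acc = np.zeros_like(frames[0], dtype=float)
    for frame, (dy, dx) in zip(frames, shifts):
        acc += np.roll(frame, (-dy, -dx), axis=(0, 1))
    return acc / len(frames)

# A single bright pixel observed in three frames at different offsets
base = np.zeros((8, 8))
base[4, 4] = 1.0
offsets = [(0, 0), (1, 0), (0, 1)]
frames = [np.roll(base, (dy, dx), axis=(0, 1)) for dy, dx in offsets]
fused = shift_and_add(frames, offsets)
print(fused[4, 4])  # -> 1.0
```

Averaging correctly aligned frames suppresses the fixed-pattern sampling artefacts of the fiber array while reinforcing the true image content, which is how the multi-frame step improves on any single contact image.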
Miniature Spatial Heterodyne Raman Spectrometer with a Cell Phone Camera Detector.
Barnett, Patrick D; Angel, S Michael
2017-05-01
A spatial heterodyne Raman spectrometer (SHRS) with millimeter-sized optics has been coupled with a standard cell phone camera as a detector for Raman measurements. The SHRS is a dispersive-based interferometer with no moving parts, and the design is amenable to miniaturization while maintaining high resolution and large spectral range. In this paper, an SHRS with 2.5 mm diffraction gratings has been developed with a theoretical spectral resolution of 17.5 cm-1. The footprint of the SHRS is orders of magnitude smaller than the footprint of the charge-coupled device (CCD) detectors typically employed in Raman spectrometers; thus, smaller detectors are being explored to shrink the entire spectrometer package. This paper describes the performance of an SHRS with 2.5 mm wide diffraction gratings and a cell phone camera detector, using only the cell phone's built-in optics to couple the output of the SHRS to the sensor. Raman spectra of a variety of samples measured with the cell phone are compared to measurements made using the same miniature SHRS with high-quality imaging optics and a high-quality, scientific-grade, thermoelectrically cooled CCD.
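The heterodyne principle behind the SHRS can be sketched numerically: each wavenumber offset d from the Littrow wavenumber produces a fringe pattern whose spatial frequency is proportional to d, so a Fourier transform of the fringe image recovers the spectrum. The grating width below matches the paper's 2.5 mm, but the Littrow angle, line positions, and pixel count are assumed values for illustration only:

```python
import numpy as np

# Toy SHRS: a line offset d (cm^-1) from the Littrow wavenumber produces
# fringes of spatial frequency f = 4 * d * tan(theta_L) across the gratings.
N = 1024                # detector pixels across the grating image
width_cm = 0.25         # 2.5 mm grating width
tan_theta = 0.1         # assumed tangent of the Littrow angle
x = np.linspace(0, width_cm, N, endpoint=False)

def fringe_freq(d_cm1):
    """Fringe spatial frequency (fringes/cm) for wavenumber offset d."""
    return 4.0 * d_cm1 * tan_theta

offsets = [500.0, 1200.0]   # two synthetic Raman lines, cm^-1 from Littrow
igram = sum(1.0 + np.cos(2 * np.pi * fringe_freq(d) * x) for d in offsets)

# FFT of the fringe image recovers the spectrum on a fringes/cm axis,
# which maps back to wavenumber offsets via the relation above.
spec = np.abs(np.fft.rfft(igram - igram.mean()))
freqs = np.fft.rfftfreq(N, d=width_cm / N)
recovered_cm1 = freqs[spec.argmax()] / (4.0 * tan_theta)
```

Note that the spectral sampling interval in this toy setup is set by the grating width, which is why a miniature SHRS can retain useful resolution.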
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light capture results in noise amplification, motion blur, and reduction of depth-of-field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
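A toy version of the fusion stage (sigmoidal boosting of short exposures, then a saturation-aware weighted average) might look like the following; the weight function, gain, and saturation threshold are illustrative choices, not the paper's JPEG-domain implementation:

```python
import numpy as np

def sigmoid_boost(img, gain=6.0, mid=0.35):
    """Sigmoidally boost a short-exposure image (pixel values in [0, 1])."""
    return 1.0 / (1.0 + np.exp(-gain * (img - mid)))

def fuse(images, sat=0.95):
    """Weighted average favouring well-exposed, unsaturated pixels."""
    acc = np.zeros_like(images[0], dtype=float)
    wsum = np.zeros_like(acc)
    for img in images:
        # weight peaks at mid-gray and drops to zero near saturation
        w = np.exp(-((img - 0.5) ** 2) / 0.08) * (img < sat)
        acc += w * img
        wsum += w
    return acc / np.maximum(wsum, 1e-9)
```

A real single-pass JPEG-domain version would apply the same weighting logic per macroblock on dequantized DCT coefficients rather than on decoded pixels.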
Palmprint Recognition Across Different Devices.
Jia, Wei; Hu, Rong-Xiang; Gui, Jie; Zhao, Yang; Ren, Xiao-Ming
2012-01-01
In this paper, the problem of Palmprint Recognition Across Different Devices (PRADD) is investigated, which has not been well studied so far. Since there is no publicly available PRADD image database, we created a non-contact PRADD image database containing 12,000 grayscale images captured from 100 subjects using three devices, i.e., one digital camera and two smart-phones. Due to the non-contact image acquisition used, rotation and scale changes between different images captured from the same palm are inevitable. We propose a robust method to calculate the palm width, which can be effectively used for scale normalization of palmprints. On this PRADD image database, we evaluate the recognition performance of three different methods, i.e., a subspace learning method, a correlation method, and an orientation coding based method. Experimental results show that orientation coding based methods achieved promising recognition performance for PRADD.
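The scale-normalization idea (rescale each palmprint so the measured palm width spans a fixed pixel count) can be sketched as follows; the nearest-neighbor resampling and canonical width are assumptions for illustration, not the authors' method:

```python
import numpy as np

def normalize_scale(img, palm_width_px, canonical_px=128):
    """Rescale a palmprint so the measured palm width spans canonical_px."""
    s = canonical_px / palm_width_px
    h, w = img.shape
    nh, nw = max(1, int(round(h * s))), max(1, int(round(w * s)))
    # nearest-neighbor resampling, enough to illustrate the idea
    rows = np.clip((np.arange(nh) / s).astype(int), 0, h - 1)
    cols = np.clip((np.arange(nw) / s).astype(int), 0, w - 1)
    return img[np.ix_(rows, cols)]
```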
3D Modelling with the Samsung Gear 360
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2017-02-01
The Samsung Gear 360 is a consumer-grade spherical camera able to capture photos and videos. The aim of this work is to test the metric accuracy and the level of detail achievable with the Samsung Gear 360 coupled with digital modelling techniques based on photogrammetry/computer vision algorithms. Results demonstrate that direct use of the equirectangular projection generated inside the mobile phone or with Gear 360 Action Director (the desktop software for post-processing) yields relatively low metric accuracy. As these results were in contrast with the accuracy achieved by using the original fisheye images (front- and rear-facing images) in photogrammetric reconstructions, an alternative solution to generate the equirectangular projections was developed. A calibration aimed at estimating the intrinsic parameters of the camera's two lenses, as well as their relative orientation, allowed new equirectangular projections to be generated, from which a significant improvement in geometric accuracy was achieved.
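Generating an equirectangular projection from the two fisheye images amounts to mapping each spherical direction into one of the fisheye frames; below is a sketch under an ideal equidistant lens model (r = f·θ), which ignores the distortion and relative-orientation parameters the paper actually calibrates:

```python
import numpy as np

def equirect_to_fisheye(lon, lat, f_px, cx, cy):
    """Map a spherical direction (lon, lat in radians) into an ideal
    equidistant fisheye image whose optical axis is +z: r = f * theta."""
    x = np.cos(lat) * np.sin(lon)
    y = np.sin(lat)
    z = np.cos(lat) * np.cos(lon)
    theta = np.arccos(np.clip(z, -1.0, 1.0))  # angle off the optical axis
    phi = np.arctan2(y, x)                    # azimuth around the axis
    r = f_px * theta
    return cx + r * np.cos(phi), cy + r * np.sin(phi)
```

Directions with z < 0 would be looked up in the rear-facing lens instead; blending the two lenses across the seam is where the calibrated relative orientation matters most.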
Capturing Fine Details Involving Low-Cost Sensors -a Comparative Study
NASA Astrophysics Data System (ADS)
Rehany, N.; Barsi, A.; Lovas, T.
2017-11-01
Capturing the fine details on the surface of small objects is a real challenge for many conventional surveying methods. Our paper discusses the investigation of several data acquisition technologies, such as an arm scanner, a structured light scanner, a terrestrial laser scanner, an object line-scanner, a DSLR camera, and a mobile phone camera. A palm-sized embossed sculpture reproduction was used as a test object; it was surveyed with all the instruments. The resulting point clouds and meshes were then analyzed, using the arm scanner's dataset as reference. In addition to general statistics, the results were evaluated using both 3D deviation maps and 2D deviation graphs; the latter allow even more accurate analysis of the characteristics of the different data acquisition approaches. Additionally, custom local-minimum maps were created that visualize the potential level of detail provided by the applied technologies. Besides the usual geometric assessment, the paper discusses the different resource needs (cost, time, expertise) of the discussed techniques. Our results prove that even amateur sensors operated by amateur users can provide high-quality datasets that enable engineering analysis. Based on the results, the paper contains an outlook on potential future investigations in this field.
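The 3D deviation analysis against the arm-scanner reference reduces to nearest-neighbor distances between point clouds; a brute-force sketch follows (a KD-tree would be used for real scan data, and the summary statistics are typical choices rather than the paper's exact ones):

```python
import numpy as np

def deviation_map(test_pts, ref_pts):
    """Distance from each test point to its nearest reference point.

    Brute force O(n*m); a KD-tree would be used for real scan data.
    """
    d = np.linalg.norm(test_pts[:, None, :] - ref_pts[None, :, :], axis=2)
    return d.min(axis=1)

def summarize(dev):
    """Summary statistics of the kind reported in a deviation analysis."""
    return {"mean": float(dev.mean()),
            "rms": float(np.sqrt((dev ** 2).mean())),
            "p95": float(np.percentile(dev, 95))}
```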
Development of mobile phone based transcutaneous bilirubinometry
NASA Astrophysics Data System (ADS)
Dumont, Alexander P.; Harrison, Brandon; McCormick, Zachary T.; Ganesh Kumar, Nishant; Patil, Chetan A.
2017-03-01
Infants in the US are routinely screened for risk of neurodevelopmental impairment due to neonatal jaundice using transcutaneous bilirubinometry (TcB). In low-resource settings, such as sub-Saharan Africa, TcB devices are not common, however, mobile camera-phones are now widespread. We provide an update on the development of TcB using the built-in camera and flash of a mobile phone, along with a snap-on adapter containing optical filters. We will present Monte Carlo Extreme modeling of diffuse reflectance in neonatal skin, implications in design, and refined analysis methods.
Mobile phone based mini-spectrometer for rapid screening of skin cancer
NASA Astrophysics Data System (ADS)
Das, Anshuman; Swedish, Tristan; Wahi, Akshat; Moufarrej, Mira; Noland, Marie; Gurry, Thomas; Aranda-Michel, Edgar; Aksel, Deniz; Wagh, Sneha; Sadashivaiah, Vijay; Zhang, Xu; Raskar, Ramesh
2015-06-01
We demonstrate a highly sensitive mobile phone based spectrometer that has the potential to detect cancerous skin lesions in a rapid, non-invasive manner. Earlier reports of low-cost spectrometers utilize the camera of the mobile phone to image the field after it has passed through a diffraction grating. These approaches are inherently limited by the closed nature of mobile phone image sensors and built-in optical elements. The system presented uses a novel integrated grating and sensor that is compact, accurate, and calibrated. Resolutions of about 10 nm can be achieved. Additionally, UV and visible LED excitation sources are built into the device. Data collection and analysis are simplified using the wireless interfaces and logical control on the smart phone. Furthermore, by utilizing an external sensor, the mobile phone camera can be used in conjunction with spectral measurements. We are exploring ways to use this device to measure endogenous fluorescence of skin in order to distinguish cancerous from non-cancerous lesions with a mobile phone based dermatoscope.
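The calibrated pixel-to-wavelength mapping such a mini-spectrometer relies on is typically a low-order polynomial fit to known reference lines; a sketch (the line positions below are made up for illustration):

```python
import numpy as np

def calibrate(pixels, wavelengths_nm, deg=2):
    """Fit a pixel-index -> wavelength mapping from known reference lines.

    Returns a callable polynomial; a low-order fit is the usual choice
    for compact grating spectrometers.
    """
    return np.poly1d(np.polyfit(pixels, wavelengths_nm, deg))
```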
ERIC Educational Resources Information Center
Koesdjojo, Myra T.; Pengpumkiat, Sumate; Wu, Yuanyuan; Boonloed, Anukul; Huynh, Daniel; Remcho, Thomas P.; Remcho, Vincent T.
2015-01-01
We have developed a simple and direct method to fabricate paper-based microfluidic devices that can be used for a wide range of colorimetric assay applications. With these devices, assays can be performed within minutes to allow for quantitative colorimetric analysis by use of a widely accessible iPhone camera and an RGB color reader application…
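Quantitative colorimetric analysis from an RGB camera image usually reduces to a pseudo-absorbance against a blank patch and a linear calibration curve; a hedged sketch (the channel choice and the Beer-Lambert-style linear model are assumptions, not the article's exact procedure):

```python
import numpy as np

def absorbance(spot, blank, channel=1):
    """Pseudo-absorbance of an assay spot vs. a blank from one RGB channel
    (channel 1 = green); both inputs are HxWx3 arrays."""
    s = spot[..., channel].mean()
    b = blank[..., channel].mean()
    return -np.log10(max(s, 1.0) / max(b, 1.0))

def fit_curve(absorbances, concentrations):
    """Linear calibration A = m*C + c; returns (m, c)."""
    m, c = np.polyfit(concentrations, absorbances, 1)
    return m, c
```

Once (m, c) are known, an unknown sample's concentration follows from C = (A - c) / m.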
A novel smartphone ophthalmic imaging adapter: User feasibility studies in Hyderabad, India
Ludwig, Cassie A; Murthy, Somasheila I; Pappuru, Rajeev R; Jais, Alexandre; Myung, David J; Chang, Robert T
2016-01-01
Aim of Study: To evaluate the ability of ancillary health staff to use a novel smartphone imaging adapter system (EyeGo, now known as Paxos Scope) to capture images of sufficient quality to exclude emergent eye findings. Secondary aims were to assess user and patient experiences during image acquisition, interuser reproducibility, and subjective image quality. Materials and Methods: The system captures images using a macro lens and an indirect ophthalmoscopy lens coupled with an iPhone 5S. We conducted a prospective cohort study of 229 consecutive patients presenting to L. V. Prasad Eye Institute, Hyderabad, India. Primary outcome measure was mean photographic quality (FOTO-ED study 1–5 scale, 5 best). 210 patients and eight users completed surveys assessing comfort and ease of use. For 46 patients, two users imaged the same patient's eyes sequentially. For 182 patients, photos taken with the EyeGo system were compared to images taken by existing clinic cameras: a BX 900 slit-lamp with a Canon EOS 40D Digital Camera and an FF 450 plus Fundus Camera with VISUPAC™ Digital Imaging System. Images were graded post hoc by a reviewer blinded to diagnosis. Results: Nine users acquired 719 useable images and 253 videos of 229 patients. Mean image quality was ≥ 4.0/5.0 (able to exclude subtle findings) for all users. 8/8 users and 189/210 patients surveyed were comfortable with the EyeGo device on a 5-point Likert scale. For 21 patients imaged with the anterior adapter by two users, a weighted κ of 0.597 (95% confidence interval: 0.389–0.806) indicated moderate reproducibility. High level of agreement between EyeGo and existing clinic cameras (92.6% anterior, 84.4% posterior) was found. Conclusion: The novel, ophthalmic imaging system is easily learned by ancillary eye care providers, well tolerated by patients, and captures high-quality images of eye findings. PMID:27146928
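The interuser reproducibility statistic quoted above is a weighted Cohen's kappa; a self-contained sketch follows (the study does not state its weighting scheme, so the linear weights here are an assumption):

```python
import numpy as np

def weighted_kappa(a, b, k, weights="linear"):
    """Weighted Cohen's kappa for two raters scoring items on scale 0..k-1."""
    O = np.zeros((k, k))                          # observed agreement matrix
    for i, j in zip(a, b):
        O[i, j] += 1.0
    O /= O.sum()
    E = np.outer(O.sum(axis=1), O.sum(axis=0))    # chance agreement
    d = np.abs(np.subtract.outer(np.arange(k), np.arange(k)))
    W = d / (k - 1) if weights == "linear" else (d / (k - 1)) ** 2
    return 1.0 - (W * O).sum() / (W * E).sum()
```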
Quantum dot enabled detection of Escherichia coli using a cell-phone.
Zhu, Hongying; Sikora, Uzair; Ozcan, Aydogan
2012-06-07
We report a cell-phone based Escherichia coli (E. coli) detection platform for screening of liquid samples. In this compact and cost-effective design attached to a cell-phone, we utilize anti-E. coli O157:H7 antibody functionalized glass capillaries as solid substrates to perform a quantum dot based sandwich assay for specific detection of E. coli O157:H7 in liquid samples. Using battery-powered inexpensive light-emitting-diodes (LEDs) we excite/pump these labelled E. coli particles captured on the capillary surface, where the emission from the quantum dots is then imaged using the cell-phone camera unit through an additional lens that is inserted between the capillary and the cell-phone. By quantifying the fluorescent light emission from each capillary tube, the concentration of E. coli in the sample is determined. We experimentally confirmed the detection limit of this cell-phone based fluorescent imaging and sensing platform as ∼5 to 10 cfu mL(-1) in buffer solution. We also tested the specificity of this E. coli detection platform by spiking samples with different species (e.g., Salmonella) to confirm that non-specific binding/detection is negligible. We further demonstrated the proof-of-concept of our approach in a complex food matrix, e.g., fat-free milk, where a similar detection limit of ∼5 to 10 cfu mL(-1) was achieved despite challenges associated with the density of proteins that exist in milk. Our results reveal the promising potential of this cell-phone enabled field-portable and cost-effective E. coli detection platform for e.g., screening of water and food samples even in resource limited environments. The presented platform can also be applicable to other pathogens of interest through the use of different antibodies.
A low cost mobile phone dark-field microscope for nanoparticle-based quantitative studies.
Sun, Dali; Hu, Tony Y
2018-01-15
Dark-field microscope (DFM) analysis of nanoparticle binding signal is highly useful for a variety of research and biomedical applications, but current applications for nanoparticle quantification rely on expensive DFM systems. The cost, size, and limited robustness of these DFMs limit their utility in non-laboratory settings. Most nanoparticle analyses use high-magnification DFM images, which are labor intensive to acquire and subject to operator bias. Low-magnification DFM image capture is faster, but is subject to background from surface artifacts and debris, although image processing can partially compensate for background signal. We thus mated an LED light source, a dark-field condenser, and a 20× objective lens with a mobile phone camera to create an inexpensive, portable and robust DFM system suitable for use in non-laboratory conditions. This proof-of-concept mobile DFM device weighs less than 400 g and costs less than $2000, but analysis of images captured with this device reveals nanoparticle quantitation results similar to those acquired with a much larger and more expensive desktop DFM system. Our results suggest that similar devices may be useful for quantification of stable, nanoparticle-based activity and quantitation assays in resource-limited areas where conventional assay approaches are not practical. Copyright © 2017 Elsevier B.V. All rights reserved.
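Background compensation for low-magnification DFM images can be sketched as a robust threshold over a global background estimate; the median/MAD threshold below is an illustrative stand-in for the paper's image-processing pipeline, with bright-pixel count as a proxy for binding signal:

```python
import numpy as np

def binding_signal(img, n_sigma=3.0):
    """Bright-pixel count after background compensation.

    Estimates background with the median and spread with the MAD, then
    thresholds; the count is a proxy for nanoparticle binding signal.
    """
    bg = np.median(img)
    mad = np.median(np.abs(img - bg)) + 1e-9
    mask = img > bg + n_sigma * 1.4826 * mad   # 1.4826: MAD -> sigma
    return int(mask.sum()), mask
```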
In-flight Video Captured by External Tank Camera System
NASA Technical Reports Server (NTRS)
2005-01-01
In this July 26, 2005 video, Earth slowly fades into the background as the STS-114 Space Shuttle Discovery climbs into space until the External Tank (ET) separates from the orbiter. An ET Camera System featuring a Sony XC-999 model camera provided never-before-seen footage of the launch and tank separation. The camera was installed in the ET LO2 Feedline Fairing. From this position, the camera had a 40% field of view with a 3.5 mm lens. The field of view showed some of the Bipod area, a portion of the LH2 tank and Intertank flange area, and some of the bottom of the shuttle orbiter. Contained in an electronic box, the battery pack and transmitter were mounted on top of the Solid Rocket Booster (SRB) crossbeam inside the ET. The battery pack included 20 Nickel-Metal Hydride batteries (similar to cordless phone battery packs) totaling 28 volts DC and could supply about 70 minutes of video. Located 95 degrees apart on the exterior of the Intertank opposite the orbiter side, there were 2 blade S-Band antennas about 2 1/2 inches long that transmitted a 10 watt signal to the ground stations. The camera turned on approximately 10 minutes prior to launch and operated for 15 minutes following liftoff. The complete camera system weighs about 32 pounds. Marshall Space Flight Center (MSFC), Johnson Space Center (JSC), Goddard Space Flight Center (GSFC), and Kennedy Space Center (KSC) participated in the design, development, and testing of the ET camera system.
NASA Astrophysics Data System (ADS)
Gamadia, Mark Noel
In order to gain valuable market share in the growing consumer digital still camera and camera phone market, camera manufacturers have to continually add and improve features in their latest product offerings. Auto-focus (AF) is one such feature, whose aim is to enable consumers to quickly take sharply focused pictures with little or no manual intervention in adjusting the camera's focus lens. While AF has been a standard feature in digital still and cell-phone cameras, consumers often complain about their cameras' slow AF performance, which may lead to missed photographic opportunities, rendering valuable moments and events as undesired out-of-focus pictures. This dissertation addresses this critical issue to advance the state of the art in the digital band-pass filter based passive AF method. This method is widely used to realize AF in the camera industry, where a focus actuator is adjusted via a search algorithm to locate the in-focus position by maximizing a sharpness measure extracted from a particular frequency band of the incoming image of the scene. There are no known systematic methods for automatically deriving parameters such as the digital pass-bands or the search step-size increments used in existing passive AF schemes. Conventional methods require time-consuming experimentation and tuning in order to arrive at a set of parameters which balance AF performance in terms of speed and accuracy, ultimately causing a delay in product time-to-market. This dissertation presents a new framework for determining an optimal set of passive AF parameters, named Filter-Switching AF, providing an automatic approach to achieve superior AF performance, both in good and low lighting conditions, based on the following performance measures (metrics): speed (total number of iterations), accuracy (offset from truth), power consumption (total distance moved), and user experience (in-focus position overrun). 
Performance results using three different prototype cameras are presented to further illustrate the real-world AF performance gains achieved by the developed approach. The major contribution of this dissertation is that the developed auto focusing approach can be successfully used by camera manufacturers in the development of the AF feature in future generations of digital still cameras and camera phones.
Hashimoto, Daniel A; Phitayakorn, Roy; Fernandez-del Castillo, Carlos; Meireles, Ozanan
2016-01-01
The goal of telementoring is to recreate face-to-face encounters with a digital presence. Open-surgery telementoring is limited by lack of surgeon's point-of-view cameras. Google Glass is a wearable computer that looks like a pair of glasses but is equipped with wireless connectivity, a camera, and viewing screen for video conferencing. This study aimed to assess the safety of using Google Glass by assessing the video quality of a telementoring session. Thirty-four (n = 34) surgeons at a single institution were surveyed and blindly compared via video captured with Google Glass versus an Apple iPhone 5 during the open cholecystectomy portion of a Whipple. Surgeons were asked to evaluate the quality of the video and its adequacy for safe use in telementoring. Thirty-four of 107 invited surgical attendings (32%) responded to the anonymous survey. A total of 50% rated the Google Glass video as fair with the other 50% rating it as bad to poor. A total of 52.9% of respondents rated the Apple iPhone video as good. A significantly greater proportion of respondents felt Google Glass video quality was inadequate for telementoring versus the Apple iPhone's (82.4 vs 26.5%, p < 0.0001). Intraclass correlation coefficient was 0.924 (95% CI 0.660-0.999, p < 0.001). While Google Glass provides a great breadth of functionality as a wearable device with two-way communication capabilities, current hardware limitations prevent its use as a telementoring device in surgery as the video quality is inadequate for safe telementoring. As the device is still in initial phases of development, future iterations or competitor devices may provide a better telementoring application for wearable devices.
Cost-effective and compact wide-field fluorescent imaging on a cell-phone.
Zhu, Hongying; Yaglidere, Oguzhan; Su, Ting-Wei; Tseng, Derek; Ozcan, Aydogan
2011-01-21
We demonstrate wide-field fluorescent and darkfield imaging on a cell-phone with compact, light-weight and cost-effective optical components that are mechanically attached to the existing camera unit of the cell-phone. For this purpose, we used battery powered light-emitting diodes (LEDs) to pump the sample of interest from the side using butt-coupling, where the pump light was guided within the sample cuvette to uniformly excite the specimen. The fluorescent emission from the sample was then imaged using an additional lens that was positioned right in front of the existing lens of the cell-phone camera. Because the excitation occurs through guided waves that propagate perpendicular to our detection path, an inexpensive plastic colour filter was sufficient to create the dark-field background required for fluorescent imaging, without the need for a thin-film interference filter. We validate the performance of this platform by imaging various fluorescent micro-objects in 2 colours (i.e., red and green) over a large field-of-view (FOV) of ∼81 mm(2) with a raw spatial resolution of ∼20 μm. With additional digital processing of the captured cell-phone images, through the use of compressive sampling theory, we demonstrate ∼2 fold improvement in our resolving power, achieving ∼10 μm resolution without a trade-off in our FOV. Further, we also demonstrate darkfield imaging of non-fluorescent specimens using the same interface, where this time the scattered light from the objects is detected without the use of any filters. The capability of imaging a wide FOV would be exceedingly important to probe large sample volumes (e.g., >0.1 mL) of e.g., blood, urine, sputum or water, and for this end we also demonstrate fluorescent imaging of labeled white-blood cells from whole blood samples, as well as water-borne pathogenic protozoan parasites such as Giardia lamblia cysts. 
Weighing only ∼28 g (∼1 ounce), this compact and cost-effective fluorescent imaging platform attached to a cell-phone could be quite useful especially for resource-limited settings, and might provide an important tool for wide-field imaging and quantification of various lab-on-a-chip assays developed for global health applications, such as monitoring of HIV+ patients for CD4 counts or viral load measurements.
Mobile Video in Everyday Social Interactions
NASA Astrophysics Data System (ADS)
Reponen, Erika; Lehikoinen, Jaakko; Impiö, Jussi
Video recording has become a spontaneous everyday activity for many people, thanks to the video capabilities of modern mobile phones. The Internet connectivity of mobile phones enables fluent sharing of captured material, even in real time, which makes video an up-and-coming everyday interaction medium. In this article we discuss the effect of the video camera on the social environment in everyday life situations, mainly based on a study in which four groups of people used digital video cameras in their normal settings. We also reflect on another study of ours relating to real-time mobile video communication, and discuss future views. The aim of our research is to understand the possibilities in the domain of mobile video. Live and delayed sharing seem to have their own special characteristics: live video is used as a virtual window between places, whereas delayed video usage has more scope for good-quality content. While this novel way of interacting via mobile video enables new social patterns, it also raises new concerns about privacy and trust between the participating persons in all roles, largely due to the widely spreading reach of the videos. Video in a social situation affects the cameramen (who record), the targets (who are recorded), passers-by (who are unintentionally in the situation), and the audience (who follow the videos or recording situations), but the participants also affect the video through their varying and evolving personal and communicational motivations for recording.
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila C.; Barriga, E. Simon; Harding, Simon P.; Lewallen, Susan; Taylor, Terrie E.; MacCormick, Ian J.; Joshi, Vinayak S.
2016-03-01
The purpose of this study was to test the suitability of three available camera technologies (desktop, portable, and iPhone-based) for imaging comatose children who presented with clinical symptoms of malaria. Ultimately, the results of the project would form the basis for the design of a future camera to screen for malaria retinopathy (MR) in a resource-challenged environment. The desktop, portable, and iPhone-based cameras were represented by the Topcon, Pictor Plus, and Peek cameras, respectively. These cameras were tested on N=23 children presenting with symptoms of cerebral malaria (CM) at a malaria clinic, Queen Elizabeth Teaching Hospital in Malawi, Africa. Each patient was dilated for a binocular indirect ophthalmoscopy (BIO) exam by an ophthalmologist, followed by imaging with all three cameras. Each of the cases was graded according to an internationally established protocol and compared to the BIO as the clinical ground truth. The reader used three principal retinal lesions as markers for MR: hemorrhages, retinal whitening, and vessel discoloration. The study found that the mid-priced Pictor Plus hand-held camera performed considerably better than the lower-priced mobile phone-based camera, and slightly better than the higher-priced desktop camera. When comparing the readings of digital images against the clinical reference standard (BIO), the Pictor Plus camera had sensitivity and specificity for MR of 100% and 87%, respectively. This compares to a sensitivity and specificity of 87% and 75% for the iPhone-based camera and 100% and 75% for the desktop camera. A drawback of all the cameras was their limited field of view, which did not allow a complete view of the periphery, where vessel discoloration occurs most frequently. As a consequence, vessel discoloration was not addressed in this study. None of the cameras offered real-time image quality assessment to ensure high-quality images that would afford the best possible opportunity for reading by a remotely located specialist.
Remer, Itay; Bilenca, Alberto
2015-11-01
Photoplethysmography is a well-established technique for the noninvasive measurement of blood pulsation. However, photoplethysmographic devices typically need to be in contact with the surface of the tissue and provide data from a single contact point. Extensions of conventional photoplethysmography to measurements over a wide field-of-view exist, but require advanced signal processing due to the low signal-to-noise-ratio of the photoplethysmograms. Here, we present a noncontact method based on temporal sampling of time-integrated speckle using a camera-phone for noninvasive, widefield measurements of physiological parameters across the human fingertip including blood pulsation and resting heart-rate frequency. The results show that precise estimation of these parameters with high spatial resolution is enabled by measuring the local temporal variation of speckle patterns of backscattered light from subcutaneous skin, thereby opening up the possibility for accurate high resolution blood pulsation imaging on a camera-phone. © 2015 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
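The local temporal variation of speckle that underlies the method can be expressed as a per-pixel temporal speckle contrast K = σ/μ computed over a short frame stack; a minimal sketch (the authors' exact estimator and windowing are not specified here):

```python
import numpy as np

def temporal_speckle_contrast(stack):
    """Per-pixel temporal speckle contrast K = sigma_t / mu_t over a stack
    of frames (time on axis 0).  Lower K indicates more motion/flow, since
    flow blurs the time-integrated speckle."""
    mu = stack.mean(axis=0)
    sd = stack.std(axis=0)
    return sd / np.maximum(mu, 1e-9)
```

Tracking K over time at each pixel is what turns the speckle video into spatially resolved pulsation and perfusion maps.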
Surface Plasmon Resonance Biosensor Based on Smart Phone Platforms.
Liu, Yun; Liu, Qiang; Chen, Shimeng; Cheng, Fang; Wang, Hanqi; Peng, Wei
2015-08-10
We demonstrate a fiber optic surface plasmon resonance (SPR) biosensor based on smart phone platforms. The light-weight optical components and sensing element are connected by optical fibers on a phone case. This SPR adaptor can be conveniently installed on or removed from smart phones. The measurement, control, and reference channels are illuminated by the light entering the lead-in fibers from the phone's LED flash, while the light from the end faces of the lead-out fibers is detected by the phone's camera. The SPR-sensing element is fabricated from a light-guiding silica capillary that is stripped of its cladding and coated with a 50-nm gold film. Utilizing a smart application to extract the light intensity information from the camera images, the light intensities of each channel are recorded every 0.5 s as the refractive index (RI) changes. The performance of the smart phone-based SPR platform for accurate and repeatable measurements was evaluated by detecting different concentrations of antibody binding to a functionalized sensing element, and the experimental results were validated through contrast experiments with a commercial SPR instrument. This cost-effective and portable SPR biosensor based on smart phones has many applications, such as medicine, health, and environmental monitoring.
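Extracting the per-channel light intensities from the camera frames, and referencing the measurement channel against the reference channel, can be sketched as follows (the ROI layout and the simple ratio normalization are assumptions for illustration, not the authors' application):

```python
import numpy as np

def channel_intensities(frame, rois):
    """Mean intensity of each fiber-channel ROI in one camera frame.

    rois: dict mapping channel name -> (row_slice, col_slice).
    """
    return {name: float(frame[rs, cs].mean()) for name, (rs, cs) in rois.items()}

def normalized_signal(meas, ref):
    """Reference-compensated signal: measurement over reference channel."""
    return meas / max(ref, 1e-9)
```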
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. Presence capture cameras still face the same quality issues as previous phases of digital imaging, but also numerous new ones. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones, and, in particular, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features remain valid for presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the cooperation of several cameras adds a new dimension to these quality factors, and new quality features must also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid for presence capture cameras and assesses their importance. Moreover, the new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.
2017-03-01
Recently, low-cost smart phone based thermal cameras have been considered for use in a clinical setting to monitor physiological temperature responses such as body temperature changes, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use is required (absolute temperature check), and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom has been developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with 5 software-controlled PT-1000 sensors using lookup tables. In this study, 3 FLIR ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences between the cameras and the phantom of 1 degree up to 6 degrees. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smart phone based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements appropriate to the research question, provided regular calibration checks are performed for quality control.
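The quality check the phantom enables, comparing camera readings at the PT-1000 sensor positions against the reference temperatures, can be sketched as follows; the tolerance value and report format are our own choices, not the study's protocol:

```python
def calibration_report(camera_temps, reference_temps, tolerance=1.0):
    """Compare thermal-camera readings at the PT-1000 positions with the
    phantom reference temperatures (both in degrees Celsius). Returns the
    per-point deviations, the mean offset, and whether every deviation
    stays within `tolerance`."""
    devs = [c - r for c, r in zip(camera_temps, reference_temps)]
    mean_offset = sum(devs) / len(devs)
    return {"deviations": devs,
            "mean_offset": mean_offset,
            "within_tolerance": all(abs(d) <= tolerance for d in devs)}
```

A constant mean offset with small per-point spread matches the paper's finding of accurate relative, but systematically shifted absolute, temperatures.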
Quantum dot enabled detection of Escherichia coli using a cell-phone†
Zhu, Hongying; Sikora, Uzair; Ozcan, Aydogan
2013-01-01
We report a cell-phone based Escherichia coli (E. coli) detection platform for screening of liquid samples. In this compact and cost-effective design attached to a cell-phone, we utilize anti-E. coli O157:H7 antibody functionalized glass capillaries as solid substrates to perform a quantum dot based sandwich assay for specific detection of E. coli O157:H7 in liquid samples. Using battery-powered inexpensive light-emitting-diodes (LEDs) we excite/pump these labelled E. coli particles captured on the capillary surface, where the emission from the quantum dots is then imaged using the cell-phone camera unit through an additional lens that is inserted between the capillary and the cell-phone. By quantifying the fluorescent light emission from each capillary tube, the concentration of E. coli in the sample is determined. We experimentally confirmed the detection limit of this cell-phone based fluorescent imaging and sensing platform as ~5 to 10 cfu mL−1 in buffer solution. We also tested the specificity of this E. coli detection platform by spiking samples with different species (e.g., Salmonella) to confirm that non-specific binding/detection is negligible. We further demonstrated the proof-of-concept of our approach in a complex food matrix, e.g., fat-free milk, where a similar detection limit of ~5 to 10 cfu mL−1 was achieved despite challenges associated with the density of proteins that exist in milk. Our results reveal the promising potential of this cell-phone enabled field-portable and cost-effective E. coli detection platform for e.g., screening of water and food samples even in resource limited environments. The presented platform can also be applicable to other pathogens of interest through the use of different antibodies. PMID:22396952
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has been drastically improved in response to the current demand for high-quality digital images; digital still cameras, for example, offer several megapixels. Although video cameras have higher frame rates, their resolution is lower than that of still cameras. Thus, high resolution and high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from the physical limitation of the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. Common multi-CCD cameras, such as 3CCD color cameras, use identical CCDs to capture different spectral information. Our approach is to use sensors with different spatio-temporal resolutions in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera that can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos, and we also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos to show the camera's utility.
The Global Sensor Web: A Platform for Citizen Science (Invited)
NASA Astrophysics Data System (ADS)
Simons, A. L.
2013-12-01
The Global Sensor Web (GSW) is an effort to provide an infrastructure for collecting, sharing, and visualizing sensor data from around the world. Over the past three years the GSW has been developed and tested as a standardized platform for citizen science. The most developed of the citizen science projects built on the GSW is the Distributed Electronic Cosmic-ray Observatory (DECO), an Android application designed to harness a global network of mobile devices to study the origin and behavior of cosmic radiation. Other projects that can readily be built on top of the GSW platform are also discussed. A cosmic-ray track candidate captured on a cell phone camera.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding have become widely used in image taking objectives for digital cameras, camcorders, and mobile phone cameras because of their suitability for volume production and the ease with which the advantages of aspherical surfaces can be obtained. For digital camera and camcorder objectives, it is desirable that the image point does not vary with temperature changes despite the use of several plastic lenses. At the same time, the shrinking pixel size of solid-state image sensors now requires lenses to be assembled with high accuracy. To satisfy these requirements, we have developed a compact 16× zoom objective for camcorders and 3×-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially, so for mobile phone cameras the consideration of productivity is more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function that exploits an advantage of plastic lenses, namely that the outer flange can be given a mechanically functional shape. The objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, achieving both high productivity and high performance. Reported here are the constructions and technical topics of the image taking objectives described above.
2012-06-08
Definitions Importantly, as an operational definition of 'social media,' I include Facebook, Twitter, YouTube, and social networking sites not specifically...the aforementioned social networking sites. As an operational definition of 'security operations' for the purposes of this paper, I use the...the existence of camera phones, Facebook, Twitter, and other social networking sites, individuals' behavior changed with the advent of the Internet
Passive radiation detection using optically active CMOS sensors
NASA Astrophysics Data System (ADS)
Dosiek, Luke; Schalk, Patrick D.
2013-05-01
Recently, there have been a number of small-scale and hobbyist successes in employing commodity CMOS-based camera sensors for radiation detection. For example, several smartphone applications initially developed for use in areas near the Fukushima nuclear disaster are capable of detecting radiation using a cell phone camera, provided opaque tape is placed over the lens. In all current useful implementations, it is required that the sensor not be exposed to visible light. We seek to build a system that does not have this restriction. While building such a system would require sophisticated signal processing, it would nevertheless provide great benefits. In addition to fulfilling their primary function of image capture, cameras would also be able to detect unknown radiation sources even when the danger is considered to be low or non-existent. By experimentally profiling the image artifacts generated by gamma ray and β particle impacts, algorithms are developed to identify the unique features of radiation exposure, while discarding optical interaction and thermal noise effects. Preliminary results focus on achieving this goal in a laboratory setting, without regard to integration time or computational complexity. However, future work will seek to address these additional issues.
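As a toy illustration of profiling bright artifacts in a frame, one might flag statistical outlier pixels. This is far simpler than the feature-based discrimination between radiation, optical, and thermal effects that the authors develop, and the threshold is an arbitrary assumption:

```python
import numpy as np

def radiation_candidates(frame, k=8.0):
    """Flag pixels whose intensity exceeds the frame mean by k standard
    deviations: a crude stand-in for the artifact profiling described in
    the abstract. Real use must also reject optical and thermal outliers.
    Returns an array of (row, col) candidate coordinates."""
    mu, sigma = frame.mean(), frame.std()
    return np.argwhere(frame > mu + k * sigma)
```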
Pérez Escamirosa, Fernando; Ordorica Flores, Ricardo; Minor Martínez, Arturo
2015-04-01
In this article, we describe the construction and validation of a laparoscopic trainer using an iPhone 5 and a plastic document holder case. The abdominal cavity was simulated with a clear plastic document holder case. On 1 side of the case, 2 holes were drilled for entry of the laparoscopic instruments, and a window was added for the iPhone camera, which serves as the camera of the trainer. Twenty residents carried out 4 tasks using the iPhone Trainer and a physical laparoscopic trainer. The times for all tasks were analyzed with a paired t test. The construction of the trainer took 1 hour, with a cost of
Koban, K C; Leitsch, S; Holzbach, T; Volkmer, E; Metz, P M; Giunta, R E
2014-04-01
A new approach to three-dimensional (3D) imaging using smartphone photographs has been introduced alongside the standard high-quality 3D camera systems. In this work, we investigated different capture preferences and compared the accuracy of this 3D reconstruction method with manual tape measurement and an established commercial 3D camera system. The facial region of one plastic mannequin head was labelled with 21 landmarks. A 3D reference model was captured with the Vectra 3D Imaging System®. In addition, 3D imaging was performed with the Autodesk 123d Catch® application using 16, 12, 9, 6 and 3 pictures from an Apple® iPhone 4 s® and iPad® 3rd generation. The accuracy of 3D reconstruction was measured in 2 steps. First, 42 distance measurements from manual tape measurement and the 2 digital systems were compared. Second, the surface-to-surface deviation of different aesthetic units from the Vectra® reference model to the Catch®-generated models was analysed. For each 3D system the capture and processing time was measured. The measurements showed no significant (p>0.05) differences between manual tape measurement and the digital distances from either the Catch® application or Vectra®. Surface-to-surface deviation from the Vectra® reference model showed sufficient results for the 3D reconstruction with Catch® using the 16-, 12- and 9-picture sets, whereas the use of 6 and 3 pictures resulted in large deviations. Lateral aesthetic units showed higher deviations than central units. Catch® needed 5 times longer to capture and compute 3D models (on average 10 min vs. 2 min). The models computed with Autodesk 123d Catch® suggest good accuracy of the 3D reconstruction for a standard mannequin model, in comparison with manual tape measurement and the surface-to-surface analysis against a 3D reference model. However, the prolonged capture time with multiple pictures is prone to errors. Further studies are needed to investigate the application and quality of this method in capturing volunteer models.
Soon mobile applications may offer an alternative for plastic surgeons to today's cost intensive, stationary 3D camera systems. © Georg Thieme Verlag KG Stuttgart · New York.
Welsh, Christopher
2016-01-01
As part of a comprehensive plan to attempt to minimize the diversion of prescribed controlled substances, many professional organizations and licensing boards are recommending the use of "pill counts." This study sought to evaluate the acceptability of using cellular phone and computer pictures/video for "pill counts" by patients in buprenorphine maintenance treatment. Patients prescribed buprenorphine/naloxone were asked a series of questions related to the type(s) of electronic communication to which they had access as well as their willingness to use these for the purpose of performing a "pill/film count." Of the 80 patients, 4 (5 percent) did not have a phone at all. Only 28 (35 percent) had a "smart phone" with some sort of data plan and Internet access. Forty (50 percent) of the patients had a phone with no camera and 10 (12.5 percent) had a phone with a camera but no video capability. All patients said that they would be willing to periodically use the video or camera on their phone or computer to have buprenorphine/naloxone pills or film counted as long as the communication was protected from electronic tampering. With the advent of applications for smart phones that allow for Health Insurance Portability and Accountability Act of 1996-compliant picture/video communication, a number of things can now be done that can enhance patient care as well as reduce the chances of misuse/diversion of prescribed medications. This could be used in settings where a larger proportion of controlled substances are prescribed, including medication assisted therapy for opioid use disorders and pain management programs.
Mobile Phone Images and Video in Science Teaching and Learning
ERIC Educational Resources Information Center
Ekanayake, Sakunthala Yatigammana; Wishart, Jocelyn
2014-01-01
This article reports a study into how mobile phones could be used to enhance teaching and learning in secondary school science. It describes four lessons devised by groups of Sri Lankan teachers all of which centred on the use of the mobile phone cameras rather than their communication functions. A qualitative methodological approach was used to…
Determining Sala mango qualities with the use of RGB images captured by a mobile phone camera
NASA Astrophysics Data System (ADS)
Yahaya, Ommi Kalsom Mardziah; Jafri, Mohd Zubir Mat; Aziz, Azlan Abdul; Omar, Ahmad Fairuz
2015-04-01
Sala mango (Mangifera indica) is one of Malaysia's most popular tropical fruits and is widely marketed within the country. The degree of ripeness of mangoes has conventionally been evaluated manually on the basis of color parameters, but a simple non-destructive technique using the Samsung Galaxy Note 1 mobile phone camera is introduced here to replace the destructive technique. In this research, color parameters in terms of RGB values acquired using the ENVI software system were used to assess Sala mango quality parameters. The features of the mango were extracted from the acquired images and then used to classify fruit skin color, which relates to the stage of ripening. A multivariate analysis method, multiple linear regression, was employed to estimate pH, soluble solids content (SSC), and firmness from the RGB color parameters. The relationship between these quality parameters of Sala mango and the mean pixel values in the RGB system was analyzed. Findings show that pH yields the highest accuracy, with a correlation coefficient R = 0.913 and root mean square error RMSE = 0.166 pH units. Firmness has R = 0.875 and RMSE = 1.392 kgf, whereas soluble solids content has the lowest accuracy, with R = 0.814 and RMSE = 1.218 °Brix. Therefore, this non-invasive method can be used to determine the quality attributes of mangoes.
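The multiple-linear-regression step, estimating a quality parameter such as pH from mean RGB values, can be sketched with an ordinary least-squares fit. The coefficients used below are synthetic, not the paper's fitted model:

```python
import numpy as np

def fit_quality_model(rgb_means, quality):
    """Least-squares fit quality = b0 + b1*R + b2*G + b3*B.
    rgb_means: (N, 3) mean pixel values per fruit; quality: (N,), e.g. pH."""
    X = np.column_stack([np.ones(len(rgb_means)), rgb_means])
    coef, *_ = np.linalg.lstsq(X, np.asarray(quality, dtype=float), rcond=None)
    return coef

def predict_quality(coef, rgb):
    """Apply the fitted model to one fruit's mean (R, G, B) values."""
    r, g, b = rgb
    return float(coef[0] + coef[1] * r + coef[2] * g + coef[3] * b)
```

The same fit, repeated per quality parameter (pH, SSC, firmness), yields the R and RMSE figures reported in the abstract.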
Face detection assisted auto exposure: supporting evidence from a psychophysical study
NASA Astrophysics Data System (ADS)
Jin, Elaine W.; Lin, Sheng; Dharumalingam, Dhandapani
2010-01-01
Face detection has been implemented in many digital still cameras and camera phones with the promise of enhancing existing camera functions (e.g. auto exposure) and adding new features (e.g. blink detection). In this study we examined the use of face detection algorithms in assisting auto exposure (AE). The set of 706 images used in this study was captured using Canon Digital Single Lens Reflex cameras and subsequently processed with an image processing pipeline. A psychophysical study was performed to obtain the optimal exposure, along with the upper and lower bounds of exposure, for all 706 images. Three methods of marking faces were utilized: manual marking, face detection algorithm A (FD-A), and face detection algorithm B (FD-B). The manual marking method found 751 faces in 426 images, which served as the ground truth for face regions of interest; the remaining images contained no faces, or faces too small to be considered detectable. The two face detection algorithms differ in resource requirements and in performance: FD-A uses less memory and fewer gate counts than FD-B, but FD-B detects more faces and has fewer false positives. A face detection assisted auto exposure algorithm was developed and tested against the evaluation results from the psychophysical study. The AE test results showed noticeable improvement when faces were detected and used in auto exposure; however, the presence of false positives would negatively impact the added benefit.
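One simple way face regions can assist auto exposure is to upweight face pixels when metering the scene. This sketch is an illustrative assumption, not the algorithm evaluated in the study; the weight and target values are arbitrary:

```python
import numpy as np

def metered_luminance(image, face_boxes, face_weight=4.0):
    """Weighted mean luminance: pixels inside detected face boxes count
    `face_weight` times more than the rest of the frame.
    image: 2-D luminance in [0, 1]; face_boxes: (top, left, bottom, right)."""
    weights = np.ones_like(image, dtype=float)
    for t, l, b, r in face_boxes:
        weights[t:b, l:r] = face_weight
    return float((image * weights).sum() / weights.sum())

def exposure_gain(image, face_boxes, target=0.45):
    """Multiplicative gain driving the metered luminance toward `target`."""
    return target / max(metered_luminance(image, face_boxes), 1e-6)
```

A false-positive box in a bright or dark background region skews the metered value, which illustrates the drawback noted at the end of the abstract.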
Cost-effective and compact wide-field fluorescent imaging on a cell-phone†
Zhu, Hongying; Yaglidere, Oguzhan; Su, Ting-Wei; Tseng, Derek
2011-01-01
We demonstrate wide-field fluorescent and darkfield imaging on a cell-phone with compact, light-weight and cost-effective optical components that are mechanically attached to the existing camera unit of the cell-phone. For this purpose, we used battery powered light-emitting diodes (LEDs) to pump the sample of interest from the side using butt-coupling, where the pump light was guided within the sample cuvette to uniformly excite the specimen. The fluorescent emission from the sample was then imaged using an additional lens that was positioned right in front of the existing lens of the cell-phone camera. Because the excitation occurs through guided waves that propagate perpendicular to our detection path, an inexpensive plastic colour filter was sufficient to create the dark-field background required for fluorescent imaging, without the need for a thin-film interference filter. We validate the performance of this platform by imaging various fluorescent micro-objects in 2 colours (i.e., red and green) over a large field-of-view (FOV) of ~81 mm2 with a raw spatial resolution of ~20 μm. With additional digital processing of the captured cell-phone images, through the use of compressive sampling theory, we demonstrate ~2 fold improvement in our resolving power, achieving ~10 μm resolution without a trade-off in our FOV. Further, we also demonstrate darkfield imaging of non-fluorescent specimens using the same interface, where this time the scattered light from the objects is detected without the use of any filters. The capability of imaging a wide FOV would be exceedingly important to probe large sample volumes (e.g., >0.1 mL) of e.g., blood, urine, sputum or water, and for this end we also demonstrate fluorescent imaging of labeled white-blood cells from whole blood samples, as well as water-borne pathogenic protozoan parasites such as Giardia lamblia cysts.
Weighing only ~28 g (~1 ounce), this compact and cost-effective fluorescent imaging platform attached to a cell-phone could be quite useful especially for resource-limited settings, and might provide an important tool for wide-field imaging and quantification of various lab-on-a-chip assays developed for global health applications, such as monitoring of HIV+ patients for CD4 counts or viral load measurements. PMID:21063582
76 FR 57941 - Retrospective Review Under E.O. 13563: Cargo Preference
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-19
... attendees are encouraged to limit bags and other items (e.g. mobile phones, laptops, cameras, etc.) they... phone [See also Registration]. Agenda released on regs.dot.gov September 28, 2011. and MarAd Web site...
Capturing method for integral three-dimensional imaging using multiviewpoint robotic cameras
NASA Astrophysics Data System (ADS)
Ikeya, Kensuke; Arai, Jun; Mishina, Tomoyuki; Yamaguchi, Masahiro
2018-03-01
Integral three-dimensional (3-D) technology for next-generation 3-D television must be able to capture dynamic moving subjects with pan, tilt, and zoom camerawork as good as in current TV program production. We propose a capturing method for integral 3-D imaging using multiviewpoint robotic cameras. The cameras are controlled through a cooperative synchronous system composed of a master camera controlled by a camera operator and other reference cameras that are utilized for 3-D reconstruction. When the operator captures a subject using the master camera, the region reproduced by the integral 3-D display is regulated in real space according to the subject's position and view angle of the master camera. Using the cooperative control function, the reference cameras can capture images at the narrowest view angle that does not lose any part of the object region, thereby maximizing the resolution of the image. 3-D models are reconstructed by estimating the depth from complementary multiviewpoint images captured by robotic cameras arranged in a two-dimensional array. The model is converted into elemental images to generate the integral 3-D images. In experiments, we reconstructed integral 3-D images of karate players and confirmed that the proposed method satisfied the above requirements.
A Grassroots Remote Sensing Toolkit Using Live Coding, Smartphones, Kites and Lightweight Drones
Anderson, K.; Griffiths, D.; DeBell, L.; Hancock, S.; Duffy, J. P.; Shutler, J. D.; Reinhardt, W. J.; Griffiths, A.
2016-01-01
This manuscript describes the development of an android-based smartphone application for capturing aerial photographs and spatial metadata automatically, for use in grassroots mapping applications. The aim of the project was to exploit the plethora of on-board sensors within modern smartphones (accelerometer, GPS, compass, camera) to generate ready-to-use spatial data from lightweight aerial platforms such as drones or kites. A visual coding ‘scheme blocks’ framework was used to build the application (‘app’), so that users could customise their own data capture tools in the field. The paper reports on the coding framework, then shows the results of test flights from kites and lightweight drones and finally shows how open-source geospatial toolkits were used to generate geographical information system (GIS)-ready GeoTIFF images from the metadata stored by the app. Two Android smartphones were used in testing–a high specification OnePlus One handset and a lower cost Acer Liquid Z3 handset, to test the operational limits of the app on phones with different sensor sets. We demonstrate that best results were obtained when the phone was attached to a stable single line kite or to a gliding drone. Results show that engine or motor vibrations from powered aircraft required dampening to ensure capture of high quality images. We demonstrate how the products generated from the open-source processing workflow are easily used in GIS. The app can be downloaded freely from the Google store by searching for ‘UAV toolkit’ (UAV toolkit 2016), and used wherever an Android smartphone and aerial platform are available to deliver rapid spatial data (e.g. in supporting decision-making in humanitarian disaster-relief zones, in teaching or for grassroots remote sensing and democratic mapping). PMID:27144310
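One piece of turning the app's stored metadata into GIS-ready imagery is estimating the ground footprint of a photo from flight altitude and camera field of view. A flat-scene, nadir-pointing pinhole approximation (our assumption for illustration, not the app's actual georeferencing workflow) is:

```python
import math

def ground_footprint(altitude_m, fov_deg, image_px):
    """Approximate ground coverage (m) and ground sample distance (m/px)
    for a nadir photo, assuming a flat scene and a pinhole camera.
    altitude_m: height above ground; fov_deg: field of view along the
    measured image axis; image_px: pixel count along that axis."""
    width_m = 2.0 * altitude_m * math.tan(math.radians(fov_deg) / 2.0)
    return width_m, width_m / image_px
```

Tilt from the accelerometer and heading from the compass would further rotate and shear this footprint before writing a georeferenced GeoTIFF.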
Wu, Jing; Dong, Mingling; Zhang, Cheng; Wang, Yu; Xie, Mengxia; Chen, Yiping
2017-06-05
A magnetic lateral flow strip (MLFS) based on magnetic beads (MBs) and a smart phone camera has been developed for the quantitative detection of cocaine (CC) in urine samples. CC and CC-bovine serum albumin (CC-BSA) compete to react with the MB-antibody (MB-Ab) of CC on the surface of the test line of the MLFS. The color of the MB-Ab conjugate on the test line relates to the concentration of the target in this competitive immunoassay format and can be used as a visual signal. Furthermore, the color density of the MB-Ab conjugate can be converted into a digital signal (gray value) by a smart phone, which can be used as a quantitative signal. The linear detection range for CC is 5-500 ng/mL and the relative standard deviations are under 10%. The visual limit of detection was 5 ng/mL and the whole analysis took less than 10 min. The MLFS has been successfully employed for the detection of CC in urine samples without sample pre-treatment, and the results agreed with those of an enzyme-linked immunosorbent assay (ELISA). With the popularization of smart phone cameras, the MLFS has great potential for the detection of drug residues by virtue of its stability, speed, and low cost.
36 CFR 1254.26 - What can I take into a research room with me?
Code of Federal Regulations, 2010 CFR
2010-07-01
... wallet or purse is sufficiently small for purposes of this section. You may take cell phones, pagers, and...) and, for cell phone cameras, in § 1254.70(g). (b) Notes and reference materials. You may take notes...
NASA Astrophysics Data System (ADS)
Sirmacek, B.; Lindenbergh, R. C.; Menenti, M.
2013-10-01
Fusion of 3D airborne laser scanning (LIDAR) data and terrestrial optical imagery can be applied to 3D urban modeling and model updating. The most challenging aspect of the fusion procedure is registering the terrestrial optical images onto the LIDAR point clouds. In this article, we propose an approach for registering these two data types from different sensor sources. We use iPhone camera images, taken by the application user in front of the urban structure of interest, and high-resolution LIDAR point clouds acquired by an airborne laser sensor. After finding the photo capture position and orientation from the iPhone photograph metafile, we automatically select the area of interest in the point cloud and transform it into a range image whose grayscale intensity levels encode distance from the image acquisition position. We use local features to register the iPhone image to the generated range image. In this article, the registration process is based on local feature extraction and graph matching. Finally, the registration result is used for facade texture mapping on the 3D building surface mesh generated from the LIDAR point cloud. Our experimental results indicate the potential of the proposed algorithm framework for 3D urban map updating and enhancement.
Altered Perspectives: Immersive Environments
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Webley, P. W.
2016-12-01
Immersive environments provide an exciting experiential technology to visualize the natural world. Given the increasing accessibility of 360° cameras and virtual reality headsets, we are now able to visualize artistic principles and scientific concepts in a fully immersive environment. The technology has become popular with photographers as well as designers, industry, educational groups, and museums. Here we show a sci-art perspective on the use of optics and light in the capture and manipulation of 360° images and video of geologic phenomena and cultural heritage sites in Alaska, England, and France. Additionally, we will generate intentionally altered perspectives to lend a surrealistic quality to the landscapes. Locations include the Catacombs of Paris, the Palace of Versailles, and the Northern Lights over Fairbanks, Alaska. Some 360° view cameras now use small portable dual-lens technology extending beyond the 180° fisheye lens previously used, providing better coverage and image quality. Virtual reality headsets range in level of sophistication and cost, with the most affordable versions using smart phones and Google Cardboard viewers. The equipment used in this presentation includes a Ricoh Theta S spherical imaging camera. Here we will demonstrate the use of 360° imaging, with attendees being able to be part of the immersive environment and experience our locations as if they were visiting themselves.
FlyCap: Markerless Motion Capture Using Multiple Autonomous Flying Cameras.
Xu, Lan; Liu, Yebin; Cheng, Wei; Guo, Kaiwen; Zhou, Guyue; Dai, Qionghai; Fang, Lu
2017-07-18
Aiming at automatic, convenient and non-intrusive motion capture, this paper presents a new-generation markerless motion capture technique, the FlyCap system, to capture surface motions of moving characters using multiple autonomous flying cameras (autonomous unmanned aerial vehicles (UAVs), each integrated with an RGBD video camera). During data capture, three cooperative flying cameras automatically track and follow the moving target, who performs large-scale motions in a wide space. We propose a novel non-rigid surface registration method to track and fuse the depth data of the three flying cameras for surface motion tracking of the moving target, and simultaneously calculate the pose of each flying camera. We leverage the visual-odometry information provided by the UAV platform, and formulate the surface tracking problem as a non-linear objective function that can be linearized and effectively minimized through a Gauss-Newton method. Quantitative and qualitative experimental results demonstrate plausible surface and motion reconstruction results.
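The Gauss-Newton scheme named in the abstract can be sketched generically: linearize the residual at the current estimate and solve the normal equations each iteration. This is the textbook solver family, not the paper's specific surface-tracking energy.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=10):
    """Minimize 0.5*||r(x)||^2 by repeated linearization:
    solve (J^T J) dx = -J^T r, then update x <- x + dx."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x + np.linalg.solve(J.T @ J, -J.T @ r)
    return x
```

For a linear residual the step is exact after one iteration, which makes the sketch easy to sanity-check before plugging in a real non-rigid registration energy.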
36 CFR § 1254.26 - What can I take into a research room with me?
Code of Federal Regulations, 2013 CFR
2013-07-01
... wallet or purse is sufficiently small for purposes of this section. You may take cell phones, pagers, and...) and, for cell phone cameras, in § 1254.70(g). (b) Notes and reference materials. You may take notes...
Can light-field photography ease focusing on the scalp and oral cavity?
Taheri, Arash; Feldman, Steven R
2013-08-01
Capturing a well-focused image using an autofocus camera can be difficult in the oral cavity and on a hairy scalp. Light-field digital cameras capture data regarding the color, intensity, and direction of rays of light. With information on the direction of rays of light, computer software can be used to focus on different subjects in the field after the image data have been captured. A light-field camera was used to capture images of the scalp and oral cavity. The related computer software was used to focus on the scalp or different parts of the oral cavity. The final pictures were compared with pictures taken with conventional, compact digital cameras. The camera worked well for the oral cavity. It also captured pictures of the scalp easily; however, we had to click repeatedly between the hairs at different points to select the scalp for focusing. A major drawback of the system was that the resolution of the resulting pictures was lower than that of conventional digital cameras. Light-field digital cameras are fast and easy to use. They can capture more information across the full depth of field compared with conventional cameras. However, the resolution of the pictures is relatively low. © 2013 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
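The after-capture refocusing described above is commonly implemented by shift-and-add over the sub-aperture views of the lightfield; a minimal sketch, assuming integer pixel shifts (commercial software interpolates and uses the camera's actual calibration):

```python
import numpy as np

def refocus(subviews, positions, alpha):
    """Shift-and-add synthetic refocus: shift each sub-aperture view in
    proportion to its (u, v) position in the aperture and the chosen
    depth parameter alpha, then average all views."""
    acc = np.zeros(subviews[0].shape, dtype=float)
    for view, (u, v) in zip(subviews, positions):
        dy = int(round(alpha * v))
        dx = int(round(alpha * u))
        acc += np.roll(np.roll(view, dy, axis=0), dx, axis=1)
    return acc / len(subviews)
```

Sweeping `alpha` moves the synthetic focal plane, which is why a single capture can later be focused on the scalp or on different parts of the oral cavity.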
Fighting Back: New Media and Military Operations
2008-11-01
combustible mix of 24/7 cable news, call-in radio and television programs, Internet bloggers and online websites, cell phones and iPods.”4 But, of...even individuals to affect strategic outcomes with minimal information infrastructure and little capital expenditure. Anyone with a camera cell phone and...areas of underdeveloped countries. The cell phone, however, as a means of mobile technology, is increasingly available worldwide and deserves discussion
Effects of Recording Food Intake Using Cell Phone Camera Pictures on Energy Intake and Food Choice.
Doumit, Rita; Long, JoAnn; Kazandjian, Chant; Gharibeh, Nathalie; Karam, Lina; Song, Huaxin; Boswell, Carol; Zeeni, Nadine
2016-06-01
The well-documented increases in obesity and unhealthy dietary practices substantiate the need for evidence-based tools that can help people improve their dietary habits. The current spread of mobile phone-embedded cameras offers new opportunities for recording food intake. Moreover, the act of taking pictures of food consumed may enhance visual consciousness of food choice and quantity. The present study aimed to assess the effect of using cell phone pictures to record food intake on energy intake and food choice in college students. The effectiveness and acceptability of cell phone picture-based diet recording also was assessed. A repeated measures crossover design was used. One group of participants entered their food intake online for 3 days from memory, while a second group recorded their food intake using cell phone pictures as their reference. Participants then crossed over to complete 3 more days of diet recording using the alternate method. Focus groups were conducted to obtain feedback on the effectiveness and acceptability of cell phone picture-based diet recording. Intakes of meat and vegetable servings were significantly higher in the memory period compared with the cell phone period, regardless of the order. Results from the focus group indicated a positive attitude toward the use of cell phone pictures in recording food intake and an increased awareness of food choice and portion size. Cell phone pictures may be an easy, relevant, and accessible method of diet self-monitoring when aiming at dietary changes. Future trials should combine this technique with healthy eating education. © 2015 Sigma Theta Tau International.
Data upload capability of 3G mobile phones.
Moon, Jon K; Barden, Charles M; Wohlers, Erica M
2009-01-01
Mobile phones are becoming an important platform to measure free-living energy balance and to support weight management therapies. Sensor data, camera images and user input are needed by clinicians and researchers in close to real time. We assessed upload (reverse link) data transport rates for 2007-2008 model mobile phones on two major US wireless systems. Even the slowest phone (EVDO Rev 0) reliably uploaded 40 MB of data in less than 1 h. More than 95% of file uploads were successful in tests that simulated normal phone use over 3 d. Practical bandwidth and data currency from typical smart phones will likely keep pace with the data needs of energy balance studies and weight management therapy.
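The headline figure above is easy to sanity-check with back-of-envelope arithmetic: 40 MB in under an hour corresponds to a sustained uplink rate of roughly 93 kbit/s, comfortably below even an EVDO Rev 0 uplink's peak.

```python
def sustained_kbps(megabytes, seconds):
    """Average uplink rate implied by an upload: MB -> bits, divided by time."""
    return megabytes * 1024 * 1024 * 8 / 1000 / seconds

rate = sustained_kbps(40, 3600)  # the abstract's worst case: 40 MB in 1 h
```

The same arithmetic scales directly to estimating whether a study's sensor and image payloads will fit a given network budget.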
A paper-based device for double-stranded DNA detection with Zif268
NASA Astrophysics Data System (ADS)
Zhang, Daohong
2017-05-01
Here, a small analytical device was fabricated on both nitrocellulose membrane and filter paper for the detection of biotinylated double-stranded DNA (dsDNA) at concentrations down to 1 nM. Zif268, a zinc finger protein that recognizes only dsDNA with a specific sequence, was utilized for capturing the target DNA. Therefore, this detection platform could be used to read out PCR results, with well-designed primers (incorporating both biotin and the Zif268 binding sequence). The result of the assay could be recorded by a camera phone and analyzed with software. The whole assay finished within 1 hour. Owing to the easy fabrication, operation, and disposal of this device, the method can be employed in point-of-care detection or on-site monitoring.
NASA Astrophysics Data System (ADS)
Radun, Jenni E.; Virtanen, Toni; Olives, Jean-Luc; Vaahteranoksa, Mikko; Vuori, Tero; Nyman, Göte
2007-01-01
We present an effective method for comparing subjective audiovisual quality and the features related to the quality changes of different video cameras. Both quantitative estimation of overall quality and qualitative description of critical quality features are achieved by the method. The aim was to combine two image quality evaluation methods, the quantitative Absolute Category Rating (ACR) method with hidden reference removal and the qualitative Interpretation-Based Quality (IBQ) method, in order to see how they complement each other in audiovisual quality estimation tasks. 26 observers estimated the audiovisual quality of six different cameras, mainly mobile phone video cameras. In order to achieve an efficient subjective estimation of audiovisual quality, only two contents with different quality requirements were recorded with each camera. The results show that the subjectively important quality features were more related to the overall estimations of cameras' visual video quality than to the features related to sound. The data demonstrated two significant quality dimensions related to visual quality: darkness and sharpness. We conclude that the qualitative methodology can complement quantitative quality estimations with audiovisual material as well. The IBQ approach is especially valuable when the induced quality changes are multidimensional.
Non-Invasive Detection of Anaemia Using Digital Photographs of the Conjunctiva.
Collings, Shaun; Thompson, Oliver; Hirst, Evan; Goossens, Louise; George, Anup; Weinkove, Robert
2016-01-01
Anaemia is a major health burden worldwide. Although the finding of conjunctival pallor on clinical examination is associated with anaemia, inter-observer variability is high, and definitive diagnosis of anaemia requires a blood sample. We aimed to detect anaemia by quantifying conjunctival pallor using digital photographs taken with a consumer camera and a popular smartphone. Our goal was to develop a non-invasive screening test for anaemia. The conjunctivae of haemato-oncology in- and outpatients were photographed in ambient lighting using a digital camera (Panasonic DMC-LX5), and the internal rear-facing camera of a smartphone (Apple iPhone 5S) alongside an in-frame calibration card. Following image calibration, conjunctival erythema index (EI) was calculated and correlated with laboratory-measured haemoglobin concentration. Three clinicians independently evaluated each image for conjunctival pallor. Conjunctival EI was reproducible between images (average coefficient of variation 2.96%). EI of the palpebral conjunctiva correlated more strongly with haemoglobin concentration than that of the forniceal conjunctiva. Using the compact camera, palpebral conjunctival EI had a sensitivity of 93% and 57% and specificity of 78% and 83% for detection of anaemia (haemoglobin < 110 g/L) in training and internal validation sets, respectively. Similar results were found using the iPhone camera, though the EI cut-off value differed. Conjunctival EI analysis compared favourably with clinician assessment, with a higher positive likelihood ratio for prediction of anaemia. Erythema index of the palpebral conjunctiva calculated from images taken with a compact camera or mobile phone correlates with haemoglobin and compares favourably to clinician assessment for prediction of anaemia. If confirmed in further series, this technique may be useful for the non-invasive screening for anaemia.
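The per-image erythema index (EI) computation can be sketched with one common image-based definition, EI = 100·(log₁₀R − log₁₀G); this exact formula is an assumption here, since the abstract does not state which variant the authors used, and it omits the in-frame calibration-card correction step.

```python
import numpy as np

def erythema_index(rgb):
    """Mean erythema index over an RGB region (H x W x 3, 8-bit),
    using the red/green log-ratio form EI = 100*(log10 R - log10 G).
    Assumes the region is already calibrated and segmented."""
    r = rgb[..., 0].astype(float) + 1.0  # +1 avoids log10(0)
    g = rgb[..., 1].astype(float) + 1.0
    return float(np.mean(100.0 * (np.log10(r) - np.log10(g))))
```

A redder (better-perfused) conjunctiva scores higher; the study's screening cut-off would then be set on this scalar against laboratory haemoglobin.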
Enhancing Situational Awareness When Addressing Critical Incidents at Suburban and Rural Schools
2012-12-01
121 Amanda Lenhart, “Teens, Cell Phones and Texting,” in Pew Internet & American Life Project (Washington, DC: Pew...Research Center, 2010), accessed July 22, 2012, http://pewresearch.org/pubs/1572/teens-cell-phones-text-messages. 122 Dan Costa, “One Cell Phone Per Child...if a video camera were to be disabled, damaged or occluded by smoke, fire, or vandalism.132 The networking between BOCES, the school district, and
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
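For the planar-document case, restoring the frontal view reduces to estimating a homography from point correspondences. A minimal direct-linear-transform sketch follows; it is a stand-in for the paper's texture-flow method (which also handles curved pages without known correspondences):

```python
import numpy as np

def homography(src, dst):
    """DLT: solve A h = 0 for the 3x3 homography H mapping four (or more)
    src points to dst points, normalized so H[2,2] = 1."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    x, y, w = H @ [pt[0], pt[1], 1.0]
    return (x / w, y / w)
```

Warping every pixel of the captured image through the inverse of `H` yields the frontal-flat view that OCR engines expect.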
Jandee, Kasemsak; Kaewkungwal, Jaranit; Khamsiriwatchara, Amnat; Lawpoolsri, Saranath; Wongwit, Waranya; Wansatid, Peerawat
2015-07-20
Entering data onto paper-based forms, then digitizing them, is a traditional data-management method that might result in poor data quality, especially when the secondary data are incomplete, illegible, or missing. Transcription errors from source documents to case report forms (CRFs) are common, and subsequently the errors pass from the CRFs to the electronic database. This study aimed to demonstrate the usefulness and to evaluate the effectiveness of mobile phone camera applications in capturing health-related data, aiming for data quality and completeness as compared to current routine practices exercised by government officials. In this study, the concept of "data entry via phone image capture" (DEPIC) was introduced and developed to capture data directly from source documents. This case study was based on immunization history data recorded in a mother and child health (MCH) logbook. The MCH logbooks (kept by parents) were updated whenever parents brought their children to health care facilities for immunization. Traditionally, health providers are supposed to key in duplicate information of the immunization history of each child; both on the MCH logbook, which is returned to the parents, and on the individual immunization history card, which is kept at the health care unit to be subsequently entered into the electronic health care information system (HCIS). In this study, DEPIC utilized the photographic functionality of mobile phones to capture images of all immunization-history records on logbook pages and to transcribe these records directly into the database using a data-entry screen corresponding to logbook data records. DEPIC data were then compared with HCIS data-points for quality, completeness, and consistency. As a proof-of-concept, DEPIC captured immunization history records of 363 ethnic children living in remote areas from their MCH logbooks. 
Comparison of the 2 databases, DEPIC versus HCIS, revealed differences in the percentage of completeness and consistency of immunization history records. Comparing the records of each logbook in the DEPIC and HCIS databases, 17.3% (63/363) of children had complete immunization history records in the DEPIC database, whereas no complete records were found in the HCIS database. Regarding actual vaccination dates, comparison of the records taken from the MCH logbooks with those in the HCIS found that 24.2% (88/363) of the children's records were completely inconsistent. In addition, statistics derived from the DEPIC records showed higher immunization coverage and much greater compliance with the immunization schedule by age group when compared to records derived from the HCIS database. DEPIC, or the concept of collecting data via image capture directly from primary sources, has proven to be a useful data collection method in terms of completeness and consistency. In this study, DEPIC was implemented in the data collection of a single survey. The DEPIC concept, however, can easily be applied to other types of survey research, for example, collecting data on changes or trends based on image evidence over time. With its image evidence and audit-trail features, DEPIC has the potential to be used even in clinical studies, since it could generate improved data integrity and more reliable statistics for use in both health care and research settings.
Mechanically assisted liquid lens zoom system for mobile phone cameras
NASA Astrophysics Data System (ADS)
Wippermann, F. C.; Schreiber, P.; Bräuer, A.; Berge, B.
2006-08-01
Camera systems with small form factor are an integral part of today's mobile phones which recently feature auto focus functionality. Ready to market solutions without moving parts have been developed by using the electrowetting technology. Besides virtually no deterioration, easy control electronics and simple and therefore cost-effective fabrication, this type of liquid lenses enables extremely fast settling times compared to mechanical approaches. As a next evolutionary step mobile phone cameras will be equipped with zoom functionality. We present first order considerations for the optical design of a miniaturized zoom system based on liquid lenses and compare it to its mechanical counterpart. We propose a design of a zoom lens with a zoom factor of 2.5 considering state-of-the-art commercially available liquid lens products. The lens possesses auto focus capability and is based on liquid lenses and one additional mechanical actuator. The combination of liquid lenses and a single mechanical actuator enables extremely short settling times of about 20 ms for the auto focus and a simplified mechanical system design leading to lower production cost and longer life time. The camera system has a mechanical outline of 24 mm in length and 8 mm in diameter. The lens with f/3.5 provides market-relevant optical performance and is designed for an image circle of 6.25 mm (1/2.8" format sensor).
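The "first order considerations" mentioned above rest on the thin-lens combination formula: two lenses of focal lengths f₁ and f₂ separated by distance d have effective focal length f = f₁f₂/(f₁ + f₂ − d). A liquid lens tunes f₁ electrically while the single mechanical actuator changes d; the numbers below are illustrative, not the paper's prescription.

```python
def combined_focal(f1, f2, d):
    """Effective focal length of two thin lenses in air separated by d:
    1/f = 1/f1 + 1/f2 - d/(f1*f2)."""
    return f1 * f2 / (f1 + f2 - d)
```

Sweeping either `f1` (electrowetting) or `d` (actuator) changes the system focal length, and the zoom factor is the ratio of the extreme effective focal lengths.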
NASA Astrophysics Data System (ADS)
Fan, Yang-Tung; Peng, Chiou-Shian; Chu, Cheng-Yu
2000-12-01
New markets are emerging for digital electronic image devices, especially in visual communications, PC cameras, mobile/cell phones, security systems, toys, vehicle imaging systems, and computer peripherals for document capture. A one-chip image system, in which the image sensor has a full digital interface, can bring image capture devices into our daily lives. Adding a color filter to such an image sensor, in a pattern of pixel mosaics or wide stripes, makes images more realistic and colorful. One could say that color filters make life more colorful. What is a color filter? A color filter transmits only the color with the specific wavelength and transmittance matching the filter itself, blocking the rest of the incoming light. The color filter process coats and patterns green, red, and blue (or cyan, magenta, and yellow) mosaic resists onto the matching pixels of the image sensing array. From the signal captured at each pixel, the scene image can be reconstructed. The wide use of digital cameras and multimedia applications today makes the future of color filters bright. Although challenging, the color filter process is well worth developing. We provide the best service in terms of short cycle time, excellent color quality, and high, stable yield. The key issues that an advanced color process must solve and implement are planarization and micro-lens technology. Many key points of color filter process technology that must be considered are also described in this paper.
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture for measuring 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering, and other fields, an optical motion capture method with one camera has not been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration methods yield 3D coordinate transformation parameters and a lens distortion parameter using the modified DLT method. The triangle markers make it possible to calculate the coordinate value in the depth direction in camera coordinates. Experiments on 3D position measurement using the MMC in a measurement space of a 2 m cube show that the average error in the measured center of gravity of a triangle marker was less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. By placing a triangle marker on each human joint, the MMC was able to capture walking, standing-up, and bending-and-stretching motions. In addition, a method using a triangle marker together with conventional spherical markers was proposed. Finally, a method to estimate the position of a marker from its measured velocity was proposed in order to improve the accuracy of the MMC.
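Why known side lengths recover the depth direction can be seen from the pinhole model: a side of physical length L imaging to l pixels under focal length f (in pixels) lies at depth Z = fL/l. This sketch recovers depth only; the MMC additionally uses all three sides (and lens-distortion correction) to constrain marker orientation.

```python
def marker_depth(side_len_mm, side_len_px, focal_px):
    """Pinhole depth estimate from one marker side of known length:
    Z = f * L / l, with f and l in pixels and L (and Z) in mm."""
    return focal_px * side_len_mm / side_len_px
```

For example, a 100 mm side spanning 50 pixels under a 1000-pixel focal length places the marker 2 m from the camera.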
Comparison and evaluation of datasets for off-angle iris recognition
NASA Astrophysics Data System (ADS)
Kurtuncu, Osman M.; Cerme, Gamze N.; Karakaya, Mahmut
2016-05-01
In this paper, we investigated the publicly available iris recognition datasets and their data capture procedures in order to determine whether they are suitable for stand-off iris recognition research. The majority of iris recognition datasets include only frontal iris images. Even when a dataset includes off-angle iris images, the frontal and off-angle iris images are not captured at the same time. Comparison of frontal and off-angle iris images then shows not only differences in gaze angle but also changes in pupil dilation and accommodation. In order to isolate the effect of gaze angle from other challenging issues, including dilation and accommodation, the frontal and off-angle iris images should be captured at the same time by two different cameras. Therefore, in this work we developed an iris image acquisition platform using two cameras, where one camera captures a frontal iris image and the other captures iris images from off-angle. Based on the comparison of Hamming distances between frontal and off-angle iris images captured with the two-camera setup and the one-camera setup, we observed that the Hamming distance in the two-camera setup is lower than in the one-camera setup, with differences ranging from 0.001 to 0.05. These results show that, in order to obtain accurate results in off-angle iris recognition research, a two-camera setup is necessary to distinguish the challenging issues from each other.
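The matching score whose setup-to-setup differences the study reports is the standard fractional Hamming distance between binary iris codes, counted only over bits valid in both occlusion masks; a minimal sketch:

```python
import numpy as np

def iris_hamming(code_a, code_b, mask_a, mask_b):
    """Fractional Hamming distance: disagreeing bits / valid bits,
    where a bit is valid only if unmasked in both codes."""
    valid = mask_a & mask_b
    return ((code_a ^ code_b) & valid).sum() / valid.sum()
```

Scores near 0 indicate the same iris; a 0.001-0.05 reduction in this score is therefore a meaningful gain in genuine-match quality.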
Cell Phone-Based System (Chaak) for Surveillance of Immatures of Dengue Virus Mosquito Vectors
LOZANO–FUENTES, SAUL; WEDYAN, FADI; HERNANDEZ–GARCIA, EDGAR; SADHU, DEVADATTA; GHOSH, SUDIPTO; BIEMAN, JAMES M.; TEP-CHEL, DIANA; GARCÍA–REJÓN, JULIÁN E.; EISEN, LARS
2014-01-01
Capture of surveillance data on mobile devices and rapid transfer of such data from these devices into an electronic database or data management and decision support systems promote timely data analyses and public health response during disease outbreaks. Mobile data capture is used increasingly for malaria surveillance and holds great promise for surveillance of other neglected tropical diseases. We focused on mosquito-borne dengue, with the primary aims of: 1) developing and field-testing a cell phone-based system (called Chaak) for capture of data relating to the surveillance of the mosquito immature stages, and 2) assessing, in the dengue endemic setting of Mérida, México, the cost-effectiveness of this new technology versus paper-based data collection. Chaak includes a desktop component, where a manager selects premises to be surveyed for mosquito immatures, and a cell phone component, where the surveyor receives the assigned tasks and captures the data. Data collected on the cell phone can be transferred to a central database through different modes of transmission, including near-real time where data are transferred immediately (e.g., over the Internet) or by first storing data on the cell phone for future transmission. Spatial data are handled in a novel, semantically driven, geographic information system. Compared with a pen-and-paper-based method, use of Chaak improved the accuracy and increased the speed of data transcription into an electronic database. The cost-effectiveness of using the Chaak system will depend largely on the up-front cost of purchasing cell phones and the recurring cost of data transfer over a cellular network. PMID:23926788
Design framework for a spectral mask for a plenoptic camera
NASA Astrophysics Data System (ADS)
Berkner, Kathrin; Shroff, Sapna A.
2012-01-01
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, capturing directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Only a few address capturing spectral information of the scene. It has been demonstrated that by modifying a plenoptic camera with a filter array containing different spectral filters inserted in the pupil plane of the main lens, sampling of the spectral dimension of the plenoptic function is performed. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimizing the spectral mask for a few sample applications.
Novel Robotic Tools for Piping Inspection and Repair, Phase 1
2014-02-13
List of figures (fragment): Figure 57 - Accowle ODVS cross section and reflective path; Figure 58 - Leopard Imaging HD...mounted to iPhone; Figure 63 - Kogeto mounted to Leopard Imaging HD...; Figure 65 - Leopard Imaging HD camera pipe test (letters); Figure 66 - Leopard Imaging HD camera
Hwang, Min Gu; Har, Dong Hwan
2017-11-01
This study designs a method of identifying the camera model used to record videos distributed through mobile phones and of determining whether a mobile phone video is the original version, for use as legal evidence. For this analysis, an experiment was conducted to find the unique characteristics of each mobile phone. The videos recorded by mobile phones were analyzed to establish the delay time of sound signals, and the differences between the delay times of sound signals for different mobile phones were traced by classifying their characteristics. Furthermore, the sound input signals for mobile phone videos used as legal evidence were analyzed to ascertain whether they have the unique characteristics of the original version. The objective of this study was to find a method for validating mobile phone videos as legal evidence through differences in the delay times of sound input signals. © 2017 American Academy of Forensic Sciences.
An earthquake in Japan caused large waves in Norwegian fjords
NASA Astrophysics Data System (ADS)
Schult, Colin
2013-08-01
Early on a winter morning a few years ago, many residents of western Norway who lived or worked along the shores of the nation's fjords were startled to see the calm morning waters suddenly begin to rise and fall. Starting at around 7:15 A.M. local time and continuing for nearly 3 hours, waves up to 1.5 meters high coursed through the previously still fjord waters. The scene was captured by security cameras and by people with cell phones, reported to local media, and investigated by a local newspaper. Drawing on this footage, and using a computational model and observations from a nearby seismic station, Bondevik et al. identified the cause of the waves—the powerful magnitude 9.0 Tohoku earthquake that hit off the coast of Japan half an hour earlier.
A 3D photographic capsule endoscope system with full field of view
NASA Astrophysics Data System (ADS)
Ou-Yang, Mang; Jeng, Wei-De; Lai, Chien-Cheng; Kung, Yi-Chinn; Tao, Kuan-Heng
2013-09-01
Current capsule endoscopes use one camera to capture surface images of the intestine. A single camera can observe an abnormal point, but cannot provide complete information about it. Using two cameras can generate 3D images, but the visual plane changes as the capsule endoscope rotates, so two cameras cannot capture the image information completely. To solve this problem, this research presents a new kind of capsule endoscope for capturing 3D images: a 3D photographic capsule endoscope system. The system uses three cameras to capture images in real time. The advantage is an increase in viewing range of up to 2.99 times with respect to the two-camera system. Combined with a 3D monitor, the system provides exact information about symptomatic points, helping doctors diagnose disease.
NASA Astrophysics Data System (ADS)
Liu, Yu-Che; Huang, Chung-Lin
2013-03-01
This paper proposes a multi-PTZ-camera control mechanism to acquire close-up imagery of human subjects in a surveillance system. The control algorithm is based on the output of multi-camera, multi-target tracking. The three main concerns of the algorithm are (1) imagery of the human subject's face for biometric purposes, (2) optimal video quality of the human subjects, and (3) minimum hand-off time. Here, we define an objective function based on expected capture conditions such as the camera-subject distance, pan-tilt angles of capture, face visibility, and others. This objective function serves to effectively balance the number of captures per subject and the quality of the captures. In the experiments, we demonstrate the performance of the system, which operates in real time under real-world conditions on three PTZ cameras.
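A weighted objective of the kind described can be sketched as follows; the terms, weights, and ideal distance below are illustrative assumptions, not the paper's calibrated function:

```python
import math

def capture_score(distance_m, pan_deg, tilt_deg, face_visibility,
                  w_dist=0.4, w_angle=0.3, w_face=0.3, ideal_dist_m=5.0):
    """Toy capture objective in [0, 1]: prefer a near-ideal camera-subject
    distance, small pan/tilt angles, and a visible face."""
    dist_term = math.exp(-abs(distance_m - ideal_dist_m) / ideal_dist_m)
    angle_term = max(0.0, 1.0 - (abs(pan_deg) + abs(tilt_deg)) / 180.0)
    return w_dist * dist_term + w_angle * angle_term + w_face * face_visibility

# A frontal subject at the ideal distance outscores an oblique, distant one,
# so a scheduler maximizing this score favors biometric-quality captures.
frontal = capture_score(5, 0, 0, 1.0)
oblique = capture_score(12, 60, 20, 0.3)
```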
Keeping Students and Staff Safe from Technology Abuse
ERIC Educational Resources Information Center
Ruder, Robert
2009-01-01
With the number of students who own cellular phones increasing at a dazzling rate, designing a school district-wide multilevel firewall to address electronic intrusions is a prudent course of action. In addition to invading a student's privacy in a locker room or bathroom, students can use camera phones to cheat on tests. Students can photograph a…
Registration of Large Motion Blurred Images
2016-05-09
in handling the dynamics of the capturing system, for example, a drone. CMOS sensors, used in recent times, when employed in these cameras produce two types of blur in the captured image when there is camera motion during exposure. However, contemporary CMOS sensors employ an electronic rolling shutter (RS
Evaluation of Android Smartphones for Telepathology
Ekong, Donald; Liu, Fang; Brown, G. Thomas; Ghosh, Arunima; Fontelo, Paul
2017-01-01
Background: In the year 2014, Android smartphones accounted for one-third of mobile connections globally but are predicted to increase to two-thirds by 2020. In developing countries, where teleconsultations can benefit health-care providers most, the ratio is even higher. This study compared the use of two Android phones, an 8 megapixel (MP) and a 16 MP phone, for capturing microscopic images. Method: The Android phones were used to capture images and videos of a gastrointestinal biopsy teaching set of referred cases from the Armed Forces Institute of Pathology (AFIP). The acquired images and videos were reviewed online by two pathologists for image quality, adequacy for diagnosis, usefulness of video overviews, and confidence in diagnosis, on a 5-point Likert scale. Results: The results show higher means on the 5-point Likert scale for the 8 MP versus the 16 MP phone, statistically significant for adequacy of images for rendering a diagnosis (4.0 vs. 3.75) and for agreement with the reference diagnosis (2.33 vs. 2.07). Although image quality was rated higher for the 16 MP phone (3.8 vs. 3.65), this difference was not statistically significant. Adding video images of the entire specimen was found to be useful for evaluating the slides (combined mean, 4.0). Conclusion: For telepathology and other image-dependent practices in developing countries, Android phones could be a useful tool for capturing images. PMID:28480119
Tan, Johnson C H; Meadows, Howard; Gupta, Aanchal; Yeung, Sonia N; Moloney, Gregory
2014-03-01
The aim of this study was to describe a modification of the Miyake-Apple posterior video analysis for simultaneous visualization of the anterior and posterior corneal surfaces during wet laboratory-based deep anterior lamellar keratoplasty (DALK). A human donor corneoscleral button was affixed to a microscope slide and placed onto a custom-made mounting box. A big bubble DALK was performed on the cornea in the wet laboratory. An 11-diopter intraocular lens was positioned over the aperture of the back camera of an iPhone; this served to video record the posterior view of the corneoscleral button during big bubble formation. An overhead operating microscope with an attached video camcorder recorded the anterior view during the surgery. The anterior and posterior views of the wet laboratory-based DALK surgery were simultaneously captured and edited using video editing software, so that the formation of the big bubble can be studied. This video recording camera system has the potential to act as a valuable research and teaching tool in corneal lamellar surgery, especially for studying the behavior of big bubble formation in DALK.
A vision-based method for planar position measurement
NASA Astrophysics Data System (ADS)
Chen, Zong-Hao; Huang, Peisen S.
2016-12-01
In this paper, a vision-based method is proposed for three-degree-of-freedom (3-DOF) planar position (X, Y, θZ) measurement. This method uses a single camera to capture the image of a 2D periodic pattern and then uses the 2D discrete Fourier transform (2D DFT) method to estimate the phase of its fundamental frequency component for position measurement. To improve position measurement accuracy, the phase estimation error of the 2D DFT is analyzed and a phase estimation method is proposed. Different simulations are done to verify the feasibility of this method and study the factors that influence the accuracy and precision of phase estimation. To demonstrate the performance of the proposed method for position measurement, a prototype encoder consisting of a black-and-white industrial camera with VGA resolution (480 × 640 pixels) and an iPhone 4s has been developed. Experimental results show the peak-to-peak resolutions to be 3.5 nm in the X axis, 8 nm in the Y axis, and 4 µrad in the θZ axis. The corresponding RMS resolutions are 0.52 nm, 1.06 nm, and 0.60 µrad, respectively.
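The core idea, reading position from the phase of the fundamental frequency bin of a periodic pattern, can be sketched with a synthetic grid (an illustrative minimal version; the paper's refined phase-estimation method is not reproduced here):

```python
import numpy as np

def pattern_phases(img, kx, ky):
    """Phases (radians) of the fundamental frequency bins of a 2D periodic
    pattern: bin (0, kx) for the x direction, bin (ky, 0) for y."""
    F = np.fft.fft2(img)
    return np.angle(F[0, kx]), np.angle(F[ky, 0])

# Hypothetical demo: recover a known sub-period shift of a grid pattern.
N, k = 256, 8                        # image size, cycles per image
period = N / k                       # pattern period in pixels
xx, yy = np.meshgrid(np.arange(N), np.arange(N))
ref = np.cos(2*np.pi*k*xx/N) + np.cos(2*np.pi*k*yy/N)
dx, dy = 2.3, -1.7                   # true displacement in pixels
mov = np.cos(2*np.pi*k*(xx - dx)/N) + np.cos(2*np.pi*k*(yy - dy)/N)

px0, py0 = pattern_phases(ref, k, k)
px1, py1 = pattern_phases(mov, k, k)
est_dx = (px0 - px1) / (2*np.pi) * period   # phase difference -> pixels
est_dy = (py0 - py1) / (2*np.pi) * period
```

Because the displacement is read from a phase, the resolution is a small fraction of the pattern period, which is how nanometre-level figures become possible with a fine pitch.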
The use of vision-based image quality metrics to predict low-light performance of camera phones
NASA Astrophysics Data System (ADS)
Hultgren, B.; Hertel, D.
2010-01-01
Small digital camera modules such as those in mobile phones have become ubiquitous. Their low-light performance is of utmost importance since a high percentage of images are made under low lighting conditions where image quality failure may occur due to blur, noise, and/or underexposure. These modes of image degradation are not mutually exclusive: they share common roots in the physics of the imager, the constraints of image processing, and the general trade-off situations in camera design. A comprehensive analysis of failure modes is needed in order to understand how their interactions affect overall image quality. Low-light performance is reported for DSLR, point-and-shoot, and mobile phone cameras. The measurements target blur, noise, and exposure error. Image sharpness is evaluated from three different physical measurements: static spatial frequency response, handheld motion blur, and statistical information loss due to image processing. Visual metrics for sharpness, graininess, and brightness are calculated from the physical measurements, and displayed as orthogonal image quality metrics to illustrate the relative magnitude of image quality degradation as a function of subject illumination. The impact of each of the three sharpness measurements on overall sharpness quality is displayed for different light levels. The power spectrum of the statistical information target is a good representation of natural scenes, thus providing a defined input signal for the measurement of power-spectrum based signal-to-noise ratio to characterize overall imaging performance.
The future of consumer cameras
NASA Astrophysics Data System (ADS)
Battiato, Sebastiano; Moltisanti, Marco
2015-03-01
In the last two decades, multimedia and in particular imaging devices (camcorders, tablets, mobile phones, etc.) have diffused dramatically. Moreover, the increase in their computational performance, combined with higher storage capability, allows them to process large amounts of data. In this paper an overview of the current trends in the consumer camera market and technology is given, providing also some details about the recent past (from the digital still camera up to today) and forthcoming key issues.
Measurement of soil color: a comparison between smartphone camera and the Munsell color charts
USDA-ARS?s Scientific Manuscript database
Soil color is one of the most valuable soil properties for assessing and monitoring soil health. Here we present the results of tests of a new soil color app for mobile phones. The comparisons include various smartphones cameras under different natural illumination conditions (sunny and cloudy) and ...
Using a Smartphone Camera for Nanosatellite Attitude Determination
NASA Astrophysics Data System (ADS)
Shimmin, R.
2014-09-01
The PhoneSat project at NASA Ames Research Center has repeatedly flown a commercial cellphone in space. As this project continues, additional utility is being extracted from the cell phone hardware to enable more complex missions. The camera in particular shows great potential as an instrument for position and attitude determination, but this requires complex image processing. This paper outlines progress towards that image processing capability. Initial tests on a small collection of sample images have demonstrated the determination of a Moon vector from an image by automatic thresholding and centroiding, allowing the calibration of existing attitude control systems. Work has been undertaken on a further set of sample images towards horizon detection using a variety of techniques including thresholding, edge detection, applying a Hough transform, and circle fitting. Ultimately it is hoped this will allow calculation of an Earth vector for attitude determination and an approximate altitude. A quick discussion of work towards using the camera as a star tracker is then presented, followed by an introduction to further applications of the camera on space missions.
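The thresholding-and-centroiding step for extracting a Moon vector can be sketched as follows (a simplified stand-in for the image processing described above; the automatic threshold rule is an assumption):

```python
import numpy as np

def bright_centroid(img, n_sigma=3.0):
    """Threshold a grayscale frame at mean + n_sigma * std and return the
    intensity-weighted centroid (x, y) of the bright pixels, or None if
    nothing exceeds the threshold. A crude Moon-vector estimate in pixels."""
    thresh = img.mean() + n_sigma * img.std()
    mask = img > thresh
    if not mask.any():
        return None
    w = np.where(mask, img, 0.0)
    ys, xs = np.indices(img.shape)
    total = w.sum()
    return (xs * w).sum() / total, (ys * w).sum() / total

# Hypothetical frame: dark sky with a bright 5x5 "Moon" centered at (40, 25).
frame = np.zeros((64, 64))
frame[23:28, 38:43] = 1.0
cx, cy = bright_centroid(frame)
```

The pixel centroid, combined with the camera intrinsics, would then be converted into a body-frame direction vector for attitude estimation.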
NASA Astrophysics Data System (ADS)
McGuire, P. C.; Gross, C.; Wendt, L.; Bonnici, A.; Souza-Egipsy, V.; Ormö, J.; Díaz-Martínez, E.; Foing, B. H.; Bose, R.; Walter, S.; Oesker, M.; Ontrup, J.; Haschke, R.; Ritter, H.
2010-01-01
In previous work, a platform was developed for testing computer-vision algorithms for robotic planetary exploration. This platform consisted of a digital video camera connected to a wearable computer for real-time processing of images at geological and astrobiological field sites. The real-time processing included image segmentation and the generation of interest points based upon uncommonness in the segmentation maps. Also in previous work, this platform for testing computer-vision algorithms has been ported to a more ergonomic alternative platform, consisting of a phone camera connected via the Global System for Mobile Communications (GSM) network to a remote-server computer. The wearable-computer platform has been tested at geological and astrobiological field sites in Spain (Rivas Vaciamadrid and Riba de Santiuste), and the phone camera has been tested at a geological field site in Malta. In this work, we (i) apply a Hopfield neural-network algorithm for novelty detection based upon colour, (ii) integrate a field-capable digital microscope on the wearable computer platform, (iii) test this novelty detection with the digital microscope at Rivas Vaciamadrid, (iv) develop a Bluetooth communication mode for the phone-camera platform, in order to allow access to a mobile processing computer at the field sites, and (v) test the novelty detection on the Bluetooth-enabled phone camera connected to a netbook computer at the Mars Desert Research Station in Utah. This systems engineering and field testing have together allowed us to develop a real-time computer-vision system that is capable, for example, of identifying lichens as novel within a series of images acquired in semi-arid desert environments. We acquired sequences of images of geologic outcrops in Utah and Spain consisting of various rock types and colours to test this algorithm. 
The algorithm robustly recognized previously observed units by their colour, while requiring only a single image or a few images to learn colours as familiar, demonstrating its fast learning capability.
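The colour-novelty idea can be illustrated with a nearest-prototype rule (a deliberately simplified stand-in; the authors use a Hopfield neural network, which is not reproduced here):

```python
import numpy as np

class ColourNovelty:
    """Remember mean colours of previously seen material and flag a colour
    as novel if it is far from every remembered prototype."""
    def __init__(self, tol=30.0):
        self.prototypes = []   # learned RGB prototypes
        self.tol = tol         # distance below which a colour is familiar

    def is_novel(self, rgb):
        rgb = np.asarray(rgb, float)
        return all(np.linalg.norm(rgb - p) > self.tol
                   for p in self.prototypes)

    def learn(self, rgb):
        if self.is_novel(rgb):
            self.prototypes.append(np.asarray(rgb, float))

det = ColourNovelty()
det.learn([120, 90, 60])             # a sandstone-like colour becomes familiar
print(det.is_novel([125, 92, 58]))   # similar colour -> False
print(det.is_novel([90, 140, 70]))   # a lichen-green colour -> True
```

As in the field tests, only one or a few samples are needed to make a colour familiar, after which nearby colours stop triggering the detector.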
Endockscope: using mobile technology to create global point of service endoscopy.
Sohn, William; Shreim, Samir; Yoon, Renai; Huynh, Victor B; Dash, Atreya; Clayman, Ralph; Lee, Hak J
2013-09-01
Recent advances and the widespread availability of smartphones have ushered in a new wave of innovations in healthcare. We present our initial experience with Endockscope, a new docking system that optimizes the coupling of the iPhone 4S with modern endoscopes. Using the United States Air Force resolution target, we compared the image resolution (line pairs/mm) of a flexible cystoscope coupled to the Endockscope+iPhone to the Storz high definition (HD) camera (H3-Z Versatile). We then used the Munsell ColorChecker chart to compare the color resolution with a 0° laparoscope. Furthermore, 12 expert endoscopists blindly compared and evaluated images from a porcine model using a cystoscope and ureteroscope for both systems. Finally, we also compared the cost (average of two company listed prices) and weight (lb) of the two systems. Overall, the image resolution allowed by the Endockscope was identical to the traditional HD camera (4.49 vs 4.49 lp/mm). Red (ΔE=9.26 vs 9.69) demonstrated better color resolution for the iPhone, but green (ΔE=7.76 vs 10.95) and blue (ΔE=12.35 vs 14.66) revealed better color resolution with the Storz HD camera. Expert reviews of cystoscopic images acquired with the HD camera were superior in image, color, and overall quality (P=0.002, 0.042, and 0.003). In contrast, the ureteroscopic reviews yielded no statistical difference in image, color, and overall quality (P=1, 0.203, and 0.120). The overall cost of the Endockscope+iPhone was $154 compared with $46,623 for a standard HD system. The weight of the mobile-coupled system was 0.47 lb and 1.01 lb for the Storz HD camera. Endockscope demonstrated the feasibility of coupling endoscopes to a smartphone. The lighter and inexpensive Endockscope acquired images of the same resolution and acceptable color resolution. When evaluated by expert endoscopists, overall image quality was equivalent for flexible ureteroscopy and somewhat inferior, but still acceptable, for flexible cystoscopy.
Cyber Bullying: An Old Problem in a New Guise?
ERIC Educational Resources Information Center
Campbell, Marilyn A.
2005-01-01
Although technology provides numerous benefits to young people, it also has a "dark side", as it can be used for harm, not only by some adults but also by the young people themselves. E-mail, texting, chat rooms, mobile phones, mobile phone cameras and web sites can and are being used by young people to bully peers. It is now a global…
Dangers for Principals and Students When Conducting Investigations of Sexting in Schools
ERIC Educational Resources Information Center
Hachiya, Robert F.
2017-01-01
Cell phones and the use of social media have changed the environment in schools, and principals recognize all too well that new technology is almost always accompanied by new ways to misuse or abuse that technology. The addition of a camera to cell phones has unfortunately been accompanied with the serious problem of "sexting" by youth…
NASA Technical Reports Server (NTRS)
Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob
2001-01-01
To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability, and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned at 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one each in y and z). Richards compared the characteristics of different commercially available systems set up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability, and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.
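Two of the three characteristics, accuracy and repeatability, can be estimated from repeated quasistatic measurements of a marker along these lines (illustrative definitions and numbers, not the study's data):

```python
import numpy as np

def static_marker_stats(measured_mm, true_mm):
    """Accuracy (absolute difference between the mean measured position and
    the true position) and repeatability (sample standard deviation) from
    repeated static measurements of one marker coordinate."""
    measured = np.asarray(measured_mm, float)
    return abs(measured.mean() - true_mm), measured.std(ddof=1)

# Hypothetical repeated measurements of a marker placed at 100.0 mm:
acc, rep = static_marker_stats([100.02, 99.98, 100.01, 99.99], 100.0)
```

Resolution would additionally require probing the smallest displacement the system can distinguish, which needs a controlled translation stage rather than repeated static frames.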
Mobile cosmetics advisor: an imaging based mobile service
NASA Astrophysics Data System (ADS)
Bhatti, Nina; Baker, Harlyn; Chao, Hui; Clearwater, Scott; Harville, Mike; Jain, Jhilmil; Lyons, Nic; Marguier, Joanna; Schettino, John; Süsstrunk, Sabine
2010-01-01
Selecting cosmetics requires visual information and often benefits from the assessments of a cosmetics expert. In this paper we present a unique mobile imaging application that enables women to use their cell phones to get immediate expert advice when selecting personal cosmetic products. We derive the visual information from analysis of camera phone images, and provide the judgment of the cosmetics specialist through use of an expert system. The result is a new paradigm for mobile interactions: image-based information services exploiting the ubiquity of camera phones. The application is designed to work with any handset over any cellular carrier using commonly available MMS and SMS features. Targeted at the unsophisticated consumer, it must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system and not on the handset itself. We present the imaging pipeline technology and a comparison of the service's accuracy with respect to human experts.
A Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment
NASA Astrophysics Data System (ADS)
Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan
2016-06-01
Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane marker detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect lane marker features in perspective space and calculate the edges of lane markers in image sequences. Second, because the widths of the lane markers and the road lane are fixed under a standard structural road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. To verify the validity of this method, we installed a smart phone in the 'Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, we can calculate positions of lane markers that are accurate enough for the self-driving car to run smoothly on the road.
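The perspective-to-ground-plane transformation built from known lane geometry is, in essence, a homography. A generic direct-linear-transform sketch follows; the point correspondences are hypothetical, with road-plane coordinates assumed from a standard 3.5 m lane width:

```python
import numpy as np

def homography_dlt(src, dst):
    """Direct linear transform: 3x3 homography H mapping src points to dst
    points (>= 4 correspondences, no 3 collinear), determined up to scale."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u*x, -u*y, -u])
        A.append([0, 0, 0, x, y, 1, -v*x, -v*y, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 3)

def apply_h(H, pt):
    """Map an image point through H into road-plane coordinates."""
    x, y = pt
    w = H[2, 0]*x + H[2, 1]*y + H[2, 2]
    return ((H[0, 0]*x + H[0, 1]*y + H[0, 2]) / w,
            (H[1, 0]*x + H[1, 1]*y + H[1, 2]) / w)

# Hypothetical calibration: four lane-marker corners in the image and their
# road-plane coordinates in metres (x across the 3.5 m lane, y ahead).
image_pts = [(320, 400), (420, 400), (250, 480), (500, 480)]
road_pts  = [(0.0, 20.0), (3.5, 20.0), (0.0, 10.0), (3.5, 10.0)]
H = homography_dlt(image_pts, road_pts)
```

With H known, any detected lane-marker pixel can be mapped into the vehicle coordinate system to build the local map described above.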
Data Mining and Information Technology: Its Impact on Intelligence Collection and Privacy Rights
2007-11-26
sources include: Cameras - Digital cameras (still and video ) have been improving in capability while simultaneously dropping in cost at a rate...citizen is caught on camera 300 times each day.5 The power of extensive video coverage is magnified greatly by the nascent capability for voice and...software on security videos and tracking cell phone usage in the local area. However, it would only return the names and data of those who
Human movement activity classification approaches that use wearable sensors and mobile devices
NASA Astrophysics Data System (ADS)
Kaghyan, Sahak; Sarukhanyan, Hakob; Akopian, David
2013-03-01
Cell phones and other mobile devices have become part of human culture and are changing activity and lifestyle patterns. Mobile phone technology continuously evolves and incorporates more and more sensors to enable advanced applications. The latest generations of smart phones incorporate GPS and WLAN location-finding modules, vision cameras, microphones, accelerometers, temperature sensors, etc. The availability of these sensors in mass-market communication devices creates exciting new opportunities for data mining applications. Healthcare applications exploiting built-in sensors are particularly promising. This paper reviews different approaches to human activity recognition.
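A minimal example of the accelerometer feature extraction common to the surveyed approaches; the sampling rate, window length, and rest/movement signals below are illustrative assumptions:

```python
import numpy as np

def window_features(accel_xyz, fs=50, win_s=2.0):
    """Mean and standard deviation of the acceleration magnitude over fixed
    windows: a typical minimal feature set for activity classification."""
    mag = np.linalg.norm(accel_xyz, axis=1)       # per-sample magnitude
    n = int(fs * win_s)                            # samples per window
    k = len(mag) // n                              # whole windows only
    return np.array([(w.mean(), w.std()) for w in np.split(mag[:k * n], k)])

# Hypothetical signal: 2 s of rest then 2 s of vigorous movement at 50 Hz.
rng = np.random.default_rng(3)
rest = np.tile([0.0, 0.0, 9.81], (100, 1)) + rng.normal(0, 0.05, (100, 3))
move = np.tile([0.0, 0.0, 9.81], (100, 1)) + rng.normal(0, 3.0, (100, 3))
feats = window_features(np.vstack([rest, move]))
```

Features like these are then fed to a classifier (decision tree, k-NN, etc.) to label each window with an activity; the movement window shows a much larger magnitude variance than the rest window.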
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends traditional photography by capturing both the spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capturing light fields. A major drawback of MLA-based light field cameras is low spatial resolution, due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning-based light field enhancement approach in which both the spatial and angular resolution of the captured light field are enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
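The post-capture refocusing capability mentioned above can be illustrated with classic shift-and-add rendering over sub-aperture views (this sketches the light field concept, not the paper's CNN-based enhancement):

```python
import numpy as np

def refocus(lf, alpha):
    """Shift-and-add refocusing of a 4D light field lf[u, v, y, x]: each
    sub-aperture view is shifted in proportion to its (u, v) offset from
    the central view, then the views are averaged."""
    U, V, H, W = lf.shape
    uc, vc = (U - 1) // 2, (V - 1) // 2
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            dy = int(round(alpha * (u - uc)))
            dx = int(round(alpha * (v - vc)))
            out += np.roll(lf[u, v], (dy, dx), axis=(0, 1))
    return out / (U * V)

# Hypothetical light field: sub-aperture views are shifted copies of a single
# scene plane; refocusing with the matching alpha realigns them exactly.
rng = np.random.default_rng(1)
scene = rng.random((32, 32))
lf = np.empty((3, 3, 32, 32))
for u in range(3):
    for v in range(3):
        lf[u, v] = np.roll(scene, (-(u - 1), -(v - 1)), axis=(0, 1))
sharp = refocus(lf, 1.0)
```

Planes whose disparity matches alpha come into focus while others blur, which is exactly the single-shot refocusing trade-off the shared sensor pays for in spatial resolution.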
Light field rendering with omni-directional camera
NASA Astrophysics Data System (ADS)
Todoroki, Hiroshi; Saito, Hideo
2003-06-01
This paper presents an approach to capturing the visual appearance of a real environment such as the interior of a room. We propose a method for generating arbitrary viewpoint images by building a light field with an omni-directional camera, which can capture the wide circumference. The omni-directional camera used in this technique is a special camera with a hyperbolic mirror in its upper part, so that we can capture luminosity in the environment over a range of 360 degrees of circumference in one image. We apply the light field method, one technique of image-based rendering (IBR), for generating the arbitrary viewpoint images. The light field is a kind of database that records the luminosity information in the object space. We employ the omni-directional camera for constructing the light field, so that we can collect images from many view directions. Thus our method allows the user to explore a wide scene and achieve a realistic representation of the virtual environment. To demonstrate the proposed method, we captured an image sequence of our laboratory's interior environment with an omni-directional camera and successfully generated arbitrary viewpoint images for a virtual tour of the environment.
NASA Astrophysics Data System (ADS)
Ceylan Koydemir, Hatice; Bogoch, Isaac I.; Tseng, Derek; Ephraim, Richard K. D.; Duah, Evans; Tee, Joseph; Andrews, Jason R.; Ozcan, Aydogan
2016-03-01
Schistosomiasis is a parasitic and neglected tropical disease that affects more than 200 million people across the world, with school-aged children disproportionately affected. Here we present field-testing results of a handheld and cost-effective smartphone-based microscope in rural Ghana, Africa, for point-of-care diagnosis of S. haematobium infection. In this mobile-phone microscope, a custom-designed 3D-printed opto-mechanical attachment (~150 g) is placed in contact with the smartphone camera lens, creating an imaging system with a half-pitch resolution of ~0.87 µm. This unit includes an external lens (also taken from a mobile-phone camera), a sample tray, a z-stage to adjust the focus, two light-emitting diodes (LEDs), and two diffusers for uniform illumination of the sample. In our field-testing, 60 urine samples collected from children were used, where the prevalence of the infection was 72.9%. After concentration of the sample with centrifugation, the sediment was placed on a glass slide and S. haematobium eggs were first identified/quantified using conventional benchtop microscopy by an expert diagnostician, and then a second expert, blinded to these results, determined the presence/absence of eggs using our mobile-phone microscope. Compared to conventional microscopy, our mobile-phone microscope had a diagnostic sensitivity of 72.1%, specificity of 100%, positive predictive value of 100%, and a negative predictive value of 57.1%. Furthermore, our mobile-phone platform demonstrated a sensitivity of 65.7% and 100% for low-intensity infections (≤50 eggs/10 mL urine) and high-intensity infections (>50 eggs/10 mL urine), respectively. We believe that this cost-effective and field-portable mobile-phone microscope may play an important role in the diagnosis of schistosomiasis and various other global health challenges.
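The reported diagnostic figures follow from standard 2×2 confusion-matrix definitions, sketched below with hypothetical counts (the study's raw counts are not given in the abstract):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive predictive value (PPV), and
    negative predictive value (NPV) from a 2x2 confusion matrix."""
    sensitivity = tp / (tp + fn)   # fraction of true infections detected
    specificity = tn / (tn + fp)   # fraction of negatives correctly cleared
    ppv = tp / (tp + fp)           # reliability of a positive call
    npv = tn / (tn + fn)           # reliability of a negative call
    return sensitivity, specificity, ppv, npv

# Illustrative counts only, not the study's data: with no false positives,
# specificity and PPV are 100% while sensitivity and NPV are lower.
sens, spec, ppv, npv = diagnostic_metrics(tp=31, fp=0, fn=12, tn=17)
```

Note that NPV depends on prevalence: in a high-prevalence setting like this one, even a modest false-negative rate drags NPV well below specificity.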
Optofluidic Fluorescent Imaging Cytometry on a Cell Phone
Zhu, Hongying; Mavandadi, Sam; Coskun, Ahmet F.; Yaglidere, Oguzhan; Ozcan, Aydogan
2012-01-01
Fluorescent microscopy and flow cytometry are widely used tools in biomedical sciences. Cost-effective translation of these technologies to remote and resource-limited environments could create new opportunities especially for telemedicine applications. Toward this direction, here we demonstrate the integration of imaging cytometry and fluorescent microscopy on a cell phone using a compact, lightweight, and cost-effective optofluidic attachment. In this cell-phone-based optofluidic imaging cytometry platform, fluorescently labeled particles or cells of interest are continuously delivered to our imaging volume through a disposable microfluidic channel that is positioned above the existing camera unit of the cell phone. The same microfluidic device also acts as a multilayered optofluidic waveguide and efficiently guides our excitation light, which is butt-coupled from the side facets of our microfluidic channel using inexpensive light-emitting diodes. Since the excitation of the sample volume occurs through guided waves that propagate perpendicular to the detection path, our cell-phone camera can record fluorescent movies of the specimens as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the target solution of interest. We tested the performance of our cell-phone-based imaging cytometer by measuring the density of white blood cells in human blood samples, which provided a decent match to a commercially available hematology analyzer. We further characterized the imaging quality of the same platform to demonstrate a spatial resolution of ~2 μm. 
This cell-phone-enabled optofluidic imaging flow cytometer could especially be useful for rapid and sensitive imaging of bodily fluids for conducting various cell counts (e.g., toward monitoring of HIV+ patients) or rare cell analysis as well as for screening of water quality in remote and resource-poor settings. PMID:21774454
Shah, Kamal G; Singh, Vidhi; Kauffman, Peter C; Abe, Koji; Yager, Paul
2018-05-14
Paper-based diagnostic tests based on the lateral flow immunoassay concept promise low-cost, point-of-care detection of infectious diseases, but such assays suffer from poor limits of detection. One factor that contributes to poor analytical performance is a reliance on low-contrast chromophoric optical labels such as gold nanoparticles. Previous attempts to improve the sensitivity of paper-based diagnostics include replacing chromophoric labels with enzymes, fluorophores, or phosphors at the expense of increased fluidic complexity or the need for device readers with costly optoelectronics. Several groups, including our own, have proposed mobile phones as suitable point-of-care readers due to their low cost, ease of use, and ubiquity. However, extant mobile phone fluorescence readers require costly optical filters and were typically validated with only one camera sensor module, which is inappropriate for potential point-of-care use. In response, we propose to couple low-cost ultraviolet light-emitting diodes with long Stokes-shift quantum dots to enable ratiometric mobile phone fluorescence measurements without optical filters. Ratiometric imaging with unmodified smartphone cameras improves the contrast and attenuates the impact of excitation intensity variability by 15×. Practical application was shown with a lateral flow immunoassay for influenza A with nucleoproteins spiked into simulated nasal matrix. Limits of detection of 1.5 and 2.6 fmol were attained on two mobile phones, which are comparable to a gel imager (1.9 fmol), 10× better than imaging gold nanoparticles on a scanner (18 fmol), and >2 orders of magnitude better than gold nanoparticle-labeled assays imaged with mobile phones. Use of the proposed filter-free mobile phone imaging scheme is a first step toward enabling a new generation of highly sensitive, point-of-care fluorescence assays.
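The ratiometric measurement described above can be illustrated with a minimal sketch. This is not the authors' code; the arrays, scale factors, and function name are invented to show why dividing the emission channel by a reference channel cancels common excitation-intensity variation:

```python
import numpy as np

def ratiometric_signal(emission, reference):
    """Per-pixel ratio of emission to reference channel."""
    return emission / np.maximum(reference, 1e-9)

# Simulate the same scene imaged under two different excitation intensities.
rng = np.random.default_rng(0)
scene_em = rng.uniform(0.5, 1.0, size=(8, 8))   # hypothetical QD emission channel
scene_ref = rng.uniform(0.5, 1.0, size=(8, 8))  # hypothetical reference channel
bright = ratiometric_signal(3.0 * scene_em, 3.0 * scene_ref)
dim = ratiometric_signal(0.4 * scene_em, 0.4 * scene_ref)
# The ratio is essentially unchanged despite a 7.5x excitation difference.
print(np.allclose(bright, dim))
```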
Camera trap placement and the potential for bias due to trails and other features
Kolowski, Joseph M.; Forrester, Tavis D.
2017-01-01
Camera trapping has become an increasingly widespread tool for wildlife ecologists, with large numbers of studies relying on photo capture rates or presence/absence information. It is increasingly clear that camera placement can directly impact this kind of data, yet these biases are poorly understood. We used a paired camera design to investigate the effect of small-scale habitat features on species richness estimates, and capture rate and detection probability of several mammal species in the Shenandoah Valley of Virginia, USA. Cameras were deployed at either log features or on game trails with a paired camera at a nearby random location. Overall capture rates were significantly higher at trail and log cameras compared to their paired random cameras, and some species showed capture rates as much as 9.7 times greater at feature-based cameras. We recorded more species at both log (17) and trail features (15) than at their paired control cameras (13 and 12 species, respectively), yet richness estimates were indistinguishable after 659 and 385 camera nights of survey effort, respectively. We detected significant increases (ranging from 11–33%) in detection probability for five species resulting from the presence of game trails. For six species detection probability was also influenced by the presence of a log feature. This bias was most pronounced for the three rodents investigated, where in all cases detection probability was substantially higher (24.9–38.2%) at log cameras. Our results indicate that small-scale factors, including the presence of game trails and other features, can have significant impacts on species detection when camera traps are employed. Significant biases may result if the presence and quality of these features are not documented and either incorporated into analytical procedures, or controlled for in study design. PMID:29045478
Obstacle Detection and Avoidance of a Mobile Robotic Platform Using Active Depth Sensing
2014-06-01
At the price of nearly one tenth of a laser range finder, the Xbox Kinect uses an infrared projector and camera to capture images of its environment in three dimensions.
Introducing the depth transfer curve for 3D capture system characterization
NASA Astrophysics Data System (ADS)
Goma, Sergio R.; Atanassov, Kalin; Ramachandra, Vikas
2011-03-01
3D technology has recently made a transition from movie theaters to consumer electronic devices such as 3D cameras and camcorders. In addition to what 2D imaging conveys, 3D content also contains information regarding the scene depth. Scene depth is simulated through the strongest brain depth cue, namely retinal disparity. This can be achieved by capturing an image with horizontally separated cameras. Objects at different depths will be projected with different horizontal displacement on the left and right camera images. These images, when fed separately to either eye, lead to retinal disparity. Since the perception of depth is the single most important 3D imaging capability, an evaluation procedure is needed to quantify the depth capture characteristics. Evaluating depth capture characteristics subjectively is a very difficult task since the intended and/or unintended side effects from 3D image fusion (depth interpretation) by the brain are not immediately perceived by the observer, nor do such effects lend themselves easily to objective quantification. Objective evaluation of 3D camera depth characteristics is an important tool that can be used for "black box" characterization of 3D cameras. In this paper we propose a methodology to evaluate the 3D cameras' depth capture capabilities.
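The retinal-disparity cue described above follows the standard stereo relation, in which disparity is inversely proportional to depth. A minimal illustration (the focal length and baseline values are assumptions, not parameters from the paper):

```python
# For two horizontally separated cameras with focal length f (in pixels)
# and baseline B (in meters), an object at depth Z projects with
# horizontal disparity d = f * B / Z, so Z = f * B / d.
def depth_from_disparity(f_px, baseline_m, disparity_px):
    return f_px * baseline_m / disparity_px

f_px = 1000.0       # assumed focal length in pixels
baseline_m = 0.065  # assumed 65 mm camera separation
# An object with 13 px disparity lies at about 5 m.
print(round(depth_from_disparity(f_px, baseline_m, 13.0), 2))
```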
Stereoscopic augmented reality with pseudo-realistic global illumination effects
NASA Astrophysics Data System (ADS)
de Sorbier, Francois; Saito, Hideo
2014-03-01
Recently, augmented reality has become very popular and has appeared in our daily life through gaming, guidance systems, and mobile phone applications. However, inserting objects so that their appearance seems natural is still an issue, especially in an unknown environment. This paper presents a framework that demonstrates the capabilities of the Kinect for convincing augmented reality in an unknown environment. Rather than pre-computing a reconstruction of the scene, as most previous methods propose, we propose a dynamic capture of the scene that adapts to live changes of the environment. Our approach, based on the update of an environment map, can also detect the position of the light sources. Combining information from the environment map, the light sources, and the camera tracking, we can display virtual objects on stereoscopic devices with global illumination effects such as diffuse and mirror reflections, refractions, and shadows in real time.
Mobile Phones: Potential Sources of Nickel and Cobalt Exposure for Metal Allergic Patients
Mucci, Tania; Chong, Melanie; Lorton, Mark Davis; Fonacier, Luz
2013-01-01
The use of cellular phones has risen exponentially with over 300 million subscribers. Nickel has been detected in cell phones and reports of contact dermatitis attributable to metals are present in the literature. We determined nickel and cobalt content in popular cell phones in the United States. Adults (>18 years) who owned a flip phone, Blackberry®, or iPhone® were eligible. Seventy-two cell phones were tested using SmartPractice's® commercially available nickel and cobalt spot tests. Test areas included buttons, keypad, speakers, camera, and metal panels. Of the 72 cell phones tested, no iPhones or Droids® tested positive for nickel or cobalt. About 29.4% of Blackberrys [95% confidence interval (CI), 13%–53%] tested positive for nickel; none were positive for cobalt. About 90.5% of flip phones (95% CI, 70%–99%) tested positive for nickel and 52.4% of flip phones (95% CI, 32%–72%) tested positive for cobalt. Our study indicates that nickel and cobalt are present in popular cell phones. Patients with known nickel or cobalt allergy may consider their cellular phones as a potential source of exposure. Further studies are needed to examine whether there is a direct association with metal content in cell phones and the manifestation of metal allergy. PMID:24380018
Search Strategies of Visually Impaired Persons using a Camera Phone Wayfinding System
Manduchi, R.; Coughlan, J.; Ivanchenko, V.
2016-01-01
We report new experiments conducted using a camera phone wayfinding system, which is designed to guide a visually impaired user to machine-readable signs (such as barcodes) labeled with special color markers. These experiments specifically investigate search strategies of such users detecting, localizing and touching color markers that have been mounted in various ways in different environments: in a corridor (either flush with the wall or mounted perpendicular to it) or in a large room with obstacles between the user and the markers. The results show that visually impaired users are able to reliably find color markers in all the conditions that we tested, using search strategies that vary depending on the environment in which they are placed. PMID:26949755
Space trajectory calculation based on G-sensor
NASA Astrophysics Data System (ADS)
Xu, Biya; Zhan, Yinwei; Shao, Yang
2017-08-01
At present, most research in the field of human body posture recognition uses a camera or a portable acceleration sensor to collect data, without making full use of the mobile phones around us. In this paper, the G-sensor built into a mobile phone is used to collect data. After processing the data with a moving average filter and integrating the acceleration, the three-dimensional spatial coordinates of joint points can be obtained accurately.
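The pipeline sketched in the abstract (smoothing followed by double integration of acceleration) can be illustrated as follows. This is a hedged sketch under invented sampling parameters, not the authors' implementation:

```python
import numpy as np

def moving_average(x, w=5):
    """Simple moving-average filter to suppress accelerometer noise."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def integrate(y, dt):
    """Cumulative trapezoidal integration starting from zero."""
    out = np.zeros_like(y)
    out[1:] = np.cumsum((y[1:] + y[:-1]) * dt / 2.0)
    return out

dt = 0.01                      # 100 Hz sampling rate (assumed)
t = np.arange(0, 1, dt)
accel = 2.0 * np.ones_like(t)  # constant 2 m/s^2 along one axis
accel_noisy = accel + 0.01 * np.sin(50 * t)
vel = integrate(moving_average(accel_noisy), dt)   # acceleration -> velocity
pos = integrate(vel, dt)                           # velocity -> displacement
# Constant acceleration gives x(t) ~ 0.5 * a * t^2, so roughly 1 m at t = 1 s
# (slightly less here due to filter edge effects).
print(round(pos[-1], 2))
```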
ERIC Educational Resources Information Center
Benedict, Lucille; Pence, Harry E.
2012-01-01
Increasing numbers of college students own cell phones, and many of these phones are smartphones, which include features such as still and video cameras, global positioning systems, Internet access, and computers as powerful as the desktop models of only a few years ago. A number of chemical educators are already using these devices for education.…
Opto-fluidics based microscopy and flow cytometry on a cell phone for blood analysis.
Zhu, Hongying; Ozcan, Aydogan
2015-01-01
Blood analysis is one of the most important clinical tests for medical diagnosis. Flow cytometry and optical microscopy are widely used techniques to perform blood analysis and therefore cost-effective translation of these technologies to resource limited settings is critical for various global health as well as telemedicine applications. In this chapter, we review our recent progress on the integration of imaging flow cytometry and fluorescent microscopy on a cell phone using compact, light-weight and cost-effective opto-fluidic attachments integrated onto the camera module of a smartphone. In our cell-phone based opto-fluidic imaging cytometry design, fluorescently labeled cells are delivered into the imaging area using a disposable micro-fluidic chip that is positioned above the existing camera unit of the cell phone. Battery powered light-emitting diodes (LEDs) are butt-coupled to the sides of this micro-fluidic chip without any lenses, which effectively acts as a multimode slab waveguide, where the excitation light is guided to excite the fluorescent targets within the micro-fluidic chip. Since the excitation light propagates perpendicular to the detection path, an inexpensive plastic absorption filter is able to reject most of the scattered light and create a decent dark-field background for fluorescent imaging. With this excitation geometry, the cell-phone camera can record fluorescent movies of the particles/cells as they are flowing through the microchannel. The digital frames of these fluorescent movies are then rapidly processed to quantify the count and the density of the labeled particles/cells within the solution under test. With a similar opto-fluidic design, we have recently demonstrated imaging and automated counting of stationary blood cells (e.g., labeled white blood cells or unlabeled red blood cells) loaded within a disposable cell counting chamber. 
We tested the performance of this cell-phone based imaging cytometry and blood analysis platform by measuring the density of red and white blood cells as well as hemoglobin concentration in human blood samples, which showed a good match to our measurement results obtained using a commercially available hematology analyzer. Such a cell-phone enabled opto-fluidics microscopy, flow cytometry, and blood analysis platform could be especially useful for various telemedicine applications in remote and resource-limited settings.
In vivo burn diagnosis by camera-phone diffuse reflectance laser speckle detection.
Ragol, S; Remer, I; Shoham, Y; Hazan, S; Willenz, U; Sinelnikov, I; Dronov, V; Rosenberg, L; Bilenca, A
2016-01-01
Burn diagnosis using laser speckle light typically employs widefield illumination of the burn region to produce two-dimensional speckle patterns from light backscattered from the entire irradiated tissue volume. Analysis of speckle contrast in these time-integrated patterns can then provide information on burn severity. Here, by contrast, we use point illumination to generate diffuse reflectance laser speckle patterns of the burn. By examining spatiotemporal fluctuations in these time-integrated patterns along the radial direction from the incident point beam, we show the ability to distinguish partial-thickness burns in a porcine model in vivo within the first 24 hours post-burn. Furthermore, our findings suggest that time-integrated diffuse reflectance laser speckle can be useful for monitoring burn healing over time post-burn. Unlike conventional diffuse reflectance laser speckle detection systems that utilize scientific or industrial-grade cameras, our system is designed with a camera-phone, demonstrating the potential for burn diagnosis with a simple imager.
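The speckle-contrast analysis mentioned above is conventionally computed as the ratio of standard deviation to mean intensity over small image windows; movement of scatterers blurs the time-integrated pattern and lowers the contrast. A minimal illustration with synthetic intensities (not the authors' code):

```python
import numpy as np

def speckle_contrast(img, w=7):
    """Mean local speckle contrast K = sigma / mean over w-by-w windows."""
    h, wid = img.shape
    ks = []
    for i in range(0, h - w + 1, w):
        for j in range(0, wid - w + 1, w):
            win = img[i:i + w, j:j + w]
            ks.append(win.std() / win.mean())
    return float(np.mean(ks))

rng = np.random.default_rng(1)
# Fully developed static speckle has exponentially distributed intensity (K ~ 1);
# temporal averaging over moving scatterers reduces the contrast.
static = rng.exponential(1.0, size=(70, 70))
blurred = static + rng.exponential(1.0, size=(70, 70))
print(speckle_contrast(static) > speckle_contrast(blurred))
```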
Using focused plenoptic cameras for rich image capture.
Georgiev, T; Lumsdaine, A; Chunev, G
2011-01-01
This approach uses a focused plenoptic camera to capture the plenoptic function's rich "non 3D" structure. It employs two techniques. The first simultaneously captures multiple exposures (or other aspects) based on a microlens array having an interleaved set of different filters. The second places multiple filters at the main lens aperture.
A mobile phone system to find crosswalks for visually impaired pedestrians
Shen, Huiying; Chan, Kee-Yip; Coughlan, James; Brabyn, John
2010-01-01
Urban intersections are the most dangerous parts of a blind or visually impaired pedestrian’s travel. A prerequisite for safely crossing an intersection is entering the crosswalk in the right direction and avoiding the danger of straying outside the crosswalk. This paper presents a proof of concept system that seeks to provide such alignment information. The system consists of a standard mobile phone with built-in camera that uses computer vision algorithms to detect any crosswalk visible in the camera’s field of view; audio feedback from the phone then helps the user align him/herself to it. Our prototype implementation on a Nokia mobile phone runs in about one second per image, and is intended for eventual use in a mobile phone system that will aid blind and visually impaired pedestrians in navigating traffic intersections. PMID:20411035
Computer-generated hologram calculation for real scenes using a commercial portable plenoptic camera
NASA Astrophysics Data System (ADS)
Endo, Yutaka; Wakunami, Koki; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ichihashi, Yasuyuki; Yamamoto, Kenji; Ito, Tomoyoshi
2015-12-01
This paper shows the process used to calculate a computer-generated hologram (CGH) for real scenes under natural light using a commercial portable plenoptic camera. In the CGH calculation, a light field captured with the commercial plenoptic camera is converted into a complex amplitude distribution. Then the converted complex amplitude is propagated to a CGH plane. We tested both numerical and optical reconstructions of the CGH and showed that the CGH calculation from captured data with the commercial plenoptic camera was successful.
2016-06-25
The equipment used in this procedure includes an Ann Arbor distortion tester with a 50-line grating reticule and an IQeye 720 digital video camera, used to digitally capture images of the distortion in an optical sample and import them into MATLAB.
Volume estimation using food specific shape templates in mobile image-based dietary assessment
NASA Astrophysics Data System (ADS)
Chae, Junghoon; Woo, Insoo; Kim, SungYe; Maciejewski, Ross; Zhu, Fengqing; Delp, Edward J.; Boushey, Carol J.; Ebert, David S.
2011-03-01
As obesity concerns mount, dietary assessment methods for prevention and intervention are being developed. These methods include recording, cataloging and analyzing daily dietary records to monitor energy and nutrient intakes. Given the ubiquity of mobile devices with built-in cameras, one possible means of improving dietary assessment is through photographing foods and inputting these images into a system that can determine the nutrient content of foods in the images. One of the critical issues in such an image-based dietary assessment tool is the accurate and consistent estimation of food portion sizes. The objective of our study is to automatically estimate food volumes through the use of food-specific shape templates. In our system, users capture food images using a mobile phone camera. Based on information (i.e., food name and code) determined through food segmentation and classification of the food images, our system chooses a particular food template shape corresponding to each segmented food. Finally, our system reconstructs the three-dimensional properties of the food shape from a single image by extracting feature points in order to size the food shape template. By employing this template-based approach, our system automatically estimates food portion size, providing a consistent method for estimating food volume.
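The template-sizing step implies a simple geometric relation: once the linear scale of the matched template is recovered from feature points, the estimated volume scales with the cube of that factor. An illustrative sketch with invented numbers (not the system's actual code):

```python
# A shape template is calibrated with a reference volume; matching it to
# the segmented food yields a linear scale factor s, and the estimated
# volume grows as s**3.
def scaled_volume(template_volume_ml, linear_scale):
    return template_volume_ml * linear_scale ** 3

# A hemisphere template calibrated at 100 ml, matched at 1.2x its size:
print(round(scaled_volume(100.0, 1.2), 1))  # 172.8
```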
Locating and decoding barcodes in fuzzy images captured by smart phones
NASA Astrophysics Data System (ADS)
Deng, Wupeng; Hu, Jiwei; Liu, Quan; Lou, Ping
2017-07-01
With the development of barcodes for commercial use, the demand for detecting barcodes with smart phones has become increasingly pressing. The low quality of barcode images captured by mobile phones often degrades decoding and recognition rates. This paper focuses on locating and decoding EAN-13 barcodes in fuzzy images. We present a more accurate locating algorithm based on segment length and a highly fault-tolerant algorithm for decoding barcodes. Unlike existing approaches, the locating algorithm is based on the edge segment lengths of EAN-13 barcodes, while our decoding algorithm tolerates fuzzy regions in the barcode image. Experiments on damaged, contaminated, and scratched digital images show quite promising results for EAN-13 barcode location and decoding.
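Although the paper's locating and decoding algorithms are not reproduced here, the EAN-13 check digit that any decoder ultimately validates follows the standard weighting rule: the first 12 digits are weighted alternately 1 and 3, and the check digit brings the weighted sum to a multiple of 10.

```python
def ean13_check_digit(first12):
    """Compute the EAN-13 check digit from the first 12 digits (as a string)."""
    digits = [int(c) for c in first12]
    s = sum(d * (3 if i % 2 else 1) for i, d in enumerate(digits))
    return (10 - s % 10) % 10

def ean13_is_valid(code13):
    """True if the 13-digit code's last digit matches the computed check digit."""
    return ean13_check_digit(code13[:12]) == int(code13[12])

print(ean13_is_valid("4006381333931"))  # True: a commonly cited valid EAN-13
```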
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
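The frame-rate-jitter correction mentioned above (resampling irregularly timestamped frames onto a uniform grid before spectral HR estimation) can be sketched as follows; the sampling rate, jitter level, and test frequency are invented for illustration and are not the authors' parameters:

```python
import numpy as np

rng = np.random.default_rng(2)
fs = 30.0                          # nominal camera frame rate (assumed)
n = 300                            # 10 s of frames
t_uniform = np.arange(n) / fs
# Frames actually arrive with timestamp jitter of a few milliseconds.
t_jittered = np.sort(t_uniform + rng.normal(0, 0.004, n))
hr_hz = 1.2                        # 72 bpm test signal
signal = np.sin(2 * np.pi * hr_hz * t_jittered)
# Resample onto the uniform grid by interpolation before the FFT.
resampled = np.interp(t_uniform, t_jittered, signal)
freqs = np.fft.rfftfreq(n, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.rfft(resampled - resampled.mean())))]
print(round(peak, 1))  # the spectral peak lands at the 1.2 Hz heart-rate bin
```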
Phonesat In-flight Experience Results
NASA Technical Reports Server (NTRS)
Attai, Watson; Guillen, Salas Alberto; Oyadomari, Ken Yuji; Priscal, Cedric; Shimmin, Rogan Stuart; Gazulla, Oriol Tintore; Wolfe, Jasper Lewis
2014-01-01
Consumer technology, over the last decade, has begun to encompass devices that enable us to figure out where we are, which way we are pointing, observe the world around us, and store and transmit this information to wherever we want. Once-separate consumer products such as GPS units, digital cameras, and mobile phones are now combined into the modern-day smartphone. These capabilities are remarkably similar to those required for multi-million-dollar satellites, so why not use a multi-hundred-dollar smartphone instead? The PhoneSat project of NASA Ames Research Center is developing technology demonstrations utilizing these extraordinary advances to show just how simple and cheap space can be. The style of development revolves around the "release early, release often" Silicon Valley mentality. PhoneSat is a series of 1U CubeSat-size spacecraft that use an off-the-shelf smartphone as their onboard computer. By doing so, PhoneSat takes advantage of the high computational capability and large memory as well as the ultra-tiny sensors, such as high-resolution cameras and navigation devices, that smartphones offer. Along with a smartphone, PhoneSat is equipped with other commercially available technology products, such as medical brushless motors that are used as reaction wheels. Over the four years that NASA Ames Research Center has been developing the PhoneSat project, different suborbital and orbital flight activities have proven the validity of this revolutionary approach. In early 2013, the PhoneSat project launched the first trio of PhoneSats into LEO. In their five-day orbital lifetime, the nano-satellites flew the first functioning smartphone-based satellites (using the Nexus One and Nexus S phones), the cheapest satellite (a total parts cost below $3,500), and one of the fastest on-board processors (CPU speed of 1 GHz).
In late 2013, the PhoneSat project launched an improved version of its bus to a higher-altitude orbit, which provided data about the overall system's tolerance of the space environment. In this paper, an overview of the PhoneSat project as well as a summary of the in-flight experimental results is presented. NASA Ames Research Center is continuing its effort to bring about a paradigm shift in the way we conceive of space exploration, an approach embodied by PhoneSat. A set of eight PhoneSat-based CubeSats is manifested to launch in 2014 with the purpose of demonstrating new technical capabilities and serving as a pathfinder for future spacecraft technology missions.
New generation of meteorology cameras
NASA Astrophysics Data System (ADS)
Janout, Petr; Blažek, Martin; Páta, Petr
2017-12-01
A new generation of the WILLIAM (WIde-field aLL-sky Image Analyzing Monitoring system) camera includes new features such as monitoring of rain and storm clouds during daytime observation. Development of the new generation of weather-monitoring cameras responds to the demand for monitoring sudden weather changes. The new WILLIAM cameras are ready to process acquired image data immediately, issue warnings of sudden torrential rain, and send them to the user's cell phone and email. Actual weather conditions are determined from the image data, and the results of image processing are complemented by data from temperature, humidity, and atmospheric-pressure sensors. In this paper, we present the architecture and image-data-processing algorithms of this monitoring camera, together with a spatially variant model of the imaging system's aberrations based on Zernike polynomials.
An Example-Based Super-Resolution Algorithm for Selfie Images
William, Jino Hans; Venkateswaran, N.; Narayanan, Srinath; Ramachandran, Sandeep
2016-01-01
A selfie is typically a self-portrait captured using the front camera of a smartphone. Most state-of-the-art smartphones are equipped with a high-resolution (HR) rear camera and a low-resolution (LR) front camera. As selfies are captured by the front camera with limited pixel resolution, fine details are often lost. This paper aims to improve the resolution of selfies by exploiting the fine details in HR images captured by the rear camera using an example-based super-resolution (SR) algorithm. HR images captured by the rear camera carry significant fine details and are used as exemplars to train an optimal matrix-value regression (MVR) operator. The MVR operator serves as an image-pair prior which learns the correspondence between LR-HR patch pairs and is effectively used to super-resolve LR selfie images. The proposed MVR algorithm avoids vectorization of image patch pairs and preserves image-level information during both the learning and recovery processes. The proposed algorithm is evaluated for its efficiency and effectiveness, both qualitatively and quantitatively, against other state-of-the-art SR algorithms. The results validate that the proposed algorithm is efficient, as it requires less than 3 seconds to super-resolve an LR selfie, and effective, as it preserves sharp details without introducing counterfeit fine details. PMID:27064500
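As a toy illustration of the example-based idea (learning a regression operator from LR-HR patch pairs and applying it to a new LR patch), the following sketch fits a linear operator by least squares. Unlike the paper's MVR operator, this simplified version vectorizes the patches; all dimensions and data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(3)
lr_dim, hr_dim, n_pairs = 9, 36, 500        # 3x3 LR patches -> 6x6 HR patches
W_true = rng.normal(size=(hr_dim, lr_dim))  # unknown ground-truth mapping
X = rng.normal(size=(n_pairs, lr_dim))      # vectorized LR training patches
Y = X @ W_true.T                            # corresponding HR patches
# Learn the LR -> HR operator from the example pairs by least squares.
Wt, *_ = np.linalg.lstsq(X, Y, rcond=None)  # shape (lr_dim, hr_dim)
x_new = rng.normal(size=lr_dim)             # a new LR patch to super-resolve
y_pred = x_new @ Wt
# With enough noiseless examples, the learned operator matches the true one.
print(np.allclose(y_pred, W_true @ x_new))
```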
3D Encoding of Musical Score Information and the Playback Method Used by the Cellular Phone
NASA Astrophysics Data System (ADS)
Kubo, Hitoshi; Sugiura, Akihiko
Recently, 3G cellular phones capable of recording movies have become widespread as their digital camera functions have improved. 2D codes offer accurate readout and high operability, and have spread as a means of transmitting information. However, the symbol becomes larger and more complex as the amount of information in a 2D code increases. 3D codes were proposed to solve this, but they require special readout equipment and are specialized for augmented-reality technology, making them difficult to apply to cellular phones. We therefore propose a 3D code that can be recognized using the movie-shooting function of a cellular phone, and use it to encode musical score information. We apply Gray code to the properties of the music to encode it, and verify the effectiveness of the method.
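The Gray-code step the abstract mentions is the standard binary-reflected Gray code, in which consecutive values differ in a single bit (the musical mapping itself is not shown here):

```python
def to_gray(n):
    """Binary-reflected Gray code of a non-negative integer."""
    return n ^ (n >> 1)

def from_gray(g):
    """Invert the Gray code by folding the bits back down."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

print([to_gray(i) for i in range(8)])  # [0, 1, 3, 2, 6, 7, 5, 4]
print(all(from_gray(to_gray(i)) == i for i in range(128)))  # True
```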
Qiu, Kang-Fu
2017-01-01
This study presents the design, digital implementation and performance validation of a lead-lag controller for a 2-degree-of-freedom (DOF) translational optical image stabilizer (OIS) installed with a digital image sensor in mobile camera phones. OIS is now an important feature of commercial mobile camera phones, as it mechanically reduces the image blur caused by hand shaking while shooting photos. The OIS developed in this study moves the imaging lens by actuating its voice coil motors (VCMs) at the required speed to the position that significantly compensates for image blur due to hand shaking. The compensation is made possible by first establishing the exact, nonlinear equations of motion (EOMs) for the OIS, and then designing a simple lead-lag controller based on the established nonlinear EOMs for simple digital computation via a field-programmable gate array (FPGA) board in order to achieve fast response. Finally, experimental validation shows the favorable performance of the designed OIS: it stabilizes the lens holder to the desired position within 0.02 s, much less than previously reported times of around 0.1 s, and the resulting residual vibration is less than 2.2-2.5 μm, commensurate with the very small pixel size found in most commercial image sensors, thus significantly minimizing image blur caused by hand shaking. PMID:29027950
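The digital lead-lag controller described above can be sketched as a discrete difference equation. The transfer function C(s) = K(s + z0)/(s + p0), the gain/zero/pole values and the 1 kHz sample rate are illustrative assumptions, not parameters from the paper; the discretization uses the standard Tustin (bilinear) transform.

```python
# Discrete lead-lag compensator C(s) = K (s + z0) / (s + p0), Tustin-discretized.
# K, z0, p0 and the sample time are illustrative values, not from the paper.
K, z0, p0 = 2.0, 50.0, 500.0       # lead network: zero below pole
T = 1e-3                           # 1 kHz update rate (assumed)

a = 2.0 / T                        # bilinear transform: s -> a*(1 - q)/(1 + q)
den = a + p0
c1 = -(p0 - a) / den               # feedback coefficient on u[k-1]
b0 = K * (a + z0) / den            # feedforward coefficient on e[k]
b1 = K * (z0 - a) / den            # feedforward coefficient on e[k-1]

def lead_lag_step(e, state):
    """One controller update: state = (previous error, previous output)."""
    e_prev, u_prev = state
    u = c1 * u_prev + b0 * e + b1 * e_prev
    return u, (e, u)

# Drive the compensator with a unit-step error signal
state = (0.0, 0.0)
outputs = []
for k in range(200):
    u, state = lead_lag_step(1.0, state)
    outputs.append(u)

# A lead compensator responds strongly at first (high-frequency boost),
# then settles toward its DC gain K * z0 / p0 = 0.2
print(round(outputs[0], 3), round(outputs[-1], 3))  # 1.64 0.2
```

A difference equation of this form is exactly the kind of computation that maps cheaply onto an FPGA: one multiply-accumulate chain per sample.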
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. It was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App; for case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast from the overhead lights caused limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera sufficiently compensates for the contrasting light environment of the operating room and captures high-resolution, detailed video.
Moonrungsee, Nuntaporn; Pencharee, Somkid; Jakmunee, Jaroon
2015-05-01
A field-deployable colorimetric analyzer based on an Android mobile phone was developed for the determination of available phosphorus content in soil. An inexpensive mobile phone with an embedded digital camera was used to photograph the chemical solution under test. The method involves reacting phosphorus (as orthophosphate) with ammonium molybdate and potassium antimonyl tartrate to form phosphomolybdic acid, which is reduced by ascorbic acid to produce intensely colored molybdenum blue. Software was developed for the phone to record and analyze the RGB color of the picture. A light-tight box with LED illumination was fabricated to control lighting and improve the precision and accuracy of the measurement. Under the optimum conditions, a calibration graph was created by measuring the blue color intensity of a series of standard phosphorus solutions (0.0-1.0 mg P L(-1)); the resulting calibration equation was then retained by the program for the analysis of sample solutions. The results obtained from the proposed method agreed well with the spectrophotometric method, with a detection limit of 0.01 mg P L(-1) and a sample throughput of about 40 h(-1). The developed system provided good accuracy (RE < 5%) and precision (RSD < 2%, intra- and inter-day), fast and cheap analysis, and is especially convenient for in-field soil phosphorus analysis. Copyright © 2015 Elsevier B.V. All rights reserved.
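The calibration step described above amounts to a linear fit of color intensity against standard concentrations, inverted to read off unknown samples. A minimal sketch, with made-up intensity readings standing in for values extracted from the phone images:

```python
import numpy as np

# Illustrative calibration data: mean color intensity extracted from phone
# images of the standard solutions (values are made up for this sketch).
conc = np.array([0.0, 0.2, 0.4, 0.6, 0.8, 1.0])      # mg P L^-1
intensity = np.array([5.0, 24.8, 45.1, 64.9, 85.2, 105.0])

# Least-squares calibration line: intensity = slope * conc + intercept
slope, intercept = np.polyfit(conc, intensity, 1)

def conc_from_intensity(i):
    """Invert the calibration line to read concentration off a sample image."""
    return (i - intercept) / slope

sample = conc_from_intensity(55.0)
print(round(sample, 3))  # 0.5 mg P L^-1 for this synthetic reading
```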
Medically relevant assays with a simple smartphone and tablet based fluorescence detection system.
Wargocki, Piotr; Deng, Wei; Anwer, Ayad G; Goldys, Ewa M
2015-05-20
Cell phones and smart phones can be reconfigured as biomedical sensor devices but this requires specialized add-ons. In this paper we present a simple cell phone-based portable bioassay platform, which can be used with fluorescent assays in solution. The system consists of a tablet, a polarizer, a smart phone (camera) and a box that provides dark readout conditions. The assay in a well plate is placed on the tablet screen acting as an excitation source. A polarizer on top of the well plate separates excitation light from assay fluorescence emission enabling assay readout with a smartphone camera. The assay result is obtained by analysing the intensity of image pixels in an appropriate colour channel. With this device we carried out two assays, for collagenase and trypsin using fluorescein as the detected fluorophore. The results of collagenase assay with the lowest measured concentration of 3.75 µg/mL and 0.938 µg in total in the sample were comparable to those obtained by a microplate reader. The lowest measured amount of trypsin was 930 pg, which is comparable to the low detection limit of 400 pg for this assay obtained in a microplate reader. The device is sensitive enough to be used in point-of-care medical diagnostics of clinically relevant conditions, including arthritis, cystic fibrosis and acute pancreatitis.
A machine learning approach for detecting cell phone usage
NASA Astrophysics Data System (ADS)
Xu, Beilei; Loce, Robert P.
2015-03-01
Cell phone usage while driving is common but widely considered dangerous due to the distraction it causes. Because of the high number of accidents related to cell phone usage while driving, several states have enacted regulations that prohibit it. However, to enforce these regulations, current practice requires dispatching law enforcement officers at the roadside to visually examine incoming cars, or having human operators manually examine image/video records to identify violators. Both practices are expensive, difficult, and ultimately ineffective. There is therefore a need for a semi-automatic or automatic solution to detect driver cell phone usage. In this paper, we propose a machine-learning-based method for detecting driver cell phone usage using a camera system directed at the vehicle's front windshield. The method consists of two stages: first, the frontal windshield region is localized using a deformable part model (DPM); next, a Fisher vector (FV) representation is used to classify the driver's side of the windshield into cell phone usage violation and non-violation classes. The proposed method achieved about 95% accuracy on a data set of more than 100 images with drivers in a variety of challenging poses, with or without cell phones.
3D kinematic measurement of human movement using low cost fish-eye cameras
NASA Astrophysics Data System (ADS)
Islam, Atiqul; Asikuzzaman, Md.; Garratt, Matthew A.; Pickering, Mark R.
2017-02-01
3D motion capture is difficult when performed in an outdoor environment without controlled surroundings. In this paper, we propose a new approach that uses two ordinary cameras arranged in a special stereoscopic configuration, together with passive markers on a subject's body, to reconstruct the subject's motion. First, for each video frame, an adaptive thresholding algorithm extracts the markers on the subject's body. Once the markers are extracted, an algorithm matches corresponding markers across frames. Zhang's planar calibration method is used to calibrate the two cameras. Because the cameras use fisheye lenses, they cannot be well described by a pinhole camera model, which makes it difficult to estimate depth information. In this work, we restore the 3D coordinates using a calibration method specific to fisheye lenses. The accuracy of the 3D coordinate reconstruction is evaluated by comparison with results from a commercially available Vicon motion capture system.
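The first stage, adaptive thresholding to extract bright passive markers, can be sketched as follows. The mean-plus-k-standard-deviations threshold and the single-blob centroid are illustrative simplifications; the paper's pipeline additionally separates and matches multiple markers across frames and cameras.

```python
import numpy as np

def bright_marker_centroid(frame, k=3.0):
    """Per-frame adaptive threshold (mean + k*std of pixel intensity);
    returns the centroid (x, y) of above-threshold pixels, or None."""
    thresh = frame.mean() + k * frame.std()
    ys, xs = np.nonzero(frame > thresh)
    if len(xs) == 0:
        return None
    return xs.mean(), ys.mean()

# Synthetic frame: dark noisy background with one bright 3x3 "marker"
frame = np.full((64, 64), 10.0)
frame += np.random.default_rng(1).normal(0, 1, frame.shape)   # sensor noise
frame[29:32, 19:22] = 255.0                                   # marker blob

cx, cy = bright_marker_centroid(frame)
print(round(cx), round(cy))  # 20 30: the marker centre
```

Because the threshold adapts to each frame's statistics, the same code tolerates the changing outdoor illumination the abstract is concerned with.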
The impact of geo-tagging on the photo industry and creating revenue streams
NASA Astrophysics Data System (ADS)
Richter, Rolf; Böge, Henning; Weckmann, Christoph; Schloen, Malte
2010-02-01
Internet geo and mapping services like Google Maps, Google Earth and Microsoft Bing Maps have reinvented the use of geographical information and reached enormous popularity. Meanwhile, location technologies like GPS have become affordable and are now integrated into many camera phones; GPS is also available for standalone cameras, either as an add-on product or built in. These developments enable new products for the photo industry and enhance existing ones. New commercial opportunities have been identified in the areas of photo hardware, internet/software and photo finishing.
Internet of Things Platform for Smart Farming: Experiences and Lessons Learnt.
Jayaraman, Prem Prakash; Yavari, Ali; Georgakopoulos, Dimitrios; Morshed, Ahsan; Zaslavsky, Arkady
2016-11-09
Improving farm productivity is essential for increasing farm profitability and meeting the rapidly growing demand for food that is fuelled by rapid population growth across the world. Farm productivity can be increased by understanding and forecasting crop performance in a variety of environmental conditions. Crop recommendation is currently based on data collected in field-based agricultural studies that capture crop performance under a variety of conditions (e.g., soil quality and environmental conditions). However, crop performance data collection is currently slow, as such crop studies are often undertaken in remote and distributed locations, and such data are typically collected manually. Furthermore, the quality of manually collected crop performance data is very low, because it does not take into account earlier conditions that have not been observed by the human operators but are essential for filtering out data that would lead to invalid conclusions (e.g., solar radiation readings in the afternoon after even a short rain, or after an overcast morning, are invalid and should not be used in assessing crop performance). Emerging Internet of Things (IoT) technologies, such as IoT devices (e.g., wireless sensor networks, network-connected weather stations, cameras, and smart phones), can be used to collect vast amounts of environmental and crop performance data, ranging from time-series data from sensors, to spatial data from cameras, to human observations collected and recorded via mobile smart phone applications. Such data can then be analysed to filter out invalid data and compute personalised crop recommendations for any specific farm.
In this paper, we present the design of SmartFarmNet, an IoT-based platform that can automate the collection of environmental, soil, fertilisation, and irrigation data; automatically correlate such data and filter out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. SmartFarmNet can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and our experiences and lessons learnt in developing this system conclude the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations.
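The invalid-data rule quoted above (afternoon solar-radiation readings on a day with earlier rain) can be sketched as a simple filter over sensor records. The record schema, field names and the exact rule encoding are illustrative assumptions, not SmartFarmNet's actual design:

```python
from datetime import datetime

def filter_solar_readings(records):
    """Drop afternoon solar-radiation readings on days where rain was
    observed, a toy encoding of the invalid-data rule from the text."""
    rainy_days = {r["time"].date() for r in records
                  if r["sensor"] == "rain" and r["value"] > 0}
    kept = []
    for r in records:
        if (r["sensor"] == "solar_radiation"
                and r["time"].date() in rainy_days
                and r["time"].hour >= 12):
            continue  # invalid: rain earlier today taints the reading
        kept.append(r)
    return kept

day = datetime(2016, 11, 9)
records = [
    {"time": day.replace(hour=9),  "sensor": "rain", "value": 2.5},
    {"time": day.replace(hour=10), "sensor": "solar_radiation", "value": 310},
    {"time": day.replace(hour=14), "sensor": "solar_radiation", "value": 680},
]
kept = filter_solar_readings(records)
print(len(kept))  # 2: the 14:00 solar reading is dropped
```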
Desai, Nandini J.; Gupta, B. D.; Patel, Pratik Narendrabhai
2014-01-01
Introduction: Obtaining images of slides viewed by a microscope can be invaluable for both diagnosis and teaching. They can be transferred among technologically advanced hospitals for further consultation and evaluation. But a standard microscopic photography camera unit (MPCU) (MIPS-Microscopic Image Projection System) is costly and not available in resource-poor settings. The aim of our endeavour was to find a comparable and cheaper alternative method for photomicrography. Materials and Methods: We used a NIKON Coolpix S6150 camera (box-type digital camera) with an Olympus CH20i microscope and a fluorescent microscope for the purpose of this study. Results: We obtained comparable results for capturing images of light microscopy, but the results were not as satisfactory for fluorescent microscopy. Conclusion: A box-type digital camera is a comparable, less expensive and convenient alternative to a microscopic photography camera unit. PMID:25478350
Avatar DNA Nanohybrid System in Chip-on-a-Phone
NASA Astrophysics Data System (ADS)
Park, Dae-Hwan; Han, Chang Jo; Shul, Yong-Gun; Choy, Jin-Ho
2014-05-01
Long admired for their informational role and recognition function in multidisciplinary science, DNA nanohybrids have emerged as ideal materials for molecular nanotechnology and genetic information coding. Here, we designed an optical, machine-readable DNA icon on a microarray, Avatar DNA, for automatic identification and data capture in the manner of Quick Response and ColorZip codes. The Avatar icon is made of telepathic DNA-DNA hybrids inscribed on chips, which can be identified by a smartphone camera with application software. Information encoded in base sequences can be accessed by connecting the off-line icon to an on-line web-server network to provide a message, index, or URL from a database library. Avatar DNA thus converges nano-bio-info-cogno science: its building blocks stand for inorganic nanosheets, nucleotides, digits, and pixels. This convergence could address item-level identification that strengthens supply-chain security against drug counterfeits. It can therefore provide molecular-level vision through the mobile network to coordinate and integrate data-management channels for visual detection and recording.
Smartphone-based photoplethysmographic imaging for heart rate monitoring.
Alafeef, Maha
2017-07-01
The purpose of this study is to use visible-light reflected-mode photoplethysmographic (PPG) imaging for heart rate (HR) monitoring via smartphones. The system uses the phone's built-in camera to capture video from the subject's index fingertip. The video is processed, and the PPG signal resulting from the video stream is used to calculate the subject's heart rate. Records from 19 subjects were used to evaluate the system's performance. The HR values obtained by the proposed method were compared with the actual HR. The results show an accuracy of 99.7% and a maximum absolute error of 0.4 beats/min, with most absolute errors in the range of 0.04-0.3 beats/min. Given these encouraging results, this type of HR measurement can be adopted with great benefit, especially for personal use or home-based care. The proposed method represents an efficient, portable solution for accurate HR detection and recording.
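The processing chain described above, a per-frame intensity trace followed by rate estimation, can be sketched on a synthetic PPG signal. The smoothing window and the simple peak test are illustrative choices, not the paper's exact algorithm:

```python
import numpy as np

def heart_rate_bpm(ppg, fps):
    """Estimate HR by counting local maxima of the (smoothed) PPG trace.
    Peak test: sample larger than both neighbours and above the mean."""
    ppg = np.convolve(ppg, np.ones(5) / 5, mode="same")   # light smoothing
    peaks = [i for i in range(1, len(ppg) - 1)
             if ppg[i] > ppg[i - 1] and ppg[i] > ppg[i + 1]
             and ppg[i] > ppg.mean()]
    duration_s = len(ppg) / fps
    return 60.0 * len(peaks) / duration_s

# Synthetic 10 s fingertip recording at 30 fps with a 72 bpm pulse,
# standing in for the mean red-channel value of each video frame
fps, seconds, bpm = 30, 10, 72
t = np.arange(fps * seconds) / fps
ppg = np.sin(2 * np.pi * (bpm / 60.0) * t)

print(round(heart_rate_bpm(ppg, fps)))  # 72
```

Real fingertip traces carry baseline drift and motion artefacts, which is why practical implementations add detrending and more robust peak detection.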
Smith, Zachary J; Chu, Kaiqin; Wachsmann-Hogiu, Sebastian
2012-01-01
We report on the construction of a Fourier plane imaging system attached to a cell phone. By illuminating particle suspensions with a collimated beam from an inexpensive diode laser, angularly resolved scattering patterns are imaged by the phone's camera. Analyzing these patterns with Mie theory results in predictions of size distributions of the particles in suspension. Despite using consumer grade electronics, we extracted size distributions of sphere suspensions with better than 20 nm accuracy in determining the mean size. We also show results from milk, yeast, and blood cells. Performing these measurements on a portable device presents opportunities for field-testing of food quality, process monitoring, and medical diagnosis.
Bengtsson, Ulrika; Kjellgren, Karin; Höfer, Stefan; Taft, Charles; Ring, Lena
2014-10-01
Self-management support tools using technology may improve adherence to hypertension treatment. There is a need for user-friendly tools facilitating patients' understanding of the interconnections between blood pressure, wellbeing and lifestyle. This study aimed to examine comprehension, comprehensiveness and relevance of items, and further to evaluate the usability and reliability of an interactive hypertension-specific mobile phone self-report system. Areas important in supporting self-management and candidate items were derived from five focus group interviews with patients and healthcare professionals (n = 27), supplemented by a literature review. Items and response formats were drafted to meet specifications for mobile phone administration and were integrated into a mobile phone data-capture system. Content validity and usability were assessed iteratively in four rounds of cognitive interviews with patients (n = 21) and healthcare professionals (n = 4). Reliability was examined using a test-retest. Focus group analyses yielded six areas covered by 16 items. The cognitive interviews showed satisfactory item comprehension, relevance and coverage; however, one item was added. The mobile phone self-report system was reliable and perceived easy to use. The mobile phone self-report system appears efficiently to capture information relevant in patients' self-management of hypertension. Future studies need to evaluate the effectiveness of this tool in improving self-management of hypertension in clinical practice.
Mount Sharp Panorama in Raw Colors
2013-03-15
This mosaic of images from the Mastcam onboard NASA Mars rover Curiosity shows Mount Sharp in raw color. Raw color shows the scene colors as they would look in a typical smart-phone camera photo, before any adjustment.
MEMS-based thermally-actuated image stabilizer for cellular phone camera
NASA Astrophysics Data System (ADS)
Lin, Chun-Ying; Chiou, Jin-Chern
2012-11-01
This work develops an image stabilizer (IS) fabricated using micro-electro-mechanical system (MEMS) technology and designed to counteract the vibrations that occur when people use cellular phone cameras. The proposed IS has dimensions of 8.8 × 8.8 × 0.3 mm3 and is strong enough to suspend an image sensor. The fabrication process includes inductively coupled plasma (ICP) etching, reactive ion etching (RIE) and flip-chip bonding. The IS is designed so that the electrical signals from the suspended image sensor can be routed out through signal output beams, and the maximum actuating distance of the stage exceeds 24.835 µm at a driving current of 155 mA. By integrating the MEMS device with the designed controller, the proposed IS reduces hand tremor by 72.5%.
Cost-effective and Rapid Blood Analysis on a Cell-phone
Zhu, Hongying; Sencan, Ikbal; Wong, Justin; Dimitrov, Stoyan; Tseng, Derek; Nagashima, Keita; Ozcan, Aydogan
2013-01-01
We demonstrate a compact and cost-effective imaging cytometry platform installed on a cell-phone for the measurement of the density of red and white blood cells as well as hemoglobin concentration in human blood samples. Fluorescent and bright-field images of blood samples are captured using separate optical attachments to the cell-phone and are rapidly processed through a custom-developed smart application running on the phone for counting of blood cells and determining hemoglobin density. We evaluated the performance of this cell-phone based blood analysis platform using anonymous human blood samples and achieved comparable results to a standard bench-top hematology analyser. Test results can either be stored on the cell-phone memory or be transmitted to a central server, providing remote diagnosis opportunities even in field settings. PMID:23392286
DOE Office of Scientific and Technical Information (OSTI.GOV)
Taneja, S; Fru, L Che; Desai, V
Purpose: It is now commonplace to handle treatments of hyperthyroidism using iodine-131 as an outpatient procedure due to lower costs and less stringent federal regulations. The Nuclear Regulatory Commission has recently updated release guidelines for these procedures, but there is still a large uncertainty in the dose to the public. Current guidelines to minimize dose to the public require patients to remain isolated after treatment. The purpose of this study was to use a low-cost common device, such as a cell phone, to estimate exposure emitted from a patient to the general public. Methods: Measurements were performed using an Apple iPhone 3GS and a Cs-137 irradiator. The charge-coupled device (CCD) camera on the phone was irradiated at exposure rates ranging from 0.1 mR/hr to 100 mR/hr, and 30-sec videos were taken during irradiation with the camera lens covered by electrical tape. Interactions were detected as white pixels on a black background in each video. Both single threshold (ST) and colony counting (CC) methods were performed using MATLAB®. Calibration curves were determined by comparing the total pixel intensity output from each method to the known exposure rate. Results: The calibration curve showed a linear relationship above 5 mR/hr for both analysis techniques. The number of events counted per unit exposure rate within the linear region was 19.5 ± 0.7 events/mR and 8.9 ± 0.4 events/mR for the ST and CC methods, respectively. Conclusion: Two algorithms were developed and show a linear relationship between photons detected by a CCD camera and low exposure rates, in the range of 5 mR/hr to 100 mR/hr. Future work aims to refine this model by investigating the dose-rate and energy dependencies of the camera response. This algorithm allows for quantitative monitoring of exposure from patients treated with iodine-131 using a simple device outside of the hospital.
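The single-threshold (ST) counting method and the linear calibration described above can be sketched on synthetic dark frames. The frame dimensions, noise levels and the 20 events/mR simulation figure are illustrative assumptions, chosen only to mimic the shape of the study's calibration:

```python
import numpy as np

rng = np.random.default_rng(7)

def count_events(frame, threshold=50):
    """Single-threshold (ST) counting: any pixel above threshold on the
    dark (taped-over) sensor is taken as a photon interaction."""
    return int(np.count_nonzero(frame > threshold))

def synthetic_frame(rate_mR_per_hr, shape=(120, 160), events_per_mR=20):
    """Toy dark frame: faint readout noise plus a Poisson number of
    saturated pixel hits proportional to the exposure rate."""
    frame = rng.normal(5, 2, shape)
    n_hits = rng.poisson(events_per_mR * rate_mR_per_hr)
    idx = rng.integers(0, frame.size, n_hits)
    frame.flat[idx] = 255.0
    return frame

rates = np.array([5.0, 10.0, 25.0, 50.0, 100.0])        # mR/hr
counts = np.array([count_events(synthetic_frame(r)) for r in rates])

# Linear calibration: events per unit exposure rate (the curve's slope);
# it comes out a little below the simulated 20 events/mR because multiple
# hits can land on the same pixel at high rates
slope, intercept = np.polyfit(rates, counts, 1)
print(round(slope, 1))
```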
Cameras and settings for optimal image capture from UAVs
NASA Astrophysics Data System (ADS)
Smith, Mike; O'Connor, James; James, Mike R.
2017-04-01
Aerial image capture has become very common within the geosciences due to the increasing affordability of low payload (<20 kg) Unmanned Aerial Vehicles (UAVs) for consumer markets. Their application to surveying has led to many studies being undertaken using UAV imagery captured from consumer grade cameras as primary data sources. However, image quality and the principles of image capture are seldom given rigorous discussion which can lead to experiments being difficult to accurately reproduce. In this contribution we revisit the underpinning concepts behind image capture, from which the requirements for acquiring sharp, well exposed and suitable imagery are derived. This then leads to discussion of how to optimise the platform, camera, lens and imaging settings relevant to image quality planning, presenting some worked examples as a guide. Finally, we challenge the community to make their image data open for review in order to ensure confidence in the outputs/error estimates, allow reproducibility of the results and have these comparable with future studies. We recommend providing open access imagery where possible, a range of example images, and detailed metadata to rigorously describe the image capture process.
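The kind of planning calculation the authors discuss can be illustrated with a worked example relating camera settings to image sharpness: ground sample distance (GSD) from pixel pitch, focal length and altitude, and forward-motion blur from ground speed and exposure time. All numbers are illustrative, not from the paper:

```python
# Worked UAV image-capture planning example (illustrative values):
pixel_pitch = 4.5e-6      # m, sensor pixel size
focal_length = 16e-3      # m
altitude = 60.0           # m above ground level
speed = 8.0               # m/s ground speed
exposure = 1 / 1000.0     # s shutter time

# Ground sample distance: ground footprint of one pixel
gsd = pixel_pitch * altitude / focal_length          # metres per pixel

# Forward-motion blur: ground distance covered during the exposure,
# expressed in pixels
blur_ground = speed * exposure
blur_pixels = blur_ground / gsd

print(round(gsd * 100, 2), "cm/pixel")   # 1.69 cm/pixel
print(round(blur_pixels, 2), "pixels")   # 0.47 px: under one pixel, acceptably sharp
```

Keeping motion blur below roughly one pixel is a common rule of thumb; the same arithmetic run in reverse gives the slowest usable shutter speed for a planned flight.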
Bayesian inference in camera trapping studies for a class of spatial capture-recapture models
Royle, J. Andrew; Karanth, K. Ullas; Gopalaswamy, Arjun M.; Kumar, N. Samba
2009-01-01
We develop a class of models for inference about abundance or density using spatial capture-recapture data from studies based on camera trapping and related methods. The model is a hierarchical model composed of two components: a point process model describing the distribution of individuals in space (or their home range centers) and a model describing the observation of individuals in traps. We suppose that trap- and individual-specific capture probabilities are a function of distance between individual home range centers and trap locations. We show that the models can be regarded as generalized linear mixed models, where the individual home range centers are random effects. We adopt a Bayesian framework for inference under these models using a formulation based on data augmentation. We apply the models to camera trapping data on tigers from the Nagarahole Reserve, India, collected over 48 nights in 2006. For this study, 120 camera locations were used, but cameras were only operational at 30 locations during any given sample occasion. Movement of traps is common in many camera-trapping studies and represents an important feature of the observation model that we address explicitly in our application.
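The distance-dependent capture probability described above is commonly modelled in spatial capture-recapture work with a half-normal detection function. A sketch follows; the half-normal choice and all parameter values are illustrative, not necessarily the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(42)

def capture_prob(d, p0=0.3, sigma=1.5):
    """Half-normal detection: p0 * exp(-d^2 / (2 sigma^2)), i.e. baseline
    capture probability p0 decaying with distance d from the home range
    centre to the trap."""
    return p0 * np.exp(-d ** 2 / (2 * sigma ** 2))

# Toy study area: 5 individual activity centres and a 3x3 camera-trap grid
centres = rng.uniform(0, 6, size=(5, 2))
gx, gy = np.meshgrid(np.arange(1.5, 6, 1.5), np.arange(1.5, 6, 1.5))
traps = np.column_stack([gx.ravel(), gy.ravel()])

# Distance matrix (individuals x traps) -> trap-specific capture probabilities
d = np.linalg.norm(centres[:, None, :] - traps[None, :, :], axis=2)
p = capture_prob(d)

# Simulate one sampling occasion of Bernoulli captures
captures = rng.random(p.shape) < p
print(p.shape, captures.dtype)
```

In the full hierarchical model the activity centres are latent random effects and inference proceeds by MCMC with data augmentation; the sketch only shows the observation-model layer.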
Video-based real-time on-street parking occupancy detection system
NASA Astrophysics Data System (ADS)
Bulan, Orhan; Loce, Robert P.; Wu, Wencheng; Wang, YaoRong; Bernal, Edgar A.; Fan, Zhigang
2013-10-01
Urban parking management is receiving significant attention due to its potential to reduce traffic congestion, fuel consumption, and emissions. Real-time parking occupancy detection is a critical component of on-street parking management systems, where occupancy information is relayed to drivers via smart phone apps, radio, Internet, on-road signs, or global positioning system auxiliary signals. Video-based parking occupancy detection systems can provide a cost-effective solution to the sensing task while providing additional functionality for traffic law enforcement and surveillance. We present a video-based on-street parking occupancy detection system that can operate in real time. Our system accounts for the inherent challenges that exist in on-street parking settings, including illumination changes, rain, shadows, occlusions, and camera motion. Our method utilizes several components from video processing and computer vision for motion detection, background subtraction, and vehicle detection. We also present three traffic law enforcement applications: parking angle violation detection, parking boundary violation detection, and exclusion zone violation detection, which can be integrated into the parking occupancy cameras as a value-added option. Our experimental results show that the proposed parking occupancy detection method performs in real-time at 5 frames/s and achieves better than 90% detection accuracy across several days of videos captured in a busy street block under various weather conditions such as sunny, cloudy, and rainy, among others.
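One standard ingredient of such a pipeline, background subtraction with a running-average model, can be sketched as follows. The thresholds and the toy scene are illustrative; the paper's system combines background subtraction with motion detection and explicit vehicle detection:

```python
import numpy as np

def update_background(bg, frame, alpha=0.05):
    """Running-average background model: slow blend of incoming frames."""
    return (1 - alpha) * bg + alpha * frame

def foreground_mask(bg, frame, thresh=25):
    """Pixels differing from the background by more than thresh are foreground."""
    return np.abs(frame - bg) > thresh

# Simulate an empty parking stall, then a parked vehicle appearing
empty = np.full((40, 80), 90.0)
bg = empty.copy()
for _ in range(50):                       # background settles on the empty street
    bg = update_background(bg, empty)

parked = empty.copy()
parked[10:30, 20:60] = 160.0              # brighter vehicle region
mask = foreground_mask(bg, parked)

occupancy = mask.mean()                   # fraction of stall pixels occupied
print(mask[20, 40], round(occupancy, 2))  # True 0.25
```

The slow blend rate alpha is what lets the model absorb gradual illumination change while still flagging a newly parked vehicle, one of the challenges the abstract lists.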
Capturing and analyzing wheelchair maneuvering patterns with mobile cloud computing.
Fu, Jicheng; Hao, Wei; White, Travis; Yan, Yuqing; Jones, Maria; Jan, Yih-Kuen
2013-01-01
Power wheelchairs have been widely used to provide independent mobility to people with disabilities. Despite great advancements in power wheelchair technology, research shows that wheelchair related accidents occur frequently. To ensure safe maneuverability, capturing wheelchair maneuvering patterns is fundamental to enable other research, such as safe robotic assistance for wheelchair users. In this study, we propose to record, store, and analyze wheelchair maneuvering data by means of mobile cloud computing. Specifically, the accelerometer and gyroscope sensors in smart phones are used to record wheelchair maneuvering data in real-time. Then, the recorded data are periodically transmitted to the cloud for storage and analysis. The analyzed results are then made available to various types of users, such as mobile phone users, traditional desktop users, etc. The combination of mobile computing and cloud computing leverages the advantages of both techniques and extends the smart phone's capabilities of computing and data storage via the Internet. We performed a case study to implement the mobile cloud computing framework using Android smart phones and Google App Engine, a popular cloud computing platform. Experimental results demonstrated the feasibility of the proposed mobile cloud computing framework.
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things
Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing
2015-01-01
The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004
NASA Technical Reports Server (NTRS)
Humphreys, Brad; Bellisario, Brian; Gallo, Christopher; Thompson, William K.; Lewandowski, Beth
2016-01-01
Long duration space travel to Mars or to an asteroid will expose astronauts to extended periods of reduced gravity. Since gravity is not present to aid loading, astronauts will use resistive and aerobic exercise regimes for the duration of the space flight to minimize the loss of bone density, muscle mass and aerobic capacity that occurs during exposure to a reduced gravity environment. Unlike the International Space Station (ISS), the area available for an exercise device in the next generation of spacecraft is limited. Therefore, compact resistance exercise device prototypes are being developed. The NASA Digital Astronaut Project (DAP) is supporting the Advanced Exercise Concepts (AEC) Project, Exercise Physiology and Countermeasures (ExPC) project and the National Space Biomedical Research Institute (NSBRI) funded researchers by developing computational models of exercising with these new advanced exercise device concepts. To perform validation of these models and to support the Advanced Exercise Concepts Project, several candidate devices have been flown onboard NASA's Reduced Gravity Aircraft. In terrestrial laboratories, researchers typically have available to them motion capture systems for the measurement of subject kinematics. Onboard the parabolic flight aircraft it is not practical to utilize the traditional motion capture systems due to the large working volume they require and their relatively high replacement cost if damaged. To support measuring kinematics on board parabolic aircraft, a motion capture system is being developed utilizing open source computer vision code with commercial off the shelf (COTS) video camera hardware. While the system's accuracy is lower than that of lab setups, it provides a means to produce quantitative comparison motion capture kinematic data. Additionally, data such as required exercise volume for small spaces such as the Orion capsule can be determined.
METHODS: OpenCV is an open source computer vision library that provides the ability to perform multi-camera 3-dimensional reconstruction. Utilizing OpenCV, via the Python programming language, a set of tools has been developed to perform motion capture in confined spaces using commercial cameras. Four Sony video cameras were intrinsically calibrated prior to flight. Intrinsic calibration provides a set of camera-specific parameters to remove geometric distortion of the lens and sensor (specific to each individual camera). A set of high contrast markers were placed on the exercising subject (safety also necessitated that they be soft in case they become detached during parabolic flight); small yarn balls were used. Extrinsic calibration, the determination of camera location and orientation parameters, is performed using fixed landmark markers shared by the camera scenes. Additionally, a wand calibration, the sweeping of the camera scenes simultaneously, was also performed. Techniques have been developed to perform intrinsic calibration, extrinsic calibration, isolation of the markers in the scene, calculation of marker 2D centroids, and 3D reconstruction from multiple cameras. These methods have been tested in the laboratory in side-by-side comparison with a traditional motion capture system and also on a parabolic flight.
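The 3D-reconstruction step can be illustrated with standard linear (DLT) triangulation, the same technique OpenCV exposes as `cv2.triangulatePoints`. The NumPy sketch below is illustrative only and assumes already-calibrated 3x4 projection matrices and undistorted normalized image coordinates:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one marker seen by two calibrated
    cameras. P1, P2 are 3x4 projection matrices; x1, x2 are the marker's
    2D image coordinates in each view. Returns the 3D point."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    # The homogeneous 3D point is the null vector of A.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With more than two cameras, the same system simply gains two rows per extra view, which is how multi-camera rigs improve robustness.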
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point and shoot, digital single-lens reflex (DSLR), digital medium format, and 3-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, were evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point and shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
The suitability of lightfield camera depth maps for coordinate measurement applications
NASA Astrophysics Data System (ADS)
Rangappa, Shreedhar; Tailor, Mitul; Petzing, Jon; Kinnell, Peter; Jackson, Michael
2015-12-01
Plenoptic cameras can capture 3D information in one exposure without the need for structured illumination, allowing grey scale depth maps of the captured image to be created. The Lytro, a consumer grade plenoptic camera, provides a cost-effective method of measuring depth of multiple objects under controlled lighting conditions. In this research, camera control variables, environmental sensitivity, image distortion characteristics, and the effective working range of two Lytro first generation cameras were evaluated. In addition, a calibration process has been created, for the Lytro cameras, to deliver three dimensional output depth maps represented in SI units (metre). The novel results show depth accuracy and repeatability of +10.0 mm to -20.0 mm, and 0.5 mm respectively. For the lateral X and Y coordinates, the accuracy was +1.56 μm to -2.59 μm and the repeatability was 0.25 μm.
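A calibration that maps raw grey-level depth values to SI units can, in its simplest form, be a polynomial fit against targets placed at known distances. The sketch below is a generic illustration of that idea, not the calibration procedure developed in the paper:

```python
import numpy as np

def fit_depth_calibration(grey_levels, metric_depths_m, deg=1):
    """Fit a polynomial mapping raw grey-level depth-map values to
    metric depth (metres) from reference targets at known distances."""
    return np.polyfit(grey_levels, metric_depths_m, deg)

def grey_to_metres(coeffs, grey):
    """Apply the fitted calibration to new grey-level depth values."""
    return np.polyval(coeffs, grey)
```

In practice the relation is not linear across the working range, so a higher `deg` or a piecewise fit would be used.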
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation still image shows the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) camera and telescope, and the Earth - one million miles away. Credits: NASA/NOAA A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated “dark side” of the moon that is never visible from Earth. The images were captured by NASA’s Earth Polychromatic Imaging Camera (EPIC), a four megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Harrefors, Christina; Sävenstedt, Stefan; Lundquist, Anders; Lundquist, Bengt; Axelsson, Karin
2012-01-01
Cognitive impairments influence the possibility of persons with dementia to remember daily events and maintain a sense of self. In order to address these problems a digital photo diary was developed to capture information about events in daily life. The device consisted of a wearable digital camera, smart phone with Global Positioning System (GPS) and a home memory station with computer for uploading the photographs and touch screen. The aim of this study was to describe professional caregivers' perceptions of how persons with mild dementia might experience the usage of this digital photo diary, both when wearing the camera and when viewing the uploaded photos, through a questionnaire with 408 respondents. In order to capture the professional caregivers' perceptions a questionnaire with the semantic differential technique was used and the main question was “How do you think Hilda (the fictive person in the questionnaire) feels when she is using the digital photo diary?”. The factor analysis revealed three factors: Sense of autonomy, Sense of self-esteem and Sense of trust. An interesting conclusion that can be drawn is that professional caregivers had an overall positive view of the usage of the digital photo diary as supporting autonomy for persons with mild dementia. The meaningfulness of each situation, wearing the camera and viewing the uploaded pictures, has to be considered separately, as the two situations are distinct parts of one integrated assistive device. Individual needs and desires of the person who is living with dementia and the context of each individual have to be reflected on and taken into account before implementing assistive digital devices as a tool in care. PMID:22509232
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote, and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
DataPlay's mobile recording technology
NASA Astrophysics Data System (ADS)
Bell, Bernard W., Jr.
2002-01-01
A small rotating memory device which utilizes optical prerecorded and writeable technology to provide a mobile recording solution for digital cameras, cell phones, music players, PDAs, and hybrid multipurpose devices has been developed. This solution encompasses writeable, read-only, and encrypted storage media.
Integrated Rapid-Diagnostic-Test Reader Platform on a Cellphone
Mudanyali, Onur; Dimitrov, Stoyan; Sikora, Uzair; Padmanabhan, Swati; Navruz, Isa; Ozcan, Aydogan
2012-01-01
We demonstrate a cellphone based Rapid-Diagnostic-Test (RDT) reader platform that can work with various lateral flow immuno-chromatographic assays and similar tests to sense the presence of a target analyte in a sample. This compact and cost-effective digital RDT reader, weighing only ~65 grams, mechanically attaches to the existing camera unit of a cellphone, where various types of RDTs can be inserted to be imaged in reflection or transmission modes under light-emitting-diode (LED) based illumination. Captured raw images of these tests are then digitally processed (within less than 0.2 sec/image) through a smart application running on the cellphone for validation of the RDT as well as for automated reading of its diagnostic result. The same smart application running on the cellphone then transmits the resulting data, together with the RDT images and other related information (e.g., demographic data) to a central server, which presents the diagnostic results on a world-map through geo-tagging. This dynamic spatio-temporal map of various RDT results can then be viewed and shared using internet browsers or through the same cellphone application. We tested this platform using malaria, tuberculosis (TB) as well as HIV RDTs by installing it on both Android based smart-phones as well as an iPhone. Providing real-time spatio-temporal statistics for the prevalence of various infectious diseases, this smart RDT reader platform running on cellphones might assist health-care professionals and policy makers to track emerging epidemics worldwide and help epidemic preparedness. PMID:22596243
Francis, Filbert; Ishengoma, Deus S; Mmbando, Bruno P; Rutta, Acleus S M; Malecela, Mwelecele N; Mayala, Benjamin; Lemnge, Martha M; Michael, Edwin
2017-08-01
Early detection of febrile illnesses at community level is essential for improved malaria case management and control. Currently, mobile phone-based technology has been commonly used to collect and transfer health information and services in different settings. This study assessed the applicability of mobile phone-based technology in real-time reporting of fever cases and management of malaria by village health workers (VHWs) in north-eastern Tanzania. The community mobile phone-based disease surveillance and treatment for malaria (ComDSTM) platform, combined with mobile phones and web applications, was developed and implemented in three villages and one dispensary in Muheza district from November 2013 to October 2014. A baseline census was conducted in May 2013. The data were uploaded on a web-based database and updated during follow-up home visits by VHWs. Active and passive case detection (ACD, PCD) of febrile cases were done by VHWs and cases found positive by malaria rapid diagnostic test (RDT) were given the first dose of artemether-lumefantrine (AL) at the dispensary. Each patient was visited at home by VHWs daily for the first 3 days to supervise intake of anti-malarial and on day 7 to monitor the recovery process. The data were captured and transmitted to the database using mobile phones. The baseline population in the three villages was 2934 in 678 households. A total of 1907 febrile cases were recorded by VHWs and 1828 (95.9%) were captured using mobile phones. At the dispensary, 1778 (93.2%) febrile cases were registered and of these, 84.2% were captured through PCD. Positivity rates were 48.2 and 45.8% by RDT and microscopy, respectively. Nine cases had treatment failure reported on day 7 post-treatment and adherence to treatment was 98%. One patient with severe febrile illness was referred to Muheza district hospital. 
The study showed that mobile phone-based technology can be successfully used by VHWs in surveillance and timely reporting of fever episodes and monitoring of treatment failure in remote areas. Further optimization and scaling-up will be required to utilize the tools for improved malaria case management and drug resistance surveillance.
FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System †
Lee, Sukhan
2018-01-01
The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend on the system's capability to handle object surfaces with large reflectance variation, traded off against the number of patterns that must be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera single or multiple times for capturing single or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, such that the system is capable of projecting different types of patterns for different scan speed applications. This enables the system to capture a high-quality 3D point cloud even for surfaces of large reflectance variation while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique as it does not require any external memory for storage because pattern pixels are generated in real-time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
Automated Meteor Detection by All-Sky Digital Camera Systems
NASA Astrophysics Data System (ADS)
Suk, Tomáš; Šimberová, Stanislava
2017-12-01
We have developed a set of methods to detect meteor light traces captured by all-sky CCD cameras. Operating at small automatic observatories (stations), these cameras create a network spread over a large territory. Image data coming from these stations are merged in one central node. Since a vast amount of data is collected by the stations in a single night, robotic storage and analysis are essential to processing. The proposed methodology is adapted to data from a network of automatic stations equipped with digital fish-eye cameras and includes data capturing, preparation, pre-processing, analysis, and finally recognition of objects in time sequences. In our experiments we utilized real observed data from two stations.
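One simple way to recognize line-like meteor traces in differenced all-sky frames is to test the elongation of the bright-pixel distribution. The sketch below is not the authors' method; it scores elongation as the ratio of the principal-axis variances of the thresholded pixels, which is large for a streak and near one for a compact blob:

```python
import numpy as np

def streak_score(diff, thresh=50.0):
    """Elongation score for bright pixels in a difference image:
    ratio of the larger to the smaller eigenvalue of the pixel
    coordinate covariance. Streaks give a large score; blobs ~1."""
    ys, xs = np.nonzero(diff > thresh)
    if len(xs) < 5:
        return 0.0  # too few pixels to call it anything
    pts = np.stack([xs, ys]).astype(float)
    evals = np.linalg.eigvalsh(np.cov(pts))
    return evals[1] / max(evals[0], 1e-9)
```

A network node could apply such a score frame-by-frame and flag sequences whose bright regions stay elongated and move consistently.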
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-008 (4 Dec 1993) --- This view of the Earth-orbiting Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view was taken during rendezvous operations. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Electronic Still Camera image of Astronaut Claude Nicollier working with RMS
1993-12-05
S61-E-006 (5 Dec 1993) --- The robot arm controlling work of Swiss scientist Claude Nicollier was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. With the mission specialist's assistance, Endeavour's crew captured the Hubble Space Telescope (HST) on December 4, 1993. Four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
NASA Astrophysics Data System (ADS)
Humphreys, Kenneth; Ward, Tomas; Markham, Charles
2007-04-01
We present a camera-based device capable of capturing two photoplethysmographic (PPG) signals at two different wavelengths simultaneously, in a remote noncontact manner. The system comprises a complementary metal-oxide semiconductor camera and a dual wavelength array of light emitting diodes (760 and 880 nm). By alternately illuminating a region of tissue with each wavelength of light, and detecting the backscattered photons with the camera at a rate of 16 frames/s per wavelength, two multiplexed PPG waveforms are simultaneously captured. This process is the basis of pulse oximetry, and we describe how, with the inclusion of a calibration procedure, this system could be used as a noncontact pulse oximeter to measure arterial oxygen saturation (SpO2) remotely. Results from an experiment on ten subjects, exhibiting normal SpO2 readings, that demonstrate the instrument's ability to capture signals from a range of subjects under realistic lighting and environmental conditions are presented. We compare the signals captured by the noncontact system to a conventional PPG signal captured concurrently from a finger, and show by means of a Bland-Altman test [Lancet 327, 307 (1986); Statistician 32, 307 (1983)] the noncontact device to be comparable to a contact device as a monitor of heart rate. We highlight some considerations that should be made when using camera-based "integrative" sampling methods and demonstrate through simulation, the suitability of the captured PPG signals for application of existing pulse oximetry calibration procedures.
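The pulse-oximetry step the abstract refers to conventionally rests on the "ratio of ratios": the AC/DC component of the PPG waveform at each wavelength, divided one by the other, is mapped to SpO2 through an empirical calibration curve. The sketch below illustrates that computation; the calibration constants `a` and `b` are placeholders, not values from the paper:

```python
import numpy as np

def ratio_of_ratios(ppg_w1, ppg_w2):
    """Classic pulse-oximetry ratio of ratios R: (AC/DC at wavelength 1)
    divided by (AC/DC at wavelength 2)."""
    def ac_dc(sig):
        sig = np.asarray(sig, float)
        return (sig.max() - sig.min()) / sig.mean()
    return ac_dc(ppg_w1) / ac_dc(ppg_w2)

def spo2_from_r(r, a=110.0, b=25.0):
    """Linear empirical calibration SpO2 = a - b*R; a and b are
    device-specific placeholder constants, not measured values."""
    return a - b * r
```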
Quality and noise measurements in mobile phone video capture
NASA Astrophysics Data System (ADS)
Petrescu, Doina; Pincenti, John
2011-02-01
The quality of videos captured with mobile phones has become increasingly important particularly since resolutions and formats have reached a level that rivals the capabilities available in the digital camcorder market, and since many mobile phones now allow direct playback on large HDTVs. The video quality is determined by the combined quality of the individual parts of the imaging system including the image sensor, the digital color processing, and the video compression, each of which has been studied independently. In this work, we study the combined effect of these elements on the overall video quality. We do this by evaluating the capture under various lighting, color processing, and video compression conditions. First, we measure full reference quality metrics between encoder input and the reconstructed sequence, where the encoder input changes with light and color processing modifications. Second, we introduce a system model which includes all elements that affect video quality, including a low light additive noise model, ISP color processing, as well as the video encoder. Our experiments show that in low light conditions and for certain choices of color processing the system level visual quality may not improve when the encoder becomes more capable or the compression ratio is reduced.
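A full-reference metric of the kind computed between the encoder input and the reconstructed sequence can be as simple as PSNR. The sketch below, paired with an additive-Gaussian stand-in for a low-light noise model, is illustrative only and is not the authors' system model:

```python
import numpy as np

def add_low_light_noise(frame, sigma, rng):
    """Simplistic additive-Gaussian stand-in for low-light sensor noise."""
    return np.clip(frame + rng.normal(0.0, sigma, frame.shape), 0, 255)

def psnr(ref, test, peak=255.0):
    """Full-reference peak signal-to-noise ratio in dB."""
    mse = np.mean((np.asarray(ref, float) - np.asarray(test, float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak * peak / mse)
```

Measuring such a metric before and after the encoder, at several noise levels and compression ratios, reproduces the kind of system-level comparison the study describes.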
NGEE Arctic Webcam Photographs, Barrow Environmental Observatory, Barrow, Alaska
Bob Busey; Larry Hinzman
2012-04-01
The NGEE Arctic Webcam (PTZ Camera) captures two views of seasonal transitions from its generally south-facing position on a tower located at the Barrow Environmental Observatory near Barrow, Alaska. Images are captured every 30 minutes. Historical images are available for download. The camera is operated by the U.S. DOE sponsored Next Generation Ecosystem Experiments - Arctic (NGEE Arctic) project.
Method and apparatus for calibrating a display using an array of cameras
NASA Technical Reports Server (NTRS)
Johnson, Michael J. (Inventor); Chen, Chung-Jen (Inventor); Chandrasekhar, Rajesh (Inventor)
2001-01-01
The present invention overcomes many of the disadvantages of the prior art by providing a display that can be calibrated and re-calibrated with a minimal amount of manual intervention. To accomplish this, the present invention provides one or more cameras to capture an image that is projected on a display screen. In one embodiment, the one or more cameras are placed on the same side of the screen as the projectors. In another embodiment, an array of cameras is provided on either or both sides of the screen for capturing a number of adjacent and/or overlapping capture images of the screen. In either of these embodiments, the resulting capture images are processed to identify any non-desirable characteristics including any visible artifacts such as seams, bands, rings, etc. Once the non-desirable characteristics are identified, an appropriate transformation function is determined. The transformation function is used to pre-warp the input video signal to the display such that the non-desirable characteristics are reduced or eliminated from the display. The transformation function preferably compensates for spatial non-uniformity, color non-uniformity, luminance non-uniformity, and/or other visible artifacts.
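For luminance non-uniformity specifically, the transformation function can be as simple as a per-pixel gain map derived from the camera capture of the screen. The sketch below illustrates that one idea only; it is not the patented method, and the function names are hypothetical:

```python
import numpy as np

def gain_map(captured, target=None, eps=1e-6):
    """Per-pixel compensation gain from a camera capture of a flat-field
    test image. By default, drive every pixel down to the dimmest level
    (gains <= 1 are realizable without clipping)."""
    cap = np.asarray(captured, float)
    if target is None:
        target = cap.min()
    return target / np.maximum(cap, eps)

def prewarp(frame, gain):
    """Apply the gain map to the input video frame before display."""
    return np.clip(np.asarray(frame, float) * gain, 0, 255)
```

Spatial and color non-uniformity corrections follow the same pattern but operate on warped coordinates and per-channel gains.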
Coded aperture solution for improving the performance of traffic enforcement cameras
NASA Astrophysics Data System (ADS)
Masoudifar, Mina; Pourreza, Hamid Reza
2016-10-01
A coded aperture camera is proposed for automatic license plate recognition (ALPR) systems. It captures images using a noncircular aperture. The aperture pattern is designed for the rapid acquisition of high-resolution images while preserving high spatial frequencies of defocused regions. It is obtained by minimizing an objective function, which computes the expected value of perceptual deblurring error. The imaging conditions and camera sensor specifications are also considered in the proposed function. The designed aperture improves the depth of field (DoF) and subsequently ALPR performance. The captured images can be directly analyzed by the ALPR software up to a specific depth, which is 13 m in our case, though it is 11 m for the circular aperture. Moreover, since the deblurring results of images captured by our aperture yield fewer artifacts than those captured by the circular aperture, images can be first deblurred and then analyzed by the ALPR software. In this way, the DoF and recognition rate can be improved at the same time. Our case study shows that the proposed camera can improve the DoF up to 17 m while it is limited to 11 m in the conventional aperture.
Depth measurements through controlled aberrations of projected patterns.
Birch, Gabriel C; Tyo, J Scott; Schwiegerling, Jim
2012-03-12
Three-dimensional displays have become increasingly present in consumer markets. However, the ability to capture three-dimensional images in space-confined environments and without major modifications to current cameras is uncommon. Our goal is to create a simple modification to a conventional camera that allows for three-dimensional reconstruction. We require that such an imaging system have coincident imaging and illumination paths. Furthermore, we require that any three-dimensional modification to a camera also permit full-resolution 2D image capture. Here we present a method of extracting depth information with a single camera and an aberrated projected pattern. A commercial digital camera is used in conjunction with a projector system with astigmatic focus to capture images of a scene. By using an astigmatic projected pattern we can create two different focus depths for horizontal and vertical features of a projected pattern, thereby encoding depth. By designing an aberrated projected pattern, we are able to exploit this differential focus in post-processing designed around the projected pattern and optical system. We are able to correlate the distance of an object at a particular transverse position from the camera to ratios of particular wavelet coefficients. We present information regarding the construction, calibration, and images produced by this system. The nature of linking a projected pattern design and image processing algorithms will be discussed.
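The depth cue exploited here is that horizontal and vertical pattern features blur differently with distance. As a stand-in for the paper's wavelet-coefficient ratios, the sketch below measures the same differential focus as the ratio of horizontal to vertical gradient energy (an assumption-laden simplification, not the authors' algorithm):

```python
import numpy as np

def hv_focus_ratio(img):
    """Ratio of horizontal to vertical gradient energy. With an
    astigmatic projected pattern, horizontal and vertical features
    focus at different depths, so this ratio varies with distance."""
    g = np.asarray(img, float)
    gx = np.diff(g, axis=1)  # responds to vertical edges
    gy = np.diff(g, axis=0)  # responds to horizontal edges
    return (gx ** 2).sum() / max((gy ** 2).sum(), 1e-9)
```

Calibrating this ratio against targets at known distances would then let a per-region lookup recover depth.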
Suitability of digital camcorders for virtual reality image data capture
NASA Astrophysics Data System (ADS)
D'Apuzzo, Nicola; Maas, Hans-Gerd
1998-12-01
Today's consumer market digital camcorders offer features which make them appear quite interesting devices for virtual reality data capture. The paper compares a digital camcorder with an analogue camcorder and a machine vision type CCD camera and discusses the suitability of these three cameras for virtual reality applications. Besides the discussion of technical features of the cameras, this includes a detailed accuracy test in order to define the range of applications. In combination with the cameras, three different framegrabbers are tested. The geometric accuracy potential of all three cameras turned out to be surprisingly large, and no problems were noticed in the radiometric performance. On the other hand, some disadvantages have to be reported: from the photogrammetrist's point of view, the major disadvantage of most camcorders is the missing possibility to synchronize multiple devices, limiting the suitability for 3-D motion data capture. Moreover, the standard video format contains interlacing, which is also undesirable for all applications dealing with moving objects or moving cameras. Further disadvantages are computer interfaces whose functionality is still suboptimal. While custom-made solutions to these problems are probably rather expensive (and will make potential users turn back to machine vision like equipment), this functionality could probably be included by the manufacturers at almost zero cost.
Cell phones as imaging sensors
NASA Astrophysics Data System (ADS)
Bhatti, Nina; Baker, Harlyn; Marguier, Joanna; Berclaz, Jérôme; Süsstrunk, Sabine
2010-04-01
Camera phones are ubiquitous, and consumers have been adopting them faster than any other technology in modern history. When connected to a network, though, they are capable of more than just picture taking: Suddenly, they gain access to the power of the cloud. We exploit this capability by providing a series of image-based personal advisory services. These are designed to work with any handset over any cellular carrier using commonly available Multimedia Messaging Service (MMS) and Short Message Service (SMS) features. Targeted at the unsophisticated consumer, these applications must be quick and easy to use, not requiring download capabilities or preplanning. Thus, all application processing occurs in the back-end system (i.e., as a cloud service) and not on the handset itself. Presenting an image to an advisory service in the cloud, a user receives information that can be acted upon immediately. Two of our examples involve color assessment - selecting cosmetics and home décor paint palettes; the third provides the ability to extract text from a scene. In the case of the color imaging applications, we have shown that our service rivals the advice quality of experts. The result of this capability is a new paradigm for mobile interactions - image-based information services exploiting the ubiquity of camera phones.
Color constancy by characterization of illumination chromaticity
NASA Astrophysics Data System (ADS)
Nikkanen, Jarno T.
2011-05-01
Computational color constancy algorithms play a key role in achieving desired color reproduction in digital cameras. Failure to estimate illumination chromaticity correctly will result in an invalid overall color cast in the image that will be easily detected by human observers. A new algorithm is presented for computational color constancy. Low computational complexity and low memory requirement make the algorithm suitable for resource-limited camera devices, such as consumer digital cameras and camera phones. Operation of the algorithm relies on characterization of the range of possible illumination chromaticities in terms of camera sensor response. The fact that only illumination chromaticity is characterized instead of the full color gamut, for example, increases robustness against variations in sensor characteristics and against failure of the diagonal model of illumination change. Multiple databases are used in order to demonstrate the good performance of the algorithm in comparison to the state-of-the-art color constancy algorithms.
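The abstract does not reproduce the algorithm itself; for orientation, a baseline illumination-chromaticity estimate (grey-world) in normalized rg coordinates, combined with a check against a characterized chromaticity range, can be sketched as follows. The range bounds here are made-up placeholders, not a real sensor characterization:

```python
import numpy as np

def illum_chromaticity(rgb_image):
    """Grey-world estimate of illumination chromaticity (r, g) in the
    normalized rg space used by sensor-characterization approaches."""
    means = np.asarray(rgb_image, float).reshape(-1, 3).mean(axis=0)
    s = means.sum()
    return means[0] / s, means[1] / s

def in_characterized_range(rg, r_range=(0.2, 0.5), g_range=(0.25, 0.45)):
    """Clamp-test an estimate against the characterized range of
    plausible illuminant chromaticities (placeholder bounds)."""
    r, g = rg
    return r_range[0] <= r <= r_range[1] and g_range[0] <= g <= g_range[1]
```

A white-balance gain of roughly 1/r, 1/g, 1/b per channel would then neutralize the estimated cast.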
Telecytology: Is it possible with smartphone images?
Sahin, Davut; Hacisalihoglu, Uguray Payam; Kirimlioglu, Saime Hale
2018-01-01
This study aimed to assess smartphone usage in telecytology and to determine intraobserver concordance between microscopic cytopathological diagnoses and diagnoses derived from static smartphone images. The study was conducted with 172 cytologic specimens. A pathologist captured static images of the cytology slides through the ocular lens of a microscope using a smartphone. The images were transferred via WhatsApp® to a cytopathologist working in another center who had made all the microscopic cytopathological diagnoses 5-27 months earlier. The cytopathologist diagnosed the images on a smartphone without knowledge of the previous microscopic diagnoses. The Kappa agreement between microscopic cytopathological diagnoses and smartphone image diagnoses was determined. The average image capturing, transfer, and remote cytopathological diagnostic time for one case was 6.20 minutes. Microscopic and smartphone image diagnoses were concordant in 84.30% of cases and discordant in 15.69%. The highest Kappa agreement was observed in endoscopic ultrasound-guided fine needle aspiration (1.000), and the lowest in urine cytology (0.665). Patient management changed with smartphone image diagnoses in 11.04% of cases. This study showed that easy, fast, and high-quality image capture and transfer from cytology slides is possible using smartphones. The intraobserver Kappa agreement between the microscopic cytopathological diagnoses and the remote smartphone image diagnoses was high. It was found that remote diagnosis by telecytology, given its inherent difficulties, might change patient management. Developments in smartphone camera technology and transfer software make smartphones efficient telepathology and telecytology tools. © 2017 Wiley Periodicals, Inc.
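The Kappa agreement reported in this abstract is Cohen's kappa over paired categorical diagnoses for the same cases. A minimal sketch of the unweighted statistic (the category labels below are hypothetical, not from the study):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Unweighted Cohen's kappa between two sets of categorical
    diagnoses for the same cases (e.g., microscopic vs. smartphone).
    kappa = (p_observed - p_expected) / (1 - p_expected)."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    freq_a = Counter(ratings_a)          # marginal frequencies, rater A
    freq_b = Counter(ratings_b)          # marginal frequencies, rater B
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1.0 - expected)
```

Perfect agreement yields 1.000 (as for the EUS-FNA cases above); chance-level agreement yields 0.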
The Application of Data Mining Techniques to Create Promotion Strategy for Mobile Phone Shop
NASA Astrophysics Data System (ADS)
Khasanah, A. U.; Wibowo, K. S.; Dewantoro, H. F.
2017-12-01
The number of mobile phone shops is growing very fast in various regions of Indonesia, including Yogyakarta, due to the increasing demand for mobile phones. This fact leads to high competition among mobile phone shops. Under these conditions, a mobile phone shop needs a good promotion strategy in order to survive the competition, especially a small mobile phone shop. To create an attractive promotion strategy, companies/shops should know their customer segmentation and the buying patterns of their target market. These kinds of analyses can be performed using data mining techniques. This study aims to segment customers using Agglomerative Hierarchical Clustering and to discover customer buying patterns using Association Rule Mining. The study was conducted in a mobile phone shop in Sleman, Yogyakarta. The clustering result shows that the biggest customer segment of the shop was male university students who come on weekends, and from association rule mining it can be concluded that tempered glass and smartphone “x”, as well as action cameras, waterproof monopods, and power banks, have strong relationships. These results were used to create promotion strategies, which are presented at the end of the study.
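The support/confidence machinery behind association rules such as "tempered glass → smartphone x" can be sketched as follows. The mining is restricted to item pairs for brevity, and the transactions in the usage example are invented, not the shop's data.

```python
from itertools import combinations
from collections import Counter

def pair_rules(transactions, min_support=0.2, min_confidence=0.5):
    """Mine simple pairwise association rules A -> B, reporting
    (lhs, rhs, support, confidence, lift) for each rule that clears
    the support and confidence thresholds."""
    n = len(transactions)
    item_count = Counter()
    pair_count = Counter()
    for t in transactions:
        items = set(t)
        item_count.update(items)
        pair_count.update(frozenset(p) for p in combinations(sorted(items), 2))
    rules = []
    for pair, c in pair_count.items():
        support = c / n                       # fraction of baskets with both items
        if support < min_support:
            continue
        a, b = sorted(pair)
        for lhs, rhs in ((a, b), (b, a)):
            confidence = c / item_count[lhs]  # P(rhs | lhs)
            lift = confidence / (item_count[rhs] / n)
            if confidence >= min_confidence:
                rules.append((lhs, rhs, support, confidence, lift))
    return rules
```

A rule with confidence near 1 says that baskets containing the left-hand item almost always also contain the right-hand item, which is exactly the kind of relationship used to bundle promotions.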
Consumer electronic optics: how small can a lens be: the case of panomorph lenses
NASA Astrophysics Data System (ADS)
Thibault, Simon; Parent, Jocelyn; Zhang, Hu; Du, Xiaojun; Roulet, Patrice
2014-09-01
In 2014, miniature camera modules are applied in a variety of products, such as webcams, mobile phones, automotive systems, endoscopes, tablets, and portable computers. Mobile phone cameras are probably among the most challenging of these due to the need for a smaller and smaller total track length (TTL) and optimized embedded image processing algorithms. As the technology develops, higher resolution, higher image quality, and new capabilities are required to fulfill market needs. Consequently, the lens system becomes more complex and requires more optical elements and/or new optical elements. What is the limit? How small can an injection-molded lens be? We discuss these questions by comparing two wide-angle lenses for the consumer electronics market. The first lens is a 6.56 mm (TTL) panoramic (180° FOV) lens built in 2012. The second is a more recent (2014) panoramic lens (180° FOV) with a TTL of 3.80 mm for mobile phone cameras. Both optics are panomorph lenses used with megapixel sensors. Between 2012 and 2014, developments in design and plastic injection molding allowed a reduction of the TTL by more than 40%. This TTL reduction has been achieved by pushing the lens design to the extreme (edge/central air and material thicknesses as well as lens shape). It was also made possible by better control of the injection molding process and material (low birefringence, haze, and thermal stability). These aspects are presented and discussed. We do not know whether new materials or processes will emerge during the next few years, but innovative people and industries will still be needed to push the limits further.
Fringe projection profilometry with portable consumer devices
NASA Astrophysics Data System (ADS)
Liu, Danji; Pan, Zhipeng; Wu, Yuxiang; Yue, Huimin
2018-01-01
Fringe projection profilometry (FPP) using portable consumer devices is attractive because it can bring optical three-dimensional (3D) measurement to ordinary consumers in their daily lives. We demonstrate an FPP using the camera in a smart mobile phone and a digital consumer mini projector. In our experiment testing the smartphone (iPhone 7) camera performance, the rear-facing camera of the iPhone 7 gives the FPP a fringe contrast ratio of 0.546, a nonlinear carrier phase aberration of 0.6 rad, a nonlinear phase error of 0.08 rad, and an RMS random phase error of 0.033 rad. In contrast, the FPP using an industrial camera has a fringe contrast ratio of 0.715, a nonlinear carrier phase aberration of 0.5 rad, a nonlinear phase error of 0.05 rad, and an RMS random phase error of 0.011 rad. Good performance is achieved by the FPP composed of an iPhone 7 and a mini projector. 3D information of an adult-sized facemask is also measured using the FPP built from portable consumer devices. After system calibration, the absolute 3D information of the facemask is obtained. The measured results are in good agreement with those obtained in the traditional way. Our results show that it is possible to construct a good FPP from portable consumer devices, which is useful for ordinary people to obtain 3D information in their daily lives.
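The abstract does not state which fringe analysis the authors use; a standard choice in FPP is four-step phase shifting, which also yields the fringe contrast ratio quoted above. A sketch under that assumption:

```python
import numpy as np

def four_step_phase(i0, i1, i2, i3):
    """Recover wrapped phase and fringe contrast from four fringe images
    shifted by pi/2: I_k = A + B*cos(phi + k*pi/2).  Then
    I3 - I1 = 2B*sin(phi) and I0 - I2 = 2B*cos(phi)."""
    phase = np.arctan2(i3 - i1, i0 - i2)       # wrapped phase in (-pi, pi]
    a = (i0 + i1 + i2 + i3) / 4.0              # background intensity A
    b = 0.5 * np.hypot(i3 - i1, i0 - i2)       # modulation amplitude B
    contrast = b / np.maximum(a, 1e-12)        # fringe contrast ratio B/A
    return phase, contrast
```

The lower contrast ratio measured for the phone camera (0.546 vs. 0.715) directly reduces the modulation term `b` and hence the phase signal-to-noise ratio.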
Broadly available imaging devices enable high-quality low-cost photometry.
Christodouleas, Dionysios C; Nemiroski, Alex; Kumar, Ashok A; Whitesides, George M
2015-09-15
This paper demonstrates that, for applications in resource-limited environments, expensive microplate spectrophotometers that are used in many central laboratories for parallel measurement of absorbance of samples can be replaced by photometers based on inexpensive and ubiquitous, consumer electronic devices (e.g., scanners and cell-phone cameras). Two devices, (i) a flatbed scanner operating in transmittance mode and (ii) a camera-based photometer (constructed from a cell phone camera, a planar light source, and a cardboard box), demonstrate the concept. These devices illuminate samples in microtiter plates from one side and use the RGB-based imaging sensors of the scanner/camera to measure the light transmitted to the other side. The broadband absorbance of samples (RGB-resolved absorbance) can be calculated using the RGB color values of only three pixels per microwell. Rigorous theoretical analysis establishes a well-defined relationship between the absorbance spectrum of a sample and its corresponding RGB-resolved absorbance. The linearity and precision of measurements performed with these low-cost photometers on different dyes, which absorb across the range of the visible spectrum, and chromogenic products of assays (e.g., enzymatic, ELISA) demonstrate that these low-cost photometers can be used reliably in a broad range of chemical and biochemical analyses. The ability to perform accurate measurements of absorbance on liquid samples, in parallel and at low cost, would enable testing, typically reserved for well-equipped clinics and laboratories, to be performed in circumstances where resources and expertise are limited.
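The RGB-resolved absorbance described above reduces to a per-channel Beer-Lambert calculation on a handful of pixels per well. A minimal sketch, assuming the three sampled pixels are simply averaged (the averaging step is an assumption; the paper only states that three pixels per microwell suffice):

```python
import math

def rgb_absorbance(sample_pixels, blank_pixels):
    """Per-channel absorbance of one microwell, A_c = -log10(I_c / I0_c),
    where I is the mean transmitted RGB intensity of the sample well and
    I0 that of a blank well.  Each argument is a list of (R, G, B) pixels."""
    def mean_rgb(pixels):
        return [sum(p[c] for p in pixels) / len(pixels) for c in range(3)]
    i = mean_rgb(sample_pixels)
    i0 = mean_rgb(blank_pixels)
    return tuple(-math.log10(ic / i0c) for ic, i0c in zip(i, i0))
```

A dye that transmits one tenth of the blank's red light, for example, gives a red-channel absorbance of 1.0 and zero in channels where transmission is unchanged.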
Identification of Mobile Phones Using the Built-In Magnetometers Stimulated by Motion Patterns.
Baldini, Gianmarco; Dimc, Franc; Kamnik, Roman; Steri, Gary; Giuliani, Raimondo; Gentile, Claudio
2017-04-06
We investigate the identification of mobile phones through their built-in magnetometers. These electronic components have started to be widely deployed in mass-market phones in recent years, and they can be exploited to uniquely identify mobile phones due to their physical differences, which appear in the digital output they generate. This is similar to approaches reported in the literature for other components of the mobile phone, including the digital camera, the microphones, or the RF transmission components. In this paper, the identification is performed with an inexpensive device made up of a platform that rotates the mobile phone under test and a fixed magnet positioned on the edge of the rotating platform. When the mobile phone passes in front of the fixed magnet, the built-in magnetometer is stimulated, and its digital output is recorded and analyzed. For each mobile phone, the experiment is repeated over six different days to ensure consistency in the results. A total of 10 phones of different brands and models, or of the same model, were used in our experiment. The digital output from the magnetometers is synchronized and correlated, and statistical features are extracted to generate a fingerprint of the built-in magnetometer and, consequently, of the mobile phone. An SVM machine learning algorithm is used to classify the mobile phones on the basis of the extracted statistical features. Our results show that inter-model classification (i.e., across different models and brands) is possible with great accuracy, but intra-model classification (i.e., of phones of the same model with different serial numbers) is more challenging, the resulting accuracy being just slightly above random choice.
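The fingerprinting pipeline, statistical features extracted from the stimulated magnetometer trace followed by classification, can be sketched as below. The paper uses an SVM; a nearest-centroid classifier stands in here to keep the example dependency-free, and the particular feature set (mean, standard deviation, skewness, kurtosis) is an assumption, not the paper's list.

```python
import math

def trace_features(samples):
    """Statistical fingerprint features of one magnetometer trace:
    mean, standard deviation, skewness, kurtosis."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    std = math.sqrt(var)
    skew = sum((x - mean) ** 3 for x in samples) / (n * std ** 3)
    kurt = sum((x - mean) ** 4 for x in samples) / (n * var ** 2)
    return [mean, std, skew, kurt]

def nearest_centroid(train, labels, query):
    """Minimal stand-in for the paper's SVM: assign a feature vector to
    the phone whose feature centroid is nearest in Euclidean distance."""
    groups = {}
    for f, y in zip(train, labels):
        groups.setdefault(y, []).append(f)
    best_label, best_dist = None, float("inf")
    for y, fs in groups.items():
        centroid = [sum(col) / len(fs) for col in zip(*fs)]
        d = math.dist(centroid, query)
        if d < best_dist:
            best_label, best_dist = y, d
    return best_label
```

Phones whose sensors have distinct offsets or noise statistics separate cleanly in this feature space (the inter-model case); near-identical sensors of the same model collapse onto almost the same centroid, matching the paper's finding that intra-model accuracy is barely above chance.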
Spatial capture–recapture with partial identity: An application to camera traps
Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.
2018-01-01
Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision, and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities—one of which is resolved to near certainty. The SPIM opens the door for the investigation of trapping designs that deviate from the standard two camera design, the combination of other data types between which identities cannot be deterministically linked, and can be extended to the problem of partial genotypes.
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-021 (7 Dec 1993) --- This close-up view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members have been working in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Hubble Space Telescope photographed by Electronic Still Camera
1993-12-04
S61-E-001 (4 Dec 1993) --- This medium close-up view of the top portion of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-07
S61-E-020 (7 Dec 1993) --- This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993, in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt-sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Influence of camera parameters on the quality of mobile 3D capture
NASA Astrophysics Data System (ADS)
Georgiev, Mihail; Boev, Atanas; Gotchev, Atanas; Hannuksela, Miska
2010-01-01
We investigate the effect of camera de-calibration on the quality of depth estimation. A dense depth map is a format particularly suitable for mobile 3D capture (scalable and screen independent). However, in a real-world scenario cameras might move (through vibrations or temperature-induced bending) from their designated positions. For our experiments, we created a test framework, described in the paper. We investigate how mechanical changes affect four different stereo-matching algorithms. We also assess how different geometric corrections (none, motion-compensation-like, full rectification) affect the estimation quality (how much offset can still be compensated with a "crop" over a larger CCD). Finally, we show how the estimated camera pose change (E) relates to stereo-matching performance, which can be used as a "rectification quality" measure.
iHand: an interactive bare-hand-based augmented reality interface on commercial mobile phones
NASA Astrophysics Data System (ADS)
Choi, Junyeong; Park, Jungsik; Park, Hanhoon; Park, Jong-Il
2013-02-01
The performance of mobile phones has rapidly improved, and they are emerging as a powerful platform. In many vision-based applications, human hands play a key role in natural interaction. However, relatively little attention has been paid to the interaction between human hands and the mobile phone. Thus, we propose a vision- and hand-gesture-based interface in which the user holds a mobile phone in one hand but sees the other hand's palm through the built-in camera. Virtual contents are faithfully rendered on the user's palm through palm pose estimation, and interaction through hand and finger movements, recognized by hand shape recognition, is achieved. Since the proposed interface is based on hand gestures familiar to humans and does not require any additional sensors or markers, the user can freely interact with virtual contents anytime and anywhere without any training. We demonstrate that the proposed interface works at over 15 fps on a commercial mobile phone with a 1.2-GHz dual-core processor and 1 GB RAM.
The Use of Phone Technology in Outpatient Populations: A Systematic Review
Duarte, Ana C.; Thomas, Sue A.
2016-01-01
Objective: A systematic review was conducted to identify the types of phone technology used in the adult outpatient population with a focus on Hispanic patients and psychiatric populations. Methods: A search for articles was conducted on the EMBASE, PubMed and PsycINFO databases. Articles reviewed were peer-reviewed, full-text, English language and published through mid-November 2014. Results: Twenty-one articles were included in this review and grouped according to combinations of phone technology, medical specialty area and population. For all articles, phone technology was defined as telephone, cell, or smart phone. Technology was used in psychiatry with Hispanic population in four articles, in psychiatry with non-Hispanic population in seven articles and in other specialties with Hispanic population in ten articles. Articles were evaluated for quality. Six articles were assessed as strong, eight were moderate and seven were weak in global quality. Interventions included direct communication, text messaging, interactive voice response, camera and smart phone app. Studies with Hispanic populations used more text messaging, while studies in psychiatry favored direct communication. The majority of articles in all groups yielded improvements in health outcomes. Conclusion: Few studies have been conducted using phone technology in Hispanic and psychiatric populations. Various phone technologies can be helpful to patients in diverse populations and have demonstrated success in improving a variety of specific and overall healthcare outcomes. Phone technologies are easily adapted to numerous settings and populations and are valuable tools in efforts to increase access to care. PMID:27347255
4K Video of Colorful Liquid in Space
2015-10-09
Once again, astronauts on the International Space Station dissolved an effervescent tablet in a floating ball of water, and captured images using a camera capable of recording four times the resolution of normal high-definition cameras. The higher resolution images and higher frame rate videos can reveal more information when used on science investigations, giving researchers a valuable new tool aboard the space station. This footage is one of the first of its kind. The cameras are being evaluated for capturing science data and vehicle operations by engineers at NASA's Marshall Space Flight Center in Huntsville, Alabama.
Reducing flicker due to ambient illumination in camera captured images
NASA Astrophysics Data System (ADS)
Kim, Minwoong; Bengtson, Kurt; Li, Lisa; Allebach, Jan P.
2013-02-01
The flicker artifact dealt with in this paper is the scanning distortion that arises when an image is captured by a digital camera using a CMOS imaging sensor with an electronic rolling shutter under strong ambient light sources powered by AC. This type of camera scans a target line-by-line within a frame, so time differences exist between the lines. This mechanism causes a captured image to be corrupted by the change of illumination, a phenomenon called the flicker artifact. The non-content area of the captured image is used to estimate a flicker signal, which is the key to compensating for the flicker artifact. The average signal of the non-content area, taken along the scan direction, has local extrema where the peaks of flicker lie. The locations of these extrema are very useful for estimating the distribution of pixel intensities that would be observed if the flicker artifact were absent. The flicker-reduced images compensated by our approach clearly demonstrate the reduced flicker artifact, based on visual observation.
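A minimal version of the idea, estimating a per-scan-line flicker profile from a non-content margin and dividing it out, might look like this. The margin location and the multiplicative normalization are illustrative assumptions, not the paper's estimator.

```python
import numpy as np

def remove_flicker(img, margin):
    """Estimate a per-line flicker profile from the non-content margin
    (first `margin` columns) along the scan direction, then divide it
    out.  Assumes rows are scan lines, the margin contains no content,
    and flicker acts multiplicatively on each scan line."""
    profile = img[:, :margin].mean(axis=1)   # one brightness value per scan line
    profile = profile / profile.mean()       # normalized flicker signal
    return img / profile[:, None]            # flatten the line-to-line modulation
```

On a uniform target modulated by a sinusoidal AC flicker, the division restores a constant image.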
Miniature Raman spectroscopy utilizing stabilized diode lasers and 2D CMOS detector arrays
NASA Astrophysics Data System (ADS)
Auz, Bryan; Bonvallet, Joseph; Rodriguez, John; Olmstead, Ty
2017-02-01
A miniature Raman spectrometer was designed in a rapid development cycle (< 4 months) to investigate the performance capabilities achievable with two-dimensional (2D) CMOS detectors found in cell phone camera modules and commercial off-the-shelf (COTS) optics. This paper examines the design considerations and tradeoffs made during the development cycle. The final system measures 40 mm in length, 40 mm in width, and 15 mm in height, and couples directly with the cell phone camera optics. Two variants were made: one with an excitation wavelength of 638 nm and the other with a 785 nm excitation wavelength. Raman spectra of the following samples were gathered at both excitations: Toluene, Cyclohexane, Bis(MSB), Aspirin, Urea, and Ammonium Nitrate. The system obtained a resolution of 40 cm-1. The spectra produced at 785 nm excitation required integration times up to 10 times longer than the 1.5 seconds at 638 nm; however, they contained less stray light and less fluorescence, which led to an overall cleaner signal.
Evaluation of multispectral plenoptic camera
NASA Astrophysics Data System (ADS)
Meng, Lingfei; Sun, Ting; Kosoglow, Rich; Berkner, Kathrin
2013-01-01
Plenoptic cameras enable capture of a 4D lightfield, allowing digital refocusing and depth estimation from data captured with a compact portable camera. Whereas most work on plenoptic camera design has been based on a simplistic geometric-optics characterization of the optical path only, little work has been done on optimizing end-to-end system performance for a specific application. Such design optimization requires design tools that include careful parameterization of the main lens elements, as well as microlens array and sensor characteristics. In this paper we are interested in evaluating the performance of a multispectral plenoptic camera, i.e., a camera with spectral filters inserted into the aperture plane of the main lens. Such a camera enables single-snapshot spectral data acquisition [1-3]. We first describe in detail an end-to-end imaging system model for a spectrally coded plenoptic camera that we briefly introduced in [4]. Different performance metrics are defined to evaluate the spectral reconstruction quality. We then present a prototype developed from a modified DSLR camera containing a lenslet array on the sensor and a filter array in the main lens. Finally, we evaluate the spectral reconstruction performance of the spectral plenoptic camera based on both simulation and measurements obtained from the prototype.
Use of a Digital Camera To Document Student Observations in a Microbiology Laboratory Class.
ERIC Educational Resources Information Center
Mills, David A.; Kelley, Kevin; Jones, Michael
2001-01-01
Points out the lack of microscopic images of wine-related microbes. Uses a digital camera during a wine microbiology laboratory to capture student-generated microscope images. Discusses the advantages of using a digital camera in a teaching lab. (YDS)
An Algorithm Enabling Blind Users to Find and Read Barcodes
Tekin, Ender; Coughlan, James M.
2010-01-01
Most camera-based systems for finding and reading barcodes are designed to be used by sighted users (e.g. the Red Laser iPhone app), and assume the user carefully centers the barcode in the image before the barcode is read. Blind individuals could benefit greatly from such systems to identify packaged goods (such as canned goods in a supermarket), but unfortunately in their current form these systems are completely inaccessible because of their reliance on visual feedback from the user. To remedy this problem, we propose a computer vision algorithm that processes several frames of video per second to detect barcodes from a distance of several inches; the algorithm issues directional information with audio feedback (e.g. “left,” “right”) and thereby guides a blind user holding a webcam or other portable camera to locate and home in on a barcode. Once the barcode is detected at sufficiently close range, a barcode reading algorithm previously developed by the authors scans and reads aloud the barcode and the corresponding product information. We demonstrate encouraging experimental results of our proposed system implemented on a desktop computer with a webcam held by a blindfolded user; ultimately the system will be ported to a camera phone for use by visually impaired users. PMID:20617114
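The directional audio feedback can be driven by simple logic comparing the detected barcode's position with the frame center. This sketch uses illustrative thresholds and adds vertical cues alongside the "left"/"right" cues quoted in the abstract; it is not the authors' exact guidance rule.

```python
def guidance(frame_width, frame_height, bbox):
    """Map a detected barcode bounding box to a spoken direction cue.
    bbox = (x, y, w, h) in pixels; returns 'left', 'right', 'up',
    'down', or 'hold' when the barcode is roughly centered.  The 10%
    centering tolerance is an illustrative assumption."""
    x, y, w, h = bbox
    cx, cy = x + w / 2.0, y + h / 2.0         # barcode center
    dx = cx - frame_width / 2.0               # horizontal offset from frame center
    dy = cy - frame_height / 2.0              # vertical offset from frame center
    tol_x, tol_y = 0.1 * frame_width, 0.1 * frame_height
    if abs(dx) <= tol_x and abs(dy) <= tol_y:
        return "hold"                         # centered: trigger the reading stage
    if abs(dx) >= abs(dy):
        return "right" if dx > 0 else "left"  # speak the dominant axis first
    return "down" if dy > 0 else "up"
```

In the full system, the returned cue would be passed to a text-to-speech engine each time the detector reports a bounding box.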
A practical indoor context-aware surveillance system with multi-Kinect sensors
NASA Astrophysics Data System (ADS)
Jia, Lili; You, Ying; Li, Tiezhu; Zhang, Shun
2014-11-01
In this paper we develop a novel practical application, which gives scalable services to end users when abnormal activities are happening. The architecture of the application is presented, consisting of networked infrared cameras and a communication module. In this intelligent surveillance system we use Kinect sensors as the input cameras. The Kinect is an infrared laser camera whose raw infrared sensor stream the user can access. We install several Kinect sensors in one room to track human skeletons. Each sensor returns body positions as 15 coordinate points in its own coordinate system. We use calibration algorithms to map all the body position points into one unified coordinate system. From the body position points, we can infer the surveillance context. Furthermore, messages from the metadata index matrix are sent to a mobile phone through the communication module, so the user is instantly aware of an abnormal event in the room without having to check the website. In conclusion, theoretical analysis and experimental results in this paper show that the proposed system is reasonable and efficient. The approach introduced in this paper not only discourages criminals and assists police in the apprehension of suspects, but also enables end users to monitor indoor environments anywhere and anytime from their phones.
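Merging skeleton points from several Kinect sensors requires expressing them in one unified coordinate system. Given corresponding points seen by two sensors, the rigid transform between them can be recovered with the standard Kabsch (SVD) method, sketched here as one plausible calibration step; the paper does not specify which algorithm it uses.

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping Nx3 points `src`
    onto `dst` (Kabsch/SVD), e.g. from one Kinect's coordinate system
    into the unified one.  Afterwards, dst_i ~= R @ src_i + t."""
    src_c = src - src.mean(axis=0)                 # center both point sets
    dst_c = dst - dst.mean(axis=0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)      # cross-covariance SVD
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    r = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    t = dst.mean(axis=0) - r @ src.mean(axis=0)
    return r, t
```

With the transform estimated once per sensor from a few shared skeleton observations, every subsequent 15-point skeleton can be mapped into the unified frame before context inference.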
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smartphones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile-device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Visibility through the gaseous smoke in airborne remote sensing using a DSLR camera
NASA Astrophysics Data System (ADS)
Chabok, Mirahmad; Millington, Andrew; Hacker, Jorg M.; McGrath, Andrew J.
2016-08-01
Visibility and clarity of remotely sensed images acquired by consumer-grade DSLR cameras, mounted on an unmanned aerial vehicle or a manned aircraft, are critical factors in obtaining accurate and detailed information from any area of interest. The presence of substantial haze, fog, or gaseous smoke particles, caused, for example, by an active bushfire at the time of data capture, dramatically reduces image visibility and quality. Although most modern hyperspectral imaging sensors are capable of capturing a large number of narrow bands across the shortwave and thermal infrared spectral range, which have the potential to penetrate smoke and haze, the resulting images do not contain sufficient spatial detail to enable locating important objects or to assist search and rescue or similar applications that require high-resolution information. We introduce a new method for penetrating gaseous smoke without compromising spatial resolution, using a single modified DSLR camera in conjunction with image processing techniques that effectively improve the visibility of objects in the captured images. This is achieved by modifying a DSLR camera and adding a custom optical filter to enable it to capture wavelengths from 480-1200 nm (R, G, and near infrared) instead of the standard RGB bands (400-700 nm). With this modified camera mounted on an aircraft, images were acquired over an area polluted by gaseous smoke from an active bushfire. Data processed using our proposed method show significant visibility improvements compared with other existing solutions.
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications, such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications, such as optical flow estimation. Conventional digital cameras operate at low frame rates, and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
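The multiple-capture idea for dynamic range extension can be sketched as follows: each pixel's radiance estimate is averaged over the unsaturated samples from several short captures, so bright pixels are served by short exposures and dim pixels by long ones. The saturation threshold and the equal weighting are illustrative assumptions, not the paper's on-chip algorithm.

```python
import numpy as np

def merge_multiple_captures(frames, exposures, full_scale=1.0, sat=0.95):
    """Combine several captures taken within one frame time into one
    high-dynamic-range radiance estimate.  For each pixel, average
    value/exposure over all samples below the saturation threshold;
    pixels saturated in every capture come out as 0."""
    frames = np.asarray(frames, dtype=float)
    exposures = np.asarray(exposures, dtype=float)[:, None, None]
    radiance = frames / exposures                  # per-capture radiance estimate
    valid = frames < sat * full_scale              # drop clipped samples
    weight = valid.astype(float)
    return (radiance * weight).sum(axis=0) / np.maximum(weight.sum(axis=0), 1)
```

A pixel that clips in the long exposure is still recovered exactly from the short one, which is what extends the usable dynamic range.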
High-dynamic-range imaging for cloud segmentation
NASA Astrophysics Data System (ADS)
Dev, Soumyabrata; Savoy, Florian M.; Lee, Yee Hui; Winkler, Stefan
2018-04-01
Sky-cloud images obtained from ground-based sky cameras are usually captured using a fisheye lens with a wide field of view. However, the sky exhibits a large dynamic range in terms of luminance, more than a conventional camera can capture. It is thus difficult to capture the details of an entire scene with a regular camera in a single shot. In most cases, the circumsolar region is overexposed, and the regions near the horizon are underexposed. This renders cloud segmentation for such images difficult. In this paper, we propose HDRCloudSeg - an effective method for cloud segmentation using high-dynamic-range (HDR) imaging based on multi-exposure fusion. We describe the HDR image generation process and release a new database to the community for benchmarking. Our proposed approach is the first using HDR radiance maps for cloud segmentation and achieves very good results.
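The multi-exposure fusion idea can be sketched as below, weighting each exposure by its "well-exposedness" in the spirit of Mertens-style fusion; this toy function is an assumption for illustration and is not the HDRCloudSeg pipeline itself:

```python
import numpy as np

def exposure_fusion(stack, sigma=0.2):
    """Toy multi-exposure fusion: weight each exposure by how close its
    pixels are to mid-gray, then blend. `stack` is (N, H, W) in [0, 1];
    overexposed and underexposed pixels receive small weights."""
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    w /= w.sum(axis=0, keepdims=True)            # normalize weights per pixel
    return (w * stack).sum(axis=0)               # per-pixel weighted blend
```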
Optical character recognition of camera-captured images based on phase features
NASA Astrophysics Data System (ADS)
Diaz-Escobar, Julia; Kober, Vitaly
2015-09-01
Nowadays most digital information is obtained using mobile devices, especially smartphones. In particular, this brings the opportunity for optical character recognition in camera-captured images. For this reason many recognition applications have been developed recently, such as recognition of license plates, business cards, receipts and street signs; document classification; augmented reality; language translation; and so on. Camera-captured images are usually affected by geometric distortions, nonuniform illumination, shadows and noise, which make the recognition task difficult for existing systems. It is well known that the Fourier phase contains much of the important information in an image, independent of the Fourier magnitude. So, in this work we propose a phase-based recognition system exploiting phase-congruency features for illumination/scale invariance. The performance of the proposed system is tested in terms of misclassifications and false alarms with the help of computer simulation.
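Why the Fourier phase carries the useful information can be illustrated with phase-only correlation, which registers two images using the phase of the cross-power spectrum alone; this generic sketch is not the paper's phase-congruency recognizer:

```python
import numpy as np

def phase_correlate(a, b):
    """Phase-only correlation: normalize the cross-power spectrum to unit
    magnitude (discarding the Fourier magnitude entirely) and locate the
    correlation peak, which gives the translation of b relative to a."""
    F = np.fft.fft2(b) * np.conj(np.fft.fft2(a))
    R = F / np.maximum(np.abs(F), 1e-12)         # keep phase, drop magnitude
    corr = np.fft.ifft2(R).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    return int(dy), int(dx)
```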
3D reconstruction based on light field images
NASA Astrophysics Data System (ADS)
Zhu, Dong; Wu, Chunhong; Liu, Yunluo; Fu, Dongmei
2018-04-01
This paper proposes a method of reconstructing a three-dimensional (3D) scene from two light field images captured by a Lytro Illum. The work was carried out by first extracting the sub-aperture images from the light field images and using the scale-invariant feature transform (SIFT) for feature registration on the selected sub-aperture images. The structure-from-motion (SFM) algorithm is then used on the registered sub-aperture images to reconstruct the three-dimensional scene, yielding a 3D sparse point cloud. The method shows that 3D reconstruction can be implemented with only two light field camera captures, rather than the dozen or more captures required by traditional cameras. This can effectively address the time-consuming and laborious nature of 3D reconstruction based on traditional digital cameras, achieving a more rapid, convenient and accurate reconstruction.
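One building block of such a structure-from-motion pipeline is linear two-view triangulation, sketched below with the standard DLT formulation (a generic method, not code from the paper):

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) two-view triangulation: given 3x4 projection matrices
    P1, P2 and a matched point (x1 in view 1, x2 in view 2), solve A*X = 0
    for the homogeneous 3D point via SVD."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                                   # null-space vector
    return X[:3] / X[3]                          # homogeneous -> Euclidean
```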
Cell phone camera ballistics: attacks and countermeasures
NASA Astrophysics Data System (ADS)
Steinebach, Martin; Liu, Huajian; Fan, Peishuai; Katzenbeisser, Stefan
2010-01-01
Multimedia forensics deals with the analysis of multimedia data to gather information on its origin and authenticity. One therefore needs to distinguish classical criminal forensics (which today also uses multimedia data as evidence) from multimedia forensics, where the actual case is based on a media file. One example of the latter is camera forensics, where pixel error patterns are used as fingerprints identifying a camera as the source of an image. Of course multimedia forensics can become a tool for criminal forensics when evidence used in a criminal investigation is likely to be manipulated. At this point important questions arise: How reliable are these algorithms? Can a judge trust their results? How easy are they to manipulate? In this work we show how camera forensics can be attacked and introduce a potential countermeasure against these attacks.
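Camera fingerprinting of the kind attacked here typically correlates an image's noise residual against a reference sensor-noise pattern. The sketch below uses a crude box-filter denoiser in place of the wavelet denoisers used in practice; names and parameters are illustrative assumptions:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def noise_residual(img, k=3):
    """Crude noise residual: image minus a k-by-k box-filtered version.
    Real PRNU forensics uses a wavelet denoiser; this is only a sketch."""
    pad = k // 2
    padded = np.pad(img.astype(float), pad, mode='edge')
    smooth = sliding_window_view(padded, (k, k)).mean(axis=(-1, -2))
    return img - smooth

def fingerprint_match(residual, fingerprint):
    """Normalized cross-correlation between a residual and a reference
    sensor-noise fingerprint; high values suggest the same camera."""
    r = residual - residual.mean()
    f = fingerprint - fingerprint.mean()
    return (r * f).sum() / np.sqrt((r ** 2).sum() * (f ** 2).sum())
```

A fingerprint-copy attack of the kind the paper studies would add a forged pattern to an image so that this correlation spikes for the wrong camera.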
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
Can Wireless Technology Enable New Diabetes Management Tools?
Hedtke, Paul A.
2008-01-01
Mobile computing and communications technology embodied in the modern cell phone device can be employed to improve the lives of diabetes patients by giving them better tools for self-management. Several companies are working on the development of diabetes management tools that leverage the ubiquitous cell phone to bring self-management tools to the hand of the diabetes patient. Integration of blood glucose monitoring (BGM) technology with the cell phone platform adds a level of convenience for the person with diabetes, but, more importantly, allows BGM data to be automatically captured, logged, and processed in near real time in order to provide the diabetes patient with assistance in managing their blood glucose levels. Other automatic measurements can estimate physical activity, and information regarding medication events and food intake can be captured and analyzed in order to provide the diabetes patient with continual assistance in managing their therapy and behaviors in order to improve glycemic control. The path to realization of such solutions is not, however, without obstacles. PMID:19885187
Grommon, Eric
2018-02-01
Cell phones in correctional facilities have emerged as one of the most pervasive forms of modern contraband. This issue has been identified as a top priority for many correctional administrators in the United States. Managed access, a technology that utilizes cellular signals to capture transmissions from contraband phones, has received notable attention as a promising tool to combat this problem. However, this technology has received little evaluative attention. The present study offers a foundational process evaluation and draws upon output measures and stakeholder interviews to identify salient operational challenges and subsequent lessons learned about implementing and maintaining a managed access system. Findings suggest that while managed access captures large volumes of contraband cellular transmissions, the technology requires significant implementation planning, personnel support, and complex partnerships with commercial cellular carriers. Lessons learned provide guidance for practitioners to navigate these challenges and for scholars to improve future evaluations of managed access. Copyright © 2017 Elsevier Ltd. All rights reserved.
Lim, Eugene Y; Lee, Chiang; Cai, Weidong; Feng, Dagan; Fulham, Michael
2007-01-01
Medical practice is characterized by a high degree of heterogeneity in collaborative and cooperative patient care. Fast and effective communication between medical practitioners can improve patient care. In medical imaging, the fast delivery of medical reports to referring medical practitioners is a major component of cooperative patient care. Recently, mobile phones have been actively deployed in telemedicine applications. The mobile phone is an ideal medium to achieve faster delivery of reports to the referring medical practitioners. In this study, we developed an electronic medical report delivery system from a medical imaging department to the mobile phones of the referring doctors. The system extracts a text summary of medical report and a screen capture of diagnostic medical image in JPEG format, which are transmitted to 3G GSM mobile phones.
An intelligent space for mobile robot localization using a multi-camera system.
Rampinelli, Mariana; Covre, Vitor Buback; de Queiroz, Felippe Mendonça; Vassallo, Raquel Frizera; Bastos-Filho, Teodiano Freire; Mazo, Manuel
2014-08-15
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Once the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
Density estimation in a wolverine population using spatial capture-recapture models
Royle, J. Andrew; Magoun, Audrey J.; Gardner, Beth; Valkenbury, Patrick; Lowell, Richard E.; McKelvey, Kevin
2011-01-01
Classical closed-population capture-recapture models do not accommodate the spatial information inherent in encounter history data obtained from camera-trapping studies. As a result, individual heterogeneity in encounter probability is induced, and it is not possible to estimate density objectively because trap arrays do not have a well-defined sample area. We applied newly-developed capture-recapture models that accommodate the spatial attribute inherent in capture-recapture data to a population of wolverines (Gulo gulo) in Southeast Alaska in 2008. We used camera-trapping data collected from 37 cameras in a 2,140-km2 area of forested and open habitats largely enclosed by ocean and glacial icefields. We detected 21 unique individuals 115 times. Wolverines exhibited a strong positive trap response, with an increased tendency to revisit previously visited traps. Under the trap-response model, we estimated wolverine density at 9.7 individuals/1,000 km2 (95% Bayesian CI: 5.9-15.0). Our model provides a formal statistical framework for estimating density from wolverine camera-trapping studies that accounts for a behavioral response due to baited traps. Further, our model-based estimator does not have strict requirements about the spatial configuration of traps or length of trapping sessions, providing considerable operational flexibility in the development of field studies.
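The spatial attribute enters such models through a distance-dependent encounter probability. A common choice, sketched below, is the half-normal detection function; this generic form omits the paper's trap-response (behavioral) term:

```python
import numpy as np

def detection_prob(traps, activity_center, p0=0.8, sigma=1.5):
    """Half-normal encounter model used in spatial capture-recapture:
    detection probability at each trap decays with squared distance from
    the animal's activity center. p0 is baseline detection probability,
    sigma the spatial scale (both assumed values here)."""
    d2 = ((traps - activity_center) ** 2).sum(axis=1)
    return p0 * np.exp(-d2 / (2 * sigma ** 2))
```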
Printed products for digital cameras and mobile devices
NASA Astrophysics Data System (ADS)
Fageth, Reiner; Schmidt-Sacht, Wulf
2005-01-01
Digital photography is no longer simply a successor to film. The digital market is now driven by additional devices such as mobile phones with camera and video functions (camphones) as well as innovative products derived from digital files. A large number of consumers do not print their images and non-printing has become the major enemy of wholesale printers, home printing suppliers and retailers. This paper addresses the challenge facing our industry, namely how to encourage the consumer to print images easily and conveniently from all types of digital media.
Restoration of hot pixels in digital imagers using lossless approximation techniques
NASA Astrophysics Data System (ADS)
Hadar, O.; Shleifer, A.; Cohen, E.; Dotan, Y.
2015-09-01
During the last twenty years, digital imagers have spread into industrial and everyday devices, such as satellites, security cameras, cell phones, laptops and more. "Hot pixels" are the main defects in remote digital cameras. In this paper we demonstrate an improvement on existing restoration methods that use (solely or as an auxiliary tool) some average of the surrounding pixels, such as the method of the Chapman-Koren study 1,2. The proposed method uses the CALIC algorithm and adapts it to make full use of the surrounding pixels.
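The baseline being improved upon, repair from surrounding pixels, can be sketched as a neighbor-median fill; the CALIC-based predictor itself is not reproduced here:

```python
import numpy as np

def repair_hot_pixels(img, hot_mask):
    """Baseline hot-pixel repair: replace each flagged pixel with the
    median of its non-defective 3x3 neighbors. This is the kind of
    surrounding-pixel estimate the paper's CALIC-based method refines."""
    out = img.astype(float).copy()
    H, W = img.shape
    for y, x in zip(*np.nonzero(hot_mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, H)
        x0, x1 = max(x - 1, 0), min(x + 2, W)
        patch = img[y0:y1, x0:x1].astype(float)
        good = patch[~hot_mask[y0:y1, x0:x1]]    # exclude defective pixels
        out[y, x] = np.median(good)
    return out
```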
ERIC Educational Resources Information Center
Daneman, Kathy
1998-01-01
Describes the integration of security systems to provide enhanced security that is both effective and long lasting. Examines combining card-access systems with camera surveillance, highly visible emergency phones, and security officers as one of many possible combinations. Some systems most capable of being integrated are listed. (GR)
Cyberbullying Knows No Borders
ERIC Educational Resources Information Center
Miller, Jerold D.; Hufstedler, Shirley M.
2009-01-01
Cyberbullying is a global problem with a wide range of incidents reported in many countries. This form of bullying may be defined as harassment using technology such as social websites (MySpace/Facebook), email, chat rooms, mobile phone texting and cameras, picture messages (including sexting), IM (instant messages), or blogs. Cyberbullying…
Smartphone based point-of-care detector of urine albumin
NASA Astrophysics Data System (ADS)
Cmiel, Vratislav; Svoboda, Ondrej; Koscova, Pavlina; Provaznik, Ivo
2016-03-01
Albumin plays an important role in human body. Its changed level in urine may indicate serious kidney disorders. We present a new point-of-care solution for sensitive detection of urine albumin - the miniature optical adapter for iPhone with in-built optical filters and a sample slot. The adapter exploits smart-phone flash to generate excitation light and camera to measure the level of emitted light. Albumin Blue 580 is used as albumin reagent. The proposed light-weight adapter can be produced at low cost using a 3D printer. Thus, the miniaturized detector is easy to use out of lab.
Muon Trigger for Mobile Phones
NASA Astrophysics Data System (ADS)
Borisyak, M.; Usvyatsov, M.; Mulhearn, M.; Shimmin, C.; Ustyuzhanin, A.
2017-10-01
The CRAYFIS experiment proposes to use privately owned mobile phones as a ground detector array for Ultra High Energy Cosmic Rays. Upon interacting with Earth's atmosphere, these events produce extensive particle showers which can be detected by cameras on mobile phones. A typical shower contains minimally-ionizing particles such as muons. As these particles interact with CMOS image sensors, they may leave tracks of faintly-activated pixels that are sometimes hard to distinguish from random detector noise. Triggers that rely on the presence of very bright pixels within an image frame are not efficient in this case. We present a trigger algorithm based on Convolutional Neural Networks which selects images containing such tracks and is evaluated in a lazy manner: the response of each successive layer is computed only if the activation of the current layer satisfies a continuation criterion. Use of neural networks increases the sensitivity considerably compared with image thresholding, while the lazy evaluation allows for execution of the trigger under the limited computational power of mobile phones.
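The lazy-evaluation scheme can be sketched as a cascade that stops computing as soon as a stage's activation falls below its continuation threshold. The stage interface here (callables returning a score and features for the next stage) is an assumption for illustration:

```python
import numpy as np

def lazy_trigger(frame, layers, thresholds):
    """Lazy cascade evaluation: run each stage only while its activation
    exceeds the continuation threshold; reject the frame at the first
    failure, so most (noise-only) frames cost almost nothing."""
    x = frame
    for layer, thresh in zip(layers, thresholds):
        score, x = layer(x)
        if score < thresh:
            return False                         # early exit: cheap rejection
    return True                                  # all stages fired: keep frame
```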
Internet of Things Platform for Smart Farming: Experiences and Lessons Learnt
Jayaraman, Prem Prakash; Yavari, Ali; Georgakopoulos, Dimitrios; Morshed, Ahsan; Zaslavsky, Arkady
2016-01-01
Improving farm productivity is essential for increasing farm profitability and meeting the rapidly growing demand for food that is fuelled by rapid population growth across the world. Farm productivity can be increased by understanding and forecasting crop performance in a variety of environmental conditions. Crop recommendation is currently based on data collected in field-based agricultural studies that capture crop performance under a variety of conditions (e.g., soil quality and environmental conditions). However, crop performance data collection is currently slow, as such crop studies are often undertaken in remote and distributed locations, and such data are typically collected manually. Furthermore, the quality of manually collected crop performance data is very low, because it does not take into account earlier conditions that have not been observed by the human operators but are essential for filtering out collected data that would lead to invalid conclusions (e.g., solar radiation readings in the afternoon after even a short rain or an overcast morning are invalid, and should not be used in assessing crop performance). Emerging Internet of Things (IoT) technologies, such as IoT devices (e.g., wireless sensor networks, network-connected weather stations, cameras, and smart phones), can be used to collect vast amounts of environmental and crop performance data, ranging from time series data from sensors, to spatial data from cameras, to human observations collected and recorded via mobile smart phone applications. Such data can then be analysed to filter out invalid data and compute personalised crop recommendations for any specific farm.
In this paper, we present the design of SmartFarmNet, an IoT-based platform that can automate the collection of environmental, soil, fertilisation, and irrigation data; automatically correlate such data and filter-out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. SmartFarmNet can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and our experiences and lessons learnt in developing this system concludes the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations. PMID:27834862
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-010 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-005 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Latch of HST aft shroud photographed by Electronic Still Camera
1993-12-04
S61-E-004 (4 Dec 1993) --- This close-up view of a latch on the minus V3 aft shroud door of the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Nonvolatile memory chips: critical technology for high-performance recce systems
NASA Astrophysics Data System (ADS)
Kaufman, Bruce
2000-11-01
Airborne recce systems universally require nonvolatile storage of recorded data. Both present and next-generation designs make use of flash memory chips. Flash memory devices are in high-volume use for a variety of commercial products ranging from cellular phones to digital cameras. Fortunately, commercial applications call for increasing capacities and fast write times. These parameters are important to the designer of recce recorders. Of economic necessity, COTS devices are used in recorders that must perform in military avionics environments. Concurrently, recording rates are moving to ≥10 Gb/s. Thus, to capture imagery for even a few minutes of record time, tactically meaningful solid-state recorders will require storage capacities in the hundreds of gigabytes. Even with memory chip densities at present-day 512 Mb, such capacities require thousands of chips. The demands on packaging technology are daunting. This paper will consider the differing flash chip architectures, both available and projected, and discuss the impact on recorder architecture and performance. Emerging nonvolatile memory technologies, FeRAM and MRAM, will be reviewed with regard to their potential use in recce recorders.
The Feasibility of 3d Point Cloud Generation from Smartphones
NASA Astrophysics Data System (ADS)
Alsubaie, N.; El-Sheimy, N.
2016-06-01
This paper proposes a new technique for increasing the accuracy of the directly geo-referenced, image-based 3D point cloud generated from low-cost sensors in smartphones. The smartphone's motion sensors are used to directly acquire the Exterior Orientation Parameters (EOPs) of the captured images. These EOPs, along with the Interior Orientation Parameters (IOPs) of the camera/phone, are used to reconstruct the image-based 3D point cloud. However, because smartphone sensors suffer from poor GPS accuracy, accumulated drift and high signal noise, inaccurate 3D mapping solutions often result. Therefore, horizontal and vertical linear features, visible in each image, are extracted and used as constraints in the bundle adjustment procedure. These constraints correct the relative position and orientation of the 3D mapping solution. Once the enhanced EOPs are estimated, the semi-global matching algorithm (SGM) is used to generate the image-based dense 3D point cloud. Statistical analysis and assessment are implemented herein in order to demonstrate the feasibility of 3D point cloud generation from the consumer-grade sensors in smartphones.
A real-time remote video streaming platform for ultrasound imaging.
Ahmadi, Mehdi; Gross, Warren J; Kadoury, Samuel
2016-08-01
Ultrasound is a viable imaging technology in remote and resource-limited areas. Ultrasonography is a user-dependent skill which requires a high degree of training and hands-on experience. However, only a limited number of skillful sonographers are located in remote areas. In this work, we aim to develop a real-time video streaming platform which allows specialist physicians to remotely monitor ultrasound exams. To this end, an ultrasound stream is captured and transmitted through a wireless network to remote computers, smartphones and tablets. In addition, the system is equipped with a camera to track the position of the ultrasound probe. The main advantage of our work is the use of an open-source platform for video streaming, which gives us more control over streaming parameters than the available commercial products. The transmission delays of the system were evaluated for several ultrasound video resolutions, and the results show that ultrasound videos close to high-definition (HD) resolution can be received and displayed on an Android tablet with a delay of 0.5 seconds, which is acceptable for accurate real-time diagnosis.
Remote gaze tracking system on a large display.
Lee, Hyeon Chang; Lee, Won Oh; Cho, Chul Woo; Gwon, Su Yeong; Park, Kang Ryoung; Lee, Heekyung; Cha, Jihun
2013-10-07
We propose a new remote gaze tracking system as an intelligent TV interface. Our research is novel in the following three ways: first, because a user can sit at various positions in front of a large display, the capture volume of the gaze tracking system should be greater, so the proposed system includes two cameras which can be moved simultaneously by panning and tilting mechanisms, a wide view camera (WVC) for detecting eye position and an auto-focusing narrow view camera (NVC) for capturing enlarged eye images. Second, in order to remove the complicated calibration between the WVC and NVC and to enhance the capture speed of the NVC, these two cameras are combined in a parallel structure. Third, the auto-focusing of the NVC is achieved on the basis of both the user's facial width in the WVC image and a focus score calculated on the eye image of the NVC. Experimental results showed that the proposed system can be operated with a gaze tracking accuracy of ±0.737°~±0.775° and a speed of 5~10 frames/s.
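A focus score of the kind that can drive such auto-focusing is commonly computed as the variance of a Laplacian response; the paper's exact metric on the NVC eye image may differ from this generic sketch:

```python
import numpy as np

def focus_score(eye_img):
    """Generic sharpness measure: variance of a 4-neighbor Laplacian
    response (computed here with wrap-around via np.roll for brevity).
    Sharp, in-focus images score high; defocused images score low."""
    img = eye_img.astype(float)
    lap = (-4 * img
           + np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1))
    return lap.var()
```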
A low-cost dual-camera imaging system for aerial applicators
USDA-ARS?s Scientific Manuscript database
Agricultural aircraft provide a readily available remote sensing platform as low-cost and easy-to-use consumer-grade cameras are being increasingly used for aerial imaging. In this article, we report on a dual-camera imaging system we recently assembled that can capture RGB and near-infrared (NIR) i...
ERIC Educational Resources Information Center
Caldwell, Andy
2005-01-01
In 2001, the "All Sky Camera Network" came to life as an outreach program to connect the Denver Museum of Nature and Science (DMNS) exhibit "Space Odyssey" with Colorado schools. The network is comprised of cameras placed strategically at schools throughout Colorado to capture fireballs--rare events that produce meteorites.…
Camera Ready: Capturing a Digital History of Chester
ERIC Educational Resources Information Center
Lehman, Kathy
2008-01-01
Armed with digital cameras, voice recorders, and movie cameras, students from Thomas Dale High School in Chester, Virginia, have been exploring neighborhoods, interviewing residents, and collecting memories of their hometown. In this article, the author describes "Digital History of Chester", a project for creating a commemorative DVD.…
5 CFR 1201.52 - Public hearings.
Code of Federal Regulations, 2013 CFR
2013-01-01
.... Any objections to the order will be made a part of the record. (b) Electronic devices. Absent express... room; all cell phones, text devices, and all other two-way communications devices shall be powered off in the hearing room. Further, no cameras, recording devices, and/or transmitting devices may be...
5 CFR 1201.52 - Public hearings.
Code of Federal Regulations, 2014 CFR
2014-01-01
.... Any objections to the order will be made a part of the record. (b) Electronic devices. Absent express... room; all cell phones, text devices, and all other two-way communications devices shall be powered off in the hearing room. Further, no cameras, recording devices, and/or transmitting devices may be...
ERIC Educational Resources Information Center
Giles, Rebecca McMahon
2006-01-01
Exposure to cell phones, DVD players, video games, computers, digital cameras, and iPods has made today's young people more technologically advanced than those of any previous generation. As a result, parents are now concerned that their children are spending too much time in front of the computer. In this article, the author focuses her…
ERIC Educational Resources Information Center
Becker, Rick
2012-01-01
The opening sentence in an article posted on edutechteacher.org titled "Video in the Classroom" states "In addition to being fun and motivating, video projects teach students to plan, organize, write, communicate, collaborate, and analyze (2012)." The author goes on to say "With the proliferation of webcams, phone cameras, flip cams, digital…
High-speed imaging using 3CCD camera and multi-color LED flashes
NASA Astrophysics Data System (ADS)
Hijazi, Ala; Friedl, Alexander; Cierpka, Christian; Kähler, Christian; Madhavan, Vis
2017-11-01
This paper demonstrates the possibility of capturing full-resolution, high-speed image sequences using a regular 3CCD color camera in conjunction with high-power light emitting diodes of three different colors. This is achieved using a novel approach, referred to as spectral-shuttering, where a high-speed image sequence is captured using short duration light pulses of different colors that are sent consecutively in very close succession. The work presented in this paper demonstrates the feasibility of configuring a high-speed camera system using low cost and readily available off-the-shelf components. This camera can be used for recording six-frame sequences at frame rates up to 20 kHz or three-frame sequences at even higher frame rates. Both color crosstalk and spatial matching between the different channels of the camera are found to be within acceptable limits. A small amount of magnification difference between the different channels is found and a simple calibration procedure for correcting the images is introduced. The images captured using the approach described here are of good quality to be used for obtaining full-field quantitative information using techniques such as digital image correlation and particle image velocimetry. A sequence of six high-speed images of a bubble splash recorded at 400 Hz is presented as a demonstration.
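The spectral-shuttering timing described above amounts to firing single-color LED pulses in close succession so each CCD channel records one sub-frame; a minimal timing sketch follows, where the 50 µs pulse spacing is an assumed value, not taken from the paper.

```python
def pulse_times(t0, spacing_s, colors=("red", "green", "blue")):
    """Start times of consecutive single-color LED pulses within one
    camera exposure: a list of (time_s, color) pairs."""
    return [(t0 + k * spacing_s, c) for k, c in enumerate(colors)]

def effective_frame_rate(spacing_s):
    """Effective sub-frame rate set by the pulse spacing, in frames/s."""
    return 1.0 / spacing_s
```

With pulses every 50 µs, the three sub-frames of one exposure are separated by 20 kHz intervals; straddling two exposures gives the six-frame burst mentioned in the abstract.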
A position and attitude vision measurement system for wind tunnel slender model
NASA Astrophysics Data System (ADS)
Cheng, Lei; Yang, Yinong; Xue, Bindang; Zhou, Fugen; Bai, Xiangzhi
2014-11-01
A position and attitude vision measurement system for a drop-test slender model in a wind tunnel is designed and developed. The system uses two high-speed cameras: one placed to the side of the model and the other positioned so that it looks up at the model. Simple symbols are set on the model. The main idea of the system is image matching between projection images of the 3D digital model and the images captured by the cameras. First, the pitch angles, roll angles, and centroid position of the model are evaluated by recognizing the symbols in the images captured by the side camera. Then, based on the evaluated attitude and a series of candidate yaw angles, a series of projection images of the 3D digital model is generated. Finally, these projection images are matched with the image captured by the looking-up camera, and the yaw angle corresponding to the best-matching projection image is taken as the yaw angle of the model. Simulation experiments were conducted, and the results show that the maximal error of attitude measurement is less than 0.05°, which meets the demands of wind tunnel testing.
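The yaw search described in the abstract reduces to an argmax over candidate projections; a minimal sketch, with `render` and `score` left as stand-ins for the paper's projection and matching steps.

```python
def best_yaw(captured, render, score, candidate_yaws):
    """Return the candidate yaw whose rendered projection image best
    matches the captured looking-up-camera image under `score`
    (higher = better match)."""
    return max(candidate_yaws, key=lambda yaw: score(render(yaw), captured))
```

In a toy usage, `render` can map a yaw to any comparable representation and `score` can be a negative distance, so the search picks the yaw whose projection is closest to the captured image.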
Spectral colors capture and reproduction based on digital camera
NASA Astrophysics Data System (ADS)
Chen, Defen; Huang, Qingmei; Li, Wei; Lu, Yang
2018-01-01
The purpose of this work is to develop a method for the accurate reproduction of spectral colors captured by a digital camera. The spectral colors, being the purest colors of any hue, are difficult to reproduce without distortion on digital devices. In this paper, we attempt to achieve accurate hue reproduction of the spectral colors by focusing on two steps of color correction: the capture of the spectral colors and the color characterization of the digital camera. This determines the relationship among the spectral color wavelength, the RGB color space of the digital camera, and the CIEXYZ color space. This study also provides a basis for further studies of spectral color reproduction on digital devices. Methods such as wavelength calibration of the spectral colors and digital camera characterization were utilized. The spectrum was obtained through a grating spectroscopy system. A photo of a clear and reliable primary spectrum was taken by adjusting the relevant parameters of the digital camera, from which the RGB values of the color spectrum were extracted at 1040 equally divided locations. Two wavelength values were obtained for each location, one calculated using the grating equation and one measured by a spectrophotometer. The polynomial fitting method for camera characterization was used to achieve color correction. After wavelength calibration, the maximum error between the two sets of wavelengths is 4.38 nm. Using the polynomial fitting method, the average color difference of the test samples is 3.76. This satisfies the application needs of the spectral colors in digital devices, such as display and transmission.
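The camera-characterization step described above can be sketched as a least-squares fit from expanded RGB terms to CIEXYZ. The sketch below uses a first-order (affine) term set and synthetic training patches as illustrative assumptions; the paper's actual polynomial order and sample data are not given here.

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def terms(rgb):
    r, g, b = rgb
    return [1.0, r, g, b]  # extend with r*g, r*r, ... for higher orders

def fit_characterization(rgbs, xyzs):
    """Least squares via normal equations: one coefficient row per XYZ channel."""
    T = [terms(p) for p in rgbs]
    n = len(T[0])
    AtA = [[sum(t[i] * t[j] for t in T) for j in range(n)] for i in range(n)]
    coeffs = []
    for ch in range(3):
        Atb = [sum(T[s][i] * xyzs[s][ch] for s in range(len(T))) for i in range(n)]
        coeffs.append(solve(AtA, Atb))
    return coeffs

def apply_characterization(coeffs, rgb):
    t = terms(rgb)
    return [sum(c * v for c, v in zip(row, t)) for row in coeffs]
```

Fitting on measured (RGB, XYZ) pairs and then applying the coefficients to new RGB readings is the color-correction step; the paper's average color difference of 3.76 was obtained with a higher-order polynomial of this kind.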
NASA Astrophysics Data System (ADS)
Sato, M.; Takahashi, Y.; Kudo, T.; Yanagi, Y.; Kobayashi, N.; Yamada, T.; Project, N.; Stenbaek-Nielsen, H. C.; McHarg, M. G.; Haaland, R. K.; Kammae, T.; Cummer, S. A.; Yair, Y.; Lyons, W. A.; Ahrns, J.; Yukman, P.; Warner, T. A.; Sonnenfeld, R. G.; Li, J.; Lu, G.
2011-12-01
The time evolution and spatial distribution of transient luminous events (TLEs) are key parameters for identifying the relationship between TLEs and their parent lightning discharges, the roles of electromagnetic pulses (EMPs) emitted by horizontal and vertical lightning currents in the formation of TLEs, and the occurrence conditions and mechanisms of TLEs. Since the time scale of TLEs is typically less than a few milliseconds, a new imaging technique that enables us to capture images with a high time resolution of <1 ms is needed. By courtesy of the "Cosmic Shore" project conducted by the Japan Broadcasting Corporation (NHK), we carried out optical observations using a high-speed image-intensified (II) CMOS camera and a high-vision three-CCD camera from a jet aircraft on November 28 and December 3, 2010 in winter Japan. The high-speed II-CMOS camera captures images at 8,300 frames per second (fps), corresponding to a time resolution of 120 µs. The high-vision three-CCD camera captures high-quality, true-color images of TLEs at a 1920x1080 pixel size and a frame rate of 30 fps. During the two observation flights, we detected 28 sprite events and 3 elve events in total. Following this success, we conducted a combined aircraft and ground-based campaign of TLE observations over the High Plains of the US in summer, installing the same NHK high-speed and high-vision cameras in a jet aircraft. In the period from June 27 to July 10, 2011, we operated aircraft observations on 8 nights, capturing TLE images for over a hundred events with the high-vision camera and simultaneously acquiring over 40 high-speed image sequences.
At the presentation, we will outline the two aircraft campaigns, describe the characteristics of the time evolution and spatial distribution of the TLEs observed in winter Japan, and show initial results from the high-speed image data analysis of the TLEs observed in the summer US campaign.
Mobile phone-based biosensing: An emerging "diagnostic and communication" technology.
Quesada-González, Daniel; Merkoçi, Arben
2017-06-15
In this review we discuss recent developments on the use of mobile phones and similar devices for biosensing applications in which diagnostics and communications are coupled. Owing to the capabilities of mobile phones (their cameras, connectivity, portability, etc.) and to advances in biosensing, the coupling of these two technologies is enabling portable and user-friendly analytical devices. Any user can now perform quick, robust and easy (bio)assays anywhere and at any time. Among the most widely reported of such devices are paper-based platforms. Herein we provide an overview of a broad range of biosensing possibilities, from optical to electrochemical measurements; explore the various reported designs for adapters; and consider future opportunities for this technology in fields such as health diagnostics, safety & security, and environment monitoring. Copyright © 2016 Elsevier B.V. All rights reserved.
Using reality mining to improve public health and medicine.
Pentland, Alex; Lazer, David; Brewer, Devon; Heibeck, Tracy
2009-01-01
We live our lives in digital networks. We wake up in the morning, check our e-mail, make a quick phone call, commute to work, buy lunch. Many of these transactions leave digital breadcrumbs--tiny records of our daily experiences. Reality mining, which pulls together these crumbs using statistical analysis and machine learning methods, offers an increasingly comprehensive picture of our lives, both individually and collectively, with the potential of transforming our understanding of ourselves, our organizations, and our society in a fashion that was barely conceivable just a few years ago. It is for this reason that reality mining was recently identified by Technology Review as one of "10 emerging technologies that could change the world". Many everyday devices provide the raw database upon which reality mining builds; sensors in mobile phones, cars, security cameras, RFID ('smart card') readers, and others, all allow for the measurement of human physical and social activity. Computational models based on such data have the potential to dramatically transform the arenas of both individual and community health. Reality mining can provide new opportunities with respect to diagnosis, patient and treatment monitoring, health services planning, surveillance of disease and risk factors, and public health investigation and disease control. Currently, the single most important source of reality mining data is the ubiquitous mobile phone. Every time a person uses a mobile phone, a few bits of information are left behind. The phone pings the nearest mobile-phone towers, revealing its location. The mobile phone service provider records the duration of the call and the number dialed. In the near future, mobile phones and other technologies will collect even more information about their users, recording everything from their physical activity to their conversational cadences. 
While such data pose a potential threat to individual privacy, they also offer great potential value both to individuals and communities. With the aid of data-mining algorithms, these data could shed light on individual patterns of behavior and even on the well-being of communities, creating new ways to improve public health and medicine. To illustrate, consider two examples of how reality mining may benefit individual health care. By taking advantage of special sensors in mobile phones, such as the microphone or the accelerometers built into newer devices such as Apple's iPhone, important diagnostic data can be captured. Clinical pilot data demonstrate that it may be possible to diagnose depression from the way a person talks--a depressed person tends to speak more slowly, a change that speech analysis software on a phone might recognize more readily than friends or family do. Similarly, monitoring a phone's motion sensors can also reveal small changes in gait, which could be an early indicator of ailments such as Parkinson's disease. Within the next few years reality mining will become more common, thanks in part to the proliferation and increasing sophistication of mobile phones. Many handheld devices now have the processing power of low-end desktop computers, and they can also collect more varied data, due to components such as GPS chips that track location. The Chief Technology Officer of EMC, a large digital storage company, estimates that this sort of personal sensor data will balloon from 10% of all stored information to 90% within the next decade. While the promise of reality mining is great, the idea of collecting so much personal information naturally raises many questions about privacy. It is crucial that behavior-logging technology not be forced on anyone. But legal statutes are lagging behind data collection capabilities, making it particularly important to begin discussing how the technology will and should be used. 
Therefore, an additional focus of this chapter will be the development of a legal and ethical framework concerning the data used by reality mining techniques.
NASA Astrophysics Data System (ADS)
Berg, Brandon; Cortazar, Bingen; Tseng, Derek; Ozkan, Haydar; Feng, Steve; Wei, Qingshan; Chan, Raymond Y.; Burbano, Jordi; Farooqui, Qamar; Lewinski, Michael; Di Carlo, Dino; Garner, Omai B.; Ozcan, Aydogan
2016-03-01
Enzyme-linked immunosorbent assay (ELISA) in a microplate format has been a gold standard first-line clinical test for diagnosis of various diseases including infectious diseases. However, this technology requires a relatively large and expensive multi-well scanning spectrophotometer to read and quantify the signal from each well, hindering its implementation in resource-limited-settings. Here, we demonstrate a cost-effective and handheld smartphone-based colorimetric microplate reader for rapid digitization and quantification of immunoserology-related ELISA tests in a conventional 96-well plate format at the point of care (POC). This device consists of a bundle of 96 optical fibers to collect the transmitted light from each well of the microplate and direct all the transmission signals from the wells onto the camera of the mobile-phone. Captured images are then transmitted to a remote server through a custom-designed app, and both quantitative and qualitative diagnostic results are returned back to the user within ~1 minute per 96-well plate by using a machine learning algorithm. We tested this mobile-phone based micro-plate reader in a clinical microbiology lab using FDA-approved mumps IgG, measles IgG, and herpes simplex virus IgG (HSV-1 and HSV-2) ELISA tests on 1138 remnant patient samples (roughly 50% training and 50% testing), and achieved an overall accuracy of ~99% or higher for each ELISA test. This handheld and cost-effective platform could be immediately useful for large-scale vaccination monitoring in low-infrastructure settings, and also for other high-throughput disease screening applications at POC.
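The paper's quantification relies on a trained machine-learning model; as a simpler, standard illustration of how per-well transmitted light becomes a colorimetric readout, here is the Beer-Lambert absorbance with a threshold call. The cutoff value is an assumption for illustration, not a value from the paper.

```python
import math

def absorbance(i_transmitted, i_blank):
    """Beer-Lambert absorbance of one well from its transmitted
    intensity relative to a blank (reagent-only) well."""
    return -math.log10(i_transmitted / i_blank)

def classify_well(i_transmitted, i_blank, cutoff=0.5):
    """Crude positive/negative call from absorbance; the 0.5 cutoff is
    an illustrative assumption."""
    return "positive" if absorbance(i_transmitted, i_blank) >= cutoff else "negative"
```

In the device described above, each of the 96 fiber-coupled intensities would feed a computation of this general shape before (or alongside) the machine-learning classification on the remote server.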
Effects-Driven Participatory Design: Learning from Sampling Interruptions.
Brandrup, Morten; Østergaard, Kija Lin; Hertzum, Morten; Karasti, Helena; Simonsen, Jesper
2017-01-01
Participatory design (PD) can play an important role in obtaining benefits from healthcare information technologies, but we contend that to fulfil this role PD must incorporate feedback from real use of the technologies. In this paper we describe an effects-driven PD approach that revolves around a sustained focus on pursued effects and uses the experience sampling method (ESM) to collect real-use feedback. To illustrate the use of the method we analyze a case that involves the organizational implementation of electronic whiteboards at a Danish hospital to support the clinicians' intra- and interdepartmental coordination. The hospital aimed to reduce the number of phone calls involved in coordinating work because many phone calls were seen as unnecessary interruptions. To learn about the interruptions we introduced an app for capturing quantitative data and qualitative feedback about the phone calls. The investigation showed that the electronic whiteboards had little potential for reducing the number of phone calls at the operating ward. The combination of quantitative data and qualitative feedback worked both as a basis for aligning assumptions to data and showed ESM as an instrument for triggering in-situ reflection. The participant-driven design and redesign of the way data were captured by means of ESM is a central contribution to the understanding of how to conduct effects-driven PD.
Uddin, Jasim; Biswas, Tuhin; Adhikary, Gourab; Ali, Wazed; Alam, Nurul; Palit, Rajesh; Uddin, Nizam; Uddin, Aftab; Khatun, Fatema; Bhuiya, Abbas
2017-07-06
Mobile phone-based technology has been used to improve the delivery of healthcare services in many countries. However, data on the effects of this technology on improving primary healthcare services in resource-poor settings are limited. The aim of this study is to develop and test a mobile phone-based system to improve health, population and nutrition services in rural Bangladesh and to evaluate its impact on service delivery. The study will use a quasi-experimental pre-post design with intervention and comparison areas. Outcome indicators will include: antenatal care (ANC), delivery care, postnatal care (PNC), neonatal care, expanded programme on immunization (EPI) coverage, and contraceptive prevalence rate (CPR). The study will be conducted over a period of 30 months, using the existing health systems of Bangladesh. The intervention will be implemented through the existing service-delivery personnel at various primary-care levels, such as the community clinic, the union health and family welfare centre, and the upazila health complex. These healthcare providers will be given mobile phones equipped with apps for sending text and voice messages, along with Internet access and a data-capture tool. Selected service providers will be trained in handling the smartphones, data capture, and monitoring, as well as in entering, editing, verifying, and monitoring the outcome variables. Mobile phone-based technology has the potential to improve primary healthcare services in low-income countries like Bangladesh. We expect that our study will contribute to testing and developing a mobile phone-based intervention to improve the coverage and quality of services, and that the lessons learned can be applied in similar settings in other low- and middle-income countries.
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2015-10-01
the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external
Template-based education toolkit for mobile platforms
NASA Astrophysics Data System (ADS)
Golagani, Santosh Chandana; Esfahanian, Moosa; Akopian, David
2012-02-01
Nowadays mobile phones are the most widely used portable devices; they evolve very fast, adding new features and improving the user experience. The latest generation of hand-held devices, called smartphones, is equipped with superior memory, cameras, and rich multimedia features, empowering people to use their mobile phones not only as a communication tool but also for entertainment. With many young students showing interest in learning mobile application development, one should introduce novel learning methods that can adapt to fast technology changes and introduce students to application development. Mobile phones have become commonplace, and the engineering community incorporates phones in various solutions. To overcome the limitations of conventional undergraduate electrical engineering (EE) education, this paper explores the concept of template-based education in mobile phone programming. The concept is based on developing small exercise templates that students can manipulate and revise for a quick hands-on introduction to application development and integration. The Android platform is used as a popular open-source environment for application development. The exercises relate to image processing topics typically studied by many students. The goal is to enable conventional course enhancements by incorporating short hands-on learning modules.
m-Learning and holography: Compatible techniques?
NASA Astrophysics Data System (ADS)
Calvo, Maria L.
2014-07-01
Over the last few decades, cell phones have become increasingly popular and are nowadays ubiquitous. New generations of cell phones are equipped with text messaging, internet access, and camera features, and they are now making their way into the classroom. This is creating a new teaching and learning technique, so-called m-Learning (mobile learning). Because of the many benefits that cell phones offer, teachers could easily use them as a teaching and learning tool. However, additional work is needed from teachers to define and develop how students are introduced to m-Learning in the classroom. As an example, optical techniques based upon interference and diffraction phenomena, such as holography, appear to be convenient topics for m-Learning: they can be approached with simple examples and experiments within the capabilities of cell phones and the accessibility of the classroom. We present some results obtained at the Faculty of Physical Sciences at UCM on very simple holographic recordings made with cell phones. The activities were carried out within the course on Optical Coherence and Lasers, offered to fourth-year students of the Degree in Physical Sciences. Some open conclusions and proposals will be presented.
Capturing migration phenology of terrestrial wildlife using camera traps
Tape, Ken D.; Gustine, David D.
2014-01-01
Remote photography, using camera traps, can be an effective and noninvasive tool for capturing the migration phenology of terrestrial wildlife. We deployed 14 digital cameras along a 104-kilometer longitudinal transect to record the spring migrations of caribou (Rangifer tarandus) and ptarmigan (Lagopus spp.) in the Alaskan Arctic. The cameras recorded images at 15-minute intervals, producing approximately 40,000 images, including 6685 caribou observations and 5329 ptarmigan observations. The northward caribou migration was evident because the median caribou observation (i.e., herd median) occurred later with increasing latitude; average caribou migration speed also increased with latitude (r2 = .91). Except at the northernmost latitude, a northward ptarmigan migration was similarly evident (r2 = .93). Future applications of this method could be used to examine the conditions proximate to animal movement, such as habitat or snow cover, that may influence migration phenology.
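The phenology analysis above (median observation per camera, then a latitude regression) can be sketched as follows; the observation data and the use of transect distance in place of latitude are illustrative assumptions, not the paper's data.

```python
from statistics import median

def herd_median_by_camera(observations):
    """observations: {km_along_transect: [day_of_year, ...]} ->
    median observation day at each camera (the 'herd median')."""
    return {km: median(days) for km, days in observations.items()}

def ols(xs, ys):
    """Simple ordinary least squares: slope, intercept, and r-squared."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    slope = sxy / sxx
    intercept = my - slope * mx
    ss_res = sum((y - (intercept + slope * x)) ** 2 for x, y in zip(xs, ys))
    ss_tot = sum((y - my) ** 2 for y in ys)
    r2 = 1 - ss_res / ss_tot if ss_tot else 1.0
    return slope, intercept, r2
```

A positive slope (days per km) indicates a northward migration, and its reciprocal gives an average migration speed in km per day, the quantity behind the r² values quoted in the abstract.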
HST High Gain Antennae photographed by Electronic Still Camera
1993-12-04
S61-E-009 (4 Dec 1993) --- This view of one of two High Gain Antennae (HGA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC). The scene was down linked to ground controllers soon after the Space Shuttle Endeavour caught up to the orbiting telescope 320 miles above Earth. Shown here before grapple, the HST was captured on December 4, 1993 in order to service the telescope. Over a period of five days, four of the seven STS-61 crew members will work in alternating pairs outside Endeavour's shirt sleeve environment. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-04
S61-E-002 (4 Dec 1993) --- This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed from inside Endeavour's cabin with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. This view features the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
HST Solar Arrays photographed by Electronic Still Camera
1993-12-04
S61-E-003 (4 Dec 1993) --- This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and down linked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Endeavour's crew captured the HST on December 4, 1993 in order to service the telescope over a period of five days. Four of the crew members will work in alternating pairs outside Endeavour's shirt sleeve environment to service the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Development of high-speed video cameras
NASA Astrophysics Data System (ADS)
Etoh, Takeharu G.; Takehara, Kohsei; Okinaka, Tomoo; Takano, Yasuhide; Ruckelshausen, Arno; Poggemann, Dirk
2001-04-01
Presented in this paper is an outline of the R and D activities on high-speed video cameras, which have been done in Kinki University since more than ten years ago, and are currently proceeded as an international cooperative project with University of Applied Sciences Osnabruck and other organizations. Extensive marketing researches have been done, (1) on user's requirements on high-speed multi-framing and video cameras by questionnaires and hearings, and (2) on current availability of the cameras of this sort by search of journals and websites. Both of them support necessity of development of a high-speed video camera of more than 1 million fps. A video camera of 4,500 fps with parallel readout was developed in 1991. A video camera with triple sensors was developed in 1996. The sensor is the same one as developed for the previous camera. The frame rate is 50 million fps for triple-framing and 4,500 fps for triple-light-wave framing, including color image capturing. Idea on a video camera of 1 million fps with an ISIS, In-situ Storage Image Sensor, was proposed in 1993 at first, and has been continuously improved. A test sensor was developed in early 2000, and successfully captured images at 62,500 fps. Currently, design of a prototype ISIS is going on, and, hopefully, will be fabricated in near future. Epoch-making cameras in history of development of high-speed video cameras by other persons are also briefly reviewed.
In this medium close-up view, captured by an Electronic Still Camera (ESC), the Spartan 207
NASA Technical Reports Server (NTRS)
1996-01-01
STS-77 ESC VIEW --- In this medium close-up view, captured by an Electronic Still Camera (ESC), the Spartan 207 free-flyer is held in the grasp of the Space Shuttle Endeavour's Remote Manipulator System (RMS) following its re-capture on May 21, 1996. The six-member crew has spent a portion of the early stages of the mission in various activities involving the Spartan 207 and the related Inflatable Antenna Experiment (IAE). The Spartan project is managed by NASA's Goddard Space Flight Center (GSFC) for NASA's Office of Space Science, Washington, D.C. GMT: 09:38:05.
Camera-trap study of ocelot and other secretive mammals in the northern Pantanal
Trolle, M.; Kery, M.
2005-01-01
Reliable information on abundance of the ocelot (Leopardus pardalis) is scarce. We conducted the first camera-trap study in the northern part of the Pantanal wetlands of Brazil, one of the wildlife hotspots of South America. Using capture-recapture analysis, we estimated a density of 0.112 independent individuals per km2 (SE 0.069). We list other mammals recorded with camera traps and show that camera-trap placement on roads or on trails has striking effects on camera-trapping rates.
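The study estimates density with capture-recapture analysis; as a minimal illustration of the idea (not the authors' actual model, which for camera-trap data is typically a closed-population likelihood model with an effective trapping-area buffer), here is the bias-corrected Lincoln-Petersen (Chapman) estimator with made-up counts.

```python
def chapman_estimate(n1, n2, m2):
    """Chapman's bias-corrected Lincoln-Petersen abundance estimate:
    n1 animals identified in session 1, n2 in session 2, m2 seen in both."""
    return (n1 + 1) * (n2 + 1) / (m2 + 1) - 1

def density(n_hat, area_km2):
    """Individuals per km2 over an (assumed) effective trapping area."""
    return n_hat / area_km2
```

With ocelots, individuals are identified from their coat patterns across camera-trap photos, so the "marking" is photographic rather than physical.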
Rectification of curved document images based on single view three-dimensional reconstruction.
Kang, Lai; Wei, Yingmei; Jiang, Jie; Bai, Liang; Lao, Songyang
2016-10-01
Since distortions in camera-captured document images significantly affect the accuracy of optical character recognition (OCR), distortion removal plays a critical role for document digitalization systems using a camera for image capturing. This paper proposes a novel framework that performs three-dimensional (3D) reconstruction and rectification of camera-captured document images. While most existing methods rely on additional calibrated hardware or multiple images to recover the 3D shape of a document page, or make a simple but not always valid assumption on the corresponding 3D shape, our framework is more flexible and practical since it only requires a single input image and is able to handle a general locally smooth document surface. The main contributions of this paper include a new iterative refinement scheme for baseline fitting from connected components of text line, an efficient discrete vertical text direction estimation algorithm based on convex hull projection profile analysis, and a 2D distortion grid construction method based on text direction function estimation using 3D regularization. In order to examine the performance of our proposed method, both qualitative and quantitative evaluation and comparison with several recent methods are conducted in our experiments. The experimental results demonstrate that the proposed method outperforms relevant approaches for camera-captured document image rectification, in terms of improvements on both visual distortion removal and OCR accuracy.
NASA Astrophysics Data System (ADS)
Kerr, Andrew D.
Determining optimal imaging settings and best practices related to the capture of aerial imagery using consumer-grade digital single lens reflex (DSLR) cameras, should enable remote sensing scientists to generate consistent, high quality, and low cost image data sets. Radiometric optimization, image fidelity, image capture consistency and repeatability were evaluated in the context of detailed image-based change detection. The impetus for this research is in part, a dearth of relevant, contemporary literature, on the utilization of consumer grade DSLR cameras for remote sensing, and the best practices associated with their use. The main radiometric control settings on a DSLR camera, EV (Exposure Value), WB (White Balance), light metering, ISO, and aperture (f-stop), are variables that were altered and controlled over the course of several image capture missions. These variables were compared for their effects on dynamic range, intra-frame brightness variation, visual acuity, temporal consistency, and the detectability of simulated cracks placed in the images. This testing was conducted from a terrestrial, rather than an airborne collection platform, due to the large number of images per collection, and the desire to minimize inter-image misregistration. The results point to a range of slightly underexposed image exposure values as preferable for change detection and noise minimization fidelity. The makeup of the scene, the sensor, and aerial platform, influence the selection of the aperture and shutter speed which along with other variables, allow for estimation of the apparent image motion (AIM) motion blur in the resulting images. The importance of the image edges in the image application, will in part dictate the lowest usable f-stop, and allow the user to select a more optimal shutter speed and ISO. 
The single most important camera capture variable is exposure bias (EV): a full dynamic range, a wide distribution of DN values, and high visual contrast and acuity occur around -0.7 to -0.3 EV of exposure bias. The ideal value for sensor gain was found to be ISO 100, with ISO 200 less desirable. This study offers researchers a better understanding of the effects of camera capture settings on RSI pairs and their influence on image-based change detection.
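The apparent image motion (AIM) estimate mentioned in this abstract reduces to a simple kinematic relation; the sketch below is our illustration of that idea, not the study's code, and the function name and one-dimensional uniform-motion model are our assumptions:

```python
def motion_blur_pixels(speed_m_s, exposure_s, gsd_m):
    """Apparent image motion (AIM) blur, in pixels, for a moving platform.

    Assumes uniform motion during the exposure: the ground distance
    travelled is speed * exposure time, and dividing by the ground sample
    distance (GSD) converts that distance to pixels.
    """
    return speed_m_s * exposure_s / gsd_m

# Example: a 20 m/s platform, a 1/500 s shutter, and a 2 cm GSD give about
# 2 pixels of blur; halving the shutter time halves the blur.
blur = motion_blur_pixels(20.0, 1.0 / 500.0, 0.02)
```

A threshold on this value (often under one pixel for change detection) then constrains the usable shutter speed, which in turn drives the ISO and f-stop trade-off the abstract describes.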
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off, and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models with relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
An indoor augmented reality mobile application for simulation of building evacuation
NASA Astrophysics Data System (ADS)
Sharma, Sharad; Jerripothula, Shanmukha
2015-03-01
Augmented Reality enables people to remain connected with the physical environment they are in, and invites them to look at the world from new and alternative perspectives. There has been increasing interest in emergency evacuation applications for mobile devices. Nearly all smart phones these days are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that will help people safely evacuate a building in an emergency situation. It will further enhance knowledge and understanding of where the exits are in the building and of safe evacuation procedures. We applied mobile augmented reality (mobile AR) to create an application with the Unity 3D gaming engine. We show how the mobile AR application is able to display a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of a building in 3D space, allowing people to see where the exits are through the use of a smart phone or tablet. Pilot studies conducted with the system showed its partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are close to the camera, but accuracy decreases when the markers are far away from the camera.
Simple technique to measure toric intraocular lens alignment and stability using a smartphone.
Teichman, Joshua C; Baig, Kashif; Ahmed, Iqbal Ike K
2014-12-01
Toric intraocular lenses (IOLs) are commonly implanted to correct corneal astigmatism at the time of cataract surgery. Their use requires preoperative calculation of the axis of implantation and postoperative measurement to determine whether the IOL has been implanted with the proper orientation. Moreover, toric IOL alignment stability over time is important for the patient and for the longitudinal evaluation of toric IOLs. We present a simple, inexpensive, and precise method to measure the toric IOL axis using a camera-enabled cellular phone (iPhone 5S) and computer software (ImageJ). Copyright © 2014 ASCRS and ESCRS. Published by Elsevier Inc. All rights reserved.
The Economics of Notebook Universities
ERIC Educational Resources Information Center
Bryan, John M. S.
2007-01-01
In the fall of 2006, students could purchase an entry-level notebook computer with a 15-inch LCD for $500. This price crossed an important threshold, moving notebooks into the range of consumer electronics--the category of phenomena that fuels mass consumer trends such as cell phones, digital cameras, and iPods. Most colleges and universities…
ERIC Educational Resources Information Center
Foster, Andrea L.
2006-01-01
American college students are increasingly posting videos of their lives online, owing to Web sites like Vimeo and Google Video that host video material for free and to the ubiquity of camera phones and other devices that can take video clips. However, the growing popularity of online socializing has many safety experts worried that students could be…
A Simple Educational Method for the Measurement of Liquid Binary Diffusivities
ERIC Educational Resources Information Center
Rice, Nicholas P.; de Beer, Martin P.; Williamson, Mark E.
2014-01-01
A simple low-cost experiment has been developed for the measurement of the binary diffusion coefficients of liquid substances. The experiment is suitable for demonstrating molecular diffusion to small or large undergraduate classes in chemistry or chemical engineering. Students use a cell phone camera in conjunction with open-source image…
Tensions in a Nepali Telecenter: An Ethnographic Look at Progress Using Activity Theory
ERIC Educational Resources Information Center
Lee, Jeffrey Chih-Yih
2010-01-01
Developing countries such as Nepal struggle to keep up technologically. While advances make it possible for average Nepalis to access mobile phones, computers, and digital cameras, barriers impede access. As with other governments (Huerta & Rodrigo, 2007; Mokhtarian & Meenakshisun, 2002), Nepal responded in 2004 with telecenters to push…
What Children Can Learn from MMORPGs
ERIC Educational Resources Information Center
Sarsar, Nasreddine Mohamed
2008-01-01
Due to the technological advances that have swept our societies, students have become more and more engaged with burgeoning technological tools such as computers, cell phones, iPods, digital cameras, and the like. As a result, the disparity between what students do inside school and what they do at home has grown wider. Buckingham (2007) refers…
Sustaining a Nepali Telecenter: An Ethnographic Study Using Activity Theory
ERIC Educational Resources Information Center
Lee, Jeffrey; Sparks, Paul
2014-01-01
While advances have made it possible for the average Nepali to access mobile phones, computers, and digital cameras, barriers continue to impede access. Like other governments, Nepal responded in 2004 by creating about 80 telecenters to push sustainable technology to its people. Five years later, most telecenters struggle with sustainability. This…
Audiovisual Physics Reports: Students' Video Production as a Strategy for the Didactic Laboratory
ERIC Educational Resources Information Center
Pereira, Marcus Vinicius; de Souza Barros, Susana; de Rezende Filho, Luiz Augusto C.; de A. Fauth, Leduc Hermeto
2012-01-01
Constant technological advancement has facilitated access to digital cameras and cell phones. Involving students in a video production project can work as a motivating aspect to make them active and reflective in their learning, intellectually engaged in a recursive process. This project was implemented in high school level physics laboratory…
Martinkova, Pavla; Pohanka, Miroslav
2016-12-18
Glucose is an important diagnostic biochemical marker for diabetes, but also for poisoning by organophosphates, carbamates, acetaminophen, or salicylates. Hence, the development of accurate and fast detection assays remains a priority in biomedical research. A glucose sensor based on magnetic particles (MPs) with the immobilized enzymes glucose oxidase (GOx) and horseradish peroxidase (HRP) was developed, and the GOx-catalyzed reaction was visualized by a smart-phone-integrated camera. An exponential decay concentration curve with a correlation coefficient of 0.997 and a limit of detection of 0.4 mmol/l was achieved. Interfering and matrix substances were tested for their possible influence on the assay, and no effect of the tested substances was observed. Spiked plasma samples were also measured, and no influence of the plasma matrix on the assay was found. The presented assay gave results consistent with the reference method (standard spectrophotometry based on the enzymes glucose oxidase and peroxidase in plastic cuvettes), with a linear dependence and a correlation coefficient of 0.999 in the concentration range between 0 and 4 mmol/l. Based on the measured results, the method was considered a highly specific, accurate, and fast assay for the detection of glucose.
Kim, Jong Hyun; Hong, Hyung Gil; Park, Kang Ryoung
2017-05-08
Because intelligent surveillance systems have recently undergone rapid growth, research on accurately detecting humans in videos captured at a long distance is growing in importance. The existing research using visible light cameras has mainly focused on methods of human detection for daytime hours when there is outside light, but human detection during nighttime hours when there is no outside light is difficult. Thus, methods that employ additional near-infrared (NIR) illuminators and NIR cameras or thermal cameras have been used. However, in the case of NIR illuminators, there are limitations in terms of the illumination angle and distance. There are also difficulties because the illuminator power must be adaptively adjusted depending on whether the object is close or far away. In the case of thermal cameras, their cost is still high, which makes it difficult to install and use them in a variety of places. Because of this, research has been conducted on nighttime human detection using visible light cameras, but it has focused on objects at a short distance in an indoor environment or on video-based methods that capture and process multiple images, which increases the processing time. To resolve these problems, this paper presents a method that uses a single image captured at night by a visible light camera to detect humans in a variety of environments based on a convolutional neural network. Experimental results using a self-constructed Dongguk night-time human detection database (DNHD-DB1) and two open databases (the Korea Advanced Institute of Science and Technology (KAIST) and Computer Vision Center (CVC) databases) show high-accuracy human detection in a variety of environments and excellent performance compared to existing methods.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This close-up view of one of two Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Samba: a real-time motion capture system using wireless camera sensor networks.
Oh, Hyeongseok; Cha, Geonho; Oh, Songhwai
2014-03-20
There is a growing interest in 3D content following the recent developments in 3D movies, 3D TVs and 3D smartphones. However, 3D content creation is still dominated by professionals, due to the high cost of 3D motion capture instruments. The availability of a low-cost motion capture system will promote 3D content generation by general users and accelerate the growth of the 3D market. In this paper, we describe the design and implementation of a real-time motion capture system based on a portable low-cost wireless camera sensor network. The proposed system performs motion capture based on the data-driven 3D human pose reconstruction method to reduce the computation time and to improve the 3D reconstruction accuracy. The system can reconstruct accurate 3D full-body poses at 16 frames per second using only eight markers on the subject's body. The performance of the motion capture system is evaluated extensively in experiments.
Computational photography with plenoptic camera and light field capture: tutorial.
Lam, Edmund Y
2015-11-01
Photography is a cornerstone of imaging. Ever since cameras became consumer products more than a century ago, we have witnessed great technological progress in optics and recording media, with digital sensors replacing photographic films in most instances. The latest revolution is computational photography, which seeks to make image reconstruction computation an integral part of the image formation process; in this way, there can be new capabilities or better performance in the overall imaging system. A leading effort in this area is called the plenoptic camera, which aims at capturing the light field of an object; proper reconstruction algorithms can then adjust the focus after the image capture. In this tutorial paper, we first illustrate the concept of the plenoptic function and light field from the perspective of geometric optics. This is followed by a discussion on early attempts and recent advances in the construction of the plenoptic camera. We then describe the imaging model and computational algorithms that can reconstruct images at different focus points, using mathematical tools from ray optics and Fourier optics. Last but not least, we consider the trade-off in spatial resolution and highlight some research work to increase the spatial resolution of the resulting images.
Hyperspectral imaging with near-infrared-enabled mobile phones for tissue oximetry
NASA Astrophysics Data System (ADS)
Lin, Jonathan L.; Ghassemi, Pejhman; Chen, Yu; Pfefer, Joshua
2018-02-01
Hyperspectral reflectance imaging (HRI) is an emerging clinical tool for characterizing spatial and temporal variations in blood perfusion and oxygenation for applications such as burn assessment, wound healing, retinal exams and intraoperative tissue viability assessment. Since clinical HRI-based oximeters often use near-infrared (NIR) light, NIR-enabled mobile phones may provide a useful platform for future point-of-care devices. Furthermore, quantitative NIR imaging on mobile phones may dramatically increase the availability and accessibility of medical diagnostics for low-resource settings. We have evaluated the potential for phone-based NIR oximetry imaging and elucidated factors affecting performance using devices from two different manufacturers, as well as a scientific CCD. A broadband light source and liquid crystal tunable filter were used for imaging at 10 nm bands from 650 to 1000 nm. Spectral sensitivity measurements indicated that mobile phones with standard NIR blocking filters had minimal response beyond 700 nm, whereas one modified phone showed sensitivity to 800 nm and another to 1000 nm. Red pixel channels showed the greatest sensitivity up to 800 nm, whereas all channels provided essentially equivalent sensitivity at longer wavelengths. Referencing of blood oxygenation levels was performed with a CO-oximeter. HRI measurements were performed using cuvettes filled with hemoglobin solutions of different oxygen saturation levels. Good agreement between absorbance spectra measured with the mobile phone and CCD cameras was seen for wavelengths below 900 nm. Saturation estimates showed root-mean-squared errors of 5.2% and 4.5% for the CCD and phone, respectively. Overall, this work provides strong evidence of the potential for mobile phones to provide quantitative spectral imaging in the NIR for applications such as oximetry, and generates practical insights into factors that impact performance as well as test methods for performance assessment.
3D printed disposable optics and lab-on-a-chip devices for chemical sensing with cell phones
NASA Astrophysics Data System (ADS)
Comina, G.; Suska, A.; Filippini, D.
2017-02-01
Digital manufacturing (DM) offers fast prototyping capabilities and great versatility to configure countless architectures at affordable development costs. Autonomous lab-on-a-chip (LOC) devices, conceived as the only disposable accessory needed to interface chemical sensing to cell phones, require specific features that can be achieved using DM techniques. Here we describe stereolithography (SLA) 3D printing of optical components and unibody-LOC (ULOC) devices using consumer-grade printers. ULOC devices integrate actuation in the form of check valves and finger pumps, as well as the calibration range required for quantitative detection. Coupling to phone camera readout depends on the detection approach and includes different types of optical components. Optical surfaces can be locally configured with a simple, polishing-free post-processing step, and the representative cost is 0.5 US$/device for both optics and ULOC devices, with fabrication times of about 20 min.
Burst mode composite photography for dynamic physics demonstrations
NASA Astrophysics Data System (ADS)
Lincoln, James
2018-05-01
I am writing this article to raise awareness of burst mode photography as a fun and engaging way for teachers and students to experience physics demonstration activities. In the context of digital photography, "burst mode" means taking multiple photographs per second, a feature that now comes standard on most digital cameras, including the iPhone. Sometimes the images are composited to imply motion from a series of still pictures. By analyzing the time between the photos, students can measure the velocity and acceleration of moving objects. Some of these composite photographs have already shown up in the AAPT High School Physics Photo Contest. In this article I discuss some ideas for using burst mode photography on the iPhone and provide a discussion of how to edit these photographs to create a composite image. I also compare the capabilities of the iPhone and GoPro cameras in creating these photographic composites.
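The velocity and acceleration measurement described in this abstract amounts to finite differences over the fixed inter-frame interval. A minimal sketch of that analysis, assuming the object's positions have already been read off the composite image in metres (the function names and free-fall example are ours, not the article's):

```python
def velocities(positions_m, fps):
    """Average speed between consecutive burst frames (positions in metres)."""
    dt = 1.0 / fps
    return [(b - a) / dt for a, b in zip(positions_m, positions_m[1:])]

def accelerations(positions_m, fps):
    """Accelerations from successive velocity estimates."""
    dt = 1.0 / fps
    v = velocities(positions_m, fps)
    return [(b - a) / dt for a, b in zip(v, v[1:])]

# A ball in free fall photographed at 10 frames per second:
drops = [0.0, 0.049, 0.196, 0.441]   # x = g*t^2/2 with g = 9.8 m/s^2
v = velocities(drops, 10)            # speeds grow linearly with time
a = accelerations(drops, 10)         # each entry is close to g
```

In a classroom setting, a ruler or metre stick included in the frame provides the pixel-to-metre scale needed for the position list.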
MEMS FPI-based smartphone hyperspectral imager
NASA Astrophysics Data System (ADS)
Rissanen, Anna; Saari, Heikki; Rainio, Kari; Stuns, Ingmar; Viherkanto, Kai; Holmlund, Christer; Näkki, Ismo; Ojanen, Harri
2016-05-01
This paper demonstrates a mobile phone-compatible hyperspectral imager based on a tunable MEMS Fabry-Perot interferometer (FPI). The realized iPhone 5s hyperspectral imager (HSI) demonstrator utilizes a MEMS FPI tunable filter for the visible range, which consists of atomic layer deposited (ALD) Al2O3/TiO2 thin-film Bragg reflectors. Characterization results for the mobile phone hyperspectral imager utilizing a MEMS FPI chip optimized for 500 nm are presented; the operating range is λ = 450 - 550 nm with a FWHM between 8 and 15 nm. A configuration of two cascaded FPIs (λ = 500 nm and λ = 650 nm) combined with an RGB colour camera is also presented. With this tandem configuration, the overall wavelength tuning range of MEMS hyperspectral imagers can be extended to cover a larger range than with a single FPI chip. Potential applications of mobile hyperspectral imagers in the vis-NIR range include authentication, counterfeit detection, and health/wellness and food sensing.
High-resolution ophthalmic imaging system
Olivier, Scot S.; Carrano, Carmen J.
2007-12-04
A system for providing an improved resolution retina image comprising an imaging camera for capturing a retina image and a computer system operatively connected to the imaging camera, the computer producing short exposures of the retina image and providing speckle processing of the short exposures to provide the improved resolution retina image. The system comprises the steps of capturing a retina image, producing short exposures of the retina image, and speckle processing the short exposures of the retina image to provide the improved resolution retina image.
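The abstract does not detail its speckle processing; the classic shift-and-add idea it builds on can be sketched as follows. This is a deliberately naive illustration of the general technique, not the patented system: each short exposure is registered on its brightest pixel before averaging, so that detail which a single long exposure would smear out is preserved.

```python
import numpy as np

def shift_and_add(frames):
    """Register each short exposure on its brightest pixel, then average.

    Turbulence (atmospheric or ocular) shifts the instantaneous
    point-spread function between short exposures; aligning frames on
    their speckle peak before averaging retains high-frequency detail.
    """
    h, w = frames[0].shape
    acc = np.zeros((h, w))
    for f in frames:
        y, x = np.unravel_index(np.argmax(f), f.shape)
        # Circularly shift the frame so its brightest pixel lands at the centre.
        acc += np.roll(np.roll(f, h // 2 - y, axis=0), w // 2 - x, axis=1)
    return acc / len(frames)
```

Practical speckle pipelines add windowing, background subtraction, and frame selection on top of this core alignment step.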
Computational cameras for moving iris recognition
NASA Astrophysics Data System (ADS)
McCloskey, Scott; Venkatesha, Sharath
2015-05-01
Iris-based biometric identification is increasingly used for facility access and other security applications. Like all methods that exploit visual information, however, iris systems are limited by the quality of captured images. Optical defocus due to a small depth of field (DOF) is one such challenge, as is the acquisition of sharply-focused iris images from subjects in motion. This manuscript describes the application of computational motion-deblurring cameras to the problem of moving iris capture, from the underlying theory to system considerations and performance data.
Fabrication of multi-focal microlens array on curved surface for wide-angle camera module
NASA Astrophysics Data System (ADS)
Pan, Jun-Gu; Su, Guo-Dung J.
2017-08-01
In this paper, we present a wide-angle, compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm, and the diagonal full field of view is about 100 degrees. To make the critical microlens array, we used inkjet printing to control the surface shape of each microlens to achieve different focal lengths, and used a replication method to form the curved hexagonal microlens array.
On the performances of computer vision algorithms on mobile platforms
NASA Astrophysics Data System (ADS)
Battiato, S.; Farinella, G. M.; Messina, E.; Puglisi, G.; Ravì, D.; Capra, A.; Tomaselli, V.
2012-01-01
Computer Vision enables mobile devices to extract the meaning of the observed scene from the information acquired with the onboard sensor cameras. Nowadays, there is growing interest in Computer Vision algorithms able to work on mobile platforms (e.g., phone cameras, point-and-shoot cameras, etc.). Indeed, bringing Computer Vision capabilities to mobile devices opens new opportunities in different application contexts. The implementation of vision algorithms on mobile devices is still a challenging task, since these devices have poor image sensors and optics as well as limited processing power. In this paper we consider different algorithms covering classic Computer Vision tasks: keypoint extraction, face detection, and image segmentation. Several tests have been done to compare the performances of the involved mobile platforms: Nokia N900, LG Optimus One, and Samsung Galaxy SII.
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) and 60 frames per second. The main contributions of this work are: (1) Multiple Exposure Control (MEC) dedicated to smart image capture with three alternating exposure times that are dynamically evaluated from frame to frame, (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times, (3) HDR creation by combining the video streams using a specific hardware version of Debevec's technique, and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
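The exposure-combination step attributed to Debevec's technique can be illustrated with a simplified sketch. This assumes a linear sensor response, which the full technique does not (it first recovers the camera response curve); the hat-shaped weighting that discounts clipped pixels is the standard choice:

```python
import numpy as np

def hdr_radiance(frames, exposures):
    """Combine differently exposed 8-bit frames of one scene into a radiance map.

    Each pixel's radiance is a hat-weighted average of (pixel value /
    exposure time): well-exposed mid-gray samples dominate, while nearly
    black or saturated samples contribute little.
    """
    frames = [f.astype(np.float64) for f in frames]
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for f, t in zip(frames, exposures):
        w = 1.0 - np.abs(f / 255.0 - 0.5) * 2.0   # hat weight, peak at mid-gray
        num += w * f / t
        den += w
    return num / np.maximum(den, 1e-6)
```

The hardware pipeline in the paper streams three such exposures per output frame; the arithmetic per pixel is the same kind of weighted average.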
Efficient color correction method for smartphone camera-based health monitoring application.
Duc Dang; Chae Ho Cho; Daeik Kim; Oh Seok Kwon; Jo Woon Chong
2017-07-01
Smartphone health monitoring applications have recently been highlighted due to the rapid development of smartphone hardware and software. However, the color characteristics of images captured by different smartphone models are dissimilar to each other, and this difference may give non-identical health monitoring results when such applications monitor physiological information using their embedded cameras. In this paper, we investigate the differences in the color properties of images captured by different smartphone models and apply a color correction method to adjust the dissimilar color values obtained from different smartphone cameras. Experimental results show that images corrected with this method have much smaller color intensity errors than uncorrected images. These results can be applied to enhance the consistency of smartphone camera-based health monitoring applications by reducing color intensity errors among images obtained from different smartphones.
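The abstract does not specify its correction model; one common choice for mapping one camera's colors onto reference values is a least-squares affine transform fitted on matched samples (e.g. patches of a color chart photographed by both devices). A minimal sketch under that assumption:

```python
import numpy as np

def fit_color_correction(src_rgb, ref_rgb):
    """Fit a 4x3 affine transform mapping source RGB samples to reference RGB.

    src_rgb, ref_rgb: (N, 3) arrays of matched color samples.
    Returns M such that np.hstack([rgb, ones]) @ M approximates ref_rgb.
    """
    src = np.hstack([src_rgb, np.ones((len(src_rgb), 1))])  # add bias column
    M, *_ = np.linalg.lstsq(src, ref_rgb, rcond=None)
    return M

# Hypothetical chart patches distorted by a known affine shift:
src = np.array([[50.0, 10, 10], [10, 50, 10], [10, 10, 50], [30, 30, 30]])
ref = src * 1.1 + 5.0
M = fit_color_correction(src, ref)
corrected = np.hstack([src, np.ones((4, 1))]) @ M
```

With more patches than parameters, the fit averages out measurement noise; nonlinear per-channel responses would call for a polynomial or lookup-table model instead.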
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of an event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view can present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, through the use of many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, due to the sharing of a single cable for power and data, the installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
[True color accuracy in digital forensic photography].
Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A
2016-01-01
Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation).
Lytro camera technology: theory, algorithms, performance analysis
NASA Astrophysics Data System (ADS)
Georgiev, Todor; Yu, Zhan; Lumsdaine, Andrew; Goma, Sergio
2013-03-01
The Lytro camera is the first implementation of a plenoptic camera for the consumer market. We consider it a successful example of the miniaturization aided by the increase in computational power characterizing mobile computational photography. The plenoptic camera approach to radiance capture uses a microlens array as an imaging system focused on the focal plane of the main camera lens. This paper analyzes the performance of the Lytro camera from a system-level perspective, considering the Lytro camera as a black box, and uses our interpretation of the Lytro image data saved by the camera. We present our findings based on our interpretation of the Lytro camera file structure, image calibration, and image rendering; in this context, artifacts and final image resolution are discussed.
Light field reconstruction robust to signal dependent noise
NASA Astrophysics Data System (ADS)
Ren, Kun; Bian, Liheng; Suo, Jinli; Dai, Qionghai
2014-11-01
Capturing four-dimensional light field data sequentially using a coded aperture camera is an effective approach but suffers from a low signal-to-noise ratio. Although multiplexing can help raise the acquisition quality, noise is still a big issue, especially for fast acquisition. To address this problem, this paper proposes a noise-robust light field reconstruction method. First, a scene-dependent noise model is studied and incorporated into the light field reconstruction framework. Then, we derive an optimization algorithm for the final reconstruction. We build a prototype by hacking an off-the-shelf camera for data capture and prove the concept. The effectiveness of this method is validated with experiments on the real captured data.
A Cost-Effective Method for Crack Detection and Measurement on Concrete Surface
NASA Astrophysics Data System (ADS)
Sarker, M. M.; Ali, T. A.; Abdelfatah, A.; Yehia, S.; Elaksher, A.
2017-11-01
Crack detection and measurement in the surface of concrete structures is currently carried out manually or through Non-Destructive Testing (NDT) such as imaging or scanning. The recent developments in depth (stereo) cameras have presented an opportunity for cost-effective, reliable crack detection and measurement. This study aimed at evaluating the feasibility of the new inexpensive depth camera (ZED) for crack detection and measurement. This depth camera, with its lightweight and portable nature, produces a 3D data file of the imaged surface. The ZED camera was utilized to image a concrete surface and the 3D file was processed to detect and analyse cracks. This article describes the outcome of the experiment carried out with the ZED camera as well as the processing tools used for crack detection and analysis. Crack properties of interest were length, orientation, and width. The use of the ZED camera allowed for distinction between surface and concrete cracks. The ZED's high-resolution capability and point cloud capture technology helped in generating dense 3D data in low-lighting conditions. The results showed the ability of the ZED camera to capture the crack depth changes between surface (render) cracks and cracks that form in the concrete itself.
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, the so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure with diffractive mirrors and an optical resonance cavity, which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables capture of a full HD depth image with depth accuracy on the mm scale, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
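The TOF principle behind the optical shutter reduces to a line of arithmetic: a continuous-wave TOF camera modulated at frequency f recovers depth from the measured phase shift φ as d = cφ/(4πf), with an unambiguous range of c/(2f); at the 20 MHz modulation used here that range is about 7.5 m. A minimal sketch of this standard relation (the helper names are illustrative, not from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_rad, f_mod_hz):
    """Depth from the phase shift of a CW-modulated TOF signal."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

def unambiguous_range(f_mod_hz):
    """Maximum depth before the phase wraps (phase = 2*pi)."""
    return C / (2.0 * f_mod_hz)

print(round(unambiguous_range(20e6), 2))   # ~7.49 m at 20 MHz
print(round(tof_depth(math.pi, 20e6), 2))  # half the range: ~3.75 m
```

Higher modulation frequency improves depth resolution but shrinks this unambiguous range, which is why a fast (20 MHz) shutter matters for mm-scale accuracy.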
Russian Character Recognition using Self-Organizing Map
NASA Astrophysics Data System (ADS)
Gunawan, D.; Arisandi, D.; Ginting, F. M.; Rahmat, R. F.; Amalia, A.
2017-01-01
The World Tourism Organization (UNWTO) reported in 2014 that 28 million tourists visited Russia. Many of these visitors may have trouble typing Russian words into a digital dictionary, because Russia and the surrounding countries use the Cyrillic alphabet, whose letters differ in shape from Latin letters and may be unfamiliar to visitors. This research proposes an alternative way to input Cyrillic words: instead of typing them directly, a camera captures an image of the words as input. The captured image is cropped, then several pre-processing steps are applied, such as noise filtering, binary image processing, segmentation, and thinning. Next, feature extraction is applied to the image, and the Cyrillic letters in the image are recognized using the Self-Organizing Map (SOM) algorithm. SOM successfully recognizes 89.09% of Cyrillic letters from computer-generated images and 88.89% of Cyrillic letters from images captured by a smartphone camera. For word recognition, SOM fully recognized 292 words and partially recognized 58 words from the images captured by the smartphone camera. Therefore, the accuracy of word recognition using SOM is 83.42%.
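The word-level figure follows directly from the counts in the abstract: 292 fully recognized words out of 292 + 58 = 350 gives roughly 83.4% (the reported 83.42% appears to truncate rather than round):

```python
full, partial = 292, 58
total = full + partial            # 350 words in the smartphone test
accuracy = 100.0 * full / total   # counting only fully recognized words
print(round(accuracy, 2))         # -> 83.43 (the abstract reports 83.42)
```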
Akkaynak, Derya; Treibitz, Tali; Xiao, Bei; Gürkan, Umut A.; Allen, Justine J.; Demirci, Utkan; Hanlon, Roger T.
2014-01-01
Commercial off-the-shelf digital cameras are inexpensive and easy-to-use instruments that can be used for quantitative scientific data acquisition if images are captured in raw format and processed so that they maintain a linear relationship with scene radiance. Here we describe the image-processing steps required for consistent data acquisition with color cameras. In addition, we present a method for scene-specific color calibration that increases the accuracy of color capture when a scene contains colors that are not well represented in the gamut of a standard color-calibration target. We demonstrate applications of the proposed methodology in the fields of biomedical engineering, artwork photography, perception science, marine biology, and underwater imaging. PMID:24562030
Projector-Camera Systems for Immersive Training
2006-01-01
average to a sequence of 100 captured distortion corrected images. The OpenCV library [ OpenCV ] was used for camera calibration. To correct for...rendering application [Treskunov, Pair, and Swartout, 2004]. It was transposed to take into account different matrix conventions between OpenCV and...Screen Imperfections. Proc. Workshop on Projector-Camera Systems (PROCAMS), Nice, France, IEEE. OpenCV : Open Source Computer Vision. [Available
Visible camera imaging of plasmas in Proto-MPEX
NASA Astrophysics Data System (ADS)
Mosby, R.; Skeen, C.; Biewer, T. M.; Renfro, R.; Ray, H.; Shaw, G. C.
2015-11-01
The prototype Material Plasma Exposure eXperiment (Proto-MPEX) is a linear plasma device being developed at Oak Ridge National Laboratory (ORNL). This machine plans to study plasma-material interaction (PMI) physics relevant to future fusion reactors. Measurements of plasma light emission will be made on Proto-MPEX using fast, visible framing cameras. The cameras utilize a global shutter, which allows a full frame image of the plasma to be captured and compared at multiple times during the plasma discharge. Typical exposure times are ~10-100 microseconds. The cameras are capable of capturing images at up to 18,000 frames per second (fps). However, the frame rate is strongly dependent on the size of the ``region of interest'' that is sampled. The maximum ROI corresponds to the full detector area of ~1000x1000 pixels. The cameras have an internal gain, which controls the sensitivity of the 10-bit detector. The detector includes a Bayer filter for ``true-color'' imaging of the plasma emission. This presentation will examine the optimized camera settings for use on Proto-MPEX. This work was supported by the U.S. D.O.E. contract DE-AC05-00OR22725.
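The trade-off noted above between ROI size and frame rate is characteristic of readout-limited sensors: achievable fps scales roughly inversely with the number of rows read out, up to the camera's cap. A rough model of that scaling (the full-frame rate assumed below is illustrative, not a Proto-MPEX specification; only the 18,000 fps cap and ~1000-row detector come from the abstract):

```python
def est_frame_rate(roi_rows, full_rows=1000, full_fps=1000, cap_fps=18000):
    """Readout-limited fps: inversely proportional to rows read out.

    full_fps (the assumed rate at the full 1000x1000 detector) is an
    illustrative figure; cap_fps is the camera's quoted maximum.
    """
    return min(cap_fps, full_fps * full_rows / roi_rows)

print(est_frame_rate(1000))  # full frame -> 1000.0 fps (assumed base)
print(est_frame_rate(500))   # half the rows -> 2000.0 fps
print(est_frame_rate(50))    # tiny ROI -> hits the 18000 fps cap
```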
Calibration of Action Cameras for Photogrammetric Purposes
Balletti, Caterina; Guerra, Francesco; Tsioukas, Vassilios; Vernier, Paolo
2014-01-01
The use of action cameras for photogrammetry purposes is not widespread due to the fact that until recently the images provided by the sensors, using either still or video capture mode, were not big enough to perform and provide the appropriate analysis with the necessary photogrammetric accuracy. However, several manufacturers have recently produced and released new lightweight devices which are: (a) easy to handle, (b) capable of performing under extreme conditions and, more importantly, (c) able to provide both still images and video sequences of high resolution. In order to be able to use the sensor of action cameras we must apply a careful and reliable self-calibration prior to the use of any photogrammetric procedure, a relatively difficult task because of the camera's short focal length and the wide-angle lens used to obtain the maximum possible resolution of images. Special software, using functions of the OpenCV library, has been created to perform both the calibration and the production of undistorted scenes for each of the still and video image capturing modes of a novel action camera, the GoPro Hero 3, which can provide still images up to 12 Mp and video at up to 8 Mp resolution. PMID:25237898
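The self-calibration performed with OpenCV estimates, among other parameters, the radial distortion that is severe for a GoPro-style wide-angle lens. The heart of that model, the Brown-Conrady radial terms used by OpenCV's calibration routines, is simple to state in isolation. This sketch applies the forward model to normalized image coordinates; it is not the authors' software, and the coefficient values are made up for illustration:

```python
def radial_distort(x, y, k1, k2):
    """Forward Brown-Conrady radial model on normalized coordinates.

    (x, y) are ideal pinhole coordinates; k1, k2 are the radial
    coefficients a calibration (e.g. OpenCV's calibrateCamera)
    would estimate.  Wide-angle "barrel" lenses have k1 < 0.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

xd, yd = radial_distort(0.5, 0.0, -0.3, 0.0)
print(xd)  # 0.4625: barrel distortion pulls the point inward
```

Undistortion inverts this mapping (usually iteratively), which is what produces the "undistorted scenes" the special software outputs.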
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for the determination of electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of photogrammetry system for EEG localization. Methods in previous studies are all based on the use of either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study aims to present a novel photogrammetry system that can realize simultaneous acquisition of multi-angle head images in a single camera position. Aligning two planar mirrors with the angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by the digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. It is found that the elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min, after the measurement of calibration points. The positioning accuracy with the maximum error of 1.19 mm is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for the EEG positioning.
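The 51.4° mirror angle above is not arbitrary: two planar mirrors meeting at angle θ produce 360/θ − 1 reflected images, so the direct view plus reflections gives 360/θ views in total, and choosing θ = 360/7 ≈ 51.4° yields exactly the seven simultaneous views used here:

```python
def views_for_angle(theta_deg):
    """Total views (direct + reflections) between two planar mirrors.

    Equals (360/theta - 1) reflected images plus the direct view.
    """
    return 360.0 / theta_deg

theta = 360.0 / 7.0
print(round(theta, 1))                # -> 51.4 degrees
print(round(views_for_angle(theta)))  # -> 7 simultaneous views
```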
Can We Use Low-Cost 360 Degree Cameras to Create Accurate 3D Models?
NASA Astrophysics Data System (ADS)
Barazzetti, L.; Previtali, M.; Roncoroni, F.
2018-05-01
360 degree cameras capture the whole scene around a photographer in a single shot. Cheap 360 cameras are a new paradigm in photogrammetry. The camera can be pointed to any direction, and the large field of view reduces the number of photographs. This paper aims to show that accurate metric reconstructions can be achieved with affordable sensors (less than 300 euro). The camera used in this work is the Xiaomi Mijia Mi Sphere 360, which has a cost of about 300 USD (January 2018). Experiments demonstrate that millimeter-level accuracy can be obtained during the image orientation and surface reconstruction steps, in which the solution from 360° images was compared to check points measured with a total station and laser scanning point clouds. The paper will summarize some practical rules for image acquisition as well as the importance of ground control points to remove possible deformations of the network during bundle adjustment, especially for long sequences with unfavorable geometry. The generation of orthophotos from images having a 360° field of view (that captures the entire scene around the camera) is discussed. Finally, the paper illustrates some case studies where the use of a 360° camera could be a better choice than a project based on central perspective cameras. Basically, 360° cameras become very useful in the survey of long and narrow spaces, as well as interior areas like small rooms.
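Metric work with 360° imagery typically relies on the equirectangular projection such cameras output (an assumption about the Mi Sphere's format): each pixel maps linearly to a longitude/latitude pair, hence to a ray direction used in bundle adjustment. A minimal sketch of that mapping (axis conventions vary between photogrammetric packages; the one below is an assumption):

```python
import math

def pixel_to_ray(u, v, width, height):
    """Unit ray for pixel (u, v) of an equirectangular image.

    Longitude spans [-pi, pi) across the width, latitude
    [-pi/2, pi/2] down the height; z is taken as the forward axis.
    """
    lon = (u / width - 0.5) * 2.0 * math.pi
    lat = (0.5 - v / height) * math.pi
    return (math.cos(lat) * math.sin(lon),
            math.sin(lat),
            math.cos(lat) * math.cos(lon))

x, y, z = pixel_to_ray(50, 25, 100, 50)  # image centre
print((round(x, 6), round(y, 6), round(z, 6)))  # -> (0.0, 0.0, 1.0)
```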
ERIC Educational Resources Information Center
Hudson, Hannah Trierweiler
2011-01-01
Megan is a 14-year-old from Nebraska who just started ninth grade. She has her own digital camera, cell phone, Nintendo DS, and laptop, and one or more of these devices is usually by her side. Compared to the interactions and exploration she's engaged in at home, Megan finds the technology in her classroom falls a little flat. Most of the…
Literacies in a Participatory, Multimodal World: The Arts and Aesthetics of Web 2.0
ERIC Educational Resources Information Center
Vasudevan, Lalitha
2010-01-01
Communicative and expressive modalities, such as smart phones and video cameras, have become increasingly multifunctional and reflect an evolving digital landscape often referred to as Web 2.0. The "ethos" and "technical" affordances of Web 2.0 have the potential to catalyze the aesthetic creativity of youth. Following a discussion of aesthetics…
A Vision for the Net Generation Media Center. Media Matters
ERIC Educational Resources Information Center
Johnson, Doug
2005-01-01
Many children today have never lived in a home without a computer. They are the "Net Generation," constantly "connected" by iPod, cell phone, keyboard, digital video camera, or game controller to various technologies. Recent studies have found that Net Genners see technology as "embedded in society," a primary means of connection with friends, and…
It's not the pixel count, you fool
NASA Astrophysics Data System (ADS)
Kriss, Michael A.
2012-01-01
The first thing a "marketing guy" asks the digital camera engineer is "How many pixels does it have? We need as many megapixels as possible, since the other guys are killing us with their 'umpteen' megapixel pocket-sized digital cameras." And so it goes until the pixels get smaller and smaller in order to inflate the pixel count in the never-ending pixel wars. These small pixels just are not very good. The truth of the matter is that the most important feature of digital cameras in the last five years has been the automatic motion control that stabilizes the image on the sensor, along with some very sophisticated image processing. All the rest has been hype and some "cool" design. What is the future for digital imaging, and what will drive growth of camera sales (not counting the cell phone cameras, which totally dominate the market in terms of camera sales) and, more importantly, after-sales profits? Well, sit in on the Dark Side of Color and find out what is being done to increase after-sales profits, and don't be surprised if it has been done long ago in some basement lab of a photographic company and, of course, before its time.
Tenth Anniversary Image from Camera on NASA Mars Orbiter
2012-02-29
NASA Mars Odyssey spacecraft captured this image on Feb. 19, 2012, 10 years to the day after the camera recorded its first view of Mars. This image covers an area in the Nepenthes Mensae region north of the Martian equator.
NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on remote control through a ground control system connected over a radio frequency (RF) modem operating in the 430 MHz band. However, as mentioned earlier, the existing RF-modem method has limitations in long-distance communication. We developed a UAV communication module that instead uses a smart camera's LTE (long-term evolution), Bluetooth, and Wi-Fi links, and used it to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system comprises an image-capturing device for the drone, for areas that need imaging, and software for operating the smart camera and managing the results; it combines automatic shooting driven by the smart camera's sensors with shooting-catalog management of the captured images and their information. UAV imagery was processed with the Open Drone Map module. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open source tools used included Android, OpenCV (Open Source Computer Vision), RTKLIB, and Open Drone Map.
NASA Astrophysics Data System (ADS)
Yoon, Soweon; Jung, Ho Gi; Park, Kang Ryoung; Kim, Jaihie
2009-03-01
Although iris recognition is one of the most accurate biometric technologies, it has not yet been widely used in practical applications. This is mainly due to user inconvenience during the image acquisition phase. Specifically, users must adjust their eye position within a small capture volume at a close distance from the system. To overcome these problems, we propose a novel iris image acquisition system that provides users with an unconstrained environment: a large operating range, freedom of movement in a standing posture, and capture of good-quality iris images in an acceptable time. The proposed system makes the following three contributions compared with previous works: (1) the capture volume is significantly increased by using a pan-tilt-zoom (PTZ) camera guided by light stripe projection, (2) the iris location in the large capture volume is found quickly thanks to 1-D vertical face searching from the user's horizontal position obtained by the light stripe projection, and (3) zooming and focusing on the user's irises at a distance are accurate and fast using the 3-D face position estimated by the light stripe projection and the PTZ camera. Experimental results show that the proposed system can capture good-quality iris images in 2.479 s on average at a distance of 1.5 to 3 m, while allowing a limited amount of movement by the user.
Enhancing physics demos using iPhone slow motion
NASA Astrophysics Data System (ADS)
Lincoln, James
2017-12-01
Slow motion video enhances our ability to perceive and experience the physical world. This can help students and teachers especially in cases of fast moving objects or detailed events that happen too quickly for the eye to follow. As often as possible, demonstrations should be performed by the students themselves and luckily many of them will already have this technology in their pockets. The "S" series of iPhone has the slow motion video feature standard, which also includes simultaneous sound recording (somewhat unusual among slow motion cameras). In this article I share some of my experiences using this feature and provide advice on how to successfully use this technology in the classroom.
Rasooly, Reuven; Bruck, Hugh Alan; Balsam, Joshua; Prickril, Ben; Ossandon, Miguel; Rasooly, Avraham
2016-05-17
Resource-poor countries and regions require effective, low-cost diagnostic devices for accurate identification and diagnosis of health conditions. Optical detection technologies used for many types of biological and clinical analysis can play a significant role in addressing this need, but must be sufficiently affordable and portable for use in global health settings. Most current clinical optical imaging technologies are accurate and sensitive, but also expensive and difficult to adapt for use in these settings. These challenges can be mitigated by taking advantage of affordable consumer electronics mobile devices such as webcams, mobile phones, charge-coupled device (CCD) cameras, lasers, and LEDs. Low-cost, portable multi-wavelength fluorescence plate readers have been developed for many applications including detection of microbial toxins such as C. botulinum A neurotoxin, Shiga toxin, and S. aureus enterotoxin B (SEB), and flow cytometry has been used to detect very low cell concentrations. However, the relatively low sensitivities of these devices limit their clinical utility. We have developed several approaches to improve their sensitivity, presented here for webcam-based fluorescence detectors, including (1) image stacking to improve signal-to-noise ratios; (2) lasers to enable fluorescence excitation for flow cytometry; and (3) streak imaging to capture the trajectory of a single cell, enabling imaging sensors with high noise levels to detect rare cell events. These approaches can also help to overcome some of the limitations of other low-cost optical detection technologies such as CCD or phone-based detectors (like high noise levels or low sensitivities), and provide for their use in low-cost medical diagnostics in resource-poor settings.
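The image-stacking approach in (1) rests on a standard statistical fact: averaging N frames of the same scene leaves the signal unchanged while shrinking zero-mean sensor noise by a factor of sqrt(N). A quick simulation (synthetic noise only, not the authors' webcam data) shows the effect:

```python
import random
import statistics

random.seed(42)
n_frames, n_pixels, sigma = 64, 500, 8.0

# Each frame: constant signal 100 plus Gaussian read noise.
frames = [[100.0 + random.gauss(0.0, sigma) for _ in range(n_pixels)]
          for _ in range(n_frames)]

# Stack: per-pixel average across all frames.
stacked = [sum(f[i] for f in frames) / n_frames for i in range(n_pixels)]

single_noise = statistics.stdev(frames[0])
stacked_noise = statistics.stdev(stacked)
# Expect roughly sigma / sqrt(64) = 1.0 after stacking.
print(round(single_noise, 1), round(stacked_noise, 1))
```

This sqrt(N) gain is what lets a noisy consumer webcam approach the sensitivity of a far more expensive detector, at the cost of acquisition time.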
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system, and provide state information relative to the Earth, the Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in a solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.
Identification of handwriting by using the genetic algorithm (GA) and support vector machine (SVM)
NASA Astrophysics Data System (ADS)
Zhang, Qigui; Deng, Kai
2016-12-01
As portable digital cameras and camera phones become more and more popular, there is an equally pressing need to meet people's requirements to shoot at any time and to recognize and store handwritten characters. In this paper, a genetic algorithm (GA) and a support vector machine (SVM) are used for the identification of handwriting. Compared with parameter-optimization methods, this technique overcomes two defects: first, it is easy to become trapped in a local optimum; second, searching for the best parameters over a large range affects the efficiency of classification and prediction. As the experimental results suggest, GA-SVM has a higher recognition rate.
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This view, backdropped against the blackness of space shows one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST). The scene was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Advanced imaging research and development at DARPA
NASA Astrophysics Data System (ADS)
Dhar, Nibir K.; Dat, Ravi
2012-06-01
Advances in imaging technology have a huge impact on our daily lives. Innovations in optics, focal plane arrays (FPA), microelectronics, and computation have revolutionized camera design. As a result, new approaches to camera design and low-cost manufacturing are now possible. These advances are clearly evident in the visible wavelength band due to pixel scaling and improvements in silicon material and CMOS technology. CMOS cameras are available in cell phones and many other consumer products. Advances in infrared imaging technology have been slow due to market volume and many technological barriers in detector materials, optics, and fundamental limits imposed by the scaling laws of optics. There is, of course, much room for improvement in both visible and infrared imaging technology. This paper highlights various technology development projects at DARPA to advance imaging technology for both visible and infrared. Challenges and potential solutions are highlighted in areas related to wide field-of-view camera design, small pixel pitch, and broadband and multiband detectors and focal plane arrays.
BAE Systems' 17μm LWIR camera core for civil, commercial, and military applications
NASA Astrophysics Data System (ADS)
Lee, Jeffrey; Rodriguez, Christian; Blackwell, Richard
2013-06-01
Seventeen (17) µm pixel Long Wave Infrared (LWIR) sensors based on vanadium oxide (VOx) micro-bolometers have been in full-rate production at BAE Systems' Night Vision Sensors facility in Lexington, MA for the past five years.[1] We introduce here a commercial camera core product, the Airia-M™ imaging module, in a VGA format that reads out in 30 and 60 Hz progressive modes. The camera core is architected to conserve power, with all-digital interfaces from the readout integrated circuit through video output. The architecture enables a variety of input/output interfaces including Camera Link, USB 2.0, micro-display drivers, and an optional RS-170 analog output supporting legacy systems. The modular board architecture of the electronics facilitates hardware upgrades, allowing us to capitalize on the latest high-performance, low-power electronics developed for mobile phones. Software and firmware are field-upgradeable through a USB 2.0 port. The USB port also gives users access to up to 100 digitally stored (lossless) images.
Smartphone Fundus Photography.
Nazari Khanamiri, Hossein; Nakatsuka, Austin; El-Annan, Jaafar
2017-07-06
Smartphone fundus photography is a simple technique to obtain ocular fundus pictures using a smartphone camera and a conventional handheld indirect ophthalmoscopy lens. This technique is indispensable when picture documentation of the optic nerve, retina, and retinal vessels is necessary but a fundus camera is not available. The main advantage of this technique is the widespread availability of smartphones, which allows documentation of macula and optic nerve changes in many settings where it was not previously possible. Following the well-defined steps detailed here, such as proper alignment of the phone camera, handheld lens, and the patient's pupil, is the key to obtaining a clear retina picture with no interfering light reflections and aberrations. In this paper, the optical principles of indirect ophthalmoscopy and fundus photography will be reviewed first. Then, the step-by-step method to record a good quality retinal image using a smartphone will be explained.
VAP/VAT: video analytics platform and test bed for testing and deploying video analytics
NASA Astrophysics Data System (ADS)
Gorodnichy, Dmitry O.; Dubrofsky, Elan
2010-04-01
Deploying Video Analytics in operational environments is extremely challenging. This paper presents a methodological approach developed by the Video Surveillance and Biometrics Section (VSB) of the Science and Engineering Directorate (S&E) of the Canada Border Services Agency (CBSA) to resolve these problems. A three-phase approach to enable VA deployment within an operational agency is presented and the Video Analytics Platform and Testbed (VAP/VAT) developed by the VSB section is introduced. In addition to allowing the integration of third-party and in-house-built VA codes into an existing video surveillance infrastructure, VAP/VAT also allows the agency to conduct an unbiased performance evaluation of the cameras and VA software available on the market. VAP/VAT consists of two components: EventCapture, which serves to automatically detect a "Visual Event", and EventBrowser, which serves to display and peruse the "Visual Details" captured at the "Visual Event". To deal with both open-architecture and closed-architecture cameras, two video-feed capture mechanisms have been developed within the EventCapture component: IPCamCapture and ScreenCapture.
High-emulation mask recognition with high-resolution hyperspectral video capture system
NASA Astrophysics Data System (ADS)
Feng, Jiao; Fang, Xiaojing; Li, Shoufeng; Wang, Yongjin
2014-11-01
We present a method for distinguishing a human face from a high-emulation mask, which is increasingly used by criminals for activities such as stealing card numbers and passwords at ATMs. Traditional facial recognition techniques have difficulty detecting such camouflaged criminals. In this paper, we use a high-resolution hyperspectral video capture system to detect high-emulation masks. An RGB camera is used for traditional facial recognition. A prism and a gray-scale camera are used to capture spectral information of the observed face. Experiments show that masks made of silica gel have a different spectral reflectance from human skin. As a multispectral image offers additional spectral information about physical characteristics, a high-emulation mask can be easily recognized.
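The discrimination exploited above can be caricatured as a band-ratio test: if silicone reflects differently from skin in some spectral band, a per-pixel ratio of two bands separates the two materials. The bands and threshold below are purely hypothetical placeholders, not measured values from the paper:

```python
def band_ratio(band_a, band_b, eps=1e-9):
    """Per-pixel ratio of two spectral-band reflectances
    (the choice of bands is a hypothetical illustration)."""
    return band_a / (band_b + eps)

def looks_like_mask(band_a, band_b, threshold=0.8):
    """Flag a pixel whose band ratio falls below a made-up threshold."""
    return band_ratio(band_a, band_b) < threshold

print(looks_like_mask(0.3, 0.6))  # ratio 0.5 -> flagged as mask-like
print(looks_like_mask(0.9, 0.6))  # ratio 1.5 -> skin-like
```

A real hyperspectral pipeline would train such a decision rule on measured skin and silicone spectra across many bands rather than a single hand-set ratio.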
A Highly Accurate Face Recognition System Using Filtering Correlation
NASA Astrophysics Data System (ADS)
Watanabe, Eriko; Ishikawa, Sayuri; Kodate, Kashiko
2007-09-01
The authors previously constructed a highly accurate fast face recognition optical correlator (FARCO) [E. Watanabe and K. Kodate: Opt. Rev. 12 (2005) 460], and subsequently developed an improved, super high-speed FARCO (S-FARCO), which is able to process several hundred thousand frames per second. The principal advantage of our new system is its wide applicability to any correlation scheme. Three different configurations were proposed, each depending on correlation speed. This paper describes and evaluates a software correlation filter. The face recognition function proved highly accurate even at a low facial image resolution (64 × 64 pixels). An operation speed of less than 10 ms was achieved using a personal computer with a 3 GHz central processing unit (CPU) and 2 GB of memory. When we applied the software correlation filter to a high-security cellular phone face recognition system, experiments on 30 female students over a period of three months yielded low error rates: a 0% false acceptance rate and a 2% false rejection rate. The filtering correlation therefore works effectively when applied to low-resolution images such as web-based images or faces captured by a monitoring camera.
Traffic Sign Recognition with Invariance to Lighting in Dual-Focal Active Camera System
NASA Astrophysics Data System (ADS)
Gu, Yanlei; Panahpour Tehrani, Mehrdad; Yendo, Tomohiro; Fujii, Toshiaki; Tanimoto, Masayuki
In this paper, we present an automatic vision-based traffic sign recognition system that can detect and classify traffic signs at long distance under different lighting conditions. To this end, traffic sign recognition is developed in an originally proposed dual-focal active camera system. In this system, a telephoto camera is equipped as an assistant to a wide-angle camera. The telephoto camera can capture a high-resolution image of an object of interest in the view field of the wide-angle camera, providing enough information for recognition when the sign appears at too low a resolution in the wide-angle image. In the proposed system, traffic sign detection and classification are processed separately on the images from the wide-angle and telephoto cameras. In addition, in order to detect traffic signs against complex backgrounds under different lighting conditions, we propose a color transformation that is invariant to lighting changes. This transformation highlights the pattern of traffic signs by reducing the complexity of the background. Based on the color transformation, a multi-resolution detector with cascade mode is trained and used to locate traffic signs at low resolution in the image from the wide-angle camera. After detection, the system actively captures a high-resolution image of each detected traffic sign by controlling the direction and exposure time of the telephoto camera based on the information from the wide-angle camera. In classification, a hierarchical classifier is constructed and used to recognize the detected traffic signs in the high-resolution image from the telephoto camera. Finally, a set of experiments in the domain of traffic sign recognition is presented. The experimental results demonstrate that the proposed system can effectively recognize traffic signs at low resolution under different lighting conditions.
Suffoletto, Brian; Gharani, Pedram; Chung, Tammy; Karimi, Hassan
2018-02-01
Phone sensors could be useful in assessing changes in gait that occur with alcohol consumption. This study determined (1) the feasibility of collecting gait-related data during drinking occasions in the natural environment, and (2) how gait-related features measured by phone sensors relate to estimated blood alcohol concentration (eBAC). Ten young adult heavy drinkers were prompted to complete a 5-step gait task every hour from 8 pm to 12 am over four consecutive weekends. We collected 3-axis accelerometer, gyroscope, and magnetometer data from phone sensors, and computed 24 gait-related features using a sliding window technique. eBAC levels were calculated at each time point based on Ecological Momentary Assessment (EMA) of alcohol use. We used an artificial neural network model to analyze associations between sensor features and eBACs in training (70% of the data) and validation and test (30% of the data) datasets. We analyzed 128 data points where both eBAC and gait-related sensor data were captured, either when not drinking (n=60), while eBAC was ascending (n=55), or while eBAC was descending (n=13). Twenty-one data points were captured at times when the eBAC was greater than the legal limit (0.08 g/dl). Using a Bayesian regularized neural network, gait-related phone sensor features showed a high correlation with eBAC (Pearson's r>0.9), and >95% of estimated eBACs would fall within ±0.012 of actual eBAC. It is feasible to collect gait-related data from smartphone sensors during drinking occasions in the natural environment. Sensor-based features can be used to infer gait changes associated with elevated blood alcohol content. Copyright © 2017 Elsevier B.V. All rights reserved.
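Estimates like the eBAC values above are commonly derived from self-reported drink counts via the Widmark formula; a minimal sketch, with illustrative parameter values that are not taken from the study:

```python
# Hedged sketch: Widmark-style eBAC from EMA drink reports.
# Parameter values (drink size, distribution ratios, elimination rate)
# are common textbook defaults, not the study's calibration.

def estimate_ebac(std_drinks, weight_kg, hours_since_first_drink, sex="m"):
    """Estimated blood alcohol concentration in g/dL."""
    r = 0.68 if sex == "m" else 0.55       # Widmark body-water distribution ratio
    grams_alcohol = std_drinks * 14.0      # ~14 g ethanol per US standard drink
    beta = 0.015                           # elimination rate, g/dL per hour
    ebac = grams_alcohol / (r * weight_kg * 1000) * 100
    return max(ebac - beta * hours_since_first_drink, 0.0)
```

For example, `estimate_ebac(4, 70, 2)` gives roughly 0.088 g/dL, just above the 0.08 legal limit mentioned above.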
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2015-05-01
As is well known, the passive THz camera is a very promising tool for security applications: it allows concealed objects to be seen without contact with a person, and the camera poses no danger to the person. In previous papers, we demonstrated a new possibility of using the passive THz camera to observe a temperature difference on human skin when that difference is caused by different temperatures inside the body. To validate this claim, we performed a similar physical experiment using an IR camera. We show that a temperature trace appears on human skin when the temperature inside the body changes after drinking water. We used both commercially available software for processing images captured by an IR camera manufactured by Flir Corp. and our own computer code for processing these images. Using both codes, we clearly demonstrate the change in human skin temperature induced by drinking water. This phenomenon is important for the non-destructive detection of forbidden samples and substances concealed inside the human body without the use of X-rays. Earlier we demonstrated this possibility using THz radiation. The experiments described here can be applied to counter-terrorism problems. We also developed original filters for computer processing of images captured by IR cameras; applying them enhances the temperature resolution of the cameras.
Assessing the Potential of Low-Cost 3D Cameras for the Rapid Measurement of Plant Woody Structure
Nock, Charles A; Taugourdeau, Olivier; Delagrange, Sylvain; Messier, Christian
2013-01-01
Detailed 3D plant architectural data have numerous applications in plant science, but many existing approaches for 3D data collection are time-consuming and/or require costly equipment. Recently, there has been rapid growth in the availability of low-cost 3D cameras and related open-source software applications. 3D cameras may provide measurements of key components of plant architecture such as stem diameters and lengths; however, few tests of 3D cameras for the measurement of plant architecture have been conducted. Here, we measured Salix branch segments ranging from 2 to 13 mm in diameter with an Asus Xtion camera to quantify the limits and accuracy of branch diameter measurement with a 3D camera. By scanning at a variety of distances we also quantified the effect of scanning distance. In addition, we tested the ability of KinFu, a program for continuous 3D object scanning and modeling, and other similar software to accurately record stem diameters and capture plant form (<3 m in height). Given its ability to accurately capture the diameter of branches >6 mm, the Asus Xtion may provide a novel method for the collection of 3D data on the branching architecture of woody plants. Improvements in camera measurement accuracy and available software are likely to further improve the utility of 3D cameras for the plant sciences in the future. PMID:24287538
Relating transverse ray error and light fields in plenoptic camera images
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim; Tyo, J. Scott
2013-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. The camera image is focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The resultant image is an array of circular exit pupil images, each corresponding to the overlying lenslet. The position of the lenslet encodes the spatial information of the scene, whereas the sensor pixels encode the angular information for light incident on the lenslet. The 4D light field is therefore described by the 2D spatial information and 2D angular information captured by the plenoptic camera. In aberration theory, the transverse ray error relates the pupil coordinates of a given ray to its deviation from the ideal image point in the image plane and is consequently a 4D function as well. We demonstrate a technique for modifying the traditional transverse ray error equations to recover the 4D light field of a general scene. In the case of a well corrected optical system, this light field is easily related to the depth of various objects in the scene. Finally, the effects of sampling with both the lenslet array and the camera sensor on the 4D light field data are analyzed to illustrate the limitations of such systems.
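The 2D-spatial-plus-2D-angular indexing described above can be made concrete with a toy decoder; a sketch assuming an idealized square lenslet grid with integer pixel pitch (not any particular camera's geometry):

```python
# Illustrative sketch: decoding a plenoptic sensor image into the 4D light
# field L(s, t, u, v). Assumes an idealized square lenslet grid with pitch
# p pixels, no vignetting, and lenslets aligned to the pixel grid.

def decode_light_field(sensor, p):
    """sensor: 2D list of pixel values; p: lenslet pitch in pixels.
    Returns L[t][s][v][u]: (s, t) is the lenslet position (spatial
    coordinates), (u, v) is the pixel under that lenslet (angular
    coordinates)."""
    rows, cols = len(sensor), len(sensor[0])
    T, S = rows // p, cols // p
    return [[[[sensor[t * p + v][s * p + u] for u in range(p)]
              for v in range(p)]
             for s in range(S)]
            for t in range(T)]
```

Fixing (u, v) and varying (s, t) extracts one sub-aperture view of the scene, which is the basis for the depth relations discussed in the abstract.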
Fluorescent Imaging of Single Nanoparticles and Viruses on a Smart Phone
Wei, Qingshan; Qi, Hangfei; Luo, Wei; Tseng, Derek; Ki, So Jung; Wan, Zhe; Göröcs, Zoltán; Bentolila, Laurent A.; Wu, Ting-Ting; Sun, Ren; Ozcan, Aydogan
2014-01-01
Optical imaging of nanoscale objects, whether it is based on scattering or fluorescence, is a challenging task due to reduced detection signal-to-noise ratio and contrast at subwavelength dimensions. Here, we report a field-portable fluorescence microscopy platform installed on a smart phone for imaging of individual nanoparticles as well as viruses using a lightweight and compact opto-mechanical attachment to the existing camera module of the cell phone. This hand-held fluorescent imaging device utilizes (i) a compact 450 nm laser diode that creates oblique excitation on the sample plane with an incidence angle of ~75°, (ii) a long-pass thin-film interference filter to reject the scattered excitation light, (iii) an external lens creating 2× optical magnification, and (iv) a translation stage for focus adjustment. We tested the imaging performance of this smart-phone-enabled microscopy platform by detecting isolated 100 nm fluorescent particles as well as individual human cytomegaloviruses that are fluorescently labeled. The size of each detected nano-object on the cell phone platform was validated using scanning electron microscopy images of the same samples. This field-portable fluorescence microscopy attachment to the cell phone, weighing only ~186 g, could be used for specific and sensitive imaging of subwavelength objects including various bacteria and viruses and, therefore, could provide a valuable platform for the practice of nanotechnology in field settings and for conducting viral load measurements and other biomedical tests even in remote and resource-limited environments. PMID:24016065
Automatic helmet-wearing detection for law enforcement using CCTV cameras
NASA Astrophysics Data System (ADS)
Wonghabut, P.; Kumphong, J.; Satiennam, T.; Ung-arunyawee, R.; Leelapatra, W.
2018-04-01
The objective of this research is to develop an application for enforcing helmet wearing using CCTV cameras. The developed application aims to help law enforcement by police, eventually changing risk behaviours and consequently reducing the number of accidents and their severity. Conceptually, the application software, implemented using the C++ language and the OpenCV library, uses two CCTV cameras with different angles of view. Video frames recorded by the wide-angle CCTV camera are used to detect motorcyclists. If a motorcyclist without a helmet is found, the zoomed (narrow-angle) CCTV camera is activated to capture images of the violating motorcyclist and the motorcycle license plate in real time. Captured images are managed by a MySQL database for ticket issuing. The results show that the developed program is able to detect 81% of motorcyclists across various motorcycle types during daytime and night-time. The validation results reveal that the program achieves 74% accuracy in detecting motorcyclists without helmets.
Low-cost conversion of the Polaroid MD-4 land camera to a digital gel documentation system.
Porch, Timothy G; Erpelding, John E
2006-04-30
A simple, inexpensive design is presented for the rapid conversion of the popular MD-4 Polaroid land camera to a high quality digital gel documentation system. Images of ethidium bromide stained DNA gels captured using the digital system were compared to images captured on Polaroid instant film. Resolution and sensitivity were enhanced using the digital system. In addition to the low cost and superior image quality of the digital system, there is also the added convenience of real-time image viewing through the swivel LCD of the digital camera, wide flexibility of gel sizes, accurate automatic focusing, variable image resolution, and consistent ease of use and quality. Images can be directly imported to a computer by using the USB port on the digital camera, further enhancing the potential of the digital system for documentation, analysis, and archiving. The system is appropriate for use as a start-up gel documentation system and for routine gel analysis.
Sensor fusion of cameras and a laser for city-scale 3D reconstruction.
Bok, Yunsu; Choi, Dong-Geol; Kweon, In So
2014-11-04
This paper presents a sensor fusion system of cameras and a 2D laser sensor for large-scale 3D reconstruction. The proposed system is designed to capture data on a fast-moving ground vehicle. The system consists of six cameras and one 2D laser sensor, and they are synchronized by a hardware trigger. Reconstruction of 3D structures is done by estimating frame-by-frame motion and accumulating vertical laser scans, as in previous works. However, our approach does not assume near 2D motion, but estimates free motion (including absolute scale) in 3D space using both laser data and image features. In order to avoid the degeneration associated with typical three-point algorithms, we present a new algorithm that selects 3D points from two frames captured by multiple cameras. The problem of error accumulation is solved by loop closing, not by GPS. The experimental results show that the estimated path is successfully overlaid on the satellite images, such that the reconstruction result is very accurate.
Vibration extraction based on fast NCC algorithm and high-speed camera.
Lei, Xiujun; Jin, Yi; Guo, Jie; Zhu, Chang'an
2015-09-20
In this study, a high-speed camera system is developed to perform vibration measurement in real time and to avoid the added mass introduced by conventional contact measurements. The proposed system consists of a notebook computer and a high-speed camera which can capture images at up to 1000 frames per second. In order to process the captured images on the computer, the normalized cross-correlation (NCC) template tracking algorithm with subpixel accuracy is introduced. Additionally, a modified local search algorithm based on the NCC is proposed to reduce the computation time and to increase efficiency significantly. The modified algorithm can accomplish one displacement extraction 10 times faster than traditional template matching, without installing any target panel on the structures. Two experiments were carried out under laboratory and outdoor conditions to validate the accuracy and efficiency of the system in practice. The results demonstrated the high accuracy and efficiency of the camera system in extracting vibration signals.
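A minimal pure-Python illustration of NCC template matching with a local search window restricted to the neighbourhood of the previous match (integer-pixel only; the paper's implementation adds subpixel refinement and is heavily optimized):

```python
import math

# Sketch of NCC template tracking with a local search window, in the spirit
# of the modified algorithm described above. Frames are 2D lists of
# intensities; not optimized, for clarity only.

def ncc(patch, tmpl):
    """Normalized cross-correlation between two equal-size 2D patches."""
    n = len(tmpl) * len(tmpl[0])
    pm = sum(map(sum, patch)) / n
    tm = sum(map(sum, tmpl)) / n
    num = sp = st = 0.0
    for prow, trow in zip(patch, tmpl):
        for p, t in zip(prow, trow):
            num += (p - pm) * (t - tm)
            sp += (p - pm) ** 2
            st += (t - tm) ** 2
    d = math.sqrt(sp * st)
    return num / d if d else 0.0  # flat patches score 0

def track(frame, tmpl, last_xy, radius=5):
    """Search only a (2*radius+1)^2 neighbourhood of the previous match."""
    th, tw = len(tmpl), len(tmpl[0])
    best, best_xy = -2.0, last_xy
    for y in range(max(0, last_xy[1] - radius),
                   min(len(frame) - th, last_xy[1] + radius) + 1):
        for x in range(max(0, last_xy[0] - radius),
                       min(len(frame[0]) - tw, last_xy[0] + radius) + 1):
            patch = [row[x:x + tw] for row in frame[y:y + th]]
            score = ncc(patch, tmpl)
            if score > best:
                best, best_xy = score, (x, y)
    return best_xy, best
```

Restricting the search to a small window around the last known position is what yields the order-of-magnitude speedup over exhaustive template matching claimed above, since vibration displacements between consecutive frames are small.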
Citizen journalism in a time of crisis: lessons from a large-scale California wildfire
S. Gillette; J. Taylor; D.J. Chavez; R. Hodgson; J. Downing
2007-01-01
The accessibility of news production tools through consumer communication technology has made it possible for media consumers to become media producers. The evolution of media consumer to media producer has important implications for the shape of public discourse during a time of crisis. Citizen journalists cover crisis events using camera cell phones and digital...
Access Control for Home Data Sharing: Attitudes, Needs and Practices
2009-10-01
cameras, mobile phones and portable music players make creating and interacting with this content easy. Home users are increasingly interested in...messages, photos, home videos, journal files and home musical recordings. Many participants considered unauthorized access by strangers, acquaintances...configuration does not allow users to share different subsets of music with different people. Facebook supplies rich, customizable access controls for
Open Source Initiative Powers Real-Time Data Streams
NASA Technical Reports Server (NTRS)
2014-01-01
Under an SBIR contract with Dryden Flight Research Center, Creare Inc. developed a data collection tool called the Ring Buffered Network Bus. The technology has now been released under an open source license and is hosted by the Open Source DataTurbine Initiative. DataTurbine allows anyone to stream live data from sensors, labs, cameras, ocean buoys, cell phones, and more.
Advanced Digital Forensic and Steganalysis Methods
2009-02-01
investigation is simultaneously cropped, scaled, and processed, extending the technology when the digital image is printed, developing technology capable ...or other common processing operations). TECHNOLOGY APPLICATIONS 1. Determining the origin of digital images 2. Matching an image to a camera...Technology Transfer and Innovation Partnerships Division of Research P.O. Box 6000 State University of New York Binghamton, NY 13902-6000 Phone: 607-777
NASA Astrophysics Data System (ADS)
Chen, Shih-Hao; Chow, Chi-Wai
2015-01-01
Multiple-input and multiple-output (MIMO) schemes can extend the transmission capacity of light-emitting-diode (LED) based visible light communication (VLC) systems. A MIMO VLC system that uses the mobile-phone camera as the optical receiver (Rx) to receive the MIMO signal from an n×n red-green-blue (RGB) LED array is desirable. The key step in decoding this signal is to detect the signal direction. If the LED transmitter (Tx) is rotated, the Rx may not recognize the rotation and transmission errors can occur. In this work, we propose and demonstrate a novel hierarchical transmission scheme which can reduce the computational complexity of rotation detection in an LED array VLC system. We use the n×n RGB LED array as the MIMO Tx. In our study, a novel two-dimensional Hadamard coding scheme is proposed. By using the different LED color layers to indicate the rotation, a low-complexity rotation detection method can be used to improve the quality of the received signal. The detection correction rate is above 95% at indoor usage distances. Experimental results confirm the feasibility of the proposed scheme.
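Hadamard codes of the kind such a 2D coding scheme builds on are generated by the Sylvester construction; a short sketch (the paper's specific mapping of code rows onto RGB LED layers is not reproduced here):

```python
# Sylvester construction of a Hadamard matrix: mutually orthogonal +/-1
# rows, the standard building block for Hadamard coding schemes. The use
# here is illustrative; the paper's 2D/RGB layer mapping is its own design.

def hadamard(n):
    """n must be a power of two; returns an n x n matrix of +/-1 entries."""
    H = [[1]]
    while len(H) < n:
        # H_{2k} = [[H_k, H_k], [H_k, -H_k]]
        H = [row + row for row in H] + \
            [row + [-x for x in row] for row in H]
    return H
```

Orthogonality of the rows is what lets a receiver separate and identify the transmitted patterns even under noise, which is why such codes suit LED-array detection.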
Resource Allocation in Dynamic Environments
2012-10-01
Utility Curve for the TOC Camera 42 Figure 20: Utility Curves for Ground Vehicle Camera and Squad Camera 43 Figure 21: Facial-Recognition Utility...A Facial-Recognition Server (FRS) can receive images from smartphones the squads use, compare them to a local database, and then return the...fallback. In addition, each squad has the ability to capture images with a smartphone and send them to a Facial-Recognition Server in the TOC to
A stereoscopic lens for digital cinema cameras
NASA Astrophysics Data System (ADS)
Lipton, Lenny; Rupkalvis, John
2015-03-01
Live-action stereoscopic feature films are, for the most part, produced using a costly post-production process to convert planar cinematography into stereo-pair images and are only occasionally shot stereoscopically using bulky dual-cameras that are adaptations of the Ramsdell rig. The stereoscopic lens design described here might very well encourage more live-action image capture because it uses standard digital cinema cameras and workflow to save time and money.
CMOS Camera Array With Onboard Memory
NASA Technical Reports Server (NTRS)
Gat, Nahum
2009-01-01
A compact CMOS (complementary metal oxide semiconductor) camera system has been developed with high resolution (1.3 Megapixels), a USB (universal serial bus) 2.0 interface, and an onboard memory. Exposure times, and other operating parameters, are sent from a control PC via the USB port. Data from the camera can be received via the USB port and the interface allows for simple control and data capture through a laptop computer.
Using a Video Camera to Measure the Radius of the Earth
ERIC Educational Resources Information Center
Carroll, Joshua; Hughes, Stephen
2013-01-01
A simple but accurate method for measuring the Earth's radius using a video camera is described. A video camera was used to capture a shadow rising up the wall of a tall building at sunset. A free program called ImageJ was used to measure the time it took the shadow to rise a known distance up the building. The time, distance and length of…
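The geometry behind this style of measurement can be sketched as follows: if sunset at the base of the building precedes sunset at height h by t seconds, the Earth turns through θ = 2πt/T during the delay, and h = R(sec θ − 1). A simplified calculation under an equator-at-equinox idealization (illustrative of the principle, not the article's exact procedure):

```python
import math

# Simplified sunset-shadow estimate of Earth's radius, assuming the
# observer is on the equator at equinox so the rotation angle during the
# delay is theta = 2*pi*t/T and h = R*(sec(theta) - 1).

def earth_radius(h_m, t_s, day_s=86400.0):
    """Earth radius (m) from building height h_m and shadow-rise delay t_s."""
    theta = 2 * math.pi * t_s / day_s      # Earth's rotation during the delay
    return h_m * math.cos(theta) / (1 - math.cos(theta))
```

For a 50 m building and a delay of about 54.5 s this returns roughly 6.4 × 10^6 m, close to the accepted 6,371 km.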
Plenoptic Image Motion Deblurring.
Chandramouli, Paramanand; Jin, Meiguang; Perrone, Daniele; Favaro, Paolo
2018-04-01
We propose a method to remove motion blur in a single light field captured with a moving plenoptic camera. Since motion is unknown, we resort to a blind deconvolution formulation, where one aims to identify both the blur point spread function and the latent sharp image. Even in the absence of motion, light field images captured by a plenoptic camera are affected by a non-trivial combination of both aliasing and defocus, which depends on the 3D geometry of the scene. Therefore, motion deblurring algorithms designed for standard cameras are not directly applicable. Moreover, many state of the art blind deconvolution algorithms are based on iterative schemes, where blurry images are synthesized through the imaging model. However, current imaging models for plenoptic images are impractical due to their high dimensionality. We observe that plenoptic cameras introduce periodic patterns that can be exploited to obtain highly parallelizable numerical schemes to synthesize images. These schemes allow extremely efficient GPU implementations that enable the use of iterative methods. We can then cast blind deconvolution of a blurry light field image as a regularized energy minimization to recover a sharp high-resolution scene texture and the camera motion. Furthermore, the proposed formulation can handle non-uniform motion blur due to camera shake as demonstrated on both synthetic and real light field data.
From a Million Miles Away, NASA Camera Shows Moon Crossing Face of Earth
2015-08-05
This animation shows images of the far side of the moon, illuminated by the sun, as it crosses between the DSCOVR spacecraft's Earth Polychromatic Imaging Camera (EPIC) and the Earth - one million miles away. Credits: NASA/NOAA. A NASA camera aboard the Deep Space Climate Observatory (DSCOVR) satellite captured a unique view of the moon as it moved in front of the sunlit side of Earth last month. The series of test images shows the fully illuminated "dark side" of the moon that is never visible from Earth. The images were captured by NASA's Earth Polychromatic Imaging Camera (EPIC), a four-megapixel CCD camera and telescope on the DSCOVR satellite orbiting 1 million miles from Earth. From its position between the sun and Earth, DSCOVR conducts its primary mission of real-time solar wind monitoring for the National Oceanic and Atmospheric Administration (NOAA).
Gardner, Beth; Reppucci, Juan; Lucherini, Mauro; Royle, J Andrew
2010-11-01
We develop a hierarchical capture-recapture model for demographically open populations when auxiliary spatial information about location of capture is obtained. Such spatial capture-recapture data arise from studies based on camera trapping, DNA sampling, and other situations in which a spatial array of devices records encounters of unique individuals. We integrate an individual-based formulation of a Jolly-Seber type model with recently developed spatially explicit capture-recapture models to estimate density and demographic parameters for survival and recruitment. We adopt a Bayesian framework for inference under this model using the method of data augmentation which is implemented in the software program WinBUGS. The model was motivated by a camera trapping study of Pampas cats Leopardus colocolo from Argentina, which we present as an illustration of the model in this paper. We provide estimates of density and the first quantitative assessment of vital rates for the Pampas cat in the High Andes. The precision of these estimates is poor due likely to the sparse data set. Unlike conventional inference methods which usually rely on asymptotic arguments, Bayesian inferences are valid in arbitrary sample sizes, and thus the method is ideal for the study of rare or endangered species for which small data sets are typical.
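The spatial encounter model underlying such camera-trap analyses is typically a half-normal detection function, where the probability that a trap records an individual decays with distance from that individual's activity center; an illustrative sketch (not the paper's WinBUGS code; parameter values are hypothetical):

```python
import math

# Illustrative half-normal encounter model from spatially explicit
# capture-recapture: detection probability at a camera trap falls off
# with distance d from an animal's activity center. p0 and sigma below
# are hypothetical values, not estimates from the Pampas cat study.

def detection_prob(trap_xy, center_xy, p0=0.3, sigma=300.0):
    """Probability a trap at trap_xy detects an animal centered at center_xy.
    p0: baseline detection probability at distance zero.
    sigma: spatial scale of movement (same units as coordinates, e.g. m)."""
    d2 = (trap_xy[0] - center_xy[0]) ** 2 + (trap_xy[1] - center_xy[1]) ** 2
    return p0 * math.exp(-d2 / (2 * sigma ** 2))
```

In the full model, these detection probabilities enter the likelihood for each individual-by-trap encounter history, and data augmentation handles individuals that were never detected.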
Electronic Still Camera view of Aft end of Wide Field/Planetary Camera in HST
1993-12-06
S61-E-015 (6 Dec 1993) --- A close-up view of the aft part of the new Wide Field/Planetary Camera (WFPC-II) installed on the Hubble Space Telescope (HST). WFPC-II was photographed with the Electronic Still Camera (ESC) from inside Endeavour's cabin as astronauts F. Story Musgrave and Jeffrey A. Hoffman moved it from its stowage position onto the giant telescope. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
Style, Sarah; Beard, B James; Harris-Fry, Helen; Sengupta, Aman; Jha, Sonali; Shrestha, Bhim P; Rai, Anjana; Paudel, Vikas; Thondoo, Meelan; Pulkki-Brannstrom, Anni-Maria; Skordis-Worrall, Jolene; Manandhar, Dharma S; Costello, Anthony; Saville, Naomi M
2017-01-01
The increasing availability and capabilities of mobile phones make them a feasible means of data collection. Electronic Data Capture (EDC) systems have been used widely for public health monitoring and surveillance activities, but documentation of their use in complicated research studies requiring multiple systems is limited. This paper shares our experiences of designing and implementing a complex multi-component EDC system for a community-based four-armed cluster-Randomised Controlled Trial in the rural plains of Nepal, to help other researchers planning to use EDC for complex studies in low-income settings. We designed and implemented three interrelated mobile phone data collection systems to enrol and follow-up pregnant women (trial participants), and to support the implementation of trial interventions (women's groups, food and cash transfers). 720 field staff used basic phones to send simple coded text messages, 539 women's group facilitators used Android smartphones with Open Data Kit Collect, and 112 Interviewers, Coordinators and Supervisors used smartphones with CommCare. Barcoded photo ID cards encoded with participant information were generated for each enrolled woman. Automated systems were developed to download, recode and merge data for nearly real-time access by researchers. The systems were successfully rolled out and used by 1371 staff. A total of 25,089 pregnant women were enrolled, and 17,839 follow-up forms completed. Women's group facilitators recorded 5717 women's groups and the distribution of 14,647 food and 13,482 cash transfers. Using EDC sped up data collection and processing, although time needed for programming and set-up delayed the study inception. EDC using three interlinked mobile data management systems (FrontlineSMS, ODK and CommCare) was a feasible and effective method of data capture in a complex large-scale trial in the plains of Nepal. 
Despite challenges including prolonged set-up times, the systems met multiple data collection needs for users with varying levels of literacy and experience.
The Mole Mapper Study, mobile phone skin imaging and melanoma risk data collected using ResearchKit.
Webster, Dan E; Suver, Christine; Doerr, Megan; Mounts, Erin; Domenico, Lisa; Petrie, Tracy; Leachman, Sancy A; Trister, Andrew D; Bot, Brian M
2017-02-14
Sensor-embedded phones are an emerging facilitator for participant-driven research studies. Skin cancer research is particularly amenable to this approach, as phone cameras enable self-examination and documentation of mole abnormalities that may signal a progression towards melanoma. Aggregation and open sharing of this participant-collected data can be foundational for research and the development of early cancer detection tools. Here we describe data from Mole Mapper, an iPhone-based observational study built using the Apple ResearchKit framework. The Mole Mapper app was designed to collect participant-provided images and measurements of moles, together with demographic and behavioral information relating to melanoma risk. The study cohort includes 2,069 participants who contributed 1,920 demographic surveys, 3,274 mole measurements, and 2,422 curated mole images. Survey data recapitulates associations between melanoma and known demographic risks, with red hair as the most significant factor in this cohort. Participant-provided mole measurements indicate an average mole size of 3.95 mm. These data have been made available to engage researchers in a collaborative, multidisciplinary effort to better understand and prevent melanoma.
2017-12-08
These images of Earth were reconstructed from photos taken by three smartphones in orbit, or "PhoneSats." The trio of PhoneSats launched on April 21, 2013, aboard the Antares rocket from NASA's Wallops Flight Facility and ended a successful mission on April 27. The ultimate goal of the PhoneSat mission was to determine whether a consumer-grade smartphone can be used as the main flight avionics for a satellite in space. During their time in orbit, the three miniature satellites used their smartphone cameras to take pictures of Earth and transmitted these "image-data packets" to multiple ground stations. Every packet held a small piece of the big picture. As the data became available, the PhoneSat Team and multiple amateur radio operators around the world collaborated to piece together photographs from the tiny data packets. Read more: 1.usa.gov/ZsWnQG Credit: NASA/Ames
Vector-Based Ground Surface and Object Representation Using Cameras
2009-12-01
representations and it is a digital data structure used for the representation of a ground surface in geographical information systems (GIS). Figure...Vision API library, and the OpenCV library. Also, the POSIX thread library was utilized to quickly capture the source images from cameras. Both
Melanoma detection using a mobile phone app
NASA Astrophysics Data System (ADS)
Diniz, Luciano E.; Ennser, K.
2016-03-01
Mobile phones have had their processing power greatly increased since their invention a few decades ago. As a direct result of Moore's Law, this improvement has made available several applications that were impossible before. The aim of this project is to develop a mobile phone app, integrated with the phone's camera coupled to a magnifying lens, to help distinguish melanoma from benign lesions. The proposed system processes skin mole images and, using a score system, suggests whether or not the mole is a case of melanoma. This score system is based on the ABCDE signs of melanoma, and takes into account the area, the perimeter and the colors present in the nevus. It was calibrated and tested using images from the PH2 Dermoscopic Image Database from Pedro Hispano Hospital. The results show that the system can be useful, with an accuracy of up to 100% for malignant cases and 80% for benign cases (including common and atypical moles), when used in the test group.
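The ABCDE-style scoring described above can be illustrated with a toy calculation. The `abcd_style_score` helper, its thresholds, and its weights below are hypothetical assumptions for illustration only, not the paper's actual scoring system:

```python
import math

def abcd_style_score(area_mm2, perimeter_mm, n_colors):
    """Toy mole score loosely inspired by the ABCDE criteria.

    All thresholds and weights here are illustrative guesses, not the
    calibrated values used by the app described in the abstract.
    """
    # Border irregularity via the isoperimetric ratio:
    # equals 1.0 for a perfect circle, grows with boundary irregularity.
    irregularity = perimeter_mm ** 2 / (4 * math.pi * area_mm2)
    # Equivalent diameter from area; >6 mm is the classic 'D' warning sign.
    diameter_mm = 2 * math.sqrt(area_mm2 / math.pi)
    score = 0.0
    if irregularity > 1.3:   # irregular border
        score += 1.0
    if n_colors >= 3:        # color variegation
        score += 1.0
    if diameter_mm > 6.0:    # large diameter
        score += 1.0
    return score
```

A small, round, single-color mole scores 0, while a large, irregular, multi-colored one accumulates points toward a melanoma flag.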
Patterns of Detection and Capture Are Associated with Cohabiting Predators and Prey
Lazenby, Billie T.; Dickman, Christopher R.
2013-01-01
Avoidance behaviour can play an important role in structuring ecosystems but can be difficult to uncover and quantify. Remote cameras have great but as yet unrealized potential to uncover patterns arising from predatory, competitive or other interactions that structure animal communities by detecting species that are active at the same sites and recording their behaviours and times of activity. Here, we use multi-season, two-species occupancy models to test for evidence of interactions between introduced (feral cat Felis catus) and native predator (Tasmanian devil Sarcophilus harrisii) and predator and small mammal (swamp rat Rattus lutreolus velutinus) combinations at baited camera sites in the cool temperate forests of southern Tasmania. In addition, we investigate the capture rates of swamp rats in traps scented with feral cat and devil faecal odours. We observed that one species could reduce the probability of detecting another at a camera site. In particular, feral cats were detected less frequently at camera sites occupied by devils, whereas patterns of swamp rat detection associated with devils or feral cats varied with study site. Captures of swamp rats were not associated with odours on traps, although fewer captures tended to occur in traps scented with the faecal odour of feral cats. The observation that a native carnivorous marsupial, the Tasmanian devil, can suppress the detectability of an introduced eutherian predator, the feral cat, is consistent with a dominant predator – mesopredator relationship. Such a relationship has important implications for the interaction between feral cats and the lower trophic guilds that form their prey, especially if cat activity increases in places where devil populations are declining. More generally, population estimates derived from devices such as remote cameras need to acknowledge the potential for one species to change the detectability of another, and incorporate this in assessments of numbers and survival. 
PMID:23565172
Royo Sánchez, Ana Cristina; Aguilar Martín, Juan José; Santolaria Mazo, Jorge
2014-12-01
Motion capture systems are often used for checking and analyzing human motion in biomechanical applications. It is important, in this context, that the systems provide the best possible accuracy. Among existing capture systems, optical systems are those with the highest accuracy. In this paper, the development of a new calibration procedure for optical human motion capture systems is presented. The performance and effectiveness of that new calibration procedure are also checked by experimental validation. The new calibration procedure consists of two stages. In the first stage, initial estimators of the intrinsic and extrinsic parameters are sought. The camera calibration method used in this stage is the one proposed by Tsai. These parameters are determined from the camera characteristics, the spatial position of the camera, and the center of the capture volume. In the second stage, a simultaneous nonlinear optimization of all parameters is performed to identify the optimal values, which minimize the objective function. The objective function, in this case, minimizes two errors. The first error is the distance error between two markers placed on a wand. The second error is the position and orientation error of the retroreflective markers of a static calibration object. The real co-ordinates of the two objects are calibrated on a co-ordinate measuring machine (CMM). The OrthoBio system is used to validate the new calibration procedure. The resulting errors are 90% lower than those from the previous calibration software and broadly comparable with results from a similarly configured Vicon system.
Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.
Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael
2016-11-01
To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
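The "rolling shutter as a high-frequency 1D sensor" idea above can be quantified with a back-of-the-envelope rate calculation. The 1080-row sensor height assumed below is illustrative, not taken from the paper:

```python
def row_sample_rate_hz(frame_rate_hz, rows_per_frame):
    # A rolling shutter exposes image rows sequentially, so every frame
    # delivers rows_per_frame 1D measurements spread over one frame period.
    return frame_rate_hz * rows_per_frame

# A 120 Hz camera (the GoPro rate quoted in the abstract) with a
# hypothetical 1080-row sensor yields 129,600 row samples per second,
# which is why a small cluster of such cameras can support
# pose updates far above the nominal frame rate.
rate = row_sample_rate_hz(120, 1080)
```

This is only an upper bound on the usable update rate; the achievable tracking frequency also depends on how many rows each pose update consumes.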
Active 3D camera design for target capture on Mars orbit
NASA Astrophysics Data System (ADS)
Cottin, Pierre; Babin, François; Cantin, Daniel; Deslauriers, Adam; Sylvestre, Bruno
2010-04-01
During the ESA Mars Sample Return (MSR) mission, a sample canister launched from Mars will be autonomously captured by an orbiting satellite. We present the concept and the design of an active 3D camera supporting the orbiter navigation system during the rendezvous and capture phase. This camera aims at providing the range and bearing of a 20 cm diameter canister from 2 m to 5 km within a 20° field-of-view without moving parts (scannerless). The concept exploits the sensitivity and the gating capability of a gated intensified camera. It is supported by a pulsed source based on an array of laser diodes with adjustable amplitude and pulse duration (from nanoseconds to microseconds). The ranging capability is obtained by adequately controlling the timing between the acquisition of 2D images and the emission of the light pulses. Three modes of acquisition are identified to accommodate the different levels of ranging and bearing accuracy and the 3D data refresh rate. To come up with a single 3D image, each mode requires a different number of images to be processed. These modes can be applied to the different approach phases. The entire concept of operation of this camera is detailed with an emphasis on the extreme lighting conditions. Its uses for other space missions and terrestrial applications are also highlighted. This design is implemented in a prototype with shorter ranging capabilities for concept validation. Preliminary results obtained with this prototype are also presented. This work is financed by the Canadian Space Agency.
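The ranging principle described above, controlling the timing between light-pulse emission and image acquisition, rests on the round-trip travel of the pulse. A minimal sketch of that relation (standard time-of-flight geometry, not the camera's actual control code):

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def gate_range_m(gate_delay_s):
    # The pulse covers the camera-to-target distance twice (out and back),
    # so the range imaged by a gate opened gate_delay_s after emission
    # is c * delay / 2.
    return C * gate_delay_s / 2.0

def gate_delay_s(range_m):
    # Inverse relation: the gate delay needed to image a target at range_m.
    return 2.0 * range_m / C
```

For the 2 m to 5 km envelope quoted in the abstract, the corresponding gate delays span roughly 13 nanoseconds to 33 microseconds, which matches the nanosecond-to-microsecond pulse durations mentioned.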
A computational approach to real-time image processing for serial time-encoded amplified microscopy
NASA Astrophysics Data System (ADS)
Oikawa, Minoru; Hiyama, Daisuke; Hirayama, Ryuji; Hasegawa, Satoki; Endo, Yutaka; Sugie, Takahisa; Tsumura, Norimichi; Kuroshima, Mai; Maki, Masanori; Okada, Genki; Lei, Cheng; Ozeki, Yasuyuki; Goda, Keisuke; Shimobaba, Tomoyoshi
2016-03-01
High-speed imaging is an indispensable technique, particularly for identifying or analyzing fast-moving objects. The serial time-encoded amplified microscopy (STEAM) technique was proposed to enable us to capture images with a frame rate 1,000 times faster than conventional methods such as CCD (charge-coupled device) cameras. The application of this high-speed STEAM imaging technique to a real-time system, such as flow cytometry for a cell-sorting system, requires successively processing a large number of captured images with high throughput in real time. We are now developing a high-speed flow cytometer system including a STEAM camera. In this paper, we describe our approach to processing these large amounts of image data in real time. We use an analog-to-digital converter with up to 7.0 Gsamples/s and 8-bit resolution to capture the output voltage signal that carries grayscale images from the STEAM camera. The direct data output from the STEAM camera therefore generates a continuous 7.0 GB/s stream. We provided a field-programmable gate array (FPGA) device as a digital signal pre-processor for image reconstruction and finding objects in a microfluidic channel at high data rates in real time. We also utilized graphics processing unit (GPU) devices to accelerate the identification of the reconstructed images. We built our prototype system, which includes a STEAM camera, an FPGA device, and a GPU device, and evaluated its performance in real-time identification of small particles (beads), as virtual biological cells, flowing through a microfluidic channel.
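The quoted data rate follows directly from the ADC parameters given in the abstract; a quick sanity check:

```python
SAMPLE_RATE = 7.0e9      # ADC throughput: 7.0 Gsamples/s
BITS_PER_SAMPLE = 8      # 8-bit resolution -> exactly 1 byte per sample

# Each sample occupies one byte, so the raw stream from the STEAM
# camera front end is SAMPLE_RATE bytes per second.
bytes_per_second = SAMPLE_RATE * BITS_PER_SAMPLE / 8
gigabytes_per_second = bytes_per_second / 1e9   # 7.0 GB/s, as stated
```

This 7.0 GB/s figure is what motivates placing an FPGA directly behind the ADC: no general-purpose host interface sustains that ingest rate without hardware pre-processing.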
Loiselle, Bette A.
2018-01-01
Terrestrial mammals are important components of lowland forests in Amazonia (as seed dispersal agents, herbivores, predators) but there are relatively few detailed studies from areas that have not been affected by human activities (e.g., hunting, logging). Yet, such information is needed to evaluate effects of humans elsewhere. We used camera traps to sample medium to large-sized terrestrial mammals at a site in lowland forests of eastern Ecuador, one of the most biologically rich areas in the world. We deployed cameras on two study plots in terra firme forest at Tiputini Biodiversity Station. Sixteen cameras were arranged 200 m apart in a 4 × 4 grid on each plot. Cameras were operated for 60 days in January–March, 2014–2017, for a total of 3,707 and 3,482 trap-days on the two plots (Harpia, Puma). A total of 28 species were recorded; 26 on Harpia and 25 on Puma. Number of species recorded each year was slightly greater on Harpia whereas overall capture rates (images/100 trap-days) were higher on Puma. Although most species were recorded on each plot, differences in capture rates meant that yearly samples on a given plot were more similar to each other than to samples on the other plot. Images of most species showed a clumped distribution pattern on each plot; Panthera onca was the only species that did not show a clumped distribution on either plot. Images at a given camera location showed no evidence of autocorrelation with numbers of images at nearby camera locations, suggesting that species were responding to small-scale differences in habitat conditions. A redundancy analysis showed that environmental features within 50 or 100 m of camera locations (e.g., elevation, variation in elevation, slope, distance to streams) accounted for significant amounts of variation in distribution patterns of species. 
Composition and relative importance based on capture rates were very similar to results from cameras located along trails at the same site; similarities decreased at increasing spatial scales based on comparisons with results from other sites in Ecuador and Peru. PMID:29333349
A Digital Approach to Learning Petrology
NASA Astrophysics Data System (ADS)
Reid, M. R.
2011-12-01
In the undergraduate igneous and metamorphic petrology course at Northern Arizona University, we are employing petrographic microscopes equipped with relatively inexpensive (~$200) digital cameras that are linked to pen-tablet computers. The camera-tablet systems can assist student learning in a variety of ways. Images provided by the tablet computers can be used for helping students filter the visually complex specimens they examine. Instructors and students can simultaneously view the same petrographic features captured by the cameras and exchange information about them by pointing to salient features using the tablet pen. These images can become part of a virtual mineral/rock/texture portfolio tailored to individual students’ needs. Captured digital illustrations can be annotated with digital ink or computer graphics tools; this activity emulates essential features of more traditional line drawings (visualizing an appropriate feature and selecting a representative image of it, internalizing the feature through studying and annotating it) while minimizing the frustration that many students feel about drawing. In these ways, we aim to help a student progress more efficiently from novice to expert. A number of our petrology laboratory exercises involve use of the camera-tablet systems for collaborative learning. Observational responsibilities are distributed among individual members of teams in order to increase interdependence and accountability, and to encourage efficiency. Annotated digital images are used to share students' findings and arrive at an understanding of an entire rock suite. This interdependence increases the individual's sense of responsibility for their work, and reporting out encourages students to practice use of technical vocabulary and to defend their observations. Pre- and post-course student interest in the camera-tablet systems has been assessed.
In a post-course survey, the majority of students reported that, if available, they would use camera-tablet systems to capture microscope images (77%) and to make notes on images (71%). An informal focus group recommended introducing the cameras as soon as possible and having them available for making personal mineralogy/petrology portfolios. Because the stakes are perceived as high, use of the camera-tablet systems for peer-peer learning has been progressively modified to bolster student confidence in their collaborative efforts.
Dynamic Human Body Modeling Using a Single RGB Camera.
Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan
2016-03-18
In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smart phone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website, which lets students explore light, color and pixels, manipulate color in images and make measurements, will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu
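The idea that every pixel is a spatially organized measurement can be shown with a few lines of array arithmetic. The green-dominance metric below is an illustrative vegetation proxy of the kind used in repeat photography, not the DEW software's actual algorithm:

```python
import numpy as np

def green_fraction(rgb_image):
    """Fraction of pixels whose green channel exceeds both red and blue.

    rgb_image: H x W x 3 array of color intensities (any integer dtype).
    A crude, illustrative proxy for foliage cover in a repeat photograph.
    """
    r = rgb_image[..., 0].astype(int)
    g = rgb_image[..., 1].astype(int)
    b = rgb_image[..., 2].astype(int)
    return float(((g > r) & (g > b)).mean())
```

Comparing this fraction across photographs taken from the same Picture Post over a season would reveal green-up and leaf-off trends, which is the change-over-time measurement the abstract describes.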
Multi-frame knowledge based text enhancement for mobile phone captured videos
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-02-01
In this study, we explore automated text recognition and enhancement using mobile phone captured videos of store receipts. We propose a method which includes Optical Character Recognition (OCR) enhanced by our proposed Row Based Multiple Frame Integration (RB-MFI) and Knowledge Based Correction (KBC) algorithms. In this method, first, the trained OCR engine is used for recognition; then, the RB-MFI algorithm is applied to the output of the OCR. The RB-MFI algorithm determines and combines the most accurate rows of the text outputs extracted using OCR from multiple frames of the video. After RB-MFI, the KBC algorithm is applied to these rows to correct erroneous characters. Results of the experiments show that the proposed video-based approach, which includes the RB-MFI and KBC algorithms, increases the word recognition rate to 95% and the character recognition rate to 98%.
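The row-level merging step can be sketched as a per-row vote across frames. The abstract does not specify RB-MFI's exact accuracy criterion, so the simple majority vote used in `row_vote` below is an assumption standing in for it:

```python
from collections import Counter

def row_vote(frames_rows):
    """Merge per-row OCR outputs from multiple video frames.

    frames_rows: list of frames, each a list of row strings produced
    by OCR. Hypothetical stand-in for RB-MFI: for each row index, the
    string seen most often across frames is kept, on the premise that
    OCR errors vary frame to frame while correct readings repeat.
    """
    n_rows = max(len(rows) for rows in frames_rows)
    merged = []
    for i in range(n_rows):
        candidates = [rows[i] for rows in frames_rows if i < len(rows)]
        merged.append(Counter(candidates).most_common(1)[0][0])
    return merged
```

A knowledge-based pass (the KBC stage) would then correct residual character errors in the voted rows, e.g. using a product-name dictionary.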
Projection model for flame chemiluminescence tomography based on lens imaging
NASA Astrophysics Data System (ADS)
Wan, Minggang; Zhuang, Jihui
2018-04-01
For flame chemiluminescence tomography (FCT) based on lens imaging, the projection model is essential because it formulates the mathematical relation between the flame projections captured by cameras and the chemiluminescence field, and, through this relation, the field is reconstructed. This work proposed the blurry-spot (BS) model, which takes more universal assumptions and has higher accuracy than the widely applied line-of-sight model. By combining the geometrical camera model and the thin-lens equation, the BS model takes into account perspective effect of the camera lens; by combining ray-tracing technique and Monte Carlo simulation, it also considers inhomogeneous distribution of captured radiance on the image plane. Performance of these two models in FCT was numerically compared, and results showed that using the BS model could lead to better reconstruction quality in wider application ranges.
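The thin-lens equation that the BS model combines with the geometric camera model relates object and image distances through the focal length. A minimal helper (standard optics, not the paper's code; the function name is ours):

```python
def image_distance(focal_mm, object_mm):
    # Thin-lens equation: 1/f = 1/d_o + 1/d_i, solved for the image
    # distance d_i. Valid for object_mm > focal_mm (real image).
    return focal_mm * object_mm / (object_mm - focal_mm)
```

Because d_i varies with d_o, points of the flame at different depths focus at different planes; modeling that depth-dependent blur on the sensor is what distinguishes the blurry-spot model from a pure line-of-sight projection.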
Li, Tian-Jiao; Li, Sai; Yuan, Yuan; Liu, Yu-Dong; Xu, Chuan-Long; Shuai, Yong; Tan, He-Ping
2017-04-03
Plenoptic cameras are used for capturing flames in studies of high-temperature phenomena. However, simulations of plenoptic camera models can be used prior to the experiment to improve experimental efficiency and reduce cost. In this work, microlens arrays, which are based on the established light field camera model, are optimized into a hexagonal structure with three types of microlenses. With this improved plenoptic camera model, light field imaging of static objects and flames is simulated using the calibrated parameters of the Raytrix camera (R29). The optimized models improve the image resolution, imaging screen utilization, and shooting range of depth of field.
Depth estimation using a lightfield camera
NASA Astrophysics Data System (ADS)
Roper, Carissa
The latest innovation to camera design has come in the form of the lightfield, or plenoptic, camera that captures 4-D radiance data rather than just the 2-D scene image via microlens arrays. With the spatial and angular light ray data now recorded on the camera sensor, it is feasible to construct algorithms that can estimate depth of field in different portions of a given scene. There are limitations to the precision due to hardware structure and the sheer number of scene variations that can occur. In this thesis, the potential of digital image analysis and spatial filtering to extract depth information is tested on the commercially available plenoptic camera.
Extreme ultra-violet movie camera for imaging microsecond time scale magnetic reconnection
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chai, Kil-Byoung; Bellan, Paul M.
2013-12-15
An ultra-fast extreme ultra-violet (EUV) movie camera has been developed for imaging magnetic reconnection in the Caltech spheromak/astrophysical jet experiment. The camera consists of a broadband Mo:Si multilayer mirror, a fast decaying YAG:Ce scintillator, a visible light block, and a high-speed visible light CCD camera. The camera can capture EUV images as fast as 3.3 × 10^6 frames per second with 0.5 cm spatial resolution. The spectral range is from 20 eV to 60 eV. EUV images reveal strong, transient, highly localized bursts of EUV radiation when magnetic reconnection occurs.
Collection and Analysis of Crowd Data with Aerial, Rooftop, and Ground Views
2014-11-10
collected these datasets using different aircraft. Erista 8 HL OctaCopter is a heavy-lift aerial platform capable of using high-resolution cinema ...is another high-resolution camera that is cinema grade and high quality, with the capability of capturing videos with 4K resolution at 30 frames per...292.58 Imaging Systems and Accessories Blackmagic Production Camera 4 Crowd Counting using 4K Cameras High resolution cinema grade digital video
HST Solar Arrays photographed by Electronic Still Camera
NASA Technical Reports Server (NTRS)
1993-01-01
This medium close-up view of one of two original Solar Arrays (SA) on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC), and downlinked to ground controllers soon afterward. This view shows the cell side of the minus V-2 panel. Electronic still photography is a technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality.
Practical aspects of modern interferometry for optical manufacturing quality control: Part 2
NASA Astrophysics Data System (ADS)
Smythe, Robert
2012-07-01
Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space based satellite imaging and DVD and Blu-Ray disks are all enabled by phase shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful towards the practical use of interferometers. An understanding of the parameters that drive system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.
Practical aspects of modern interferometry for optical manufacturing quality control, Part 3
NASA Astrophysics Data System (ADS)
Smythe, Robert A.
2012-09-01
Modern phase shifting interferometers enable the manufacture of optical systems that drive the global economy. Semiconductor chips, solid-state cameras, cell phone cameras, infrared imaging systems, space-based satellite imaging, and DVD and Blu-Ray disks are all enabled by phase-shifting interferometers. Theoretical treatments of data analysis and instrument design advance the technology but often are not helpful toward the practical use of interferometers. An understanding of the parameters that drive the system performance is critical to produce useful results. Any interferometer will produce a data map and results; this paper, in three parts, reviews some of the key issues to minimize error sources in that data and provide a valid measurement.
NASA Astrophysics Data System (ADS)
Bouma, Henri; Burghouts, Gertjan; den Hollander, Richard; van der Zee, Sophie; Baan, Jan; ten Hove, Johan-Martijn; van Diepen, Sjaak; van den Haak, Paul; van Rest, Jeroen
2016-10-01
Deception detection is valuable in the security domain to distinguish truth from lies. It is desirable in many security applications, such as suspect and witness interviews and airport passenger screening. Interviewers are constantly trying to assess the credibility of a statement, usually based on intuition without objective technical support. However, psychological research has shown that humans can hardly perform better than random guessing. Deception detection is a multi-disciplinary research area with an interest from different fields, such as psychology and computer science. In the last decade, several developments have helped to improve the accuracy of lie detection (e.g., with a concealed information test, increasing the cognitive load, or measurements with motion capture suits) and relevant cues have been discovered (e.g., eye blinking or fiddling with the fingers). With an increasing presence of mobile phones and bodycams in society, a mobile, stand-off, automatic deception detection methodology based on various cues from the whole body would create new application opportunities. In this paper, we study the feasibility of measuring these visual cues automatically on different parts of the body, laying the groundwork for stand-off deception detection in more flexible and mobile deployable sensors, such as body-worn cameras. We give an extensive overview of recent developments in two communities: in the behavioral-science community the developments that improve deception detection with a special attention to the observed relevant non-verbal cues, and in the computer-vision community the recent methods that are able to measure these cues. The cues are extracted from several body parts: the eyes, the mouth, the head and the full-body pose. We performed an experiment using several state-of-the-art video-content-analysis (VCA) techniques to assess the quality of robustly measuring these visual cues.
Video at Sea: Telling the Stories of the International Ocean Discovery Program
NASA Astrophysics Data System (ADS)
Wright, M.; Harned, D.
2014-12-01
Seagoing science expeditions offer an ideal opportunity for storytelling. While many disciplines involve fieldwork, few offer the adventure of spending two months at sea on a vessel hundreds of miles from shore with several dozen strangers from all over the world. As a medium, video is nearly ideal for telling these stories; it can capture the thrill of discovery, the agony of disappointment, the everyday details of life at sea, and everything in between. At the International Ocean Discovery Program (IODP, formerly the Integrated Ocean Drilling Program), we have used video as a storytelling medium for several years with great success. Over this timeframe, camera equipment and editing software have become cheaper and easier to use, while web sites such as YouTube and Vimeo have enabled sharing with just a few mouse clicks. When it comes to telling science stories with video, the barriers to entry have never been lower. As such, we have experimented with many different approaches and a wide range of styles. On one end of the spectrum, live "ship-to-shore" broadcasts with school groups - conducted with an iPad and free videoconferencing software such as Skype and Zoom - enable curious minds to engage directly with scientists in real-time. We have also contracted with professional videographers and animators who offer the experience, skill, and equipment needed to produce polished clips of the highest caliber. Amateur videographers (including some scientists looking to make use of their free time on board) have shot and produced impressive shorts using little more than a phone camera. In this talk, I will provide a brief overview of our efforts to connect with the public using video, including a look at how effective certain tactics are for connecting to specific audiences.
Antibiogramj: A tool for analysing images from disk diffusion tests.
Alonso, C A; Domínguez, C; Heras, J; Mata, E; Pascual, V; Torres, C; Zarazaga, M
2017-05-01
Disk diffusion testing, known as antibiogram, is widely applied in microbiology to determine the antimicrobial susceptibility of microorganisms. The measurement of the diameter of the zone of growth inhibition of microorganisms around the antimicrobial disks in the antibiogram is frequently performed manually by specialists using a ruler. This is a time-consuming and error-prone task that might be simplified using automated or semi-automated inhibition zone readers. However, most readers are expensive instruments with embedded software that require significant changes in laboratory design and workflow. Based on the workflow employed by specialists to determine the antimicrobial susceptibility of microorganisms, we have designed a software tool that, from images of disk diffusion tests, semi-automatises the process. Standard computer vision techniques are employed to achieve such an automatisation. We present AntibiogramJ, a user-friendly and open-source software tool to semi-automatically determine, measure and categorise inhibition zones of images from disk diffusion tests. AntibiogramJ is implemented in Java and deals with images captured with any device that incorporates a camera, including digital cameras and mobile phones. The fully automatic procedure of AntibiogramJ for measuring inhibition zones achieves an overall agreement of 87% with an expert microbiologist; moreover, AntibiogramJ includes features to easily detect when the automatic reading is not correct and fix it manually to obtain the correct result. AntibiogramJ is a user-friendly, platform-independent, open-source, and free tool that, to the best of our knowledge, is the most complete software tool for antibiogram analysis without requiring any investment in new equipment or changes in the laboratory. Copyright © 2017 Elsevier B.V. All rights reserved.
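The basic measurement such a reader performs can be illustrated with a short sketch: given a binary mask of the inhibition zone, report its equivalent-circle diameter at a known image scale. This is a generic image-analysis step under stated assumptions (a pre-segmented mask and a calibrated scale), not AntibiogramJ's actual algorithm, which the abstract does not detail:

```python
import numpy as np

def zone_diameter_mm(mask, mm_per_px):
    """Equivalent-circle diameter of an inhibition zone.

    mask: 2D boolean array, True for pixels inside the zone.
    mm_per_px: image scale, millimetres per pixel.
    The zone is treated as a disk of the same pixel area.
    """
    area_px = int(np.count_nonzero(mask))
    return 2.0 * np.sqrt(area_px / np.pi) * mm_per_px
```

A segmentation step (thresholding the dark halo around each disk) would produce the mask in practice; the categorisation into susceptible/intermediate/resistant then follows from breakpoint tables.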
NASA Astrophysics Data System (ADS)
Cunnah, David
2014-07-01
In this paper I propose a method of calculating the time between line captures in a standard complementary metal-oxide-semiconductor (CMOS) webcam using the rolling shutter effect when filming a guitar. The exercise links the concepts of wavelength and frequency, while outlining the basic operation of a CMOS camera through vertical line capture.
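The calculation the abstract describes can be sketched in a few lines: a vibrating string filmed with a rolling-shutter sensor appears as a spatial wave, and if one full wave of that pattern spans N image rows while the string vibrates at f Hz, each row was captured 1/(f·N) seconds after the previous one. This is a minimal illustration; the 82.4 Hz low E string and the 150-row count below are example values, not data from the paper:

```python
def line_capture_time(string_freq_hz, rows_per_period):
    """Time between successive row captures in a rolling-shutter CMOS sensor.

    One full oscillation of the string (period 1/f seconds) is spread
    across `rows_per_period` image rows, so each row lags the previous
    by (1/f) / rows_per_period seconds.
    """
    return 1.0 / (string_freq_hz * rows_per_period)

# Example: a low E string (82.4 Hz) whose wavy pattern repeats every
# 150 rows implies a line time of roughly 81 microseconds.
t_line = line_capture_time(82.4, 150)
```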
Sophie in the Snow: A Simple Approach to Datalogging and Modelling in Physics
ERIC Educational Resources Information Center
Oldknow, Adrian; Huyton, Pip; Galloway, Ian
2010-01-01
Most students now have access to devices such as digital cameras and mobile phones that are capable of taking short video clips outdoors. Such clips can be used with powerful ICT tools, such as Tracker, Excel and TI-Nspire, to extract time and coordinate data about a moving object, to produce scattergrams and to fit models. In this article we…
YouTube War: Fighting in a World of Cameras in Every Cell Phone and Photoshop on Every Computer
2009-11-01
Electronic camera-management system for 35-mm and 70-mm film cameras
NASA Astrophysics Data System (ADS)
Nielsen, Allan
1993-01-01
Military and commercial test facilities have been tasked with increasingly sophisticated data collection and data reduction. A state-of-the-art electronic control system for high-speed 35-mm and 70-mm film cameras designed to meet these tasks is described. Data collection in today's test range environment is difficult at best. The need for a completely integrated image and data collection system is mandated by the increasingly complex test environment. Instrumentation film cameras have been used on test ranges to capture images for decades. Their high frame rates coupled with exceptionally high resolution make them an essential part of any test system. In addition to documenting test events, today's camera system is required to perform many additional tasks. Data reduction to establish TSPI (time-space-position information) may be performed after a mission and is subject to all of the variables present in documenting the mission. A typical scenario would consist of multiple cameras located on tracking mounts capturing the event along with azimuth and elevation position data. Corrected data can then be reduced using each camera's time and position deltas and calculating the TSPI of the object using triangulation. An electronic camera control system designed to meet these requirements has been developed by Photo-Sonics, Inc. The feedback received from test technicians at range facilities throughout the world led Photo-Sonics to design the features of this control system. These prominent new features include: a comprehensive safety management system, full local or remote operation, frame rate accuracy of less than 0.005 percent, and phase-locking capability to IRIG-B. In fact, IRIG-B phase-lock operation of multiple cameras can reduce the time-distance delta of a test object traveling at Mach 1 to less than one inch during data reduction.
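The triangulation step outlined above can be sketched in simplified planar form: two tracking mounts at known positions each report an azimuth bearing to the same object, and the object's position is the intersection of the two lines of sight. This is a generic illustration, not Photo-Sonics' reduction software; real TSPI reduction also uses elevation angles and per-camera time deltas:

```python
import math

def triangulate_2d(p1, az1_deg, p2, az2_deg):
    """Intersect two azimuth bearings (degrees, measured from the +x
    axis, counter-clockwise) taken from known camera positions p1 and
    p2, returning the (x, y) of the tracked object.
    """
    # Unit direction vectors of the two lines of sight.
    d1 = (math.cos(math.radians(az1_deg)), math.sin(math.radians(az1_deg)))
    d2 = (math.cos(math.radians(az2_deg)), math.sin(math.radians(az2_deg)))
    # Solve p1 + t*d1 = p2 + s*d2 for t using 2x2 cross products.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        raise ValueError("bearings are parallel; no unique intersection")
    t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two cameras 1000 m apart sighting the same object: bearings of 45°
# and 135° intersect at (500, 500).
pos = triangulate_2d((0.0, 0.0), 45.0, (1000.0, 0.0), 135.0)
```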
Detection and Spatial Mapping of Mercury Contamination in Water Samples Using a Smart-Phone
2014-01-01
Detection of environmental contamination such as trace-level toxic heavy metal ions mostly relies on bulky and costly analytical instruments. However, a considerable global need exists for portable, rapid, specific, sensitive, and cost-effective detection techniques that can be used in resource-limited and field settings. Here we introduce a smart-phone-based hand-held platform that allows the quantification of mercury(II) ions in water samples with parts per billion (ppb) level of sensitivity. For this task, we created an integrated opto-mechanical attachment to the built-in camera module of a smart-phone to digitally quantify mercury concentration using a plasmonic gold nanoparticle (Au NP) and aptamer based colorimetric transmission assay that is implemented in disposable test tubes. With this smart-phone attachment that weighs <40 g, we quantified mercury(II) ion concentration in water samples by using a two-color ratiometric method employing light-emitting diodes (LEDs) at 523 and 625 nm, where a custom-developed smart application was utilized to process each acquired transmission image on the same phone to achieve a limit of detection of ∼3.5 ppb. Using this smart-phone-based detection platform, we generated a mercury contamination map by measuring water samples at over 50 locations in California (USA), taken from city tap water sources, rivers, lakes, and beaches. With its cost-effective design, field-portability, and wireless data connectivity, this sensitive and specific heavy metal detection platform running on cellphones could be rather useful for distributed sensing, tracking, and sharing of water contamination information as a function of both space and time. PMID:24437470
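The two-color ratiometric readout can be sketched as follows. The structure (normalizing the sample tube's 523 nm/625 nm transmission ratio by a control tube, then inverting a calibration curve) follows the abstract, but the calibration constants below are placeholders for illustration, not values from the paper:

```python
def ratiometric_signal(i523_sample, i625_sample, i523_control, i625_control):
    """Two-color ratiometric readout: the sample tube's transmission
    ratio (523 nm over 625 nm) is normalized by a control tube to
    cancel drift in the LEDs and the phone camera. Inputs are summed
    pixel intensities from the transmission images."""
    return (i523_sample / i625_sample) / (i523_control / i625_control)

def estimate_ppb(signal, a=1.0, b=-0.012):
    """Invert a hypothetical linear calibration signal = a + b * ppb.
    The constants a and b are placeholders standing in for a fit
    against reference samples of known concentration."""
    return (signal - a) / b

# Example intensities: mercury-bound Au NP aggregation lowers the
# sample's 523/625 ratio relative to the control.
ppb = estimate_ppb(ratiometric_signal(4.2e6, 5.0e6, 4.6e6, 5.0e6))
```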
Optimising Camera Traps for Monitoring Small Mammals
Glen, Alistair S.; Cockburn, Stuart; Nichols, Margaret; Ekanayake, Jagath; Warburton, Bruce
2013-01-01
Practical techniques are required to monitor invasive animals, which are often cryptic and occur at low density. Camera traps have potential for this purpose, but may have problems detecting and identifying small species. A further challenge is how to standardise the size of each camera's field of view so capture rates are comparable between different places and times. We investigated the optimal specifications for a low-cost camera trap for small mammals. The factors tested were 1) trigger speed, 2) passive infrared vs. microwave sensor, 3) white vs. infrared flash, and 4) still photographs vs. video. We also tested a new approach to standardise each camera's field of view. We compared the success rates of four camera trap designs in detecting and taking recognisable photographs of captive stoats (Mustela erminea), feral cats (Felis catus) and hedgehogs (Erinaceus europaeus). Trigger speeds of 0.2–2.1 s captured photographs of all three target species unless the animal was running at high speed. The camera with a microwave sensor was prone to false triggers, and often failed to trigger when an animal moved in front of it. A white flash produced photographs that were more readily identified to species than those obtained under infrared light. However, a white flash may be more likely to frighten target animals, potentially affecting detection probabilities. Video footage achieved similar success rates to still cameras but required more processing time and computer memory. Placing two camera traps side by side achieved a higher success rate than using a single camera. Camera traps show considerable promise for monitoring invasive mammal control operations. Further research should address how best to standardise the size of each camera's field of view, maximise the probability that an animal encountering a camera trap will be detected, and eliminate visible or audible cues emitted by camera traps. PMID:23840790
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
NASA Astrophysics Data System (ADS)
Fang, Jingyu; Xu, Haisong; Wang, Zhehong; Wu, Xiaomin
2016-05-01
With colorimetric characterization, digital cameras can be used as image-based tristimulus colorimeters for color communication. To overcome the restriction to fixed capture settings adopted in conventional colorimetric characterization procedures, a novel method was proposed that takes the capture settings into account. The method for calculating the colorimetric values of a measured image comprises five main steps. These include converting the captured RGB values into their equivalents under the training settings, using factors derived from an imaging-system model to bridge the different settings, and applying scaling factors in the preparation steps of the transformation mapping to avoid errors caused by the nonlinearity of the polynomial mapping across different ranges of illumination level. The experimental results indicate that the prediction error of the proposed method, measured by the CIELAB color-difference formula, is less than 2 CIELAB units under different illumination levels and different correlated color temperatures. This prediction accuracy across capture settings remains at the same level as that of the conventional method for a particular lighting condition.
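The general shape of such a characterization pipeline can be sketched with NumPy: scale RGB values from arbitrary exposure settings to their equivalents under the training settings, then apply a least-squares polynomial mapping to XYZ. The exposure-equivalence factor below is the standard linear-sensor model and the second-order polynomial basis is a common choice; neither is necessarily the paper's exact formulation:

```python
import numpy as np

def exposure_factor(t_cap, iso_cap, fnum_cap, t_train, iso_train, fnum_train):
    """Scale factor converting RGB captured under one set of exposure
    settings to equivalent values under the training settings, assuming
    a linear sensor response (standard exposure equivalence, not
    necessarily the paper's model)."""
    return (t_train / t_cap) * (iso_train / iso_cap) * (fnum_cap / fnum_train) ** 2

def _basis(rgb):
    """Second-order polynomial basis in R, G, B."""
    r, g, b = np.atleast_2d(rgb).T
    return np.stack([np.ones_like(r), r, g, b, r * g, r * b, g * b,
                     r ** 2, g ** 2, b ** 2], axis=1)

def fit_poly_mapping(rgb, xyz):
    """Least-squares polynomial mapping RGB -> XYZ from training pairs.
    rgb: (n, 3) camera values; xyz: (n, 3) measured tristimulus values."""
    coeffs, *_ = np.linalg.lstsq(_basis(rgb), xyz, rcond=None)
    return coeffs

def apply_poly_mapping(rgb, coeffs):
    """Predict XYZ for (scaled) RGB values using fitted coefficients."""
    return _basis(rgb) @ coeffs
```

In use, a capture's RGB would first be multiplied by `exposure_factor(...)` before `apply_poly_mapping`, so one training fit serves many capture settings.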
Listen; There's a Hell of a Good Universe Next Door; Let's Go
NASA Technical Reports Server (NTRS)
Rigby, Jane R.
2012-01-01
Scientific research is key to our nation's technological and economic development. One can attempt to focus research toward specific applications, but science has a way of surprising us. Think for example of the "charge-coupled device", which was originally invented for memory storage, but became the modern digital camera that is used everywhere from camera phones to the Hubble Space Telescope. Using digital cameras, Hubble has taken pictures that reach back 12 billion light-years into the past, when the Universe was only 1-2 billion years old. Such results would never have been possible with the film cameras Hubble was originally supposed to use. Over the past two decades, Hubble and other telescopes have shown us much about the Universe -- many of these results are shocking. Our galaxy is swarming with planets; most of the mass in the Universe is invisible; and our Universe is accelerating ever faster and faster for unknown reasons. Thus, we live in a "hell of a good universe", to quote e.e. cummings, that we fundamentally don't understand. This means that you, as young scientists, have many worlds to discover.
Video quality of 3G videophones for telephone cardiopulmonary resuscitation.
Tränkler, Uwe; Hagen, Oddvar; Horsch, Alexander
2008-01-01
We simulated a cardiopulmonary resuscitation (CPR) scene with a manikin and used two 3G videophones on the caller's side to transmit video to a laptop PC. Five observers (two doctors with experience in emergency medicine and three paramedics) evaluated the video. They judged whether the manikin was breathing and whether they would give advice for CPR; they also graded the confidence of their decision-making. Breathing was only visible from certain orientations of the videophones, at distances below 150 cm with good illumination and a still background. Since the phones produced a degradation in colours and shadows, detection of breathing mainly depended on moving contours. Low camera positioning produced better results than having the camera high up. Darkness, shaking of the camera and a moving background made detection of breathing almost impossible. The video from the two 3G videophones that were tested was of sufficient quality for telephone CPR provided that camera orientation, distance, illumination and background were carefully chosen. Thus it seems possible to use 3G videophones for emergency calls involving CPR. However, further studies on the required video quality in different scenarios are necessary.
A Picture is Worth a Thousand Words
ERIC Educational Resources Information Center
Davison, Sarah
2009-01-01
Lions, tigers, and bears, oh my! Digital cameras, young inquisitive scientists, give it a try! In this project, students create an open-ended question for investigation, capture and record their observations--data--with digital cameras, and create a digital story to share their findings. The project follows a 5E learning cycle--Engage, Explore,…
Through the Creator's Eyes: Using the Subjective Camera to Study Craft Creativity
ERIC Educational Resources Information Center
Glaveanu, Vlad Petre; Lahlou, Saadi
2012-01-01
This article addresses a methodological gap in the study of creativity: the difficulty of capturing the microgenesis of creative action in ways that would reflect both its psychological and behavioral dynamics. It explores the use of subjective camera (subcam) by research participants as part of an adapted Subjective Evidence-Based Ethnography…
Development of a piecewise linear omnidirectional 3D image registration method
NASA Astrophysics Data System (ADS)
Bae, Hyunsoo; Kang, Wonjin; Lee, SukGyu; Kim, Youngwoo
2016-12-01
This paper proposes a new piecewise linear omnidirectional image registration method. The proposed method segments an image captured by multiple cameras into 2D segments defined by feature points of the image, and then stitches each segment geometrically by considering the inclination of the segment in 3D space. Depending on the intended use of image registration, the proposed method can be used to improve registration accuracy or to reduce computation time, because the trade-off between the two can be controlled. In general, nonlinear image registration methods have been used in 3D omnidirectional image registration to reduce the image distortion introduced by camera lenses. The proposed method instead relies on a linear transformation for omnidirectional image registration; it can therefore enhance the effectiveness of the geometry recognition process, increase registration accuracy by increasing the number of cameras or feature points per image, increase registration speed by reducing them, and provide simultaneous information on the shapes and colors of captured objects.
Capturing the plenoptic function in a swipe
NASA Astrophysics Data System (ADS)
Lawson, Michael; Brookes, Mike; Dragotti, Pier Luigi
2016-09-01
Blur in images, caused by camera motion, is typically thought of as a problem. The approach described in this paper shows instead that it is possible to use the blur caused by the integration of light rays at different positions along a moving camera trajectory to extract information about the light rays present within the scene. Retrieving the light rays of a scene from different viewpoints is equivalent to retrieving the plenoptic function of the scene. In this paper, we focus on a specific case in which the blurred image of a scene, containing a flat plane with a texture signal that is a sum of sine waves, is analysed to recreate the plenoptic function. The image is captured by a single lens camera with shutter open, moving in a straight line between two points, resulting in a swiped image. It is shown that finite rate of innovation sampling theory can be used to recover the scene geometry and therefore the epipolar plane image from the single swiped image. This epipolar plane image can be used to generate unblurred images for a given camera location.
Detection and enforcement of failure-to-yield in an emergency vehicle preemption system
NASA Technical Reports Server (NTRS)
Bachelder, Aaron (Inventor); Wickline, Richard (Inventor)
2007-01-01
An intersection controlled by an intersection controller receives trigger signals from on-coming emergency vehicles responding to an emergency call. The intersection controller initiates surveillance of the intersection via cameras installed at the intersection in response to a received trigger signal. The surveillance may begin immediately upon receipt of the trigger signal from an emergency vehicle, or may wait until the intersection controller determines that the signaling emergency vehicle is in the field of view of the cameras at the intersection. Portions of the captured images are tagged by the intersection controller based on tag signals transmitted by the vehicle or based on detected traffic patterns that indicate a potential traffic violation. The captured images are downloaded to a processing facility that analyzes the images and automatically issues citations for captured traffic violations.
Calcium neuroimaging in behaving zebrafish larvae using a turn-key light field camera
NASA Astrophysics Data System (ADS)
Cruz Perez, Carlos; Lauri, Antonella; Symvoulidis, Panagiotis; Cappetta, Michele; Erdmann, Arne; Westmeyer, Gil Gregor
2015-09-01
Reconstructing a three-dimensional scene from multiple simultaneously acquired perspectives (the light field) is an elegant scanless imaging concept that can exceed the temporal resolution of currently available scanning-based imaging methods for capturing fast cellular processes. We tested the performance of commercially available light field cameras on a fluorescent microscopy setup for monitoring calcium activity in the brain of awake and behaving reporter zebrafish larvae. The plenoptic imaging system could volumetrically resolve diverse neuronal response profiles throughout the zebrafish brain upon stimulation with an aversive odorant. Behavioral responses of the reporter fish could be captured simultaneously together with depth-resolved neuronal activity. Overall, our assessment showed that with some optimizations for fluorescence microscopy applications, commercial light field cameras have the potential of becoming an attractive alternative to custom-built systems to accelerate molecular imaging research on cellular dynamics.
NASA Astrophysics Data System (ADS)
McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul
2011-06-01
Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature regarding the question of stereo pair camera separation using desk-mounted or larger scale stereoscopic displays, and apply our findings to potential HMD applications, including command & control, teleoperation, information and scientific visualization, and entertainment.
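One common starting point when reasoning about camera separation is the pinhole relation d = f·b/z between disparity, focal length, baseline, and depth: choose the baseline so the disparity spread across the scene's depth range stays within a comfortable budget. This is a rule-of-thumb sketch under that model, not a prescription drawn from the literature the paper reviews:

```python
def camera_separation(max_disparity_px, focal_px, z_near_m, z_far_m):
    """Baseline (in metres) that keeps the disparity spread between the
    nearest and farthest scene points within `max_disparity_px`, using
    the pinhole relation d = focal_px * baseline / z.
    """
    return max_disparity_px / (focal_px * (1.0 / z_near_m - 1.0 / z_far_m))

# Example: a 1000 px focal length, scene content from 1 m to 10 m, and
# a 30 px disparity budget suggest a baseline of about 33 mm.
b = camera_separation(30.0, 1000.0, 1.0, 10.0)
```

For HMDs the comfortable disparity budget depends on the display's field of view and the viewer's fusion limits, which is precisely where the reviewed literature comes in.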
NASA Astrophysics Data System (ADS)
Renken, Hartmut; Oelze, Holger W.; Rath, Hans J.
1998-04-01
This presentation describes the design and application of a digital high-speed image capture system, with a downstream image-processing system, applied to the Bremer Hochschul-Hyperschallkanal (BHHK). It is also the result of cooperation between the aerodynamics and image-processing departments at the ZARM institute at the Drop Tower of Bremen. Similar systems are used by the combustion working group at ZARM and other external project partners. The BHHK, the camera and image-storage system, and the PC-based image-processing software are described next. Some examples of images taken at the BHHK are shown to illustrate the application. The new, user-friendly 32-bit Windows system can capture all camera data at a maximum pixel clock of 43 MHz and process complete sequences of images in a single step within one program.
Investigating plasma viscosity with fast framing photography in the ZaP-HD Flow Z-Pinch experiment
NASA Astrophysics Data System (ADS)
Weed, Jonathan Robert
The ZaP-HD Flow Z-Pinch experiment investigates the stabilizing effect of sheared axial flows while scaling toward a high-energy-density laboratory plasma (HEDLP > 100 GPa). Stabilizing flows may persist until viscous forces dissipate a sheared flow profile. Plasma viscosity is investigated by measuring scale lengths in turbulence intentionally introduced in the plasma flow. A boron nitride turbulence-tripping probe excites small scale length turbulence in the plasma, and fast framing optical cameras are used to study time-evolved turbulent structures and viscous dissipation. A Hadland Imacon 790 fast framing camera is modified for digital image capture, but features insufficient resolution to study turbulent structures. A Shimadzu HPV-X camera captures the evolution of turbulent structures with great spatial and temporal resolution, but is unable to resolve the anticipated Kolmogorov scale in ZaP-HD as predicted by a simplified pinch model.
NASA Technical Reports Server (NTRS)
Price, M. C.; Kearsley, A. T.; Wozniakiewicz, P. J.; Spratt, J.; Burchell, M. J.; Cole, M. J.; Anz-Meador, P.; Liou, J. C.; Ross, D. K.; Opiela, J.;
2014-01-01
Hypervelocity impact features have been recognized on painted surfaces returned from the Hubble Space Telescope (HST). Here we describe experiments that help us to understand their creation, and the preservation of micrometeoroid (MM) remnants. We simulated capture of silicate and sulfide minerals on the zinc orthotitanate (ZOT) paint and Al alloy plate of the Wide Field and Planetary Camera 2 (WFPC2) radiator, which was returned from HST after 16 years in low Earth orbit (LEO). Our results also allow us to validate analytical methods for identification of MM (and orbital debris) impacts in LEO.
Integration of virtual and real scenes within an integral 3D imaging environment
NASA Astrophysics Data System (ADS)
Ren, Jinsong; Aggoun, Amar; McCormick, Malcolm
2002-11-01
The Imaging Technologies group at De Montfort University has developed an integral 3D imaging system, which is seen as the most likely vehicle for 3D television because it avoids the adverse psychological effects associated with stereoscopic viewing. To create truly engaging three-dimensional television programmes, a virtual studio is required that performs the tasks of generating, editing and integrating 3D content involving virtual and real scenes. The paper presents, for the first time, the procedures, factors and methods for integrating computer-generated virtual scenes with real objects captured using the 3D integral imaging camera system. The method of computer generation of 3D integral images, in which the lens array is modelled instead of the physical camera, is described. In the model, each micro-lens that captures different elemental images of the virtual scene is treated as an extended pinhole camera. An integration process named integrated rendering is illustrated. Detailed discussion and investigation focus on depth extraction from captured integral 3D images. The depth calculation method from disparity, and the multiple-baseline method used to improve the precision of depth estimation, are also presented. The concept of colour SSD and its further improvement in precision is proposed and verified.
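The multiple-baseline idea mentioned above (summing SSD matching costs over several baselines, in the style of Okutomi and Kanade, so the cost minima reinforce at the true depth) can be sketched as follows. This is a simplified stand-in assuming horizontal baselines, grayscale images and integer disparities, not the paper's exact colour-SSD formulation:

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two equally sized patches."""
    d = a.astype(float) - b.astype(float)
    return float(np.sum(d * d))

def best_inverse_depth(ref, others, baselines, x, y, half, inv_depths, focal):
    """Pick the inverse depth minimizing the SSD summed over baselines.

    Each candidate inverse depth iz maps to a consistent disparity
    d = focal * baseline * iz in every image, so the per-baseline cost
    curves share a minimum at the true value and their sum sharpens it.
    """
    ref_patch = ref[y - half:y + half + 1, x - half:x + half + 1]
    costs = []
    for iz in inv_depths:
        total = 0.0
        for img, base in zip(others, baselines):
            d = int(round(focal * base * iz))
            total += ssd(ref_patch,
                         img[y - half:y + half + 1, x - d - half:x - d + half + 1])
        costs.append(total)
    return inv_depths[int(np.argmin(costs))]
```

With elemental images from a lens array, each micro-lens pair supplies one such baseline, which is what makes the summed-cost formulation a natural fit for integral imaging.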
Dual-modality smartphone endoscope for cervical pre-cancer detection (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hong, Xiangqian; Yu, Bing
2017-02-01
Early detection is the key to the prevention of cervical cancer. There is an urgent need for a portable, affordable, and easy-to-use device for cervical pre-cancer detection, especially in low-resource settings. We have developed a dual-modality fiber-optic endoscope system (SmartME) that integrates high-resolution fluorescence imaging (FLI) and quantitative diffuse reflectance spectroscopy (DRS) onto a smartphone platform. The SmartME consists of a smartphone, a miniature fiber-optic endoscope, a phone attachment containing imaging optics, and a smartphone application (app). FLI is obtained by painting the tissue with a contrast agent (e.g., proflavine), illuminating the tissue and collecting its fluorescence images through an imaging bundle that is coupled to the phone camera. DRS is achieved by using a white LED, attaching additional source and detection fibers to the imaging bundle, and converting the phone camera into a spectrometer. The app collects images/spectra and transmits them to a remote server for analysis to extract the tissue parameters, including the nuclear-to-cytoplasm ratio (calculated from FLI), the concentrations of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb), and scattering (measured by DRS). These parameters can be used to detect cervical dysplasia. Our preliminary studies have demonstrated that the SmartME can clearly visualize the nuclei in living cells and in vivo biological samples, with a high spatial resolution of 3.1 μm. The device can also measure tissue absorption and scattering properties with accuracy comparable to that of a benchtop DRS system. The SmartME has great potential to provide a compact, affordable, and 'smart' solution for early detection of neoplastic changes in the cervix.
Smartphone-Guided Needle Angle Selection During CT-Guided Procedures.
Xu, Sheng; Krishnasamy, Venkatesh; Levy, Elliot; Li, Ming; Tse, Zion Tsz Ho; Wood, Bradford John
2018-01-01
In CT-guided intervention, translation from a planned needle insertion angle to the actual insertion angle is estimated only with the physician's visuospatial abilities. An iPhone app was developed to reduce reliance on operator ability to estimate and reproduce angles. The iPhone app overlays the planned angle on the smartphone's camera display in real-time based on the smartphone's orientation. The needle's angle is selected by visually comparing the actual needle with the guideline in the display. If the smartphone's screen is perpendicular to the planned path, the smartphone shows the Bull's-Eye View mode, in which the angle is selected after the needle's hub overlaps the tip in the camera view. In phantom studies, we evaluated the accuracies of the hardware, the Guideline mode, and the Bull's-Eye View mode and showed the app's clinical efficacy. A proof-of-concept clinical case was also performed. The hardware accuracy was 0.37° ± 0.27° (mean ± SD). The mean error and navigation time were 1.0° ± 0.9° and 8.7 ± 2.3 seconds for a senior radiologist with 25 years' experience and 1.5° ± 1.3° and 8.0 ± 1.6 seconds for a junior radiologist with 4 years' experience. The accuracy of the Bull's-Eye View mode was 2.9° ± 1.1°. Combined CT and smartphone guidance was significantly more accurate than CT-only guidance for the first needle pass (p = 0.046), which led to a smaller final targeting error (mean distance from needle tip to target, 2.5 vs 7.9 mm). Mobile devices can be useful for guiding needle-based interventions. The hardware is low cost and widely available. The method is accurate, effective, and easy to implement.
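The core angle computation such an app must perform can be sketched from the accelerometer's gravity vector: derive the device's tilt from vertical and compare it against the planned insertion angle. This is an illustrative reconstruction under that assumption; the actual app's internals are not described beyond the abstract:

```python
import math

def device_tilt_deg(gx, gy, gz):
    """Tilt of the device from vertical, in degrees, estimated from the
    accelerometer's gravity vector (gx, gy, gz in any consistent unit).
    Assumes the z axis is normal to the screen, as on a phone."""
    g = math.sqrt(gx * gx + gy * gy + gz * gz)
    return math.degrees(math.acos(abs(gz) / g))

def angle_error_deg(planned_deg, gx, gy, gz):
    """Signed difference between the current device tilt and the
    planned insertion angle; the operator adjusts until it is ~0."""
    return device_tilt_deg(gx, gy, gz) - planned_deg
```

In the Guideline mode the app would redraw the overlaid line from this orientation each frame; in the Bull's-Eye View mode the same tilt reading tells the app when the screen is perpendicular to the planned path.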
Wright, Timothy J; Vitale, Thomas; Boot, Walter R; Charness, Neil
2015-12-01
Recent empirical evidence has suggested that the flashes associated with red light running cameras (RLRCs) distract younger drivers, pulling attention away from the roadway and delaying processing of safety-relevant events. Considering the perceptual and attentional declines that occur with age, older drivers may be especially susceptible to the distracting effects of RLRC flashes, particularly in situations in which the flash is more salient (a bright flash at night compared with the day). The current study examined how age and situational factors potentially influence attention capture by RLRC flashes using covert (cuing effects) and overt (eye movement) indices of capture. We manipulated the salience of the flash by varying its luminance and contrast with respect to the background of the driving scene (either day or night scenes). Results of 2 experiments suggest that simulated RLRC flashes capture observers' attention, but, surprisingly, no age differences in capture were observed. However, an analysis examining early and late eye movements revealed that older adults may have been strategically delaying their eye movements in order to avoid capture. Additionally, older adults took longer to disengage attention following capture, suggesting at least 1 age-related disadvantage in capture situations. Findings have theoretical implications for understanding age differences in attention capture, especially with respect to capture in real-world scenes, and inform future work that should examine how the distracting effects of RLRC flashes influence driver behavior. (c) 2015 APA, all rights reserved).
Wright, Timothy J.; Vitale, Thomas; Boot, Walter R; Charness, Neil
2015-01-01
Recent empirical evidence suggests that the flashes associated with red light running cameras (RLRCs) distract younger drivers, pulling attention away from the roadway and delaying processing of safety-relevant events. Considering the perceptual and attentional declines that occur with age, older drivers may be especially susceptible to the distracting effects of RLRC flashes, particularly in situations in which the flash is more salient (a bright flash at night compared to the day). The current study examined how age and situational factors potentially influence attention capture by RLRC flashes using covert (cuing effects) and overt (eye movement) indices of capture. We manipulated the salience of the flash by varying its luminance and contrast with respect to the background of the driving scene (either day or night scenes). Results of two experiments suggest that simulated RLRC flashes capture observers' attention, but, surprisingly, no age differences in capture were observed. However, an analysis examining early and late eye movements revealed that older adults may have been strategically delaying their eye movements in order to avoid capture. Additionally, older adults took longer to disengage attention following capture, suggesting at least one age-related disadvantage in capture situations. Findings have theoretical implications for understanding age differences in attention capture, especially with respect to capture in real-world scenes, and inform future work that should examine how the distracting effects of RLRC flashes influence driver behavior. PMID:26479014
A socialization intervention in remote health coaching for older adults in the home.
Jimison, Holly B; Klein, Krystal A; Marcoe, Jennifer L
2013-01-01
Previous studies have shown that social ties enhance both physical and mental health, and that social isolation has been linked to increased cognitive decline. As part of our cognitive training platform, we created a socialization intervention to address these issues. The intervention is designed to improve social contact time of older adults with remote family members and friends using a variety of technologies, including Web cameras, Skype software, email and phone. We used usability testing, surveys, interviews and system usage monitoring to develop design guidance for socialization protocols that were appropriate for older adults living independently in their homes. Our early results with this intervention show increased number of social contacts, total communication time (we measure email, phone, and Skype usage) and significant participant satisfaction with the intervention.
A Socialization Intervention in Remote Health Coaching for Older Adults in the Home*
Jimison, Holly B.; Klein, Krystal A.; Marcoe, Jennifer L.
2014-01-01
Previous studies have shown that social ties enhance both physical and mental health, and that social isolation has been linked to increased cognitive decline. As part of our cognitive training platform, we created a socialization intervention to address these issues. The intervention is designed to improve social contact time of older adults with remote family members and friends using a variety of technologies, including Web cameras, Skype software, email and phone. We used usability testing, surveys, interviews and system usage monitoring to develop design guidance for socialization protocols that were appropriate for older adults living independently in their homes. Our early results with this intervention show increased number of social contacts, total communication time (we measure email, phone, and Skype usage) and significant participant satisfaction with the intervention. PMID:24111362
NASA Astrophysics Data System (ADS)
Morikawa, E.; Nayak, A.; Vernon, F.; Braun, H.; Matthews, J.
2004-12-01
Late October 2003 brought devastating fires to the entire Southern California region. The NSF-funded High Performance Wireless Research and Education Network (HPWREN - http://hpwren.ucsd.edu/) cameras captured the development and progress of the Cedar fire in San Diego County. Cameras on Mt. Laguna, Mt. Woodson, Ramona Airport, and North Peak, recording one frame every 12 seconds, allowed for a time-lapse composite showing the fire's formation and progress from its beginnings on October 26th through October 30th. The time-lapse camera footage depicts gushing smoke formations during the day and bright orange walls of fire at night. The final video includes time-synchronized views from multiple cameras, an animated map highlighting the progress of the fire over time, and a directional indicator for each of the displayed cameras. The video is narrated by California Department of Forestry and Fire Protection Fire Captain Ron Serabia (retd.), who was working as an Air Tactical Group Supervisor on the aerial assault on the Cedar Fire on Sunday, October 26, 2003. The movie will be made available for download from the Scripps Institution of Oceanography Visualization Center Visual Objects library (supported by the OptIPuter project) at http://www.siovizcenter.ucsd.edu.
STS-31 crew activity on the middeck of the Earth-orbiting Discovery, OV-103
1990-04-29
STS031-05-002 (24-29 April 1990) --- A 35mm camera with a "fish eye" lens captured this high angle image on Discovery's middeck. Astronaut Kathryn D. Sullivan works with the IMAX camera in the foreground, while Astronaut Steven A. Hawley consults a checklist in the corner. An Arriflex motion picture camera records a student ion arc experiment in an apparatus mounted on a stowage locker. The experiment was the project of Gregory S. Peterson, currently a student at Utah State University.
Thin film transistors on plastic substrates with reflective coatings for radiation protection
Wolfe, Jesse D.; Theiss, Steven D.; Carey, Paul G.; Smith, Patrick M.; Wickboldt, Paul
2003-11-04
Fabrication of silicon thin film transistors (TFT) on low-temperature plastic substrates using a reflective coating so that inexpensive plastic substrates may be used in place of standard glass, quartz, and silicon wafer-based substrates. The TFT can be used in large area low cost electronics, such as flat panel displays and portable electronics such as video cameras, personal digital assistants, and cell phones.
Thin film transistors on plastic substrates with reflective coatings for radiation protection
Wolfe, Jesse D [Fairfield, CA; Theiss, Steven D [Woodbury, MN; Carey, Paul G [Mountain View, CA; Smith, Patrick M [San Ramon, CA; Wickbold, Paul [Walnut Creek, CA
2006-09-26
Fabrication of silicon thin film transistors (TFT) on low-temperature plastic substrates using a reflective coating so that inexpensive plastic substrates may be used in place of standard glass, quartz, and silicon wafer-based substrates. The TFT can be used in large area low cost electronics, such as flat panel displays and portable electronics such as video cameras, personal digital assistants, and cell phones.
From Marginal Adjustments to Meaningful Change: Rethinking Weapon System Acquisition
2010-01-01
phones, digital cameras, Blackberries, GPS navigation systems, Bluetooth headsets, et cetera. To achieve these breakthroughs, businesses accept a greater...informing the detailed design phase—is less valid. For instance, even with advances in computational fluid dynamics, wind tunnel testing and live flight...of Federal Procurement Policy, 2007. Antón, Philip S., Eugene C. Gritton, Richard Mesic, and Paul Steinberg, Wind Tunnel and Propulsion Test
2008-09-01
telephone, conference calls, emails, alert notifications, and BlackBerry. The RDTSF holds conference calls with its stakeholders to provide routine... tunnels) is monitored by CCTV cameras with live feeds to WMATA's Operations Control Center (OCC) to detect unauthorized entry into areas not intended for...message by email, BlackBerry and phone to the Security Coordinators. Dissemination of classified information, however, is generally handled through the
ERIC Educational Resources Information Center
Moraes, Edgar P.; da Silva, Nilbert S. A.; de Morais, Camilo de L. M.; das Neves, Luiz S.; de Lima, Kassio M. G.
2014-01-01
The flame test is a classical analytical method that is often used to teach students how to identify specific metals. However, some universities in developing countries have difficulties acquiring the sophisticated instrumentation needed to demonstrate how to identify and quantify metals. In this context, a method was developed based on the flame…
NASA Astrophysics Data System (ADS)
Phillips, Jonathan B.; Coppola, Stephen M.; Jin, Elaine W.; Chen, Ying; Clark, James H.; Mauer, Timothy A.
2009-01-01
Texture appearance is an important component of photographic image quality as well as object recognition. Noise cleaning algorithms are used to decrease sensor noise of digital images, but can hinder texture elements in the process. The Camera Phone Image Quality (CPIQ) initiative of the International Imaging Industry Association (I3A) is developing metrics to quantify texture appearance. Objective and subjective experimental results of the texture metric development are presented in this paper. Eight levels of noise cleaning were applied to ten photographic scenes that included texture elements such as faces, landscapes, architecture, and foliage. Four companies (Aptina Imaging, LLC, Hewlett-Packard, Eastman Kodak Company, and Vista Point Technologies) have performed psychophysical evaluations of overall image quality using one of two methods of evaluation. Both methods presented paired comparisons of images on thin film transistor liquid crystal displays (TFT-LCD), but the display pixel pitch and viewing distance differed. CPIQ has also been developing objective texture metrics and targets that were used to analyze the same eight levels of noise cleaning. The correlation of the subjective and objective test results indicates that texture perception can be modeled with an objective metric. The two methods of psychophysical evaluation exhibited high correlation despite the differences in methodology.
Real-Time 3D Tracking and Reconstruction on Mobile Phones.
Prisacariu, Victor Adrian; Kähler, Olaf; Murray, David W; Reid, Ian D
2015-05-01
We present a novel framework for jointly tracking a camera in 3D and reconstructing the 3D model of an observed object. Due to the region based approach, our formulation can handle untextured objects, partial occlusions, motion blur, dynamic backgrounds and imperfect lighting. Our formulation also allows for a very efficient implementation which achieves real-time performance on a mobile phone, by running the pose estimation and the shape optimisation in parallel. We use a level set based pose estimation but completely avoid the, typically required, explicit computation of a global distance. This leads to tracking rates of more than 100 Hz on a desktop PC and 30 Hz on a mobile phone. Further, we incorporate additional orientation information from the phone's inertial sensor which helps us resolve the tracking ambiguities inherent to region based formulations. The reconstruction step first probabilistically integrates 2D image statistics from selected keyframes into a 3D volume, and then imposes coherency and compactness using a total variational regularisation term. The global optimum of the overall energy function is found using a continuous max-flow algorithm and we show that, similar to tracking, the integration of per voxel posteriors instead of likelihoods improves the precision and accuracy of the reconstruction.
Segmenting texts from outdoor images taken by mobile phones using color features
NASA Astrophysics Data System (ADS)
Liu, Zongyi; Zhou, Hanning
2011-01-01
Recognizing text in low-resolution images taken by mobile phones has wide applications. It has been shown that good image binarization can substantially improve the performance of OCR engines. In this paper, we present a framework to segment text from outdoor images taken by mobile phones using color features. The framework consists of three steps: (i) initial processing, including image enhancement, binarization, and noise filtering, where we binarize the input image in each RGB channel and apply component-level noise filtering; (ii) grouping components into blocks using color features, where we compute component similarities by dynamically adjusting the weights of the RGB channels and merge groups hierarchically; and (iii) block selection, where we use run-length features and choose a Support Vector Machine (SVM) as the classifier. We tested the algorithm using 13 outdoor images taken by an old-style LG-64693 mobile phone with 640x480 resolution. We compared the segmentation results with Tsar's algorithm, a state-of-the-art camera text detection algorithm, and show that our algorithm is more robust, particularly in terms of false alarm rates. In addition, we also evaluated the impact of our algorithm on ABBYY FineReader, one of the most popular commercial OCR engines on the market.
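The paper's step (i) binarizes each RGB channel independently. A minimal sketch of that step, using Otsu's classic threshold on each channel (the paper does not say which binarization rule it uses, so Otsu is an assumption, and the synthetic two-cluster "image" is invented for illustration):

```python
def otsu_threshold(values):
    """Otsu's method over 0-255 intensities: choose the threshold that
    maximizes between-class variance."""
    hist = [0] * 256
    for v in values:
        hist[v] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = sum_b = 0
    best_t, best_var = 0, -1.0
    for t in range(256):
        w_b += hist[t]            # background pixel count so far
        if w_b == 0 or w_b == total:
            continue
        sum_b += t * hist[t]
        w_f = total - w_b
        m_b = sum_b / w_b         # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var = w_b * w_f * (m_b - m_f) ** 2
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def binarize_rgb(pixels):
    """Binarize each RGB channel independently, as in the paper's first
    step; returns one boolean mask per channel."""
    masks = []
    for c in range(3):
        chan = [p[c] for p in pixels]
        t = otsu_threshold(chan)
        masks.append([v > t for v in chan])
    return masks

# Synthetic flattened 'image': dark text pixels vs a bright background
pixels = [(20, 30, 25)] * 50 + [(200, 210, 190)] * 50
masks = binarize_rgb(pixels)
print(sum(masks[0]))  # 50: the bright pixels land above the threshold
```

The later steps (component-level noise filtering, color-weighted grouping, SVM block selection) would operate on the connected components of these per-channel masks.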
NASA Astrophysics Data System (ADS)
Scopatz, Stephen D.; Mendez, Michael; Trent, Randall
2015-05-01
The projection of controlled moving targets is key to the quantitative testing of video capture and post-processing for Motion Imagery. This presentation discusses several implementations of target projectors that present moving, or apparently moving, targets for the camera under test to capture. The targets presented are broadband (UV-VIS-IR) and move in a predictable, repeatable, and programmable way; several short videos are included in the presentation. Among the technical approaches are targets that move independently in the camera's field of view, as well as targets that change size and shape. The development of a rotating IR and VIS 4-bar target projector with programmable rotational velocity and acceleration control for testing hyperspectral cameras is discussed. A related issue for motion imagery is evaluated by simulating a blinding flash, an impulse of broadband photons lasting under 2 milliseconds, to assess the camera's reaction to a large, fast change in signal. A traditional approach of gimbal-mounting the camera in combination with the moving target projector is discussed as an alternative to high-priced flight simulators. Based on the use of the moving target projector, several standard tests are proposed to provide counterparts at velocity to MTF (resolution), SNR, and minimum detectable signal. Several unique metrics are suggested for Motion Imagery, including Maximum Velocity Resolved (the greatest velocity that is accurately tracked by the camera system) and Missing Object Tolerance (a measure of tracking ability when the target is obscured in the images). These metrics are applicable to UV-VIS-IR wavelengths and can be used to assist in camera and algorithm development as well as in comparing various systems by presenting identical scenes to the cameras in a repeatable way.
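The abstract does not define how Maximum Velocity Resolved is computed. One natural figure of merit, offered here purely as an assumption, is the target's motion in pixels per frame, which grows with angular velocity and shrinks with frame rate; a target stays trackable while this stays below some blur/association limit of the system:

```python
def pixels_per_frame(angular_vel_deg_s, fov_deg, h_pixels, fps):
    """Target displacement in pixels per frame for a camera with the given
    horizontal field of view and resolution. A hypothetical figure of
    merit, not the presentation's actual metric definition."""
    deg_per_pixel = fov_deg / h_pixels          # instantaneous FOV per pixel
    return angular_vel_deg_s / deg_per_pixel / fps

# A 20 deg/s target seen through a 40 deg FOV on 1920 px at 60 fps
print(round(pixels_per_frame(20, 40, 1920, 60), 1))  # 16.0 px/frame
```

Under this model, doubling the frame rate halves the per-frame displacement, which is why high-speed capture eases tracking of fast targets.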
New opportunities for quality enhancing of images captured by passive THz camera
NASA Astrophysics Data System (ADS)
Trofimov, Vyacheslav A.; Trofimov, Vladislav V.
2014-10-01
As it is well-known, the passive THz camera allows seeing concealed object without contact with a person and this camera is non-dangerous for a person. Obviously, efficiency of using the passive THz camera depends on its temperature resolution. This characteristic specifies possibilities of the detection for concealed object: minimal size of the object; maximal distance of the detection; image quality. Computer processing of the THz image may lead to many times improving of the image quality without any additional engineering efforts. Therefore, developing of modern computer code for its application to THz images is urgent problem. Using appropriate new methods one may expect such temperature resolution which will allow to see banknote in pocket of a person without any real contact. Modern algorithms for computer processing of THz images allow also to see object inside the human body using a temperature trace on the human skin. This circumstance enhances essentially opportunity of passive THz camera applications for counterterrorism problems. We demonstrate opportunities, achieved at present time, for the detection both of concealed objects and of clothes components due to using of computer processing of images captured by passive THz cameras, manufactured by various companies. Another important result discussed in the paper consists in observation of both THz radiation emitted by incandescent lamp and image reflected from ceramic floorplate. We consider images produced by THz passive cameras manufactured by Microsemi Corp., and ThruVision Corp., and Capital Normal University (Beijing, China). All algorithms for computer processing of the THz images under consideration in this paper were developed by Russian part of author list. Keywords: THz wave, passive imaging camera, computer processing, security screening, concealed and forbidden objects, reflected image, hand seeing, banknote seeing, ceramic floorplate, incandescent lamp.
Popescu, Viorel D; Valpine, Perry; Sweitzer, Rick A
2014-04-01
Wildlife data gathered by different monitoring techniques are often combined to estimate animal density. However, methods to check whether different types of data provide consistent information (i.e., can information from one data type be used to predict responses in the other?) before combining them are lacking. We used generalized linear models and generalized linear mixed-effects models to relate camera trap probabilities for marked animals to independent space use from telemetry relocations using 2 years of data for fishers (Pekania pennanti) as a case study. We evaluated (1) camera trap efficacy by estimating how camera detection probabilities are related to nearby telemetry relocations and (2) whether home range utilization density estimated from telemetry data adequately predicts camera detection probabilities, which would indicate consistency of the two data types. The number of telemetry relocations within 250 and 500 m from camera traps predicted detection probability well. For the same number of relocations, females were more likely to be detected during the first year. During the second year, all fishers were more likely to be detected during the fall/winter season. Models predicting camera detection probability and photo counts solely from telemetry utilization density had the best or nearly best Akaike Information Criterion (AIC), suggesting that telemetry and camera traps provide consistent information on space use. Given the same utilization density, males were more likely to be photo-captured due to larger home ranges and higher movement rates. Although methods that combine data types (spatially explicit capture-recapture) make simple assumptions about home range shapes, it is reasonable to conclude that in our case, camera trap data do reflect space use in a manner consistent with telemetry data. However, differences between the 2 years of data suggest that camera efficacy is not fully consistent across ecological conditions and make the case for integrating other sources of space-use data.
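The core of the abstract's analysis is a GLM relating camera detection probability to nearby telemetry relocations. A minimal sketch of that idea, assuming a plain logit fit by batch gradient ascent on simulated data (the coefficients are invented; the real study used mixed-effects models with sex, season, and year covariates):

```python
import math
import random

def fit_logistic(x, y, lr=0.05, iters=4000):
    """Minimal GLM fit (Bernoulli response, logit link) by batch gradient
    ascent: P(detect) = 1 / (1 + exp(-(b0 + b1 * relocations)))."""
    b0 = b1 = 0.0
    n = len(x)
    for _ in range(iters):
        g0 = g1 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            g0 += yi - p          # gradient of log-likelihood wrt b0
            g1 += (yi - p) * xi   # gradient wrt b1
        b0 += lr * g0 / n
        b1 += lr * g1 / n
    return b0, b1

# Simulate: more telemetry relocations near a camera -> higher detection
# probability (made-up coefficients for the illustration).
random.seed(0)
true_b0, true_b1 = -2.0, 0.5
x = [random.randint(0, 10) for _ in range(500)]
y = [1 if random.random() < 1.0 / (1.0 + math.exp(-(true_b0 + true_b1 * xi))) else 0
     for xi in x]
b0, b1 = fit_logistic(x, y)
print(round(b0, 2), round(b1, 2))  # slope lands near the true 0.5
```

The study's consistency check then asks whether a second predictor (telemetry utilization density) explains the same detections as well or better, judged by AIC.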
Spatiotemporal Detection of Unusual Human Population Behavior Using Mobile Phone Data
Dobra, Adrian; Williams, Nathalie E.; Eagle, Nathan
2015-01-01
With the aim to contribute to humanitarian response to disasters and violent events, scientists have proposed the development of analytical tools that could identify emergency events in real-time, using mobile phone data. The assumption is that dramatic and discrete changes in behavior, measured with mobile phone data, will indicate extreme events. In this study, we propose an efficient system for spatiotemporal detection of behavioral anomalies from mobile phone data and compare sites with behavioral anomalies to an extensive database of emergency and non-emergency events in Rwanda. Our methodology successfully captures anomalous behavioral patterns associated with a broad range of events, from religious and official holidays to earthquakes, floods, violence against civilians and protests. Our results suggest that human behavioral responses to extreme events are complex and multi-dimensional, including extreme increases and decreases in both calling and movement behaviors. We also find significant temporal and spatial variance in responses to extreme events. Our behavioral anomaly detection system and extensive discussion of results are a significant contribution to the long-term project of creating an effective real-time event detection system with mobile phone data and we discuss the implications of our findings for future research to this end. PMID:25806954
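The paper's detection system is spatiotemporal and multi-dimensional; as a deliberately simplified illustration of the underlying idea, a robust median/MAD rule on a single call-volume series flags both the extreme increases and the extreme decreases the authors describe (the threshold `k` and the toy series are invented):

```python
from statistics import median

def anomalies(series, k=10.0):
    """Flag time bins whose volume deviates from the series median by more
    than k median-absolute-deviations, in either direction (the study
    reports both surges and drops in calling around extreme events)."""
    m = median(series)
    mad = median(abs(v - m) for v in series)
    return [i for i, v in enumerate(series) if abs(v - m) > k * mad]

# Hourly call counts with one surge (index 8) and one outage (index 10)
calls = [100, 98, 103, 97, 101, 99, 102, 100, 400, 101, 5, 99]
print(anomalies(calls))  # [8, 10]: the spike and the drop
```

Median and MAD are used instead of mean and standard deviation so that the anomalies themselves do not inflate the baseline they are judged against.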
Optimization design of periscope type 3X zoom lens design for a five megapixel cellphone camera
NASA Astrophysics Data System (ADS)
Sun, Wen-Shing; Tien, Chuen-Lin; Pan, Jui-Wen; Chao, Yu-Hao; Chu, Pu-Yi
2016-11-01
This paper presents a periscope-type 3X zoom lens design for a five-megapixel cell phone camera. The optical system places a right-angle prism in front of the zoom lenses to fold the optical path by 90°, resulting in a zoom lens length of 6 mm, so the zoom lens can be embedded in a mobile phone with a thickness of 6 mm. The zoom lens has three groups with six elements. The half field of view varies from 30° to 10.89°, the effective focal length is adjusted from 3.142 mm to 9.426 mm, and the F-number changes from 2.8 to 5.13.
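The quoted numbers are mutually consistent and can be checked from the thin-lens field-of-view relation theta = atan(h / f), where h is the image half-diagonal; h itself is inferred here from the wide end rather than stated in the abstract:

```python
import math

def half_fov_deg(efl_mm, half_image_mm):
    """Half field of view from effective focal length and image
    half-diagonal: theta = atan(h / f)."""
    return math.degrees(math.atan2(half_image_mm, efl_mm))

# Infer the image half-diagonal from the wide end (30 deg at 3.142 mm)
h = 3.142 * math.tan(math.radians(30.0))   # about 1.814 mm
print(round(half_fov_deg(9.426, h), 2))    # 10.89 deg at the tele end
```

The F-number change is also self-consistent: with F/# = f/D, the entrance pupil grows from roughly 3.142/2.8 ≈ 1.12 mm at the wide end to 9.426/5.13 ≈ 1.84 mm at the tele end, i.e. the aperture grows more slowly than the focal length, so the lens gets slower when zoomed in.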
Landman, Adam; Emani, Srinivas; Carlile, Narath; Rosenthal, David I; Semakov, Simon; Pallin, Daniel J; Poon, Eric G
2015-01-02
Photographs are important tools to record, track, and communicate clinical findings. Mobile devices with high-resolution cameras are now ubiquitous, giving clinicians the opportunity to capture and share images from the bedside. However, secure and efficient ways to manage and share digital images are lacking. The aim of this study is to describe the implementation of a secure application for capturing and storing clinical images in the electronic health record (EHR), and to describe initial user experiences. We developed CliniCam, a secure Apple iOS (iPhone, iPad) application that allows for user authentication, patient selection, image capture, image annotation, and storage of images as a Portable Document Format (PDF) file in the EHR. We leveraged our organization's enterprise service-oriented architecture to transmit the image file from CliniCam to our enterprise clinical data repository. There is no permanent storage of protected health information on the mobile device. CliniCam also required connection to our organization's secure WiFi network. Resident physicians from emergency medicine, internal medicine, and dermatology used CliniCam in clinical practice for one month. They were then asked to complete a survey on their experience. We analyzed the survey results using descriptive statistics. Twenty-eight physicians participated and 19/28 (68%) completed the survey. Of the respondents who used CliniCam, 89% found it useful or very useful for clinical practice and easy to use, and wanted to continue using the app. Respondents provided constructive feedback on location of the photos in the EHR, preferring to have photos embedded in (or linked to) clinical notes instead of storing them as separate PDFs within the EHR. Some users experienced difficulty with WiFi connectivity which was addressed by enhancing CliniCam to check for connectivity on launch. CliniCam was implemented successfully and found to be easy to use and useful for clinical practice. CliniCam is now available to all clinical users in our hospital, providing a secure and efficient way to capture clinical images and to insert them into the EHR. Future clinical image apps should more closely link clinical images and clinical documentation and consider enabling secure transmission over public WiFi or cellular networks.
The California All-sky Meteor Surveillance (CAMS) System
NASA Astrophysics Data System (ADS)
Gural, P. S.
2011-01-01
A unique next generation multi-camera, multi-site video meteor system is being developed and deployed in California to provide high accuracy orbits of simultaneously captured meteors. Included herein is a description of the goals, concept of operations, hardware, and software development progress. An appendix contains a meteor camera performance trade study made for video systems circa 2010.
Using Surveillance Camera Systems to Monitor Public Domains: Can Abuse Be Prevented
2006-03-01
relationship with a 16-year-old girl failed. The incident was captured by a New York City Police Department surveillance camera. Although the image...administrators stated that the images recorded were “…nothing more than images of a few bras and panties.” The use of CCTV surveillance systems for
Comparison of approaches for mobile document image analysis using server supported smartphones
NASA Astrophysics Data System (ADS)
Ozarslan, Suleyman; Eren, P. Erhan
2014-03-01
With the recent advances in mobile technologies, new capabilities are emerging, such as mobile document image analysis. However, mobile phones are still less powerful than servers, and they have some resource limitations. One approach to overcoming these limitations is performing the resource-intensive processes of the application on remote servers. In mobile document image analysis, the most resource-consuming process is Optical Character Recognition (OCR), which is used to extract text from images captured by mobile phones. In this study, our goal is to compare the in-phone and remote-server processing approaches for mobile document image analysis in order to explore their trade-offs. In the in-phone approach, all processes required for mobile document image analysis run on the mobile phone. In the remote-server approach, the core OCR process runs on the remote server and the other processes run on the mobile phone. Results of the experiments show that the remote-server approach is considerably faster than the in-phone approach in terms of OCR time, but adds extra delays such as network delay. Since compression and downscaling of images significantly reduce file sizes and these extra delays, the remote-server approach overall outperforms the in-phone approach on the selected speed and correct-recognition metrics, provided the gain in OCR time compensates for the extra delays. According to the results of the experiments, using the most preferable settings, the remote-server approach performs better than the in-phone approach in terms of speed with acceptable correct-recognition metrics.
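The trade-off the study measures can be summarized in a simple cost model: offloading wins when the server's OCR speedup outweighs the upload and round-trip delays. The numbers and function names below are a hypothetical illustration, not the paper's measurements:

```python
def remote_total_ms(file_kb, bandwidth_kbps, rtt_ms, server_ocr_ms):
    """Total time of the remote-server approach: upload delay plus a
    network round trip plus server-side OCR (illustrative cost model)."""
    upload_ms = file_kb * 8 / bandwidth_kbps * 1000
    return upload_ms + rtt_ms + server_ocr_ms

def better_remote(file_kb, bandwidth_kbps, rtt_ms, server_ocr_ms, phone_ocr_ms):
    """Remote wins when its total time beats running OCR on the phone."""
    return remote_total_ms(file_kb, bandwidth_kbps, rtt_ms, server_ocr_ms) < phone_ocr_ms

# Compressed/downscaled 150 kB image over a 2 Mbit/s uplink: remote wins
print(better_remote(150, 2000, 120, 800, 4000))   # True
# Uncompressed 2 MB image over a 500 kbit/s uplink: upload dominates
print(better_remote(2048, 500, 120, 800, 4000))   # False
```

This mirrors the paper's observation that compression and downscaling, by shrinking upload time, are what let the remote-server approach come out ahead overall.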
Høye, Gudrun; Fridman, Andrei
2013-05-06
Current high-resolution push-broom hyperspectral cameras introduce keystone errors into the captured data. Efforts to correct these errors in hardware severely limit the optical design, in particular with respect to light throughput and spatial resolution, while at the same time the residual keystone often remains large. The mixel camera solves this problem by combining a hardware component, an array of light-mixing chambers, with a mathematical method that restores the hyperspectral data to its keystone-free form, based on the data that was recorded onto the sensor with large keystone. Virtual Camera software, developed specifically for this purpose, was used to compare the performance of the mixel camera to traditional cameras that correct keystone in hardware. The mixel camera can collect at least four times more light than most current high-resolution hyperspectral cameras, and simulations have shown that the mixel camera will be photon-noise limited, even in bright light, with a significantly improved signal-to-noise ratio compared to traditional cameras. A prototype has been built and is being tested.
Web conferencing systems: Skype and MSN in telepathology
Klock, Clóvis; Gomes, Regina de Paula Xavier
2008-01-01
Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband internet using a Nikon E 200 microscope and a Samsung SCC-131 digital color camera. Internet transmission speed varied from 400 Kbps to 2.0 Mbps. Both programs allow voice transmission concomitant with the image, so communication between the pathologists involved was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who was able to ask for the field to be moved or the magnification to be increased or decreased. No phone call or typing was required. MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates the viability of the system for use in developing countries and in cities that lack pathologists. With improvements in software and digital image quality, together with high-speed broadband Internet, this may become a new modality in surgical pathology. PMID:18673501
Web conferencing systems: Skype and MSN in telepathology.
Klock, Clóvis; Gomes, Regina de Paula Xavier
2008-07-15
Virtual pathology is a very important tool that can be used in several ways, including interconsultations with specialists in many areas and for frozen sections. In this work we considered the use of Windows Live Messenger and Skype for image transmission. The conference was conducted over broadband internet using a Nikon E 200 microscope and a Samsung SCC-131 digital color camera. Internet transmission speed varied from 400 Kbps to 2.0 Mbps. Both programs allow voice transmission concomitant with the image, so communication between the pathologists involved was possible using microphones and speakers. A live image could be seen by the receiving pathologist, who was able to ask for the field to be moved or the magnification to be increased or decreased. No phone call or typing was required. MSN and Skype can be used in many ways and with different operating systems installed on the computer. The capture system is simple and relatively cheap, which demonstrates the viability of the system for use in developing countries and in cities that lack pathologists. With improvements in software and digital image quality, together with high-speed broadband Internet, this may become a new modality in surgical pathology.
Shumate, Alice M; Yard, Ellen E; Casey-Lockyer, Mary; Apostolou, Andria; Chan, Miranda; Tan, Christina; Noe, Rebecca S; Wolkin, Amy F
2016-06-01
Timely morbidity surveillance of sheltered populations is crucial for identifying and addressing their immediate needs, and accurate surveillance allows us to better prepare for future disasters. However, disasters often create travel and communication challenges that complicate the collection and transmission of surveillance data. We describe a surveillance project conducted in New Jersey shelters after Hurricane Sandy, which occurred in November 2012, that successfully used cellular phones for remote real-time reporting. This project demonstrated that, when supported with just-in-time morbidity surveillance training, cellular phone reporting was a successful, sustainable, and less labor-intensive methodology than in-person shelter visits to capture morbidity data from multiple locations and opened a two-way communication channel with shelters. (Disaster Med Public Health Preparedness. 2015;10:525-528).
Lifting Scheme DWT Implementation in a Wireless Vision Sensor Network
NASA Astrophysics Data System (ADS)
Ong, Jia Jan; Ang, L.-M.; Seng, K. P.
This paper presents the practical implementation of a Wireless Visual Sensor Network (WVSN) with DWT processing on the visual nodes. A conventional WVSN consists of visual nodes that capture video and transmit it to the base station without processing, but limited network bandwidth constrains real-time video streaming from remote visual nodes over a wireless link. Three layers of DWT filters are therefore implemented to process the image captured by the camera. Once all the wavelet coefficients have been produced, it is possible to transmit only the low-frequency band coefficients and obtain an approximate image at the base station, which reduces the power required for transmission. When necessary, transmitting all the wavelet coefficients reproduces the full detail of the image captured at the visual node. The visual node combines a CMOS camera, a Xilinx Spartan-3L FPGA and a wireless ZigBee® network based on the Ember EM250 chip.
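The low-band/high-band split described above can be sketched with a one-dimensional Haar lifting step; the paper's actual filter bank and FPGA pipeline are not specified here, so the wavelet choice is an assumption:

```python
import numpy as np

def haar_lifting_1d(x):
    """One level of the Haar wavelet via lifting: split, predict, update."""
    even, odd = x[0::2].astype(float), x[1::2].astype(float)
    d = odd - even        # predict step: detail (high-frequency) coefficients
    s = even + d / 2      # update step: approximation (low-frequency) coefficients
    return s, d

def dwt_levels(x, levels=3):
    """Apply `levels` lifting stages, mimicking the three DWT layers on a node."""
    coeffs = []
    s = np.asarray(x, dtype=float)
    for _ in range(levels):
        s, d = haar_lifting_1d(s)
        coeffs.append(d)
    return s, coeffs  # transmit only `s` for a low-band approximation
```

After three levels, transmitting only `s` sends one-eighth of the samples; sending the stored detail coefficients later restores the full-resolution signal.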
Evangelista, Dennis J.; Ray, Dylan D.; Hedrick, Tyson L.
2016-01-01
Ecological, behavioral and biomechanical studies often need to quantify animal movement and behavior in three dimensions. In laboratory studies, a common tool to accomplish these measurements is the use of multiple, calibrated high-speed cameras. Until very recently, the complexity, weight and cost of such cameras have made their deployment in field situations risky; furthermore, such cameras are not affordable to many researchers. Here, we show how inexpensive, consumer-grade cameras can adequately accomplish these measurements both within the laboratory and in the field. Combined with our methods and open source software, the availability of inexpensive, portable and rugged cameras will open up new areas of biological study by providing precise 3D tracking and quantification of animal and human movement to researchers in a wide variety of field and laboratory contexts. PMID:27444791
NASA Astrophysics Data System (ADS)
Osborne, T. C.; Billings, B. J.
2014-12-01
Stereo images of cloud patterns and nearshore waves upstream of the Southern Alps during the DEEPWAVE field campaign are presented through photogrammetric analysis. The photos highlighted in this case were taken in the afternoon of Friday, 13 June 2014. These photos were chosen because they may allow for focused analysis of terrain effects on cloud evolution. Stratocumulus and other cumuliform clouds, as well as cirrus, were captured as the sun set over the Tasman Sea, one of the South Pacific Ocean's marginal seas. Breaks in the thin band of stratocumulus along the shoreline, as well as the total time for cloud-layer dissipation, are also of interest. A possible barrier jet causing the southward motion of the stratocumulus layer is also investigated. Views look northwest from Serpentine Road in Kumara Junction, South Island, New Zealand. An Integrated Sounding System (ISS) located at the Hokitika Airport was the primary source of vertical profiles. The upper-air sounding closest to the shoot time and location, plotted from Hokitika's 11:05 UTC upsonde data, shows 10 mph NE winds near the surface. Images were taken on days with research flights over New Zealand from 2 June to 23 June 2014 to match DEEPWAVE objectives. On the night of 13 June 2014, NSF/NCAR's HIAPER GV research aircraft completed a flight from Christchurch over the South Island. This flight became known as the Intensive Observing Period 3 (IOP 3) Sensitivity Flight. Methods applied in the Terrain-Induced Rotor Experiment (T-REX) by Grubišić and Grubišić (2007) were closely followed while capturing stereo photographic images. Two identical cameras were positioned with a separation baseline near 270 meters. Each camera was tilted upward approximately seven degrees and carefully positioned to capture parallel fields of view of the site. Developing clouds were captured using synchronized camera timers on a five-second interval.
Ultimately, cloud locations and measurements can be determined using the recorded GPS locations of the cameras. The Camera Calibration Toolbox available for MATLAB was used in order to perform these elaborate triangulation calculations.
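For a parallel two-camera setup like the one described, range to a feature follows from similar triangles. The focal length below is a made-up placeholder; the real values come from the GPS survey and the MATLAB calibration toolbox:

```python
# Hypothetical numbers for illustration only: the campaign's baseline was
# near 270 m, but the focal length in pixels is an assumed value.
BASELINE_M = 270.0    # camera separation along the ground
FOCAL_PX = 8000.0     # assumed focal length expressed in pixels

def depth_from_disparity(disparity_px):
    """Range to a cloud feature from its horizontal pixel disparity,
    assuming parallel optical axes: Z = f * B / d."""
    return FOCAL_PX * BASELINE_M / disparity_px
```

A feature shifted by 100 pixels between the two views would then lie roughly 21.6 km away; larger disparities mean closer clouds.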
Multi-User Low Intrusive Occupancy Detection
Widyawan, Widyawan; Lazovik, Alexander
2018-01-01
Smart spaces are those that are aware of their state and can act accordingly. Among the central elements of such a state is the presence of humans and their number. For a smart office building, such information can be used for saving energy and safety purposes. While acquiring presence information is crucial, using sensing techniques that are highly intrusive, such as cameras, is often not acceptable for the building occupants. In this paper, we illustrate a proposal for occupancy detection which is low intrusive; it is based on equipment typically available in modern offices such as room-level power-metering and an app running on workers’ mobile phones. For power metering, we collect the aggregated power consumption and disaggregate the load of each device. For the mobile phone, we use the Received Signal Strength (RSS) of BLE (Bluetooth Low Energy) nodes deployed around workspaces to localize the phone in a room. We test the system in our offices. The experiments show that sensor fusion of the two sensing modalities gives 87–90% accuracy, demonstrating the effectiveness of the proposed approach. PMID:29509693
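A minimal sketch of the BLE side of such a system, assuming a standard log-distance path-loss model; the paper's actual fusion of power metering and RSS is more involved, and the reference power and exponent below are assumed values that real deployments calibrate per room:

```python
def rss_to_distance(rss_dbm, rss_at_1m=-59.0, path_loss_exp=2.0):
    """Invert the log-distance path-loss model,
    RSS(d) = RSS(1 m) - 10 * n * log10(d),
    to estimate the phone-to-node distance in meters."""
    return 10 ** ((rss_at_1m - rss_dbm) / (10 * path_loss_exp))

def nearest_room(rss_by_room):
    """Assign the phone to the room whose BLE node hears it strongest."""
    return max(rss_by_room, key=rss_by_room.get)
```

With these defaults, a reading of -79 dBm maps to about 10 m, and a phone heard at -60 dBm in room A but -80 dBm in room B is placed in room A.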
Choodum, Aree; Parabun, Kaewalee; Klawach, Nantikan; Daeid, Niamh Nic; Kanatharana, Proespichaya; Wongniramaikul, Worawit
2014-02-01
The Simon presumptive color test was used in combination with the built-in digital camera on a mobile phone to detect methamphetamine. Real-time Red-Green-Blue (RGB) color data were obtained using an application installed on the mobile phone, and the relationship between RGB intensity (including other calculated values) and the colourimetric product was investigated. A wide linear range (0.1-2.5 mg mL(-1)) and a low detection limit (0.0110 ± 0.0001 to 0.044 ± 0.002 mg mL(-1)) were achieved. The method also required only a small sample size (20 μL). The results obtained from the analysis of illicit methamphetamine tablets were comparable to values obtained from gas chromatography-flame ionization detection (GC-FID) analysis. Method validation indicated good intra- and inter-day precision (2.27-4.49% RSD and 2.65-5.62% RSD, respectively). The results suggest that this is a powerful real-time mobile method with the potential to be applied in field tests. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
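A colourimetric calibration of this kind can be sketched as a least-squares line relating a colour channel's intensity to concentration; the intensity values below are invented for illustration and are not the paper's data:

```python
import numpy as np

# Hypothetical calibration points: green-channel intensity of the Simon test
# product at known methamphetamine standards (mg/mL) over the linear range.
conc = np.array([0.1, 0.5, 1.0, 1.5, 2.0, 2.5])
green = np.array([210.0, 190.0, 165.0, 140.0, 115.0, 90.0])

slope, intercept = np.polyfit(conc, green, 1)  # least-squares calibration line

def concentration(green_intensity):
    """Read an unknown sample's concentration back off the calibration line."""
    return (green_intensity - intercept) / slope
```

An unknown whose green intensity reads 165 would be reported as roughly 1.0 mg/mL under this made-up calibration.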
Optical zoom lens module using MEMS deformable mirrors for portable device
NASA Astrophysics Data System (ADS)
Lu, Jia-Shiun; Su, Guo-Dung J.
2012-10-01
The thickness of the smart phones on today's market is usually below 10 mm, and as phone volume shrinks, producing the camera lens becomes increasingly difficult. How to give the imaging device more functionality in a smaller space is therefore one of the interesting research topics for today's mobile phone companies. In this paper, we propose a thin optical zoom system that combines micro-electromechanical components with a reflective optical architecture. By adopting MEMS deformable mirrors, we can change their radius of curvature to achieve optical zoom in and zoom out. Because we used an all-reflective architecture, the system eliminates the considerable chromatic aberrations that lenses would introduce. In our system, the thickness of the zoom module is about 11 mm. The shortest EFL (effective focal length) is 4.61 mm at a diagonal field angle of 52° and f/# of 5.24. The longest EFL of the module is 9.22 mm at a diagonal field angle of 27.4° with f/# of 5.03.
Face Value: Towards Robust Estimates of Snow Leopard Densities.
Alexander, Justine S; Gopalaswamy, Arjun M; Shi, Kun; Riordan, Philip
2015-01-01
When densities of large carnivores fall below certain thresholds, dramatic ecological effects can follow, leading to oversimplified ecosystems. Understanding the population status of such species remains a major challenge, as they occur at low densities and their ranges are wide. This paper describes the use of non-invasive data collection techniques combined with recent spatial capture-recapture methods to estimate the density of snow leopards Panthera uncia. It also investigates the influence of environmental and human activity indicators on their spatial distribution. A total of 60 camera traps were systematically set up during a three-month period over a 480 km2 study area in Qilianshan National Nature Reserve, Gansu Province, China. We recorded 76 separate snow leopard captures over 2,906 trap-days, representing an average capture success of 2.62 captures/100 trap-days. We identified a total of 20 unique individuals from photographs and estimated snow leopard density at 3.31 (SE = 1.01) individuals per 100 km2. Results of our simulation exercise indicate that our estimates from the spatial capture-recapture models were not optimal with respect to bias and precision (RMSEs for density parameters of 0.87 or less). Our results underline the critical challenge of achieving sufficient sample sizes of snow leopard captures and recaptures. Possible performance improvements are discussed, principally optimising effective camera capture and photographic data quality. PMID:26322682
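The reported capture success follows directly from the counts given in the abstract:

```python
# Reproduce the reported capture-success rate from the raw counts.
captures, trap_days = 76, 2906
capture_success = 100 * captures / trap_days  # captures per 100 trap-days
print(round(capture_success, 2))  # 2.62, matching the reported rate
```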
Astronaut Kathryn Thornton on HST photographed by Electronic Still Camera
1993-12-05
S61-E-011 (5 Dec 1993) --- This view of astronaut Kathryn C. Thornton working on the Hubble Space Telescope (HST) was photographed with an Electronic Still Camera (ESC) and downlinked to ground controllers soon afterward. Thornton, anchored to the end of the Remote Manipulator System (RMS) arm, is installing the +V2 Solar Array Panel as a replacement for the original one removed earlier. Electronic still photography is a relatively new technology which provides the means for a handheld camera to electronically capture and digitize an image with resolution approaching film quality. The electronic still camera has flown as an experiment on several other shuttle missions.
NutriPhone: vitamin B12 testing on your smartphone (Conference Presentation)
NASA Astrophysics Data System (ADS)
Lee, Seoho; O'Dell, Dakota; Hohenstein, Jessica; Colt, Susannah; Mehta, Saurabh; Erickson, David
2016-03-01
Vitamin B12 deficiency is the leading cause of cognitive decline in the elderly and is associated with increased risks of several acute and chronic conditions, including anemia. The deficiency is prevalent in the world population, most of whom are unaware of their condition due to the lack of a simple diagnostic system. Recent advancements in smartphone-enabled mobile health can help address this problem by making deficiency tests more accessible. Previously, our group demonstrated the NutriPhone, a smartphone platform for the accurate quantification of vitamin D levels. The NutriPhone technology comprises a disposable test strip that performs a colorimetric reaction upon collecting a sample, a reusable accessory that interfaces with the smartphone camera, and a smartphone app that implements the algorithm for analyzing the test-strip reaction. In this work, we show that the NutriPhone can be expanded to measure vitamin B12 concentrations by developing a lateral flow assay for B12 that is compatible with our NutriPhone system. Our novel vitamin B12 assay incorporates blood sample processing and key reagent storage on-chip, which advances it to a sample-in-answer-out format suitable for point-of-care diagnostic applications. To enable the detection of pM levels of vitamin B12, silver amplification of the initial signal is used within a total assay time of less than 15 minutes. We demonstrate the effectiveness of our NutriPhone system by deploying it in a resource-limited clinical setting in India, where it is used to test tens of participants for vitamin B12 deficiency.
A mobile phone-based approach to detection of hemolysis.
Archibong, Edikan; Konnaiyan, Karthik Raj; Kaplan, Howard; Pyayt, Anna
2017-02-15
Preeclampsia and HELLP (hemolysis, elevated liver enzymes, and low platelet count) syndrome are pregnancy-related complications with high rates of morbidity and mortality. HELLP syndrome, in particular, can be difficult to diagnose. Recent work suggests that elevated levels of cell-free hemoglobin in blood plasma can, as early as the first trimester, potentially serve as a diagnostic biomarker for impending complications. We therefore developed a point-of-care mobile phone-based platform that can quickly characterize a patient's level of hemolysis by measuring the color of blood plasma. The custom hardware and software are designed to be easy to use. A sample of whole blood (~10 µL or less) is first collected into a clear capillary tube or microtube, which is then inserted into a low-cost 3D-printed sample holder attached to the phone. A 5-10 min period of quiescence allows for gravitational sedimentation of the red blood cells, leaving a layer of yellowish plasma at the top of the tube. The phone camera then photographs the capillary tube and analyzes the color components of the cell-free plasma layer. The software converts these color values to a concentration of free hemoglobin, based on a built-in calibration curve, and reports the patient's hemolysis level: non-hemolyzed, slightly hemolyzed, mildly hemolyzed, frankly hemolyzed, or grossly hemolyzed. The accuracy of the method is ~1 mg dL-1. This phone-based point-of-care system provides the potentially life-saving advantage of a turnaround time of about 10 min (versus 4+ hours for conventional laboratory analytical methods) at a cost of approximately one US dollar (assuming the phone and software are already available). Copyright © 2016 Elsevier B.V. All rights reserved.
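The final reporting step, mapping a free-hemoglobin concentration to one of the five categories, can be sketched as a simple threshold table; the cut-off values below are hypothetical placeholders, since the paper's actual calibration curve and cut-offs live inside the phone app:

```python
# Hypothetical thresholds (mg/dL of cell-free hemoglobin), illustrative only.
LEVELS = [
    (10, "non-hemolyzed"),
    (50, "slightly hemolyzed"),
    (100, "mildly hemolyzed"),
    (300, "frankly hemolyzed"),
]

def hemolysis_level(hb_mg_dl):
    """Map a free-hemoglobin concentration to the five reported categories."""
    for cutoff, label in LEVELS:
        if hb_mg_dl < cutoff:
            return label
    return "grossly hemolyzed"
```

A reading of 5 mg/dL would be reported as non-hemolyzed and one of 500 mg/dL as grossly hemolyzed under these assumed cut-offs.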
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, the users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
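The "simple analytic geometry" idea can be illustrated by back-projecting the touched pixel into a ray and intersecting it with the ground plane; the camera pose (level, looking straight ahead at a known height) and the intrinsics used below are assumptions for the sketch, not the paper's calibration:

```python
import numpy as np

def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project pixel (u, v) through a calibrated pinhole camera mounted
    at height `cam_height`, and intersect the ray with the ground plane.
    Returns the 3D point in the camera frame (x right, y down, z forward)."""
    ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])  # direction of the pixel ray
    if ray[1] <= 0:
        raise ValueError("pixel ray points above the horizon; no ground hit")
    t = cam_height / ray[1]  # scale so the ray descends exactly cam_height
    return ray * t
```

For a camera 1 m above the floor with fx = fy = 500 px and principal point (320, 240), a click at pixel (320, 490) lands at the camera-frame point (0, 1, 2), i.e. 2 m straight ahead.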
NASA Astrophysics Data System (ADS)
Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing
2008-02-01
Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality applications on phones with inbuilt cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted in two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing for manipulative factors like image scaling and frame-skipping.
Recent advances in multiview distributed video coding
NASA Astrophysics Data System (ADS)
Dufaux, Frederic; Ouaret, Mourad; Ebrahimi, Touradj
2007-04-01
We consider dense networks of surveillance cameras capturing overlapping images of the same scene from different viewing directions, a scenario referred to as multi-view. Data compression is paramount in such a system due to the large amount of captured data. In this paper, we propose a Multi-view Distributed Video Coding approach. It allows for low complexity / low power consumption at the encoder side, and the exploitation of inter-view correlation without communication among the cameras. We introduce a combination of temporal intra-view side information and homography inter-view side information. Simulation results show both the improvement of the side information and a significant gain in terms of coding efficiency.
Royle, J. Andrew; Chandler, Richard B.; Sollmann, Rahel; Gardner, Beth
2013-01-01
Spatial Capture-Recapture provides a revolutionary extension of traditional capture-recapture methods for studying animal populations using data from live trapping, camera trapping, DNA sampling, acoustic sampling, and related field methods. This book is a conceptual and methodological synthesis of spatial capture-recapture modeling. As a comprehensive how-to manual, this reference contains detailed examples of a wide range of relevant spatial capture-recapture models for inference about population size and spatial and temporal variation in demographic parameters. Practicing field biologists studying animal populations will find this book to be a useful resource, as will graduate students and professionals in ecology, conservation biology, and fisheries and wildlife management.
NASA Astrophysics Data System (ADS)
Walton, James S.; Hodgson, Peter; Hallamasek, Karen; Palmer, Jake
2003-07-01
4DVideo is creating a general purpose capability for capturing and analyzing kinematic data from video sequences in near real-time. The core element of this capability is a software package designed for the PC platform. The software ("4DCapture") is designed to capture and manipulate customized AVI files that can contain a variety of synchronized data streams -- including audio, video, centroid locations -- and signals acquired from more traditional sources (such as accelerometers and strain gauges.) The code includes simultaneous capture or playback of multiple video streams, and linear editing of the images (together with the ancillary data embedded in the files). Corresponding landmarks seen from two or more views are matched automatically, and photogrammetric algorithms permit multiple landmarks to be tracked in two and three dimensions -- with or without lens calibrations. Trajectory data can be processed within the main application or exported to a spreadsheet, where they can be processed or passed along to a more sophisticated, stand-alone data analysis application. Previous attempts to develop such applications for high-speed imaging have been limited in their scope, or by the complexity of the application itself. 4DVideo has devised a friendly ("FlowStack") user interface that assists the end-user in capturing and treating image sequences in a natural progression. 4DCapture employs the AVI 2.0 standard and DirectX technology, which effectively eliminates the file size limitations found in older applications. In early tests, 4DVideo has streamed three RS-170 video sources to disk for more than an hour without loss of data. At this time, the software can acquire video sequences in three ways: (1) directly, from up to three hard-wired cameras supplying RS-170 (monochrome) signals; (2) directly, from a single camera or video recorder supplying an NTSC (color) signal; and (3) by importing existing video streams in the AVI 1.0 or AVI 2.0 formats.
The latter is particularly useful for high-speed applications where the raw images are often captured and stored by the camera before being downloaded. Provision has been made to synchronize data acquired from any combination of these video sources using audio and visual "tags." Additional "front-ends," designed for digital cameras, are anticipated.
Measuring frequency of one-dimensional vibration with video camera using electronic rolling shutter
NASA Astrophysics Data System (ADS)
Zhao, Yipeng; Liu, Jinyue; Guo, Shijie; Li, Tiejun
2018-04-01
Cameras offer a unique capability of collecting high-density spatial data from a distant scene of interest. They can be employed as remote monitoring or inspection sensors to measure vibrating objects because of their commonplace availability, simplicity, and potentially low cost. A drawback of vibration measurement with cameras is processing the massive data they generate. To reduce the data collected, a camera using an electronic rolling shutter (ERS) is applied to measure the frequency of one-dimensional vibration whose frequency is much higher than the camera's frame rate. Every row in an image captured by the ERS camera records the vibrating displacement at a different time. The displacements that form the vibration can be extracted by local analysis with sliding windows. This methodology is demonstrated on vibrating structures, a cantilever beam and an air compressor, to validate the proposed algorithm. Suggestions for applications of this methodology and challenges in real-world implementation are given at the end.
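The key idea, that each ERS row is a sample taken one line-time after the previous row, can be sketched as follows; the sensor geometry and line time are assumed numbers, chosen so one frame alone resolves a vibration far above any realistic frame rate:

```python
import numpy as np

# Assumed for illustration: a 1000-row ERS sensor whose rows are read out
# 20 microseconds apart, watching a 2 kHz vibration. Row i samples the
# displacement at t = i * line_time, so a single frame yields 1000 samples
# at an effective 50 kHz rate.
line_time = 20e-6
rows = np.arange(1000)
t = rows * line_time
displacement = np.sin(2 * np.pi * 2000.0 * t)  # per-row displacement (pixels)

# Recover the vibration frequency from the per-row displacement series.
spectrum = np.abs(np.fft.rfft(displacement))
freqs = np.fft.rfftfreq(len(displacement), d=line_time)
dominant = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

Under these assumptions `dominant` comes out at 2000 Hz, even though a global-shutter camera at ordinary frame rates could never sample this vibration directly.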
Analysis of Camera Arrays Applicable to the Internet of Things.
Yang, Jiachen; Xu, Ru; Lv, Zhihan; Song, Houbing
2016-03-22
The Internet of Things is built based on various sensors and networks. Sensors for stereo capture are essential for acquiring information and have been applied in different fields. In this paper, we focus on camera modeling and analysis, which is very important for stereo display and comfortable viewing. We model two kinds of cameras, a parallel one and a converged one, and analyze the difference between them in vertical and horizontal parallax. Even though different kinds of camera arrays are used in various applications and analyzed in research work, there are few comparisons between them. Therefore, we make a detailed analysis of their performance over different shooting distances. From our analysis, we find that the threshold shooting distance for converged cameras is 7 m. In addition, we design a camera array that can be used both as a parallel and as a converged camera array, and take images and videos with it to verify the threshold.
NASA Astrophysics Data System (ADS)
Gaddam, Vamsidhar Reddy; Griwodz, Carsten; Halvorsen, Pål
2014-02-01
One of the most common ways of capturing wide field-of-view scenes is by recording panoramic videos. Using an array of cameras with limited overlap in the corresponding images, one can generate good panorama images. Using the panorama, several immersive display options can be explored. There is a two-fold synchronization problem associated with such a system. One is temporal synchronization, but this challenge can easily be handled by using a common triggering solution to control the shutters of the cameras. The other synchronization challenge is automatic exposure synchronization, which does not have a straightforward solution, especially in a wide-area scenario where the light conditions are uncontrolled, as in the case of an open, outdoor football stadium. In this paper, we present the challenges and approaches for creating a completely automatic real-time panoramic capture system with a particular focus on the camera settings. One of the main challenges in building such a system is that there is not one common area of the pitch visible to all the cameras that can be used for metering the light in order to find appropriate camera parameters. One approach we tested is to use the green color of the field grass. Such an approach provided acceptable results only in limited light conditions. A second approach was devised where the overlapping areas between adjacent cameras are exploited, thus creating pairs of perfectly matched video streams. However, some disparity still existed between different pairs. We finally developed an approach where the time between two temporal frames is exploited to communicate the exposures among the cameras, with which we achieve a perfectly synchronized array. An analysis of the system and some experimental results are presented in this paper. In summary, a pilot-camera approach running in auto-exposure mode and then distributing the used exposure values to the other cameras seems to give the best visual results.
ERIC Educational Resources Information Center
Levesque, Luc
2014-01-01
Inaccurate measurements occur regularly in data acquisition as a result of improper sampling times. An understanding of proper sampling times when collecting data with an analogue-to-digital converter or video camera is crucial in order to avoid anomalies. A proper choice of sampling times should be based on the Nyquist sampling theorem. If the…
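The aliasing anomaly this abstract warns about is easy to reproduce: a signal sampled below its Nyquist rate is indistinguishable from a lower-frequency one. The frequencies below are chosen for illustration; any true frequency above half the sampling rate shows the same effect:

```python
import numpy as np

# A 9 Hz signal sampled at 10 Hz (well below the 18 Hz Nyquist requirement)
# produces exactly the same samples as a phase-flipped 1 Hz signal.
fs, f_true = 10.0, 9.0
n = np.arange(20)                                 # 20 sample instants
sampled = np.sin(2 * np.pi * f_true * n / fs)     # what the ADC/camera records
alias = np.sin(2 * np.pi * (fs - f_true) * n / fs)  # the 1 Hz alias

print(np.allclose(sampled, -alias))  # True: the 9 Hz signal masquerades as 1 Hz
```

Sampling at more than twice the highest frequency present (here, above 18 Hz) is the standard remedy prescribed by the Nyquist sampling theorem.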