NASA Astrophysics Data System (ADS)
Park, J. W.; Jeong, H. H.; Kim, J. S.; Choi, C. U.
2016-06-01
Recently, aerial photography with unmanned aerial vehicle (UAV) systems has relied on UAVs remotely controlled from a ground control system over a radio-frequency (RF) modem link at about 430 MHz. However, this RF-modem approach has limitations for long-distance communication. In this study, a UAV communication module using a smart camera's LTE (Long-Term Evolution), Bluetooth, and Wi-Fi interfaces was developed and used to carry out close-range aerial photogrammetry with automatic shooting. The automatic shooting system consists of an image-capturing device for the drone, covering the areas that require imaging, and software for mounting and managing the smart camera. It combines automatic shooting driven by the smart camera's sensors with a shooting-catalog manager that organizes the captured images and their metadata. The UAV imagery was processed with OpenDroneMap. This study examined the feasibility of using the smart camera as the payload for a photogrammetric UAV system. The open-source tools used included Android, OpenCV (Open Source Computer Vision), RTKLIB, and OpenDroneMap.
Motionless active depth from defocus system using smart optics for camera autofocus applications
NASA Astrophysics Data System (ADS)
Amin, M. Junaid; Riza, Nabeel A.
2016-04-01
This paper describes a motionless active Depth from Defocus (DFD) system design suited for long working range camera autofocus applications. The design consists of an active illumination module that projects a scene illuminating coherent conditioned optical radiation pattern which maintains its sharpness over multiple axial distances allowing an increased DFD working distance range. The imager module of the system responsible for the actual DFD operation deploys an electronically controlled variable focus lens (ECVFL) as a smart optic to enable a motionless imager design capable of effective DFD operation. An experimental demonstration is conducted in the laboratory which compares the effectiveness of the coherent conditioned radiation module versus a conventional incoherent active light source, and demonstrates the applicability of the presented motionless DFD imager design. The fast response and no-moving-parts features of the DFD imager design are especially suited for camera scenarios where mechanical motion of lenses to achieve autofocus action is challenging, for example, in the tiny camera housings in smartphones and tablets. Applications for the proposed system include autofocus in modern day digital cameras.
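As a toy illustration of the defocus principle such a DFD system exploits, the sketch below (an illustrative assumption, not the paper's algorithm) compares a simple Laplacian-variance sharpness measure on two synthetic captures of the same scene at two hypothetical ECVFL focus settings; the less defocused capture scores higher.

```python
import numpy as np

def laplacian_sharpness(img):
    """Variance of a discrete Laplacian response; it drops as defocus blur grows."""
    lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
           + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
    return lap[1:-1, 1:-1].var()   # trim wrap-around edges introduced by np.roll

def box_blur(img, radius):
    """Crude separable box blur standing in for optical defocus."""
    out = img.astype(float)
    for axis in (0, 1):
        acc = np.zeros_like(out)
        for s in range(-radius, radius + 1):
            acc += np.roll(out, s, axis)
        out = acc / (2 * radius + 1)
    return out

# Synthetic scene: a checkerboard target.
scene = (np.indices((64, 64)).sum(0) // 8 % 2).astype(float)

near_focus = box_blur(scene, 1)   # mildly defocused ECVFL setting
far_focus = box_blur(scene, 4)    # strongly defocused ECVFL setting
```

Ranking such a sharpness score across electronically swept ECVFL settings is what lets an imager refocus with no moving parts.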
Demonstration of the CDMA-mode CAOS smart camera.
Riza, Nabeel A; Mazhar, Mohsin A
2017-12-11
Demonstrated is the code division multiple access (CDMA)-mode coded access optical sensor (CAOS) smart camera suited for bright target scenarios. Deploying a silicon CMOS sensor and a silicon point detector within a digital micro-mirror device (DMD)-based spatially isolating hybrid camera design, this smart imager first engages the DMD staring mode with a controlled factor-of-200 optical attenuation of the scene irradiance to provide a classic unsaturated CMOS sensor-based image for target intelligence gathering. Next, the image data from this CMOS sensor are used to acquire a more robust, un-attenuated true target image of the focused zone using the time-modulated CDMA-mode of the CAOS camera. Using four different bright-light test target scenes, a proof-of-concept visible-band CAOS smart camera is successfully demonstrated operating in the CDMA-mode with Walsh-design CAOS pixel codes of up to 4096 bits at a maximum 10 kHz code bit rate, giving a 0.4096 s CAOS frame acquisition time. A 16-bit analog-to-digital converter (ADC) with time-domain correlation digital signal processing (DSP) generates the CDMA-mode images with a 3600 CAOS pixel count and a best spatial resolution of one square micromirror pixel, 13.68 μm on a side. The CDMA-mode of the CAOS smart camera is suited for applications where robust high dynamic range (DR) imaging is needed for un-attenuated, un-spoiled, bright, spectrally diverse targets.
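The time-domain correlation decoding behind the CDMA-mode can be sketched with orthogonal Walsh (Hadamard) codes. The miniature example below uses 8-bit codes for 8 CAOS pixels (the demonstrated camera uses up to 4096-bit codes for 3600 pixels); it is an illustrative sketch of the principle, not the camera's actual DSP chain, and the irradiance values are made up.

```python
import numpy as np

def walsh_codes(n):
    """Rows of a Sylvester Hadamard matrix of order n (a power of two):
    mutually orthogonal +/-1 codes, one per CAOS pixel."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

codes = walsh_codes(8)                 # one code per CAOS pixel
irradiances = np.array([5.0, 0.0, 2.5, 9.0, 0.1, 0.0, 7.0, 1.0])

# Each micromirror time-modulates its pixel with its Walsh code; the single
# point detector sees the sum of all coded pixel signals at each code bit.
detector_signal = codes.T @ irradiances

# Time-domain correlation DSP: correlating the detector signal with each code
# recovers that pixel's irradiance, scaled by the code length (orthogonality).
recovered = codes @ detector_signal / codes.shape[1]
```

Because the codes are orthogonal, `recovered` equals `irradiances` exactly in this noiseless sketch; the real camera's robustness comes from the same correlation gain applied to the ADC samples.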
Uav Photogrammetric Solution Using a Raspberry pi Camera Module and Smart Devices: Test and Results
NASA Astrophysics Data System (ADS)
Piras, M.; Grasso, N.; Jabbar, A. Abdul
2017-08-01
Nowadays, smart technologies are an important part of our lives and activities, both indoors and outdoors. Several smart devices are easy to set up, can be integrated and embedded with other sensors, and have a very low cost. The Raspberry Pi supports a dedicated camera, the Raspberry Pi Camera Module, available in both RGB and NIR versions. The advantages of this system are its limited cost (< 60 euro), light weight, and simplicity of use and integration. This paper describes research in which a Raspberry Pi with the Camera Module was installed on a UAV hexacopter based on the ArduCopter system, with the purpose of collecting pictures for photogrammetric applications. First, the system was tested to verify the performance of the RPi camera in terms of frames per second and resolution, and its power requirements. Moreover, a Ublox M8T GNSS receiver was installed and connected to the Raspberry platform in order to collect the real-time position and the raw data, both for data processing and to define the time reference. An IMU was also tested to assess the impact of UAV rotor noise on different sensors such as the accelerometer, gyroscope, and magnetometer. A comparison of the achieved accuracy on some check points of the point clouds obtained by the camera is also reported, in order to analyse in greater depth the main discrepancies in the generated point cloud and the potential of the proposed approach. In this contribution, the assembly of the system is described, and in particular the acquired dataset and the resulting outputs are analysed.
Cai, Fuhong; Lu, Wen; Shi, Wuxiong; He, Sailing
2017-11-15
Spatially-explicit data are essential for remote sensing of ecological phenomena. Recent innovations in mobile device platforms have led to an upsurge in on-site rapid detection. For instance, CMOS chips in smart phones and digital cameras serve as excellent sensors for scientific research. In this paper, a mobile-device-based imaging spectrometer module (weighing about 99 g) is developed and mounted on a single-lens reflex camera. Utilizing this lightweight module, together with commonly used photographic equipment, we demonstrate its utility through a series of on-site multispectral imaging experiments, including ocean (or lake) water-color sensing and plant reflectance measurement. From the experiments we obtain 3D spectral image cubes, which can be further analyzed for environmental monitoring. Moreover, our system can be applied to many kinds of cameras, e.g., aerial and underwater cameras. Therefore, any camera can be upgraded to an imaging spectrometer with the help of our miniaturized module. We believe it has the potential to become a versatile tool for on-site investigation in many applications.
Modular telerobot control system for accident response
NASA Astrophysics Data System (ADS)
Anderson, Richard J. M.; Shirey, David L.
1999-08-01
The Accident Response Mobile Manipulator System (ARMMS) is a teleoperated emergency response vehicle that deploys two hydraulic manipulators, five cameras, and an array of sensors to the scene of an incident. It is operated from a remote base station that can be situated up to four kilometers away from the site. Recently, a modular telerobot control architecture called SMART was applied to ARMMS to improve the precision, safety, and operability of the manipulators on board. Using SMART, a prototype manipulator control system was developed in a couple of days, and an integrated working system was demonstrated within a couple of months. New capabilities such as camera-frame teleoperation, autonomous tool changeout and dual manipulator control have been incorporated. The final system incorporates twenty-two separate modules and implements seven different behavior modes. This paper describes the integration of SMART into the ARMMS system.
Bio-inspired motion detection in an FPGA-based smart camera module.
Köhler, T; Röchter, F; Lindemann, J P; Möller, R
2009-03-01
Flying insects, despite their relatively coarse vision and tiny nervous system, are capable of carrying out elegant and fast aerial manoeuvres. Studies of the fly visual system have shown that this is accomplished by the integration of signals from a large number of elementary motion detectors (EMDs) in just a few global flow detector cells. We developed an FPGA-based smart camera module with more than 10,000 single EMDs, which is closely modelled after insect motion-detection circuits with respect to overall architecture, resolution and inter-receptor spacing. Input to the EMD array is provided by a CMOS camera with a high frame rate. Designed as an adaptable solution for different engineering applications and as a testbed for biological models, the EMD detector type and parameters such as the EMD time constants, the motion-detection directions and the angle between correlated receptors are reconfigurable online. This allows a flexible and simultaneous detection of complex motion fields such as translation, rotation and looming, such that various tasks, e.g., obstacle avoidance, height/distance control or speed regulation can be performed by the same compact device.
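A single Hassenstein-Reichardt-type EMD of the kind tiled across such an array can be sketched as a delay-and-correlate operation on two neighbouring photoreceptor signals; the parameters and stimulus below are illustrative assumptions, not the module's actual configuration.

```python
import numpy as np

def emd_response(left, right, tau=1.0, dt=0.1):
    """Hassenstein-Reichardt correlator on two receptor time series.

    Each arm multiplies one receptor's low-pass-filtered (delayed) signal with
    the other receptor's direct signal; the opponent subtraction makes the mean
    output positive for left-to-right motion and negative for the reverse.
    """
    def low_pass(x):                      # first-order low-pass acts as the delay
        y, a = np.zeros_like(x), dt / (tau + dt)
        for i in range(1, len(x)):
            y[i] = y[i - 1] + a * (x[i] - y[i - 1])
        return y
    return np.mean(low_pass(left) * right - left * low_pass(right))

t = np.arange(0, 20, 0.1)
stimulus = lambda phase: np.sin(t - phase)

# A pattern moving left-to-right reaches the right receptor later (phase lag).
rightward = emd_response(stimulus(0.0), stimulus(0.5))
leftward = emd_response(stimulus(0.5), stimulus(0.0))
```

In the FPGA module, thousands of such correlators run in parallel and their outputs are pooled, mimicking the fly's global flow detector cells.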
Design of intelligent vehicle control system based on single chip microcomputer
NASA Astrophysics Data System (ADS)
Zhang, Congwei
2018-06-01
The smart car microprocessor uses the KL25ZV128VLK4 in the Freescale series of single-chip microcomputers. The image sampling sensor uses the CMOS digital camera OV7725. The obtained track data is processed by the corresponding algorithm to obtain track sideline information. At the same time, the pulse width modulation control (PWM) is used to control the motor and servo movements, and based on the digital incremental PID algorithm, the motor speed control and servo steering control are realized. In the project design, IAR Embedded Workbench IDE is used as the software development platform to program and debug the micro-control module, camera image processing module, hardware power distribution module, motor drive and servo control module, and then complete the design of the intelligent car control system.
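The digital incremental PID algorithm mentioned above computes a change in actuator output from the last three errors rather than an absolute output. The sketch below applies it to a toy first-order motor model; the gains and plant dynamics are illustrative assumptions, not the project's tuning.

```python
class IncrementalPID:
    """Digital incremental PID: each step outputs a change in actuation
    (e.g. a PWM duty increment), as used for motor speed and servo control."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0   # previous error
        self.e2 = 0.0   # error two steps back

    def step(self, error):
        delta = (self.kp * (error - self.e1)          # proportional difference
                 + self.ki * error                    # integral contribution
                 + self.kd * (error - 2 * self.e1 + self.e2))  # second difference
        self.e2, self.e1 = self.e1, error
        return delta

# Toy speed loop: a first-order motor model driven by the accumulated duty.
pid = IncrementalPID(kp=0.4, ki=0.2, kd=0.05)
duty, speed, target = 0.0, 0.0, 100.0
for _ in range(200):
    duty += pid.step(target - speed)
    speed += 0.1 * (duty - speed)       # crude motor dynamics
```

The incremental form suits microcontrollers because only the duty delta is computed each tick, and a controller reset cannot dump a large accumulated integral onto the motor.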
Traffic monitoring with distributed smart cameras
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin; Ulm, Michael; Schwingshackl, Gert
2012-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully captured and interpreted by a vision system. In this work we present steps towards a visual monitoring system which is designed to detect potentially dangerous traffic situations around a pedestrian crossing at a street intersection. The camera system is specifically designed to detect incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system has been field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in a weatherproof housing. Two cameras run vehicle detection and tracking software, and one camera runs a pedestrian detection and tracking module based on the HOG detection principle. All 3 cameras use sparse optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. Geometric calibration of the cameras allows us to estimate the real-world coordinates of detected objects and to link the cameras together into one common reference system. This work describes the foundation for the different object detection modalities (pedestrians, vehicles), and explains the system setup, its design, and the evaluation results achieved so far.
NASA Astrophysics Data System (ADS)
Bo, Nyan Bo; Deboeverie, Francis; Veelaert, Peter; Philips, Wilfried
2017-09-01
Occlusion is one of the most difficult challenges in visual tracking. We propose an occlusion handling framework to improve the performance of local tracking in a smart camera view within a multi-camera network. We formulate an extensible energy function to quantify the quality of a camera's observation of a particular target, taking into account both person-person and object-person occlusion. Using this energy function, a smart camera assesses the quality of its observations over all targets being tracked. When it cannot adequately observe a target, the smart camera estimates the quality of observation of that target from the viewpoints of other assisting cameras. If a camera with a better observation of the target is found, the tracking task for the target is carried out with the assistance of that camera. In our framework, only the positions of the persons being tracked are exchanged between smart cameras, so the communication bandwidth requirement is very low. Performance evaluation of our method on challenging video sequences with frequent and severe occlusions shows that the accuracy of a baseline tracker is considerably improved. We also report a performance comparison with state-of-the-art trackers, which our method outperforms.
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones. It addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The data are acquired using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration based on a disparity space parameterisation and the single-cluster PHD filter.
Lunar and Planetary Science XXXV: Future Missions to the Moon
NASA Technical Reports Server (NTRS)
2004-01-01
This document contained the following topics: A Miniature Mass Spectrometer Module; SELENE Gamma Ray Spectrometer Using Ge Detector Cooled by Stirling Cryocooler; Lunar Elemental Composition and Investigations with D-CIXS X-Ray Mapping Spectrometer on SMART-1; X-Ray Fluorescence Spectrometer Onboard the SELENE Lunar Orbiter: Its Science and Instrument; Detectability of Degradation of Lunar Impact Craters by SELENE Terrain Camera; Study of the Apollo 16 Landing Site: As a Standard Site for the SELENE Multiband Imager; Selection of Targets for the SMART-1 Infrared Spectrometer (SIR); Development of a Telescopic Imaging Spectrometer for the Moon; The Lunar Seismic Network: Mission Update.
Hardware accelerator design for tracking in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Vohra, Anil
2011-10-01
Smart cameras are important components in video analysis. For video analysis, a smart camera needs to detect interesting moving objects, track such objects from frame to frame, and analyse the object tracks in real time. Therefore, real-time tracking is prominent in smart cameras. A software implementation of a tracking algorithm on a general-purpose processor (such as a PowerPC) achieves only a low frame rate, far from real-time requirements. This paper presents a SIMD-based hardware accelerator designed for real-time tracking of objects in a scene. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA. The resulting frame rate is 30 frames per second for 250 × 200 grayscale video.
Hardware accelerator design for change detection in smart camera
NASA Astrophysics Data System (ADS)
Singh, Sanjay; Dunga, Srinivasa Murali; Saini, Ravi; Mandal, A. S.; Shekhar, Chandra; Chaudhury, Santanu; Vohra, Anil
2011-10-01
Smart cameras are important components in human-computer interaction. In any remote surveillance scenario, smart cameras have to take intelligent decisions, selecting only frames with significant changes in order to minimize communication and processing overhead. Among the many algorithms for change detection, a clustering-based scheme was proposed for smart camera systems. However, such an algorithm achieves only a low frame rate, far from real-time requirements, on the general-purpose processors (such as the PowerPC) available on FPGAs. This paper proposes a hardware accelerator capable of detecting changes in a scene in real time using the clustering-based change detection scheme. The system was designed and simulated using VHDL and implemented on a Xilinx XUP Virtex-II Pro FPGA board. The resulting frame rate is 30 frames per second for QVGA-resolution grayscale video.
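A clustering-based change detector of the general kind referenced above can be sketched per pixel as a small set of intensity clusters with hit counts: stable background intensities accumulate hits, and an observation that matches only a rarely seen cluster is flagged as a change. The code below is a simplified single-pixel illustration with made-up parameters, not the accelerated algorithm itself.

```python
import numpy as np

class PixelClusterModel:
    """Background model for one pixel: k intensity clusters with hit counts.
    A sketch of a clustering-based change detector, not the paper's exact scheme."""

    def __init__(self, k=3, thresh=10.0, lr=0.1):
        self.centroids = np.full(k, np.nan)   # unused clusters start empty
        self.hits = np.zeros(k)
        self.thresh, self.lr = thresh, lr

    def observe(self, value):
        d = np.abs(self.centroids - value)
        j = int(np.nanargmin(d)) if not np.isnan(d).all() else 0
        if np.isnan(self.centroids[j]) or d[j] > self.thresh:
            j = int(np.argmin(self.hits))     # recycle the least-seen cluster
            self.centroids[j], self.hits[j] = value, 1.0
        else:
            self.centroids[j] += self.lr * (value - self.centroids[j])
            self.hits[j] += 1.0
        # The pixel counts as "changed" if its matching cluster is rarely seen.
        return self.hits[j] < 0.1 * self.hits.sum()

pix = PixelClusterModel()
flags = [pix.observe(v) for v in [50] * 30]   # static background settles in
moving = pix.observe(200)                     # sudden bright object
```

Run over every pixel, the per-frame change flags can be summed to decide whether a frame is significant enough to transmit, which is exactly the decision the accelerator makes in hardware.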
Real-time machine vision system using FPGA and soft-core processor
NASA Astrophysics Data System (ADS)
Malik, Abdul Waheed; Thörnberg, Benny; Meng, Xiaozhou; Imran, Muhammad
2012-06-01
This paper presents a machine vision system for real-time computation of the distance and angle of a camera from reference points in the environment. Image pre-processing, component labeling, and feature extraction modules were modeled at the Register Transfer (RT) level and synthesized for implementation on field programmable gate arrays (FPGAs). The extracted image component features were sent from the hardware modules to a soft-core processor, MicroBlaze, for computation of distance and angle. A CMOS imaging sensor operating at a clock frequency of 27 MHz was used in our experiments to produce a video stream at a rate of 75 frames per second. The image component labeling and feature extraction modules run in parallel with a total latency of 13 ms. The MicroBlaze was interfaced with the component labeling and feature extraction modules through a Fast Simplex Link (FSL). The latency for computing the distance and angle of the camera from the reference points was measured to be 2 ms on the MicroBlaze, running at a 100 MHz clock frequency. In this paper, we present the performance analysis, device utilization, and power consumption of the designed system. The proposed FPGA-based machine vision system offers a high frame rate, low latency, and a power consumption much lower than that of commercially available smart camera solutions.
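The abstract does not give the exact geometry used on the MicroBlaze, but a basic pinhole-camera computation of distance and bearing from two reference points of known separation illustrates the kind of result such a stage produces. All names and parameter values below are assumptions for illustration.

```python
import math

def distance_and_angle(u1, u2, f_px, cx, separation_m):
    """Pinhole-model estimate of camera distance and bearing from two reference
    points a known distance apart (a sketch; the paper's method may differ).

    u1, u2:       horizontal pixel coordinates of the two reference points
    f_px:         focal length expressed in pixels
    cx:           principal point (image centre column)
    separation_m: real-world distance between the two points, in metres
    """
    pixel_span = abs(u2 - u1)
    distance = f_px * separation_m / pixel_span          # similar triangles
    angle = math.atan2((u1 + u2) / 2.0 - cx, f_px)       # bearing to midpoint
    return distance, math.degrees(angle)

# Example: 1000 px focal length, points 0.5 m apart imaged 100 px apart,
# centred on the principal point of a 1280-wide image.
d, a = distance_and_angle(590, 690, f_px=1000.0, cx=640.0, separation_m=0.5)
```

With the points centred, the bearing is zero and the distance comes out to f_px x separation / pixel_span = 5 m; the FPGA pipeline's job is to deliver the pixel coordinates (u1, u2) fast enough for this arithmetic to run at frame rate.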
A traffic situation analysis system
NASA Astrophysics Data System (ADS)
Sidla, Oliver; Rosner, Marcin
2011-01-01
The observation and monitoring of traffic with smart vision systems for the purpose of improving traffic safety has great potential. For example, embedded vision systems built into vehicles can be used as early warning systems, or stationary camera systems can modify the switching frequency of signals at intersections. Today the automated analysis of traffic situations is still in its infancy: the patterns of vehicle motion and pedestrian flow in an urban environment are too complex to be fully understood by a vision system. We present steps towards such a traffic monitoring system which is designed to detect potentially dangerous traffic situations, especially incidents in which the interaction of pedestrians and vehicles might develop into safety-critical encounters. The proposed system is field-tested at a real pedestrian crossing in the City of Vienna for the duration of one year. It consists of a cluster of 3 smart cameras, each of which is built from a very compact PC hardware system in an outdoor-capable housing. Two cameras run vehicle detection software including license plate detection and recognition, and one camera runs a complex pedestrian detection and tracking module based on the HOG detection principle. As a supplement, all 3 cameras use additional optical flow computation in a low-resolution video stream in order to estimate the motion path and speed of objects. This work describes the foundation for all 3 object detection modalities (pedestrians, vehicles, license plates), and explains the system setup and its design.
Real-time FPGA-based radar imaging for smart mobility systems
NASA Astrophysics Data System (ADS)
Saponara, Sergio; Neri, Bruno
2016-04-01
The paper presents an X-band FMCW (Frequency Modulated Continuous Wave) Radar Imaging system, called X-FRI, for surveillance in smart mobility applications. X-FRI allows for detecting the presence of targets (e.g. obstacles in a railway crossing or urban road crossing, or ships in a small harbor), as well as their speed and their position. With respect to alternative solutions based on LIDAR or camera systems, X-FRI operates in real-time also in bad lighting and weather conditions, night and day. The radio-frequency transceiver is realized through COTS (Commercial Off The Shelf) components on a single-board. An FPGA-based baseband platform allows for real-time Radar image processing.
DOE Office of Scientific and Technical Information (OSTI.GOV)
2009-09-01
The on-phone software captures images from the CMOS camera periodically, stores the pictures, and periodically transmits those images over the cellular network to the server. The cell phone software consists of several modules: CamTest.cpp, CamStarter.cpp, StreamIOHandler.cpp, and covertSmartDevice.cpp. The camera application on the SmartPhone is CamStarter, which is "the" user interface for the camera system. The CamStarter user interface allows a user to start/stop the camera application and transfer files to the server. The CamStarter application interfaces to the CamTest application through registry settings. Both the CamStarter and CamTest applications must be separately deployed on the smartphone to run the camera system application. When a user selects the Start button in CamStarter, CamTest is created as a process. The smartphone begins taking small pictures (CAPTURE mode), analyzing those pictures for certain conditions, and saving those pictures on the smartphone. This process terminates when the user selects the Stop button. The CamTest code spins off an asynchronous thread, StreamIOHandler, to check for pictures taken by the camera. Each received image is then tested by StreamIOHandler to see if it meets certain conditions. If those conditions are met, the CamTest program is notified through the setting of a registry key value, and the image is saved in a designated directory in a custom BMP file which includes a header and the image data. When the user selects the Transfer button in the CamStarter user interface, the covertSmartDevice code is created as a process. CovertSmartDevice gets all of the files in a designated directory, opens a socket connection to the server, sends each file, and then terminates.
NASA Astrophysics Data System (ADS)
Ozolinsh, Maris; Paulins, Paulis
2017-09-01
An experimental setup allowing the modeling of conditions in optical devices and in the eye at various degrees of scattering such as cataract pathology in human eyes is presented. The scattering in cells of polymer-dispersed liquid crystals (PDLCs) and ‘Smart Glass’ windows is used in the modeling experiments. Both applications are used as optical obstacles placed in different positions of the optical information flow pathway either directly on the stimuli demonstration computer screen or mounted directly after the image-formation lens of a digital camera. The degree of scattering is changed continuously by applying an AC voltage of up to 30-80 V to the PDLC cell. The setup uses a camera with 14 bit depth and a 24 mm focal length lens. Light-emitting diodes and diode-pumped solid-state lasers emitting radiation of different wavelengths are used as portable small-divergence light sources in the experiments. Image formation, optical system point spread function, modulation transfer functions, and system resolution limits are determined for such sample optical systems in student optics and optometry experimental exercises.
Efficient large-scale graph data optimization for intelligent video surveillance
NASA Astrophysics Data System (ADS)
Shang, Quanhong; Zhang, Shujun; Wang, Yanbo; Sun, Chen; Wang, Zepeng; Zhang, Luming
2017-08-01
Society is rapidly accepting the use of a wide variety of cameras in many locations and applications: traffic monitoring, parking-lot surveillance, vehicles, and smart spaces. These cameras provide data every day that must be analyzed in an effective way. Recent advances in sensor manufacturing, communications, and computing are stimulating the development of new applications that transform traditional vision systems into pervasive smart camera networks. The analysis of visual cues in multi-camera networks enables a wide range of applications, from smart home and office automation to large-area surveillance and traffic monitoring. Whereas dense camera networks, in which most cameras have large overlapping fields of view, are well studied, we focus on sparse camera networks. A sparse camera network covers a large area with as few cameras as possible, so that most cameras do not overlap each other's field of view. This setting is challenging because of the lack of knowledge of the network topology, the changes in target appearance and motion across different views, and the difficulty of understanding complex events in the network. In this review paper, we present a comprehensive survey of recent results addressing topology learning, object appearance modeling, and global activity understanding in sparse camera networks. In addition, some open research issues are discussed.
Real-time multiple objects tracking on Raspberry-Pi-based smart embedded camera
NASA Astrophysics Data System (ADS)
Dziri, Aziz; Duranton, Marc; Chapuis, Roland
2016-07-01
Multiple-object tracking constitutes a major step in several computer vision applications, such as surveillance, advanced driver assistance systems, and automatic traffic monitoring. Because of the number of cameras used to cover a large area, these applications are constrained by the cost of each node, the power consumption, the robustness of the tracking, the processing time, and the ease of deployment of the system. To meet these challenges, the use of low-power and low-cost embedded vision platforms to achieve reliable tracking becomes essential in networks of cameras. We propose a tracking pipeline that is designed for fixed smart cameras and which can handle occlusions between objects. We show that the proposed pipeline reaches real-time processing on a low-cost embedded smart camera composed of a Raspberry-Pi board and a RaspiCam camera. The tracking quality and the processing speed obtained with the proposed pipeline are evaluated on publicly available datasets and compared to the state-of-the-art methods.
Dworak, Volker; Selbeck, Joern; Dammer, Karl-Heinz; Hoffmann, Matthias; Zarezadeh, Ali Akbar; Bobda, Christophe
2013-01-24
The application of (smart) cameras for process control, mapping, and advanced imaging in agriculture has become an element of precision farming that facilitates the conservation of fertilizer, pesticides, and machine time. This technique additionally reduces the amount of energy required in terms of fuel. Although research activities have increased in this field, high camera prices reflect low adaptation to applications in all fields of agriculture. Smart, low-cost cameras adapted for agricultural applications can overcome this drawback. The normalized difference vegetation index (NDVI) for each image pixel is an applicable algorithm to discriminate plant information from the soil background enabled by a large difference in the reflectance between the near infrared (NIR) and the red channel optical frequency band. Two aligned charge coupled device (CCD) chips for the red and NIR channel are typically used, but they are expensive because of the precise optical alignment required. Therefore, much attention has been given to the development of alternative camera designs. In this study, the advantage of a smart one-chip camera design with NDVI image performance is demonstrated in terms of low cost and simplified design. The required assembly and pixel modifications are described, and new algorithms for establishing an enhanced NDVI image quality for data processing are discussed.
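The per-pixel NDVI computation used to discriminate plant information from the soil background is straightforward; a minimal sketch follows, in which the reflectance values and the 0.3 plant threshold are illustrative assumptions, not the paper's calibration.

```python
import numpy as np

def ndvi(nir, red, eps=1e-6):
    """Per-pixel normalized difference vegetation index, in [-1, 1].
    eps guards against division by zero on dark pixels."""
    nir = nir.astype(float)
    red = red.astype(float)
    return (nir - red) / (nir + red + eps)

# Toy 2x2 scene: vegetation reflects strongly in NIR, bare soil does not.
nir = np.array([[0.60, 0.55], [0.30, 0.25]])
red = np.array([[0.08, 0.10], [0.25, 0.22]])
index = ndvi(nir, red)
plant_mask = index > 0.3     # threshold separating plants from soil background
```

The one-chip camera design described above supplies the NIR and red channels from modified pixels of a single sensor, so this arithmetic can run in the camera without the dual-CCD alignment cost.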
High-Speed Edge-Detecting Line Scan Smart Camera
NASA Technical Reports Server (NTRS)
Prokop, Norman F.
2012-01-01
A high-speed edge-detecting line scan smart camera was developed. The camera is designed to operate as a component in an inlet shock detection system developed at NASA Glenn Research Center. The inlet shock is detected by projecting a laser sheet through the airflow. The shock is the densest part of the airflow and refracts the laser sheet the most in its vicinity, leaving a dark spot, or shadowgraph. These spots show up as a dip, or negative peak, within the pixel intensity profile of an image of the projected laser sheet. The smart camera acquires and processes the linear image containing the shock shadowgraph in real time and outputs the shock location. Previously, a high-speed camera and a personal computer performed the image capture and processing to determine the shock location. This innovation consists of a linear image sensor, an analog signal processing circuit, and a digital circuit that provides a numerical digital output of the shock, or negative edge, location. The smart camera is capable of capturing and processing linear images at over 1,000 frames per second. The edges are identified as numeric pixel values within the linear array of pixels, and the edge location information can be sent out from the circuit in a variety of ways, such as by using a microcontroller and an onboard or external digital interface for serial data (RS-232/485, USB, Ethernet, or CAN bus), parallel digital data, or an analog signal. The smart camera system can be integrated into a small package with a relatively small number of parts, reducing size and increasing reliability over the previous imaging system.
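Locating the shock as the negative peak (dip) in a line-scan intensity profile can be sketched as below; the centroid refinement, window size, and synthetic profile are illustrative assumptions, not the circuit's actual method.

```python
import numpy as np

def shock_location(profile, window=5):
    """Locate a shock shadowgraph as the deepest dip (negative peak) in a
    line-scan intensity profile, refined by a centroid over a small window."""
    profile = np.asarray(profile, dtype=float)
    i = int(profile.argmin())                        # coarse dip position
    lo, hi = max(0, i - window), min(len(profile), i + window + 1)
    dip = profile[lo:hi].max() - profile[lo:hi]      # invert: dip becomes a peak
    if dip.sum() == 0:
        return float(i)                              # flat profile: no refinement
    return float(np.arange(lo, hi) @ dip / dip.sum())  # sub-pixel centroid

# Synthetic laser-sheet profile: bright background with a dark spot at pixel 42.
x = np.arange(128)
profile = 200.0 - 150.0 * np.exp(-0.5 * ((x - 42.0) / 2.0) ** 2)
```

At 1,000+ line scans per second, even this small amount of per-frame arithmetic is why moving the detection into dedicated hardware, rather than a PC, pays off.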
Human movement activity classification approaches that use wearable sensors and mobile devices
NASA Astrophysics Data System (ADS)
Kaghyan, Sahak; Sarukhanyan, Hakob; Akopian, David
2013-03-01
Cell phones and other mobile devices have become part of human culture and are changing activity and lifestyle patterns. Mobile phone technology continuously evolves and incorporates more and more sensors to enable advanced applications. The latest generations of smartphones incorporate GPS and WLAN location-finding modules, vision cameras, microphones, accelerometers, temperature sensors, etc. The availability of these sensors in mass-market communication devices creates exciting new opportunities for data-mining applications. Healthcare applications exploiting built-in sensors are particularly promising. This paper reviews different approaches to human activity recognition.
SMART (Sandia's Modular Architecture for Robotics and Teleoperation) Ver. 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Robert
"SMART Ver. 0.8 Beta" provides a system developer with software tools to create a telerobotic control system, i.e., a system whereby an end-user can interact with mechatronic equipment. It consists of three main components: the SMART Editor (tsmed), the SMART Real-time kernel (rtos), and the SMART Supervisor (gui). The SMART Editor is a graphical icon-based code generation tool for creating end-user systems, given descriptions of SMART modules. The SMART real-time kernel implements behaviors that combine modules representing input devices, sensors, constraints, filters, and robotic devices. Included with this software release is a number of core modules, which can be combinedmore » with additional project and device specific modules to create a telerobotic controller. The SMART Supervisor is a graphical front-end for running a SMART system. It is an optional component of the SMART Environment and utilizes the TeVTk windowing and scripting environment. Although the code contained within this release is complete, and can be utilized for defining, running, and interfacing to a sample end-user SMART system, most systems will include additional project and hardware specific modules developed either by the system developer or obtained independently from a SMART module developer. SMART is a software system designed to integrate the different robots, input devices, sensors and dynamic elements required for advanced modes of telerobotic control. "SMART Ver. 0.8 Beta" defines and implements a telerobotic controller. A telerobotic system consists of combinations of modules that implement behaviors. Each real-time module represents an input device, robot device, sensor, constraint, connection or filter. The underlying theory utilizes non-linear discretized multidimensional network elements to model each individual module, and guarantees that upon a valid connection, the resulting system will perform in a stable fashion. 
Different combinations of modules implement different behaviors. Each module must have, at a minimum, an initialization routine, a parameter adjustment routine, and an update routine. The SMART runtime kernel runs continuously within a real-time embedded system. Each module is first set up by the kernel, initialized, and then updated at a fixed rate whenever it is in context. The kernel responds to operator-directed commands by changing the state of the system, changing parameters on individual modules, and switching behavioral modes. The SMART Editor is a tool used to define, verify, configure, and generate source code for a SMART control system. It uses icon representations of the modules, code patches from valid configurations of the modules, and configuration files describing how a module can be connected into a system to lead the end-user through the steps needed to create a final system. The SMART Supervisor serves as an interface to a SMART run-time system. It provides an interface on a host computer that connects to the embedded system via TCP/IP ASCII commands. It utilizes a scripting language (Tcl) and a graphics windowing environment (Tk). This system can either be customized to fit an end-user's needs or completely replaced as needed.
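The module contract described above (initialization, parameter adjustment, and update routines, with a kernel updating each in-context module at a fixed rate) can be sketched as follows; the class and method names are illustrative, not taken from the SMART source:

```python
class Module:
    """Minimal SMART-style module contract (names assumed)."""
    def initialize(self): ...
    def set_parameter(self, name, value): ...
    def update(self, dt): ...

class Kernel:
    """Fixed-rate runtime: sets up each module, then updates it every tick."""
    def __init__(self, rate_hz):
        self.dt = 1.0 / rate_hz  # fixed update period in seconds
        self.modules = []

    def add(self, module):
        module.initialize()
        self.modules.append(module)

    def tick(self):
        for m in self.modules:  # update every in-context module
            m.update(self.dt)
```

A behavior is then a particular combination of connected modules updated by the same kernel loop.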
NASA Astrophysics Data System (ADS)
Devadhasan, Jasmine P.; Kim, Sanghyo
2015-07-01
Complementary metal oxide semiconductor (CMOS) image sensors have received great attention for their high efficiency in biological applications. The present work describes a CMOS image sensor-based whole-blood glucose monitoring system using a point-of-care (POC) approach. A simple poly(ethylene terephthalate) (PET) film chip was developed to carry out the enzyme kinetic reaction at various concentrations of blood glucose. In this technique, assay reagent was adsorbed onto amine-functionalized silica (AFSiO2) nanoparticles in order to achieve glucose oxidation on the PET film chip. The AFSiO2 nanoparticles immobilize the assay reagent through electrostatic attraction and yield an opaque platform, making the chip technically suitable for analysis by the camera module. The oxidized glucose then produces a green color according to the glucose concentration and is analyzed by the camera module using a photon-detection technique; the photon count decreases with increasing glucose concentration. This simple sensing approach, combining an enzyme-immobilized AFSiO2 nanoparticle chip with an assay detection method, was developed for quantitative glucose measurement.
High dynamic range adaptive real-time smart camera: an overview of the HDR-ARTiST project
NASA Astrophysics Data System (ADS)
Lapray, Pierre-Jean; Heyrman, Barthélémy; Ginhac, Dominique
2015-04-01
Standard cameras capture only a fraction of the information that is visible to the human visual system. This is specifically true for natural scenes including areas of low and high illumination due to transitions between sunlit and shaded areas. When capturing such a scene, many cameras are unable to store the full Dynamic Range (DR), resulting in low-quality video where details are concealed in shadows or washed out by sunlight. The imaging technique that can overcome this problem is called HDR (High Dynamic Range) imaging. This paper describes a complete smart camera built around a standard off-the-shelf LDR (Low Dynamic Range) sensor and a Virtex-6 FPGA board. This smart camera, called HDR-ARtiSt (High Dynamic Range Adaptive Real-time Smart camera), is able to produce a real-time HDR live color video stream by recording and combining multiple acquisitions of the same scene while varying the exposure time. This technique appears to be one of the most appropriate and cheapest solutions for enhancing the dynamic range of real-life environments. HDR-ARtiSt embeds real-time multiple capture, HDR processing, data display, and transfer of an HDR color video at full sensor resolution (1280 × 1024 pixels) at 60 frames per second. The main contributions of this work are: (1) a Multiple Exposure Control (MEC) dedicated to smart image capture, alternating three exposure times that are dynamically evaluated from frame to frame; (2) a Multi-streaming Memory Management Unit (MMMU) dedicated to the memory read/write operations of the three parallel video streams corresponding to the different exposure times; (3) HDR creation by combining the video streams using a dedicated hardware implementation of Debevec's technique; and (4) Global Tone Mapping (GTM) of the HDR scene for display on a standard LCD monitor.
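The multi-exposure merge in contribution (3) can be illustrated per pixel as a weighted average in the log-radiance domain. The sketch below is a simplified, software, linear-response version of the Debevec-style weighting (the hat weight and the omission of camera response recovery are assumptions; the real system implements this in FPGA hardware):

```python
import math

def merge_hdr(pixels, exposures):
    """Merge co-sited 8-bit pixel values captured at different exposure
    times into one radiance estimate.

    pixels: values in 0..255 from each exposure of the same scene point.
    exposures: corresponding exposure times in seconds.
    A 'hat' weight favors mid-tone samples over clipped ones.
    """
    def w(z):
        return z if z <= 127 else 255 - z

    num = den = 0.0
    for z, t in zip(pixels, exposures):
        weight = w(z)
        num += weight * math.log(max(z, 1) / t)  # log radiance sample
        den += weight
    return math.exp(num / den) if den else 0.0
```

With three exposures per frame, as in HDR-ARtiSt, this merge would run once per pixel per output frame.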
Privacy Sensitive Surveillance for Assisted Living - A Smart Camera Approach
NASA Astrophysics Data System (ADS)
Fleck, Sven; Straßer, Wolfgang
An elderly woman wanders about aimlessly in a home for assisted living. Suddenly, she collapses on the floor of a lonesome hallway. Usually it can take over two hours until a night nurse passes this spot on her next inspection round. But in this case she is already on site after two minutes, ready to help. She has received an alert message on her beeper: "Inhabitant fallen in hallway 2b". The source: the SmartSurv distributed network of smart cameras for automated and privacy-respecting video analysis. Welcome to the future of smart surveillance. Although this scenario is not yet daily practice, it should make clear how such systems will impact the safety of the elderly without the privacy intrusion of traditional video surveillance systems.
NASA Astrophysics Data System (ADS)
Klaessens, John H.; van der Veen, Albert; Verdaasdonk, Rudolf M.
2017-03-01
Recently, low-cost smartphone-based thermal cameras have been considered for use in clinical settings to monitor physiological temperature responses such as body temperature change, local inflammation, perfusion changes, or (burn) wound healing. These thermal cameras contain uncooled micro-bolometers with an internal calibration check and have a temperature resolution of 0.1 degree. For clinical applications, a fast quality measurement before use (absolute temperature check) is required, and quality control (stability, repeatability, absolute temperature, absolute temperature differences) should be performed regularly. Therefore, a calibrated temperature phantom was developed based on thermistor heating at both ends of a black-coated metal strip to create a controllable temperature gradient from room temperature (26 °C) up to 100 °C. The absolute temperatures on the strip are determined with five software-controlled PT-1000 sensors using lookup tables. In this study, three FLIR ONE cameras and one high-end camera were checked with this temperature phantom. The results show relatively good agreement between both the low-cost and high-end cameras and the phantom temperature gradient, with temperature differences of 1 up to 6 degrees between the cameras and the phantom. The measurements were repeated to assess absolute temperature and temperature stability over the sensor area. Both low-cost and high-end thermal cameras measured relative temperature changes with high accuracy and absolute temperatures with constant deviations. Low-cost smartphone-based thermal cameras can be a good alternative to high-end thermal cameras for routine clinical measurements, appropriate to the research question, provided regular calibration checks are performed for quality control.
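The lookup-table step mentioned above (converting a PT-1000 sensor reading to an absolute temperature) amounts to interpolating in a resistance-to-temperature table. A minimal sketch follows; the two-point table is illustrative (based on the nominal PT-1000 characteristic of 1000 Ω at 0 °C), not the phantom's calibration data:

```python
def pt1000_temperature(resistance, table):
    """Linearly interpolate temperature (deg C) from a PT-1000 lookup
    table of (resistance_ohms, temperature_C) pairs.
    """
    table = sorted(table)
    for (r0, t0), (r1, t1) in zip(table, table[1:]):
        if r0 <= resistance <= r1:
            # Linear interpolation between bracketing table entries.
            return t0 + (t1 - t0) * (resistance - r0) / (r1 - r0)
    raise ValueError("resistance outside table range")

# Illustrative two-point table (ohms, deg C); real use needs a dense table.
TABLE = [(1000.0, 0.0), (1385.0, 100.0)]
```

A production version would use the dense tables of the applicable RTD standard rather than a two-point fit.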
Human detection and motion analysis at security points
NASA Astrophysics Data System (ADS)
Ozer, I. Burak; Lv, Tiehan; Wolf, Wayne H.
2003-08-01
This paper presents a real-time video surveillance system for the recognition of specific human activities. Specifically, the proposed automatic motion analysis is used as an on-line alarm system to detect abnormal situations in a campus environment. A smart multi-camera system developed at Princeton University is extended for use in smart environments in which the camera detects the presence of multiple persons as well as their gestures and their interaction in real-time.
Face recognition system for set-top box-based intelligent TV.
Lee, Won Oh; Kim, Yeong Gon; Hong, Hyung Gil; Park, Kang Ryoung
2014-11-18
Despite the prevalence of smart TVs, many consumers continue to use conventional TVs with supplementary set-top boxes (STBs) because of the high cost of smart TVs. However, because the processing power of an STB is quite low, the smart TV functionalities that can be implemented in an STB are very limited. Because of this, negligible research has been conducted regarding face recognition for conventional TVs with supplementary STBs, even though many such studies have been conducted for smart TVs. In terms of camera sensors, previous face recognition systems have used high-resolution cameras, cameras with high-magnification zoom lenses, or camera systems with panning and tilting devices that can be used for face recognition from various positions. However, these cameras and devices cannot be used in intelligent TV environments because of limitations related to size and cost; only small, low-cost web cameras can be used. The resulting face recognition performance is degraded because of the limited resolution and quality of the images. Therefore, we propose a new face recognition system for intelligent TVs that overcomes the limitations associated with low-resource STBs and low-cost web cameras. We implement the face recognition system using a software algorithm that does not require special devices or cameras.
Our research has the following four novelties: first, candidate face regions of a viewer are detected in an image captured by a camera connected to the STB via low-overhead background subtraction and face-color filtering; second, the detected candidate face regions are transmitted to a server with high processing power in order to detect face regions accurately; third, in-plane rotations of the face regions are compensated based on similarities between the left and right half sub-regions of the face regions; fourth, various poses of the viewer's face region are identified using five templates obtained during the initial user registration stage and multi-level local binary pattern matching. Experimental results indicate that the recall, precision, and genuine acceptance rate were about 95.7%, 96.2%, and 90.2%, respectively.
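The first step above, low-overhead background subtraction suitable for a weak STB processor, can be sketched as simple thresholded frame differencing; frames are represented here as flat lists of grayscale values, and the threshold is an illustrative parameter, not the paper's:

```python
def candidate_mask(frame, background, threshold=25):
    """Flag pixels whose absolute difference from a static background
    model exceeds a threshold; True marks a candidate foreground pixel.

    frame, background: same-length sequences of grayscale values (0-255).
    """
    return [abs(f - b) > threshold for f, b in zip(frame, background)]
```

In the described system, the surviving candidate regions would then be filtered by face color before being sent to the server.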
Detection and Spatial Mapping of Mercury Contamination in Water Samples Using a Smart-Phone
2014-01-01
Detection of environmental contamination such as trace-level toxic heavy metal ions mostly relies on bulky and costly analytical instruments. However, a considerable global need exists for portable, rapid, specific, sensitive, and cost-effective detection techniques that can be used in resource-limited and field settings. Here we introduce a smart-phone-based hand-held platform that allows the quantification of mercury(II) ions in water samples with parts per billion (ppb) level of sensitivity. For this task, we created an integrated opto-mechanical attachment to the built-in camera module of a smart-phone to digitally quantify mercury concentration using a plasmonic gold nanoparticle (Au NP) and aptamer based colorimetric transmission assay that is implemented in disposable test tubes. With this smart-phone attachment that weighs <40 g, we quantified mercury(II) ion concentration in water samples by using a two-color ratiometric method employing light-emitting diodes (LEDs) at 523 and 625 nm, where a custom-developed smart application was utilized to process each acquired transmission image on the same phone to achieve a limit of detection of ∼3.5 ppb. Using this smart-phone-based detection platform, we generated a mercury contamination map by measuring water samples at over 50 locations in California (USA), taken from city tap water sources, rivers, lakes, and beaches. With its cost-effective design, field-portability, and wireless data connectivity, this sensitive and specific heavy metal detection platform running on cellphones could be rather useful for distributed sensing, tracking, and sharing of water contamination information as a function of both space and time. PMID:24437470
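The two-color ratiometric readout described above (LEDs at 523 and 625 nm) can be illustrated as normalizing each channel's transmitted intensity by a blank reference and taking the ratio, so that common-mode source drift cancels. The function below is a sketch; the mapping from this ratio to a ppb concentration comes from the assay calibration, which is not reproduced here:

```python
def transmission_ratio(i_523, i_625, i0_523, i0_625):
    """Ratio of blank-normalized transmitted intensities at 523 nm and
    625 nm; the colorimetric signal used to quantify mercury(II).

    i_523, i_625: sample-tube intensities at each LED wavelength.
    i0_523, i0_625: blank (no-analyte) reference intensities.
    """
    return (i_523 / i0_523) / (i_625 / i0_625)
```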
Surface Plasmon Resonance Biosensor Based on Smart Phone Platforms
NASA Astrophysics Data System (ADS)
Liu, Yun; Liu, Qiang; Chen, Shimeng; Cheng, Fang; Wang, Hanqi; Peng, Wei
2015-08-01
We demonstrate a fiber optic surface plasmon resonance (SPR) biosensor based on smart phone platforms. The light-weight optical components and sensing element are connected by optical fibers on a phone case. This SPR adaptor can be conveniently installed or removed from smart phones. The measurement, control and reference channels are illuminated by the light entering the lead-in fibers from the phone’s LED flash, while the light from the end faces of the lead-out fibers is detected by the phone’s camera. The SPR-sensing element is fabricated by a light-guiding silica capillary that is stripped off its cladding and coated with 50-nm gold film. Utilizing a smart application to extract the light intensity information from the camera images, the light intensities of each channel are recorded every 0.5 s with refractive index (RI) changes. The performance of the smart phone-based SPR platform for accurate and repeatable measurements was evaluated by detecting different concentrations of antibody binding to a functionalized sensing element, and the experiment results were validated through contrast experiments with a commercial SPR instrument. This cost-effective and portable SPR biosensor based on smart phones has many applications, such as medicine, health and environmental monitoring.
Web Camera Based Eye Tracking to Assess Visual Memory on a Visual Paired Comparison Task.
Bott, Nicholas T; Lange, Alex; Rentz, Dorene; Buffalo, Elizabeth; Clopton, Paul; Zola, Stuart
2017-01-01
Background: Web cameras are increasingly part of the standard hardware of most smart devices. Eye movements can often provide a noninvasive "window on the brain," and the recording of eye movements using web cameras is a burgeoning area of research. Objective: This study investigated a novel methodology for administering a visual paired comparison (VPC) decisional task using a web camera. To further assess this method, we examined the correlation between a standard eye-tracking camera automated scoring procedure [obtaining images at 60 frames per second (FPS)] and a manually scored procedure using a built-in laptop web camera (obtaining images at 3 FPS). Methods: This was an observational study of 54 clinically normal older adults. Subjects completed three in-clinic visits with simultaneous recording of eye movements on a VPC decision task by a standard eye-tracker camera and a built-in laptop web camera. Inter-rater reliability was analyzed using Siegel and Castellan's kappa formula. Pearson correlations were used to investigate the correlation between VPC performance using a standard eye-tracker camera and a built-in web camera. Results: Strong associations were observed on VPC mean novelty preference score between the 60 FPS eye tracker and the 3 FPS built-in web camera at each of the three visits (r = 0.88-0.92). Inter-rater agreement of web camera scoring at each time point was high (κ = 0.81-0.88). There were strong relationships on VPC mean novelty preference score between the 10, 5, and 3 FPS training sets (r = 0.88-0.94). Significantly fewer data quality issues were encountered using the built-in web camera.
Conclusions: Human scoring of a VPC decisional task using a built-in laptop web camera correlated strongly with automated scoring of the same task using a standard high-frame-rate eye-tracker camera. While this method is not suitable for eye-tracking paradigms requiring the collection and analysis of fine-grained metrics, such as fixation points, built-in web cameras are a standard feature of most smart devices (e.g., laptops, tablets, smartphones) and can be effectively employed to track eye movements on decisional tasks with high accuracy and minimal cost.
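The Pearson correlations reported above (between the 60 FPS automated scores and the 3 FPS web-camera scores) follow the standard formula; a minimal self-contained sketch, with illustrative data rather than the study's scores:

```python
import math

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length score
    series, e.g. novelty preference scores from two scoring methods.
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)
```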
Evaluation of 3D printed optofluidic smart glass prototypes.
Wolfe, Daniel; Goossen, K W
2018-01-22
Smart glass, or smart windows, are an innovative technology used for thermal management, energy efficiency, and privacy applications. Notable commercially available smart glass relies on an electric stimulus to modulate the glass from a transparent to a translucent mode of operation. However, the current market technologies, such as electrochromic, polymer-dispersed liquid crystal, and suspended particle devices, are expensive and suffer from solar absorption, poor transmittance modulation, and, in some cases, continuous power consumption. The authors of this paper present a novel optofluidic smart glass prototype capable of modulating visible light transmittance from 8% to 85%.
NASA Astrophysics Data System (ADS)
Tchernykh, Valerij; Dyblenko, Sergej; Janschek, Klaus; Seifart, Klaus; Harnisch, Bernd
2005-08-01
The cameras commonly used for Earth observation from satellites require high attitude stability during the image acquisition. For some types of cameras (high-resolution "pushbroom" scanners in particular), instantaneous attitude changes of even less than one arcsecond result in significant image distortion and blurring. Especially problematic are the effects of high-frequency attitude variations originating from micro-shocks and vibrations produced by the momentum and reaction wheels, mechanically activated coolers, and steering and deployment mechanisms on board. The resulting high attitude-stability requirements for Earth-observation satellites are one of the main reasons for their complexity and high cost. The novel SmartScan imaging concept, based on an opto-electronic system with no moving parts, offers the promise of high-quality imaging with only moderate satellite attitude stability. SmartScan uses real-time recording of the actual image motion in the focal plane of the camera during frame acquisition to correct the distortions in the image. Exceptional real-time performances with subpixel-accuracy image-motion measurement are provided by an innovative high-speed onboard opto-electronic correlation processor. SmartScan will therefore allow pushbroom scanners to be used for hyper-spectral imaging from satellites and other space platforms not primarily intended for imaging missions, such as micro- and nano-satellites with simplified attitude control, low-orbiting communications satellites, and manned space stations.
Wu, Jing; Dong, Mingling; Zhang, Cheng; Wang, Yu; Xie, Mengxia; Chen, Yiping
2017-06-05
A magnetic lateral flow strip (MLFS) based on magnetic beads (MBs) and a smartphone camera has been developed for quantitative detection of cocaine (CC) in urine samples. CC and CC-bovine serum albumin (CC-BSA) competitively react with the MB-antibody (MB-Ab) conjugate of CC on the surface of the test line of the MLFS. The color of the MB-Ab conjugate on the test line relates to the concentration of target in the competitive immunoassay format, and can be used as a visual signal. Furthermore, the color density of the MB-Ab conjugate can be converted into a digital signal (gray value) by a smartphone, which can be used as a quantitative signal. The linear detection range for CC is 5-500 ng/mL and the relative standard deviations are under 10%. The visual limit of detection was 5 ng/mL and the whole analysis time was within 10 min. The MLFS has been successfully employed for the detection of CC in urine samples without sample pre-treatment, and the results agree with those of an enzyme-linked immunosorbent assay (ELISA). With the popularization of smartphone cameras, the MLFS has large potential for the detection of drug residues by virtue of its stability, speed, and low cost.
IEEE 1451.2 based Smart sensor system using ADuc847
NASA Astrophysics Data System (ADS)
Sreejithlal, A.; Ajith, Jose
The IEEE 1451 standard defines a standard interface for connecting transducers to microprocessor-based data acquisition systems, instrumentation systems, and control and field networks. A smart transducer interface module (STIM) acts as a unit which provides signal conditioning, digitization, and data packet generation functions to the transducers connected to it. This paper describes the implementation of a microcontroller-based smart transducer interface module based on the IEEE 1451.2 standard. The module, implemented using an ADuC847 microcontroller, has two transducer channels and is programmed in embedded C. The sensor system consists of a Network Capable Application Processor (NCAP) module which controls the STIM over an IEEE 1451.2-RS232 bus. The NCAP module is implemented as a software module in C#. The hardware details, control principles involved, and the software implementation of the STIM are described in detail.
Prototype of smart office system using IoT-based security system
NASA Astrophysics Data System (ADS)
Prasetyo, T. F.; Zaliluddin, D.; Iqbal, M.
2018-05-01
Creating new technology in the modern era has a positive impact on business and industry. The Internet of Things (IoT), as a new communication technology, is very useful for realizing smart systems such as the smart home, smart office, smart parking, and smart city. This study presents a prototype of a smart office system designed as a security system based on IoT. The smart office system was developed using the waterfall model. The IoT-based smart office system uses the Cayenne platform (project builder), so that data can be accessed and controlled through the internet over long distances. The smart office system uses an Arduino Mega 2560 microcontroller as its controller component. In this study, the smart office system is able to detect threats from dangerous metal objects, earthquakes, fires, intruders, or theft, and performs security monitoring outside the building by using Raspberry Pi cameras on autonomous robots, reporting to the security guard in real time.
Kang, Sung-Won; Choi, Hyeob; Park, Hyung-Il; Choi, Byoung-Gun; Im, Hyobin; Shin, Dongjun; Jung, Young-Giu; Lee, Jun-Young; Park, Hong-Won; Park, Sukyung; Roh, Jung-Sim
2017-11-07
Spinal disease is a common yet important condition that occurs because of inappropriate posture. Prevention could be achieved by continuous posture monitoring, but most measurement systems cannot be used in daily life due to factors such as burdensome wires and large sensing modules. To improve upon these weaknesses, we developed comfortable "smart wear" for posture measurement using conductive yarn for circuit patterning and a flexible printed circuit board (FPCB) for interconnections. The conductive yarn was made by twisting polyester yarn and metal filaments, and the resistance per unit length was about 0.05 Ω/cm. An embroidered circuit was made using the conductive yarn, which showed increased yield strength and uniform electrical resistance per unit length. Circuit networks of sensors and FPCBs for interconnection were integrated into clothes using a computer numerical control (CNC) embroidery process. The system was calibrated and verified by comparing the values measured by the smart wear with those measured by a motion capture camera system. Six subjects performed fixed movements and free computer work, and, with this system, we were able to measure the anterior/posterior direction tilt angle with an error of less than 4°. The smart wear does not have excessive wires, and its structure will be optimized for better posture estimation in a later study.
Smart-Phone Based Magnetic Levitation for Measuring Densities
Knowlton, Stephanie; Yu, Chu Hsiang; Jain, Nupur
2015-01-01
Magnetic levitation, which uses a magnetic field to suspend objects in a fluid, is a powerful and versatile technology. We develop a compact magnetic levitation platform compatible with a smart-phone to separate micro-objects and estimate the density of the sample based on its levitation height. A 3D printed attachment is mechanically installed over the existing camera unit of a smart-phone. Micro-objects, which may be either spherical or irregular in shape, are suspended in a paramagnetic medium and loaded in a microcapillary tube which is then inserted between two permanent magnets. The micro-objects are levitated and confined in the microcapillary at an equilibrium height dependent on their volumetric mass densities (causing a buoyancy force toward the edge of the microcapillary) and magnetic susceptibilities (causing a magnetic force toward the center of the microcapillary) relative to the suspending medium. The smart-phone camera captures magnified images of the levitating micro-objects through an additional lens positioned between the sample and the camera lens cover. A custom-developed Android application then analyzes these images to determine the levitation height and estimate the density. Using this platform, we were able to separate microspheres with varying densities and calibrate their levitation heights to known densities to develop a technique for precise and accurate density estimation. We have also characterized the magnetic field, the optical imaging capabilities, and the thermal state over time of this platform. PMID:26308615
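The calibration step described in the magnetic levitation work (relating levitation height to known densities, then inverting the fit for unknown samples) can be sketched as a least-squares linear fit; the function, its linearity assumption, and the example calibration values are illustrative, not the platform's actual calibration:

```python
def density_from_height(height, calibration):
    """Estimate sample density (g/mL) from levitation height via a
    least-squares linear fit through calibration beads of known density.

    calibration: list of (height, density) pairs for reference beads.
    """
    n = len(calibration)
    hs = [h for h, _ in calibration]
    ds = [d for _, d in calibration]
    mh, md = sum(hs) / n, sum(ds) / n
    slope = (sum((h - mh) * (d - md) for h, d in zip(hs, ds))
             / sum((h - mh) ** 2 for h in hs))
    return md + slope * (height - mh)

# Illustrative calibration: denser beads levitate lower in this setup.
CAL = [(0.0, 1.2), (10.0, 1.0)]
```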
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, Jason; Dobrzynski, Daniel S.
A smart charging system for charging a plug-in electric vehicle (PEV) includes an electric vehicle supply equipment (EVSE) configured to supply electrical power to the PEV through a smart charging module coupled to the EVSE. The smart charging module comprises electronic circuitry which includes a processor. The electronic circuitry includes electronic components structured to receive electrical power from the EVSE and supply the electrical power to the PEV. The electronic circuitry is configured to measure a charging parameter of the PEV. The electronic circuitry is further structured to emulate a pulse-width-modulated signal generated by the EVSE. The smart charging module can also include a first coupler structured to be removably coupled to the EVSE and a second coupler structured to be removably coupled to the PEV.
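For context on the pulse-width-modulated signal mentioned above: in the commonly cited SAE J1772 control-pilot scheme, the PWM duty cycle advertises the available charging current. The sketch below uses the widely documented mapping (0.6 A per percent for 10-85% duty, and 2.5 × (duty - 64) above that); these constants are an assumption drawn from the standard, not from this patent record, and should be verified against SAE J1772 itself:

```python
def pilot_current_amps(duty_percent):
    """Available charging current (A) advertised by a J1772 control-pilot
    PWM duty cycle, per the commonly cited mapping (assumed values).
    """
    if 10 <= duty_percent <= 85:
        return duty_percent * 0.6
    if 85 < duty_percent <= 96:
        return (duty_percent - 64) * 2.5
    raise ValueError("duty cycle outside defined charging range")
```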
SMART: The Future of Spaceflight Avionics
NASA Technical Reports Server (NTRS)
Alhorn, Dean C.; Howard, David E.
2010-01-01
A novel avionics approach is necessary to meet the future needs of low-cost space and lunar missions that require low-mass and low-power electronics. The current state of the art in avionics is centralized electronic units that perform the required spacecraft functions. These electronic units are usually custom-designed for each application, and the approach compels avionics designers to have in-depth system knowledge before design can commence. The overall design, development, test and evaluation (DDT&E) cycle for this conventional approach requires long delivery times for space flight electronics and is very expensive. The Small Multi-purpose Advanced Reconfigurable Technology (SMART) concept is currently being developed to overcome the limitations of traditional avionics design. The SMART concept is based upon two multi-functional modules that can be reconfigured to drive and sense a variety of mechanical and electrical components. The SMART units are key to a distributed avionics architecture whereby the modules are located close to or right at the desired application point. The drive module, SMART-D, receives commands from the main computer and controls the spacecraft mechanisms and devices with localized feedback. The sensor module, SMART-S, reads the environmental sensors and offloads local limit checking from the main computer. There are numerous benefits realized by implementing the SMART system. Localized sensor signal conditioning electronics reduce signal loss and overall wiring mass. Localized drive electronics increase control bandwidth and minimize time lags for critical functions. These benefits in turn reduce the main processor's overhead functions. Since SMART units are standard flight-qualified units, DDT&E is reduced and system design can commence much earlier in the design cycle. Increased production scale lowers individual piece-part cost, and using standard modules also reduces non-recurring costs.
The benefit list continues, but the overall message is already evident: the SMART concept is an evolution in spacecraft avionics. SMART devices have the potential to change the design paradigm for future satellites, spacecraft and even commercial applications.
A semantic autonomous video surveillance system for dense camera networks in Smart Cities.
Calavia, Lorena; Baladrón, Carlos; Aguiar, Javier M; Carro, Belén; Sánchez-Esguevillas, Antonio
2012-01-01
This paper presents a proposal for an intelligent video surveillance system able to detect and identify abnormal and alarming situations by analyzing object movement. The system is designed to minimize video processing and transmission, thus allowing a large number of cameras to be deployed and therefore making it suitable for use as an integrated safety and security solution in Smart Cities. Alarm detection is based on parameters of the moving objects and their trajectories, and is performed using semantic reasoning and ontologies. This means that the system employs a high-level conceptual language that is easy for human operators to understand, is capable of raising enriched alarms with descriptions of what is happening in the image, and can automate reactions to them, such as alerting the appropriate emergency services using the Smart City safety network.
A wide-angle camera module for disposable endoscopy
NASA Astrophysics Data System (ADS)
Shim, Dongha; Yeon, Jesun; Yi, Jason; Park, Jongwon; Park, Soo Nam; Lee, Nanhee
2016-08-01
A wide-angle miniaturized camera module for disposable endoscopes is demonstrated in this paper. A lens module with a 150° angle of view (AOV) is designed and manufactured. All-plastic injection-molded lenses and a commercial CMOS image sensor are employed to reduce the manufacturing cost. The image sensor and an LED illumination unit are assembled with the lens module. The camera module does not include a camera processor, further reducing its size and cost. The size of the camera module is 5.5 × 5.5 × 22.3 mm3. The diagonal field of view (FOV) of the camera module is measured to be 110°. A prototype disposable endoscope is implemented for pre-clinical animal testing. The esophagus of an adult beagle dog is observed. These results demonstrate the feasibility of a cost-effective and high-performance camera module for disposable endoscopy.
Smart SPHERES: A Telerobotic Free-Flyer for Intravehicular Activities in Space
NASA Technical Reports Server (NTRS)
Fong, Terrence; Micire, Mark J.; Morse, Ted; Park, Eric; Provencher, Chris; To, Vinh; Wheeler, D. W.; Mittman, David; Torres, R. Jay; Smith, Ernest
2013-01-01
Smart SPHERES is a prototype free-flying space robot based on the SPHERES platform. Smart SPHERES can be remotely operated by astronauts inside a spacecraft, or by mission controllers on the ground. We developed Smart SPHERES to perform a variety of intravehicular activities (IVA), such as operations inside the International Space Station (ISS). These IVA tasks include environmental monitoring surveys (radiation, sound levels, etc.), inventory, and mobile camera work. In this paper, we first discuss the motivation for free-flying space robots. We then describe the development of the Smart SPHERES prototype, including avionics, software, and data communications. Finally, we present results of initial flight tests on-board the ISS.
System requirements specification for SMART structures mode
NASA Technical Reports Server (NTRS)
1992-01-01
Specified here are the functional and informational requirements for software modules which address the geometric and data modeling needs of the aerospace structural engineer. The modules are to be included as part of the Solid Modeling Aerospace Research Tool (SMART) package developed for the Vehicle Analysis Branch (VAB) at the NASA Langley Research Center (LaRC). The purpose is to precisely state what the SMART Structures modules will do, without consideration of how they will do it. Each requirement is numbered for reference in development and testing.
Seeing-Is-Believing: Using Camera Phones for Human-Verifiable Authentication
2004-11-01
the context of, e.g., a smart home (Section 7). Our implementation is detailed in Section 8, with a security analysis in Section 9. Section 10...establishment of security parameters [17]. This work considers a smart home, where a user may want to establish a security context for controlling...appliances or other devices in a smart home. We refer to the security property discussed in this work as presence, where it is desirable that only users or
Kang, Sung-Won; Park, Hyung-Il; Choi, Byoung-Gun; Shin, Dongjun; Jung, Young-Giu; Lee, Jun-Young; Park, Hong-Won; Park, Sukyung
2017-01-01
Spinal disease is a common yet important condition that occurs because of inappropriate posture. Prevention could be achieved by continuous posture monitoring, but most measurement systems cannot be used in daily life due to factors such as burdensome wires and large sensing modules. To improve upon these weaknesses, we developed comfortable “smart wear” for posture measurement using conductive yarn for circuit patterning and a flexible printed circuit board (FPCB) for interconnections. The conductive yarn was made by twisting polyester yarn and metal filaments, and the resistance per unit length was about 0.05 Ω/cm. An embroidered circuit was made using the conductive yarn, which showed increased yield strength and uniform electrical resistance per unit length. Circuit networks of sensors and FPCBs for interconnection were integrated into clothes using a computer numerical control (CNC) embroidery process. The system was calibrated and verified by comparing the values measured by the smart wear with those measured by a motion capture camera system. Six subjects performed fixed movements and free computer work, and, with this system, we were able to measure the anterior/posterior direction tilt angle with an error of less than 4°. The smart wear does not have excessive wires, and its structure will be optimized for better posture estimation in a later study. PMID:29112125
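The garment above estimates the anterior/posterior trunk tilt angle from its embedded sensor network. A common way to derive such an angle from a body-worn accelerometer is sketched below; whether the smart wear actually uses accelerometers is an assumption here, since the record only specifies a sensor network validated against motion capture:

```python
import math

def tilt_deg(ax: float, az: float) -> float:
    """Anterior/posterior tilt from the gravity components sensed along the
    forward (ax) and vertical (az) body axes, in degrees."""
    return math.degrees(math.atan2(ax, az))

# Upright posture: gravity entirely along the vertical axis.
print(round(tilt_deg(0.0, 1.0), 1))
# Leaning forward roughly 30 degrees.
print(round(tilt_deg(0.5, 0.866), 1))
```

A posture monitor would compare this angle against the motion-capture reference to calibrate out sensor placement offsets, consistent with the under-4° error the study reports.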
NASA Astrophysics Data System (ADS)
Barla, Lindi; Verdaasdonk, Rudolf M.; Rustemeyer, Thomas; Klaessens, John; van der Veen, Albert
2016-02-01
Allergy testing is usually performed by exposing the skin to small quantities of potential allergens on the inner forearm and scratching the protective epidermis to increase exposure. After 15 minutes the dermatologist performs a visual check for swelling and erythema, which is subjective and difficult for, e.g., dark skin types. A small smart phone based thermal camera (FLIR One) was used to obtain quantitative images in a feasibility study of 17 patients. Directly after allergen exposure on the forearm, thermal images were captured at 30-second intervals and processed into a time-lapse movie over 15 minutes. Considering the 'subjective' reading of the dermatologist as the gold standard, in 11/17 patients (65%) the evaluation of the dermatologist was confirmed by the thermal camera, including 5 of 6 patients without an allergic response. In 7 patients thermal imaging showed additional spots. Of the 342 sites tested, the dermatologist detected 47 allergies, of which 28 (60%) were confirmed by thermal imaging, while thermal imaging showed 12 additional spots. The method can be improved with user-dedicated acquisition software and better registration between normal and thermal images. The lymphatic reaction seems to shift from the original puncture site. The interpretation of the thermal images is still subjective, since collecting quantitative data is difficult due to patient motion during the 15 minutes. Although not yet conclusive, thermal imaging shows promise for improving the sensitivity and selectivity of allergy testing using a smart phone based camera.
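The time-lapse analysis described above can be sketched as follows: given thermal frames captured every 30 seconds, flag a test site as reactive when its local temperature rises more than a threshold above the baseline frame. The frame size, site coordinates, and the 1.0 °C threshold are all assumptions for illustration:

```python
import numpy as np

def site_response(frames, row, col, radius=2, threshold_c=1.0):
    """Return (max_rise_c, reactive) for a small ROI around one test site,
    where rise is measured relative to the first (baseline) frame."""
    rois = [f[row - radius:row + radius + 1, col - radius:col + radius + 1].mean()
            for f in frames]
    rise = max(rois) - rois[0]
    return rise, rise > threshold_c

# Synthetic example: a 10x10 thermal scene warming by 1.5 degC at site (5, 5).
base = np.full((10, 10), 30.0)
warm = base.copy()
warm[3:8, 3:8] += 1.5
rise, reactive = site_response([base, warm], 5, 5)
print(round(rise, 2), reactive)
```

Registering the thermal frames against each other before extracting the ROI would mitigate the patient-motion problem the record mentions.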
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Assessing the Accuracy of Ortho-image using Photogrammetric Unmanned Aerial System
NASA Astrophysics Data System (ADS)
Jeong, H. H.; Park, J. W.; Kim, J. S.; Choi, C. U.
2016-06-01
A smart camera can not only be operated in a network environment anytime and anywhere, but also costs less than the existing photogrammetric UAV payloads, since it provides high-resolution images and real-time 3D location and attitude data from a variety of built-in sensors. In this study's proposed UAV photogrammetric method, a low-cost UAV and a smart camera were used. The elements of interior orientation were acquired through camera calibration. The image triangulation was conducted with and without consideration of the interior orientation (IO) parameters determined by camera calibration. The Digital Elevation Model (DEM) was constructed using the image data photographed at the target area and the results of the ground control point survey. This study also analyzes the proposed method's applicability by comparing an ortho-image with the results of the ground control point survey. Considering these findings, it is suggested that the smartphone is highly feasible as a payload for a UAV system. It is also expected that smartphones may be loaded onto existing UAVs, playing significant direct or indirect roles.
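Accuracy assessment of an ortho-image against a ground control point survey, as described above, typically reduces to a root-mean-square error over matched check points. The coordinates below are illustrative, not values from the study:

```python
import math

def rmse(observed, surveyed):
    """Planimetric RMSE between matched (x, y) coordinate pairs, in metres."""
    sq = [(ox - sx) ** 2 + (oy - sy) ** 2
          for (ox, oy), (sx, sy) in zip(observed, surveyed)]
    return math.sqrt(sum(sq) / len(sq))

# Check-point coordinates measured on the ortho-image vs. GCP survey values.
ortho_pts = [(100.02, 200.01), (150.03, 250.00), (199.98, 299.97)]
gcp_pts   = [(100.00, 200.00), (150.00, 250.00), (200.00, 300.00)]
print(round(rmse(ortho_pts, gcp_pts), 4))
```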
GOOSE: semantic search on internet connected sensors
NASA Astrophysics Data System (ADS)
Schutte, Klamer; Bomhof, Freek; Burghouts, Gertjan; van Diggelen, Jurriaan; Hiemstra, Peter; van't Hof, Jaap; Kraaij, Wessel; Pasman, Huib; Smith, Arthur; Versloot, Corne; de Wit, Joost
2013-05-01
More and more sensors are getting Internet connected. Examples are cameras on cell phones, CCTV cameras for traffic control, as well as dedicated security and defense sensor systems. Due to the steadily increasing data volume, human exploitation of all this sensor data is impossible for effective mission execution. Smart access to all sensor data acts as an enabler for questions such as "Is there a person behind this building" or "Alert me when a vehicle approaches". The GOOSE concept has the ambition to provide the capability to search semantically for any relevant information within "all" (including imaging) sensor streams in the entire Internet of sensors. This is similar to the capability provided by presently available Internet search engines, which enable the retrieval of information on "all" web pages on the Internet. In line with current Internet search engines, any indexing services shall be utilized cross-domain. The two main challenges for GOOSE are the semantic gap and scalability. The GOOSE architecture consists of five elements: (1) an online extraction of primitives on each sensor stream; (2) an indexing and search mechanism for these primitives; (3) an ontology-based semantic matching module; (4) a top-down hypothesis verification mechanism; and (5) a controlling man-machine interface. This paper reports on the initial GOOSE demonstrator, which consists of the MES multimedia analysis platform and the CORTEX action recognition module. It also provides an outlook into future GOOSE development.
SNE Industrial Fieldbus Interface
NASA Technical Reports Server (NTRS)
Lucena, Angel; Raines, Matthew; Oostdyk, Rebecca; Mata, Carlos
2011-01-01
Programmable logic controllers (PLCs) have very limited diagnostic and no prognostic capabilities, while current smart sensor designs do not have the capability to communicate over Fieldbus networks. The aim is to interface smart sensors with PLCs so that health and status information, such as failure mode identification and measurement tolerance, can be communicated via an industrial Fieldbus such as ControlNet. The SNE Industrial Fieldbus Interface (SIFI) is an embedded device that acts as a communication module in a networked smart sensor. The purpose is to enable a smart sensor to communicate health and status information to other devices, such as PLCs, via an industrial Fieldbus networking protocol. The SNE (Smart Network Element) is attached to a commercial off-the-shelf Anybus-S interface module through the SIFI. Numerous Anybus-S modules are available, each one designed to interface with a specific Fieldbus. Development of the SIFI focused on communications using the ControlNet protocol, but any of the Anybus-S modules can be used. The SIFI communicates with the Anybus module via a data buffer and mailbox system on the Anybus module, and supplies power to the module. The Anybus module transmits and receives data on the Fieldbus using the proper protocol. The SIFI is intended to be connected to other existing SNE modules in order to monitor the health and status of a transducer. The SIFI can also monitor aspects of its own health using an onboard watchdog timer and voltage monitors. The SIFI also has the hardware to drive a touchscreen LCD (liquid crystal display) unit for manual configuration and status monitoring.
ERIC Educational Resources Information Center
Los Angeles Unified School District, CA. Div. of Adult and Occupational Education.
This document consists of performance, computational, and communication modules used by the Working Smart workplace literacy project, a project conducted for the hotel and food industry in the Los Angeles area by a public school district and several profit and nonprofit companies. Literacy instruction was merged with job requirements of the…
Albumin testing in urine using a smart-phone
Coskun, Ahmet F.; Nagi, Richie; Sadeghi, Kayvon; Phillips, Stephen; Ozcan, Aydogan
2013-01-01
We demonstrate a digital sensing platform, termed Albumin Tester, running on a smart-phone that images and automatically analyses fluorescent assays confined within disposable test tubes for sensitive and specific detection of albumin in urine. This light-weight and compact Albumin Tester attachment, weighing approximately 148 grams, is mechanically installed on the existing camera unit of a smart-phone, where test and control tubes are inserted from the side and are excited by a battery powered laser diode. This excitation beam, after probing the sample of interest located within the test tube, interacts with the control tube, and the resulting fluorescent emission is collected perpendicular to the direction of the excitation, where the cellphone camera captures the images of the fluorescent tubes through the use of an external plastic lens that is inserted between the sample and the camera lens. The acquired fluorescent images of the sample and control tubes are digitally processed within one second through an Android application running on the same cellphone for quantification of albumin concentration in urine specimen of interest. Using a simple sample preparation approach which takes ~ 5 minutes per test (including the incubation time), we experimentally confirmed the detection limit of our sensing platform as 5–10 μg/mL (which is more than 3 times lower than clinically accepted normal range) in buffer as well as urine samples. This automated albumin testing tool running on a smart-phone could be useful for early diagnosis of kidney disease or for monitoring of chronic patients, especially those suffering from diabetes, hypertension, and/or cardiovascular diseases. PMID:23995895
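The quantification step above, in which the Android application maps the captured fluorescence of the test tube, referenced to the control tube, to an albumin concentration, can be sketched as a calibration-curve lookup. The linear calibration constants below are pure assumptions chosen so that a unit intensity ratio lands near the platform's reported ~5 μg/mL detection limit; the actual calibration is not given in the record:

```python
# Assumed linear calibration: concentration = slope * ratio + offset.
CAL_SLOPE = 120.0    # ug/mL per unit intensity ratio (assumed)
CAL_OFFSET = -115.0  # ug/mL (assumed)

def albumin_ug_per_ml(test_intensity: float, control_intensity: float) -> float:
    """Estimate albumin concentration from test/control fluorescence levels."""
    ratio = test_intensity / control_intensity
    return max(0.0, CAL_SLOPE * ratio + CAL_OFFSET)

print(albumin_ug_per_ml(50.0, 50.0))
```

In the real device the intensities would be mean pixel values extracted from the fluorescent tube images, and the calibration would be fitted from samples of known concentration.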
A Novel Approach to Camera Calibration Method for Smart Phones Under Road Environment
NASA Astrophysics Data System (ADS)
Lee, Bijun; Zhou, Jian; Ye, Maosheng; Guo, Yuan
2016-06-01
Monocular vision-based lane departure warning systems have been increasingly used in advanced driver assistance systems (ADAS). Using lane marker detection and identification, we propose an automatic and efficient camera calibration method for smart phones. First, we detect the lane marker features in perspective space and calculate the edges of the lane markers in image sequences. Second, because the widths of the lane markers and the road lane are fixed under a standard structural road environment, we can automatically build a transformation matrix between perspective space and 3D space and obtain a local map in the vehicle coordinate system. To verify the validity of this method, we installed a smart phone in the `Tuzhi' self-driving car of Wuhan University and recorded more than 100 km of image data on roads in Wuhan. According to the results, we can calculate the positions of lane markers accurately enough for the self-driving car to run smoothly on the road.
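The core of the calibration idea above is that the known, fixed lane width lets four detected lane-marker corners in the image be paired with ground coordinates in the vehicle frame, from which a perspective transform (homography) can be solved directly. The sketch below uses a direct linear transform (DLT); the pixel coordinates and the 3.5 m lane width are assumptions for illustration:

```python
import numpy as np

def solve_homography(img_pts, gnd_pts):
    """DLT solution of the homography H mapping image pixels to ground (x, y)."""
    A = []
    for (x, y), (X, Y) in zip(img_pts, gnd_pts):
        A.append([x, y, 1, 0, 0, 0, -X * x, -X * y, -X])
        A.append([0, 0, 0, x, y, 1, -Y * x, -Y * y, -Y])
    # The homography is the null vector of A (last row of V^T).
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def to_ground(H, x, y):
    v = H @ np.array([x, y, 1.0])
    return v[0] / v[2], v[1] / v[2]

# Image corners of the two lane edges and their vehicle-frame positions
# in metres (standard 3.5 m lane width assumed, x centred on the lane).
img_pts = [(300, 700), (980, 700), (560, 400), (720, 400)]
gnd_pts = [(-1.75, 5.0), (1.75, 5.0), (-1.75, 20.0), (1.75, 20.0)]
H = solve_homography(img_pts, gnd_pts)
X, Y = to_ground(H, 300, 700)
print(round(X, 2), round(Y, 2))
```

With four non-degenerate correspondences the homography is exact, so mapping one of the calibration pixels back recovers its ground coordinates; new detections would then be projected the same way to build the local map.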
Smart image sensors: an emerging key technology for advanced optical measurement and microsystems
NASA Astrophysics Data System (ADS)
Seitz, Peter
1996-08-01
Optical microsystems typically include photosensitive devices, analog preprocessing circuitry and digital signal processing electronics. The advances in semiconductor technology have made it possible today to integrate all photosensitive and electronic devices on one 'smart image sensor' or photo-ASIC (application-specific integrated circuits containing photosensitive elements). It is even possible to provide each 'smart pixel' with additional photoelectronic functionality, without compromising the fill factor substantially. This technological capability is the basis for advanced cameras and optical microsystems showing novel on-chip functionality: Single-chip cameras with on-chip analog-to-digital converters for less than $10 are advertised; image sensors have been developed including novel functionality such as real-time selectable pixel size and shape, the capability of performing arbitrary convolutions simultaneously with the exposure, as well as variable, programmable offset and sensitivity of the pixels leading to image sensors with a dynamic range exceeding 150 dB. Smart image sensors have been demonstrated offering synchronous detection and demodulation capabilities in each pixel (lock-in CCD), and conventional image sensors are combined with an on-chip digital processor for complete, single-chip image acquisition and processing systems. Technological problems of the monolithic integration of smart image sensors include offset non-uniformities, temperature variations of electronic properties, imperfect matching of circuit parameters, etc. These problems can often be overcome either by designing additional compensation circuitry or by providing digital correction routines. Where necessary for technological or economic reasons, smart image sensors can also be combined with or realized as hybrids, making use of commercially available electronic components.
It is concluded that the possibilities offered by custom smart image sensors will influence the design and the performance of future electronic imaging systems in many disciplines, reaching from optical metrology to machine vision on the factory floor and in robotics applications.
Internet of Things Platform for Smart Farming: Experiences and Lessons Learnt.
Jayaraman, Prem Prakash; Yavari, Ali; Georgakopoulos, Dimitrios; Morshed, Ahsan; Zaslavsky, Arkady
2016-11-09
Improving farm productivity is essential for increasing farm profitability and meeting the rapidly growing demand for food that is fuelled by rapid population growth across the world. Farm productivity can be increased by understanding and forecasting crop performance in a variety of environmental conditions. Crop recommendation is currently based on data collected in field-based agricultural studies that capture crop performance under a variety of conditions (e.g., soil quality and environmental conditions). However, crop performance data collection is currently slow, as such crop studies are often undertaken in remote and distributed locations, and such data are typically collected manually. Furthermore, the quality of manually collected crop performance data is very low, because it does not take into account earlier conditions that have not been observed by the human operators but are essential for filtering out collected data that would lead to invalid conclusions (e.g., solar radiation readings in the afternoon after even a short rain or an overcast morning are invalid, and should not be used in assessing crop performance). Emerging Internet of Things (IoT) technologies, such as IoT devices (e.g., wireless sensor networks, network-connected weather stations, cameras, and smart phones) can be used to collate vast amounts of environmental and crop performance data, ranging from time series data from sensors, to spatial data from cameras, to human observations collected and recorded via mobile smart phone applications. Such data can then be analysed to filter out invalid data and compute personalised crop recommendations for any specific farm.
In this paper, we present the design of SmartFarmNet, an IoT-based platform that can automate the collection of environmental, soil, fertilisation, and irrigation data; automatically correlate such data and filter-out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. SmartFarmNet can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and our experiences and lessons learnt in developing this system concludes the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations.
Switchable Materials for Smart Windows.
Wang, Yang; Runnerstrom, Evan L; Milliron, Delia J
2016-06-07
This article reviews the basic principles of and recent developments in electrochromic, photochromic, and thermochromic materials for applications in smart windows. Compared with current static windows, smart windows can dynamically modulate the transmittance of solar irradiation based on weather conditions and personal preferences, thus simultaneously improving building energy efficiency and indoor human comfort. Although some smart windows are commercially available, their widespread implementation has not yet been realized. Recent advances in nanostructured materials provide new opportunities for next-generation smart window technology owing to their unique structure-property relations. Nanomaterials can provide enhanced coloration efficiency, faster switching kinetics, and longer lifetime. In addition, their compatibility with solution processing enables low-cost and high-throughput fabrication. This review also discusses the importance of dual-band modulation of visible and near-infrared (NIR) light, as nearly 50% of solar energy lies in the NIR region. Some latest results show that solution-processable nanostructured systems can selectively modulate the NIR light without affecting the visible transmittance, thus reducing energy consumption by air conditioning, heating, and artificial lighting.
Real-time optimizations for integrated smart network camera
NASA Astrophysics Data System (ADS)
Desurmont, Xavier; Lienard, Bruno; Meessen, Jerome; Delaigle, Jean-Francois
2005-02-01
We present an integrated real-time smart network camera. This system is composed of an image sensor, an embedded PC-based electronic card for image processing, and network capabilities. The application detects events of interest in visual scenes, highlights alarms and computes statistics. The system also produces meta-data information that can be shared with other cameras in a network. We describe the requirements of such a system and then show how the design of the system is optimized to process and compress video in real time. Indeed, typical video-surveillance algorithms such as background differencing, tracking and event detection must be highly optimized and simplified to run on this hardware. To achieve a good match between hardware and software in this light embedded system, the software management is written on top of the Java-based middleware specification established by the OSGi alliance. We can integrate software and hardware easily in complex environments thanks to the Java Real-Time specification for the virtual machine and some network and service-oriented Java specifications (like RMI and Jini). Finally, we report some outcomes and typical case studies of such a camera, like counter-flow detection.
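A minimal background-differencing sketch of the kind of simplified surveillance algorithm such a camera runs is shown below: maintain a running-average background and raise an event when enough pixels deviate from it. The threshold, minimum changed-pixel fraction, and learning rate are assumed tuning parameters:

```python
import numpy as np

def detect_motion(background, frame, thresh=25.0, min_fraction=0.01, alpha=0.05):
    """Return (event, updated_background) for one greyscale frame."""
    diff = np.abs(frame.astype(float) - background)
    # Event when more than min_fraction of pixels changed by more than thresh.
    event = (diff > thresh).mean() > min_fraction
    # Slowly blend the new frame into the background model.
    background = (1 - alpha) * background + alpha * frame
    return event, background

bg = np.zeros((120, 160))
quiet = np.zeros((120, 160))
intruder = quiet.copy()
intruder[40:80, 60:100] = 200.0  # bright moving object enters the scene
event1, bg = detect_motion(bg, quiet)
event2, bg = detect_motion(bg, intruder)
print(event1, event2)
```

On the embedded card this inner loop would be the part most aggressively optimized, since it runs on every frame before any tracking or event classification.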
Fluorescent Imaging of Single Nanoparticles and Viruses on a Smart Phone
Wei, Qingshan; Qi, Hangfei; Luo, Wei; Tseng, Derek; Ki, So Jung; Wan, Zhe; Göröcs, Zoltán; Bentolila, Laurent A.; Wu, Ting-Ting; Sun, Ren; Ozcan, Aydogan
2014-01-01
Optical imaging of nanoscale objects, whether it is based on scattering or fluorescence, is a challenging task due to reduced detection signal-to-noise ratio and contrast at subwavelength dimensions. Here, we report a field-portable fluorescence microscopy platform installed on a smart phone for imaging of individual nanoparticles as well as viruses using a lightweight and compact opto-mechanical attachment to the existing camera module of the cell phone. This hand-held fluorescent imaging device utilizes (i) a compact 450 nm laser diode that creates oblique excitation on the sample plane with an incidence angle of ~75°, (ii) a long-pass thin-film interference filter to reject the scattered excitation light, (iii) an external lens creating 2× optical magnification, and (iv) a translation stage for focus adjustment. We tested the imaging performance of this smart-phone-enabled microscopy platform by detecting isolated 100 nm fluorescent particles as well as individual human cytomegaloviruses that are fluorescently labeled. The size of each detected nano-object on the cell phone platform was validated using scanning electron microscopy images of the same samples. This field-portable fluorescence microscopy attachment to the cell phone, weighing only ~186 g, could be used for specific and sensitive imaging of subwavelength objects including various bacteria and viruses and, therefore, could provide a valuable platform for the practice of nanotechnology in field settings and for conducting viral load measurements and other biomedical tests even in remote and resource-limited environments. PMID:24016065
Sensor and Video Monitoring of Water Quality at Bristol Floating Harbour
NASA Astrophysics Data System (ADS)
Chen, Yiheng; Han, Dawei
2017-04-01
The water system is an essential component of a smart city's sustainability and resilience. The harbourside is a focal area of Bristol, with new buildings and features redeveloped in the last ten years, attracting numerous visitors with its diverse attractions and beautiful views. There is a strong relationship between the satisfaction of visitors and local people and the water quality in the Harbour. The freshness and beauty of the water body please people as well as benefit the aquatic ecosystems. As we enter a data-rich era, this pilot project aims to explore the concept of using video cameras and smart sensors to collect and monitor water quality conditions at the Bristol harbourside. The video cameras and smart sensors are connected to the Bristol Is Open network, an open programmable city platform. This will be the first attempt to collect water quality data in real time in the Bristol urban area over a wireless network. The videos and images of the water body collected by the cameras will be correlated with the in-situ water quality parameters for research purposes. Successful implementation of the sensors can attract more academic researchers and industrial partners to expand the sensor network to multiple locations around the city, covering the other parts of the Harbour and the River Avon, leading to a new generation of urban system infrastructure models.
Design of smart home terminal controller based on ZigBee
NASA Astrophysics Data System (ADS)
Li, Biqing; Li, Zhao; Zhang, Hongyan
2018-04-01
With the development of science and technology and the improvement of living conditions, people pay more and more attention to the comfort of household life, and the smart home has become a development trend for future housing. This design is composed of three blocks: a transmitting module, a receiving module and a data acquisition module. Both the transmitting and receiving modules are built around ZigBee and the STC89C52 microcontroller: the transmitting module contains a ZigBee unit, a serial communication module and the STC89C52, while the receiving module contains the light control part, the curtain control part, a ZigBee unit and the STC89C52. The data acquisition module handles temperature and humidity detection.
Development of integrated control system for smart factory in the injection molding process
NASA Astrophysics Data System (ADS)
Chung, M. J.; Kim, C. Y.
2018-03-01
In this study, we propose an integrated control system for automating the injection molding process, as required for the construction of a smart factory. The injection molding process consists of heating, tool close, injection, cooling, tool open, and take-out. A take-out robot controller, an image processing module, and a process data acquisition interface module were developed and assembled into the integrated control system. By adopting the integrated control system, the injection molding process can be simplified and the cost of constructing a smart factory reduced.
Module Embedded Micro-inverter Smart Grid Ready Residential Solar Electric System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Agamy, Mohammed
The “Module Embedded Micro-inverter Smart Grid Ready Residential Solar Electric System” program is focused on developing innovative concepts for residential photovoltaic (PV) systems, with the following objectives: create an innovative micro-inverter topology that reduces cost relative to the best-in-class micro-inverter and provides high efficiency (>96% CEC - California Energy Commission), a 25+ year warranty, and reactive power support; integrate the micro-inverter and PV module to reduce system price by at least $0.25/W through a) dual use of the module metal frame as a large-area heat spreader, reducing operating temperature, and b) elimination of redundant wiring and connectors; and create a micro-inverter controller that handles smart grid and safety functions to simplify implementation and reduce cost.
Live video monitoring robot controlled by web over internet
NASA Astrophysics Data System (ADS)
Lokanath, M.; Akhil Sai, Guruju
2017-11-01
The future is all about robots: robots can perform tasks where humans cannot, and they have many applications in military and industrial settings for lifting heavy weights, for accurate placement, and for repeating the same task many times where humans are not efficient. A robot is generally a mix of electronic, electrical and mechanical engineering and can perform tasks automatically on its own or under the supervision of humans. The camera is the eye of the robot; this "robovision" helps in monitoring security systems and can also reach places the human eye cannot. This paper presents the development of a live video streaming robot controlled from a website. We designed the web controls for moving the robot left, right, forward and back while streaming video. As we move to the smart environment, or Internet of Things (IoT), of smart devices, the system developed here connects over the internet and can be operated from a smartphone using a web browser. A Raspberry Pi Model B chip acts as the heart of the robot, with the motors and the R Pi 2 surveillance camera connected to the Raspberry Pi.
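As a rough illustration of the web-control idea described above, the sketch below maps the four web commands to motor drive directions for a differential-drive robot; the command names, the pin-free representation, and the turning scheme are illustrative assumptions, not the paper's actual implementation.

```python
# Each command maps to (left_motor, right_motor) drive directions:
# 1 = forward, -1 = reverse, 0 = stop. (Hypothetical encoding.)
COMMANDS = {
    "forward": (1, 1),
    "back": (-1, -1),
    "left": (-1, 1),   # spin left: left wheel reverse, right wheel forward
    "right": (1, -1),  # spin right
    "stop": (0, 0),
}

def handle_command(cmd):
    """Translate a command received from the web page into motor states."""
    if cmd not in COMMANDS:
        raise ValueError("unknown command: " + cmd)
    return COMMANDS[cmd]
```

On the real robot the returned pair would be written to the motor driver's GPIO pins; here it stays a pure mapping so the control logic is easy to test.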
A self-learning camera for the validation of highly variable and pseudorandom patterns
NASA Astrophysics Data System (ADS)
Kelley, Michael
2004-05-01
Reliable and productive manufacturing operations have depended on people to quickly detect and solve problems whenever they appear. Over the last 20 years, more and more manufacturing operations have embraced machine vision systems to increase productivity, reliability and cost-effectiveness, including reducing the number of human operators required. Although machine vision technology has long been capable of solving simple problems, it has still not been broadly implemented. The reason is that until now, no machine vision system has been designed to meet the unique demands of complicated pattern recognition. The ZiCAM family was specifically developed to be the first practical hardware to meet these needs. To be able to address non-traditional applications, the machine vision industry must include smart camera technology that meets its users' demands for lower costs, better performance and the ability to address applications of irregular lighting, patterns and color. The next-generation smart cameras will need to evolve as a fundamentally different kind of sensor, with new technology that behaves like a human but performs like a computer. Neural network based systems, coupled with self-taught, n-space, non-linear modeling, promise to be the enabler of the next generation of machine vision equipment. Image processing technology is now available that enables a system to match an operator's subjectivity. A Zero-Instruction-Set-Computer (ZISC) powered smart camera allows high-speed fuzzy-logic processing, without the need for computer programming. This can address applications of validating highly variable and pseudo-random patterns. A hardware-based implementation of a neural network, the Zero-Instruction-Set-Computer, enables a vision system to "think" and "inspect" like a human, with the speed and reliability of a machine.
NASA Astrophysics Data System (ADS)
Ahamed, Mohammad Shahed; Saito, Yuji; Mashiko, Koichi; Mochizuki, Masataka
2017-11-01
In recent years, heat pipes have been widely used in various handheld mobile electronic devices such as smartphones, tablet PCs, and digital cameras. With the development of technology, these devices offer many user-friendly features and applications, which require very high processor clock speeds. In general, a high clock speed generates a lot of heat, which needs to be spread or removed to eliminate the hot spot on the processor surface. However, achieving proper cooling of such electronic devices is a challenging task because of their confined spaces and concentrated heat sources. To address this challenge, we introduce an ultra-thin heat pipe; this heat pipe uses a special fiber wick structure, named the "Center Fiber Wick", which can provide sufficient vapor space on both sides of the wick structure. We also developed a cooling module that uses this kind of ultra-thin heat pipe to eliminate the hot spot issue. The cooling module consists of an ultra-thin heat pipe and a metal plate. By changing the width, the flattened thickness and the effective length of the ultra-thin heat pipe, several experiments were conducted to characterize the thermal properties of the developed cooling module. Additional experiments were conducted to determine the effects of changing the number of heat pipes in a single module. Characterization and comparison of the module were conducted both experimentally and theoretically.
Smart security system for Indian rail wagons using IOT
NASA Astrophysics Data System (ADS)
Bhanuteja, S.; Shilpi, S.; Pragna, K.; Arun, M.
2017-11-01
The objective of this project is to create a security system for goods carried in open-top freight trains. The most efficient way to secure anything from thieves is continuous observation, so a camera module is used for continuous observation of the open-top freight train. A passive infrared (PIR) sensor is used to detect motion, i.e., to sense the movement of people, animals, or any object. Whenever motion is detected by the PIR sensor, the camera takes a picture of that instant. The picture is sent to the Raspberry Pi, which runs a skin detection algorithm to determine whether the motion was created by a human. If it was, the picture is uploaded to Dropbox, where any official can have a look at it. The existing system has CCTV installed at various critical locations like bridges and railway stations, but these do not provide continuous observation. This paper describes a security system that provides continuous observation for open-top freight trains so that goods can be carried safely to their destination.
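The abstract does not specify which skin detection algorithm the Raspberry Pi runs; a common lightweight choice is a per-pixel RGB rule such as the classic Peer et al. heuristic, sketched below. The thresholds and the 5% skin-fraction trigger are illustrative assumptions, not the paper's method.

```python
def is_skin_rgb(r, g, b):
    """Heuristic test of whether a single RGB pixel looks like skin
    (classic Peer et al. rule; thresholds are conventional values)."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def contains_human(pixels, min_skin_fraction=0.05):
    """Flag an image (a list of (r, g, b) tuples) as human-triggered when
    at least `min_skin_fraction` of its pixels classify as skin."""
    skin = sum(1 for p in pixels if is_skin_rgb(*p))
    return skin / len(pixels) >= min_skin_fraction
```

In practice the pixel list would come from the captured camera frame, and only frames flagged by `contains_human` would be uploaded.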
NASA Astrophysics Data System (ADS)
Marshall, Stuart; Thaler, Jon; Schalk, Terry; Huffer, Michael
2006-06-01
The LSST Camera Control System (CCS) will manage the activities of the various camera subsystems and coordinate those activities with the LSST Observatory Control System (OCS). The CCS comprises a set of modules (nominally implemented in software) which are each responsible for managing one camera subsystem. Generally, a control module will be a long lived "server" process running on an embedded computer in the subsystem. Multiple control modules may run on a single computer or a module may be implemented in "firmware" on a subsystem. In any case control modules must exchange messages and status data with a master control module (MCM). The main features of this approach are: (1) control is distributed to the local subsystem level; (2) the systems follow a "Master/Slave" strategy; (3) coordination will be achieved by the exchange of messages through the interfaces between the CCS and its subsystems. The interface between the camera data acquisition system and its downstream clients is also presented.
Vertically integrated photonic multichip module architecture for vision applications
NASA Astrophysics Data System (ADS)
Tanguay, Armand R., Jr.; Jenkins, B. Keith; von der Malsburg, Christoph; Mel, Bartlett; Holt, Gary; O'Brien, John D.; Biederman, Irving; Madhukar, Anupam; Nasiatka, Patrick; Huang, Yunsong
2000-05-01
The development of a truly smart camera, with inherent capability for low latency semi-autonomous object recognition, tracking, and optimal image capture, has remained an elusive goal notwithstanding tremendous advances in the processing power afforded by VLSI technologies. These features are essential for a number of emerging multimedia- based applications, including enhanced augmented reality systems. Recent advances in understanding of the mechanisms of biological vision systems, together with similar advances in hybrid electronic/photonic packaging technology, offer the possibility of artificial biologically-inspired vision systems with significantly different, yet complementary, strengths and weaknesses. We describe herein several system implementation architectures based on spatial and temporal integration techniques within a multilayered structure, as well as the corresponding hardware implementation of these architectures based on the hybrid vertical integration of multiple silicon VLSI vision chips by means of dense 3D photonic interconnections.
Voss with video camera in Service Module
2001-04-08
ISS002-E-5329 (08 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, sets up a video camera on a mounting bracket in the Zvezda / Service Module of the International Space Station (ISS). A 35mm camera and a digital still camera are also visible nearby. This image was recorded with a digital still camera.
Optical zoom lens module using MEMS deformable mirrors for portable device
NASA Astrophysics Data System (ADS)
Lu, Jia-Shiun; Su, Guo-Dung J.
2012-10-01
The thickness of the smartphones in today's market is usually below 10 mm, and as phone volumes shrink, producing the camera lens becomes increasingly difficult. How to give the imaging device more functionality in a smaller space is therefore an interesting research topic for today's mobile phone companies. In this paper, we propose a thin optical zoom system that combines micro-electromechanical components with a reflective optical architecture. By adopting MEMS deformable mirrors, we can change their radius of curvature to zoom in and out optically. Because the architecture is all-reflective, the system eliminates the considerable chromatic aberrations that lenses would introduce. In our system, the thickness of the zoom module is about 11 mm. The smallest EFL (effective focal length) is 4.61 mm at a diagonal field angle of 52° and f/# of 5.24; the longest EFL is 9.22 mm at a diagonal field angle of 27.4° with f/# of 5.03.
Reconfiguration of a smart surface using heteroclinic connections
McInnes, Colin R.; Xu, Ming
2017-01-01
A reconfigurable smart surface with multiple equilibria is presented, modelled using discrete point masses and linear springs with geometric nonlinearity. An energy-efficient reconfiguration scheme is then investigated to connect equal-energy unstable (but actively controlled) equilibria. In principle, zero net energy input is required to transition the surface between these unstable states, compared to transitions between stable equilibria across a potential barrier. These transitions between equal-energy unstable states, therefore, form heteroclinic connections in the phase space of the problem. Moreover, the smart surface model developed can be considered as a unit module for a range of applications, including modules which can aggregate together to form larger distributed smart surface systems. PMID:28265191
Internet of Things Platform for Smart Farming: Experiences and Lessons Learnt
Jayaraman, Prem Prakash; Yavari, Ali; Georgakopoulos, Dimitrios; Morshed, Ahsan; Zaslavsky, Arkady
2016-01-01
Improving farm productivity is essential for increasing farm profitability and meeting the rapidly growing demand for food that is fuelled by rapid population growth across the world. Farm productivity can be increased by understanding and forecasting crop performance in a variety of environmental conditions. Crop recommendation is currently based on data collected in field-based agricultural studies that capture crop performance under a variety of conditions (e.g., soil quality and environmental conditions). However, crop performance data collection is currently slow, as such crop studies are often undertaken in remote and distributed locations, and such data are typically collected manually. Furthermore, the quality of manually collected crop performance data is very low, because it does not take into account earlier conditions that the human operators have not observed, yet this context is essential for filtering out data that would lead to invalid conclusions (e.g., solar radiation readings taken in the afternoon after even a short rain, or after an overcast morning, are invalid and should not be used in assessing crop performance). Emerging Internet of Things (IoT) technologies, such as IoT devices (e.g., wireless sensor networks, network-connected weather stations, cameras, and smart phones), can be used to collate vast amounts of environmental and crop performance data, ranging from time series data from sensors, to spatial data from cameras, to human observations collected and recorded via mobile smartphone applications. Such data can then be analysed to filter out invalid data and compute personalised crop recommendations for any specific farm.
In this paper, we present the design of SmartFarmNet, an IoT-based platform that can automate the collection of environmental, soil, fertilisation, and irrigation data; automatically correlate such data and filter out invalid data from the perspective of assessing crop performance; and compute crop forecasts and personalised crop recommendations for any particular farm. SmartFarmNet can integrate virtually any IoT device, including commercially available sensors, cameras, weather stations, etc., and store their data in the cloud for performance analysis and recommendations. An evaluation of the SmartFarmNet platform and our experiences and lessons learnt in developing this system concludes the paper. SmartFarmNet is the first and currently largest system in the world (in terms of the number of sensors attached, crops assessed, and users it supports) that provides crop performance analysis and recommendations. PMID:27834862
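As a minimal sketch of the context-based filtering described above (dropping solar radiation readings invalidated by earlier rain or an overcast morning); the field names are hypothetical, not SmartFarmNet's actual schema:

```python
def filter_valid_radiation(readings):
    """Keep only solar radiation readings whose earlier conditions do not
    invalidate them (no rain earlier that day, no overcast morning).
    Each reading is a dict with hypothetical keys:
      {"radiation": float, "rain_earlier": bool, "overcast_morning": bool}"""
    return [r for r in readings
            if not r["rain_earlier"] and not r["overcast_morning"]]
```

A real pipeline would derive the two context flags from the weather-station time series rather than store them per reading.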
Portable, low-priced retinal imager for eye disease screening
NASA Astrophysics Data System (ADS)
Soliz, Peter; Nemeth, Sheila; VanNess, Richard; Barriga, E. S.; Zamora, Gilberto
2014-02-01
The objective of this project was to develop and demonstrate a portable, low-priced, easy-to-use non-mydriatic retinal camera for eye disease screening in underserved urban and rural locations. Existing portable retinal imagers do not meet the requirements of a low-cost camera with sufficient technical capabilities (field of view, image quality, portability, battery power, and ease of use) to be distributed widely to low-volume clinics, such as the offices of single primary care physicians serving rural communities or other economically stressed healthcare facilities. Our approach for Smart i-Rx is based primarily on a significant departure from current generations of desktop and hand-held commercial retinal cameras, as well as those under development. Our techniques include: 1) exclusive use of off-the-shelf components; 2) integration of the retinal imaging device into a low-cost, high-utility camera mount and chin rest; 3) a unique optical and illumination design for a small form factor; 4) exploitation of the autofocus technology built into present digital SLR recreational cameras; and 5) integration of a polarization technique to avoid the corneal reflex. In a prospective study, 41 out of 44 diabetics were imaged successfully. No imaging was attempted on three of the subjects due to noticeably small pupils (less than 2 mm). The images were of sufficient quality to detect abnormalities related to diabetic retinopathy, such as microaneurysms and exudates. These images were compared with ones taken non-mydriatically with a Canon CR-1 Mark II camera. No cases identified as having DR by expert retinal graders were missed in the Smart i-Rx images.
SMART- Small Motor AerRospace Technology
NASA Astrophysics Data System (ADS)
Balucani, M.; Crescenzi, R.; Ferrari, A.; Guarrea, G.; Pontetti, G.; Orsini, F.; Quattrino, L.; Viola, F.
2004-11-01
This paper presents the "SMART" (Small Motor AerRospace Technology) propulsion system, consisting of microthruster arrays realised with semiconductor technology on silicon wafers. The SMART system is obtained by gluing together three main modules: combustion chambers, igniters and nozzles. The module is then filled with propellant and closed by gluing a piece of silicon wafer to the back side of the combustion chambers. A complete assembled module composed of 25 micro-thrusters with a 3 x 5 nozzle array is presented. Measurements showed a thrust of 129 mN and an impulse of 56.8 mNs while burning about 70 mg of propellant for the micro-thruster with a nozzle, and a thrust of 21 mN and an impulse of 8.4 mNs for the micro-thruster without a nozzle.
ePave: A Self-Powered Wireless Sensor for Smart and Autonomous Pavement.
Xiao, Jian; Zou, Xiang; Xu, Wenyao
2017-09-26
"Smart Pavement" is an emerging infrastructure for various on-road applications in transportation and road engineering. However, existing road monitoring solutions demand periodic maintenance effort due to battery life limits in the sensor systems. To this end, we present an end-to-end self-powered wireless sensor, ePave, to facilitate smart and autonomous pavements. The ePave system includes a self-power module, an ultra-low-power sensor system, a wireless transmission module and a built-in power management module. First, we performed an empirical study to characterize the piezoelectric module in order to optimize energy-harvesting efficiency. Second, we developed an integrated sensor system with the optimized energy harvester. An adaptive power knob adjusts the power consumption according to the energy budget. Finally, we intensively evaluated the ePave system in real-world applications to examine the system's performance and explore the trade-offs.
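A minimal sketch of what an "adaptive power knob" might compute under a simple energy-budget model; the function name, units, and reserve margin are illustrative assumptions, not the paper's design:

```python
def sample_interval(harvested_mw, sample_cost_mj, reserve_mw=0.2):
    """Pick a sensing interval (seconds) so that average consumption stays
    within the harvested power budget. Units: mW and mJ, so
    cost (mJ) / interval (s) gives the average draw in mW."""
    budget_mw = harvested_mw - reserve_mw  # headroom for radio and PMU
    if budget_mw <= 0:
        return float("inf")  # no budget: sleep until energy recovers
    return sample_cost_mj / budget_mw
```

Lengthening the interval when harvesting drops is the basic trade-off the abstract alludes to: fewer samples per hour in exchange for battery-free autonomy.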
UHF wearable battery free sensor module for activity and falling detection.
Nam Trung Dang; Thang Viet Tran; Wan-Young Chung
2016-08-01
Falling is one of the most serious medical and social problems in the aging population; caring for the elderly by detecting activity and falls is therefore essential to prevent and mitigate fall injuries. This study proposes a wearable, wireless, battery-free ultra-high frequency (UHF) smart sensor tag module for fall and activity detection. The proposed tag is powered by the UHF RF wave from the reader and read by a standard UHF Electronic Product Code (EPC) Class-1 Generation-2 reader. The battery-free sensor module improves the wearability of the wireless device. The combination of the accelerometer signal and the received signal strength indication (RSSI) from the reader in the passive smart sensor tag detects the activity and falls of the elderly very successfully. The fabricated smart sensor tag module has an operating range of up to 2.5 m and performs real-time activity and fall detection.
Mach-Zehnder based optical marker/comb generator for streak camera calibration
Miller, Edward Kirk
2015-03-03
This disclosure is directed to a method and apparatus for generating marker and comb indicia in an optical environment using a Mach-Zehnder (M-Z) modulator. High-speed recording devices are configured to record image or other data defining a high-speed event. To calibrate the camera and establish a time reference, the marker or comb indicia serve as timing pulses (markers) or a constant-frequency train of optical pulses (comb) imaged on a streak camera for accurate time-based calibration. The system includes a camera, an optic signal generator which provides an optic signal to an M-Z modulator, and biasing and modulation signal generators configured to provide input to the M-Z modulator. An optical reference signal is provided to the M-Z modulator, which modulates the reference signal to a higher-frequency optical signal that is output through a fiber-coupled link to the streak camera.
Real-time vehicle matching for multi-camera tunnel surveillance
NASA Astrophysics Data System (ADS)
Jelača, Vedran; Niño Castañeda, Jorge Oswaldo; Frías-Velázquez, Andrés; Pižurica, Aleksandra; Philips, Wilfried
2011-03-01
Tracking multiple vehicles with multiple cameras is a challenging problem of great importance in tunnel surveillance. One of the main challenges is accurate vehicle matching across cameras with non-overlapping fields of view. Since systems dedicated to this task can contain hundreds of cameras, each observing dozens of vehicles, computational efficiency is essential for real-time performance. In this paper, we propose a low-complexity yet highly accurate method for vehicle matching using vehicle signatures composed of Radon-transform-like projection profiles of the vehicle image. The proposed signatures can be calculated by a simple scan-line algorithm in the camera software itself and transmitted to the central server or to the other cameras in a smart camera environment. The amount of data is drastically reduced compared to the whole image, which relaxes the data link capacity requirements. Experiments on real vehicle images, extracted from video sequences recorded in a tunnel by two distant security cameras, validate our approach.
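A minimal sketch of projection-profile signatures of the kind described above, assuming plain row/column sums and an L1 matching score; the paper's exact signature construction and matching metric may differ:

```python
import numpy as np

def signature(img):
    """Build a vehicle signature from projection profiles: row sums and
    column sums of the (grayscale) vehicle image, each normalised. This
    is the scan-line-computable reduction the approach relies on."""
    img = np.asarray(img, dtype=float)
    h = img.sum(axis=1)  # one value per image row
    v = img.sum(axis=0)  # one value per image column
    return np.concatenate([h / h.sum(), v / v.sum()])

def match_score(sig_a, sig_b):
    """L1 distance between two signatures; lower means a better match."""
    return float(np.abs(sig_a - sig_b).sum())
```

For an H x W image the signature has only H + W values, which is the data reduction that relaxes the link capacity requirements.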
Smart Sensor Network for Aircraft Corrosion Monitoring
2010-02-01
Network elements: a hub acting as the network capable application processor (NCAP) and nodes acting as smart transducer interface modules (STIM) for corrosion sensing. Each node comprises a processor, power, I/O, transducer software and TEDS; the hub (NCAP) implements IEEE 1451.0, with network protocols IEEE 1451.2, 1451.3, 1451.5, 1451.6 and 1451.7.
The biometric-based module of smart grid system
NASA Astrophysics Data System (ADS)
Engel, E.; Kovalev, I. V.; Ermoshkina, A.
2015-10-01
Within the Smart Grid concept, a flexible biometric-based module based on Principal Component Analysis (PCA) and a selective Neural Network is developed. To form the selective Neural Network, the biometric-based module uses a method with three main stages: preliminary processing of the image, face localization and face recognition. Experiments on the Yale face database show that (i) the selective Neural Network exhibits promising classification capability for face detection and recognition problems; and (ii) the proposed biometric-based module achieves near real-time face detection and recognition speed and competitive performance compared to some existing subspace-based methods.
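As an illustration of the PCA stage of such a module, the sketch below implements a minimal eigenface projection with nearest-neighbour matching; this is the generic textbook formulation, not the paper's selective Neural Network.

```python
import numpy as np

def pca_fit(faces, n_components=2):
    """Fit PCA on flattened face images (one row per face). Returns the
    mean face and the leading principal axes ("eigenfaces")."""
    X = np.asarray(faces, dtype=float)
    mean = X.mean(axis=0)
    # SVD of the centred data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def project(face, mean, axes):
    """Project a flattened face into the low-dimensional eigenface space."""
    return (np.asarray(face, dtype=float) - mean) @ axes.T

def classify(face, gallery_coords, labels, mean, axes):
    """Nearest-neighbour matching in eigenface space."""
    d = np.linalg.norm(gallery_coords - project(face, mean, axes), axis=1)
    return labels[int(np.argmin(d))]
```

In a full system the PCA coordinates would feed the neural network classifier rather than a plain nearest-neighbour rule.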
Flow visualization by mobile phone cameras
NASA Astrophysics Data System (ADS)
Cierpka, Christian; Hain, Rainer; Buchmann, Nicolas A.
2016-06-01
Mobile smartphones have completely changed people's communication within the last ten years. However, these devices do not only offer communication through different channels but also hardware and applications for fun and recreation. In this respect, mobile phone cameras now include relatively fast (up to 240 Hz) modes to capture high-speed videos of sport events or other fast processes. This article therefore explores the possibility of making use of this development, and of the widespread availability of these cameras, for velocity measurements in industrial or technical applications and for fluid dynamics education in high schools and at universities. The requirements for a simplistic PIV (particle image velocimetry) system are discussed. A model experiment of a free water jet was used to prove the concept, shed some light on the achievable quality and determine bottlenecks by comparing the results obtained with a mobile phone camera with data taken by a high-speed camera suited for scientific experiments.
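The core of a simplistic PIV system is estimating the particle-pattern shift between two frames by cross-correlating small interrogation windows. A minimal single-window sketch, assuming integer-pixel shifts and FFT-based circular correlation (real PIV codes add sub-pixel peak fitting and window overlap):

```python
import numpy as np

def displacement(win_a, win_b):
    """Estimate the integer-pixel shift between two interrogation windows
    by locating the peak of their FFT-based circular cross-correlation."""
    a = np.asarray(win_a, dtype=float)
    b = np.asarray(win_b, dtype=float)
    a = a - a.mean()  # remove the mean so the correlation peak is sharp
    b = b - b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the window into negative displacements
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Dividing the estimated shift by the inter-frame time (e.g. 1/240 s at 240 Hz) and multiplying by the image scale gives the local velocity.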
Mount Sharp Panorama in Raw Colors
2013-03-15
This mosaic of images from the Mastcam onboard NASA's Mars rover Curiosity shows Mount Sharp in raw color. Raw color shows the scene as it would look in a typical smartphone camera photo, before any adjustment.
NASA Astrophysics Data System (ADS)
Razdan, Vikram; Bateman, Richard
2015-05-01
This study investigates the use of a smartphone and its camera vision capabilities in engineering metrology and flaw detection, with a view to developing a low-cost alternative to machine vision systems, which are beyond the reach of small-scale manufacturers. The smartphone has to provide a level of accuracy similar to that of machine vision devices such as smart cameras. The objective was to develop an app on an Android smartphone incorporating advanced computer vision algorithms written in Java code. The app could then be used to record measurements of twist drill bits and hole geometry and to analyse the results for accuracy. A detailed literature review was carried out for an in-depth study of machine vision systems and their capabilities, including a comparison between the HTC One X Android smartphone and the Teledyne Dalsa BOA smart camera. A review of the existing metrology apps on the market was also undertaken. In addition, the drilling operation was evaluated to establish key measurement parameters of a twist drill bit, especially flank wear and diameter. The methodology covers software development of the Android app, including the use of image processing algorithms such as Gaussian blur, Sobel and Canny available from the OpenCV software library, as well as the design and development of the experimental set-up for carrying out the measurements. The results obtained from the experimental set-up were analysed for the geometry of twist drill bits and holes, including diametrical measurements and flaw detection. They show that smartphones like the HTC One X have the processing power and camera capability to carry out metrological tasks, although the dimensional accuracy achievable from the smartphone app is below the level provided by machine vision devices like smart cameras.
A Smartphone with mechanical attachments, capable of image processing and having a reasonable level of accuracy in dimensional measurement, has the potential to become a handy low-cost Machine vision system for small scale manufacturers, especially in field metrology and flaw detection.
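As a toy illustration of edge-based dimensional measurement of the kind such an app performs, the sketch below estimates a feature's width from a 1-D intensity profile scanned across it, using a simple threshold crossing and a calibration factor; the threshold approach and parameter names are assumptions, far simpler than the Canny-based processing described above.

```python
def measure_width(profile, threshold, mm_per_px):
    """Estimate a feature's width from a 1-D intensity scan across it:
    the first and last samples darker than `threshold` bound the feature,
    and a calibration factor converts pixels to millimetres."""
    idx = [i for i, v in enumerate(profile) if v < threshold]
    if not idx:
        return 0.0  # no feature crossed the scan line
    return (idx[-1] - idx[0] + 1) * mm_per_px
```

The calibration factor `mm_per_px` would come from imaging a reference object of known size, which is also how a smartphone set-up would be validated against a smart camera.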
Designing components using smartMOVE electroactive polymer technology
NASA Astrophysics Data System (ADS)
Rosenthal, Marcus; Weaber, Chris; Polyakov, Ilya; Zarrabi, Al; Gise, Peter
2008-03-01
Designing components using smartMOVE™ electroactive polymer technology requires an understanding of the basic operating principles and of the design tools needed for integration into actuator, sensor and energy generation applications. Artificial Muscle, Inc. is collaborating with OEMs to develop customized solutions for their applications using smartMOVE. SmartMOVE is an advanced and elegant way to obtain almost any kind of movement using dielectric elastomer electroactive polymers. Integration of this technology offers the unique capability to create highly precise and customized motion for devices and systems that require actuation. Applications of smartMOVE include linear actuators for medical, consumer and industrial applications, such as pumps, valves, and optical or haptic devices. This paper presents design guidelines for selecting a smartMOVE actuator design to match the stroke, force, power, size, speed, environmental and reliability requirements of a range of applications. Power supply and controller design and selection are also introduced. An overview of some of the most versatile configuration options is presented with performance comparisons. A case example covers the selection, optimization, and performance overview of a smartMOVE actuator for cell phone camera autofocus and proportional valve applications.
Clinical and surgical applications of smart glasses.
Mitrasinovic, Stefan; Camacho, Elvis; Trivedi, Nirali; Logan, Julia; Campbell, Colson; Zilinyi, Robert; Lieber, Bryan; Bruce, Eliza; Taylor, Blake; Martineau, David; Dumont, Emmanuel L P; Appelboom, Geoff; Connolly, E Sander
2015-01-01
With the increased efforts to adopt health information technology in the healthcare field, many innovative devices have emerged to improve patient care, increase efficiency, and decrease healthcare costs. A recent addition is smart glasses: web-connected glasses that can present data on the lenses and record images or videos through a front-facing camera. In this article, we review the most salient uses of smart glasses in healthcare, while also noting their limitations, including practical capabilities and patient confidentiality. Using keywords including, but not limited to, "smart glasses", "healthcare", "evaluation", "privacy", and "development", we conducted a search on Ovid-MEDLINE, PubMed, and Google Scholar. A total of 71 studies were included in this review. Smart glasses have been adopted into the healthcare setting with several useful applications, including hands-free photo and video documentation, telemedicine, Electronic Health Record retrieval and input, rapid diagnostic test analysis, education, and live broadcasting. In order for the device to gain acceptance by medical professionals, smart glasses will need to be tailored to fit the needs of medical and surgical sub-specialties. Future studies will need to qualitatively assess the benefits of smart glasses as an adjunct to the current health information technology infrastructure.
Dual-modality smartphone endoscope for cervical pre-cancer detection (Conference Presentation)
NASA Astrophysics Data System (ADS)
Hong, Xiangqian; Yu, Bing
2017-02-01
Early detection is the key to the prevention of cervical cancer. There is an urgent need for a portable, affordable, and easy-to-use device for cervical pre-cancer detection, especially in low-resource settings. We have developed a dual-modality fiber-optic endoscope system (SmartME) that integrates high-resolution fluorescence imaging (FLI) and quantitative diffuse reflectance spectroscopy (DRS) onto a smartphone platform. The SmartME consists of a smartphone, a miniature fiber-optic endoscope, a phone attachment containing imaging optics, and a smartphone application (app). FLI is obtained by painting the tissue with a contrast agent (e.g., proflavine), illuminating the tissue and collecting its fluorescence images through an imaging bundle that is coupled to the phone camera. DRS is achieved by using a white LED, attaching additional source and detection fibers to the imaging bundle, and converting the phone camera into a spectrometer. The app collects images/spectra and transmits them to a remote server for analysis to extract the tissue parameters, including nuclear-to-cytoplasm ratio (calculated from FLI), concentrations of oxyhemoglobin (HbO2) and deoxyhemoglobin (Hb) as well as scattering (measured by DRS). These parameters can be used to detect cervical dysplasia. Our preliminary studies have demonstrated that the SmartME can clearly visualize the nuclei in living cells and in vivo biological samples, with a high spatial resolution of 3.1μm. The device can also measure tissue absorption and scattering properties with comparable accuracy to those of a benchtop DRS system. The SmartME has great potential to provide a compact, affordable, and `smart' solution for early detection of neoplastic changes in cervix.
Jekel, Katrin; Damian, Marinella; Storf, Holger; Hausner, Lucrezia; Frölich, Lutz
2016-01-01
Background: The assessment of activities of daily living (ADL) is essential for dementia diagnostics. Even in mild cognitive impairment (MCI), subtle deficits in instrumental ADL (IADL) may occur and signal a higher risk of conversion to dementia. Thus, sensitive and reliable ADL assessment tools are important. Smart homes equipped with sensor technology and video cameras may provide a proxy-free assessment tool for the detection of IADL deficits. Objective: The aim of this paper is to investigate the potential of a smart home environment for the assessment of IADL in MCI. Method: The smart home consisted of a two-room flat equipped with activity sensors and video cameras. Participants with either MCI or healthy controls (HC) had to solve a standardized set of six tasks, e.g., meal preparation, telephone use, and finding objects in the flat. Results: MCI participants needed more time (1384 versus 938 seconds, p < 0.001) and scored fewer total points (48 versus 57 points, p < 0.001) while solving the tasks than HC. Analyzing the subtasks, intergroup differences were observed for making a phone call, operating the television, and retrieving objects. MCI participants showed more searching and task-irrelevant behavior than HC. Task performance was correlated with cognitive status and IADL questionnaires but not with participants' age. Conclusion: This pilot study showed that smart home technologies offer the chance for an objective and ecologically valid assessment of IADL. It can be analyzed not only whether a task is successfully completed but also how it is completed. Future studies should concentrate on the development of automated detection of IADL deficits. PMID:27031479
Accurate and cost-effective MTF measurement system for lens modules of digital cameras
NASA Astrophysics Data System (ADS)
Chang, Gao-Wei; Liao, Chia-Cheng; Yeh, Zong-Mu
2007-01-01
For many years, the widening use of digital imaging products, e.g., digital cameras, has attracted much attention in the consumer electronics market. It is therefore important to measure and enhance the imaging performance of digital cameras relative to conventional cameras (with photographic films). For example, diffraction arising from the miniaturization of the optical modules tends to decrease image resolution. The modulation transfer function (MTF) is a widely used figure of merit for estimating image quality. The objective of this paper is therefore to design and implement an accurate and cost-effective MTF measurement system for digital cameras. Once the MTF of the sensor array is known, that of the optical module can be obtained. In this approach, a spatial light modulator (SLM) is employed to modulate the spatial frequency of light emitted from the light source. The modulated light passing through the camera under test is consecutively detected by the sensors. The corresponding images formed by the camera are acquired by a computer and then processed by an algorithm that computes the MTF. Finally, an investigation of measurement accuracy against other methods, such as the bar-target and spread-function methods, shows that our approach gives quite satisfactory results.
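As a rough illustration of the spread-function approach mentioned in this abstract, the MTF can be computed as the normalized Fourier magnitude of the line spread function (the derivative of an edge spread function). The sketch below is not the paper's implementation; the Gaussian edge, sample spacing, and Hanning window are illustrative assumptions:

```python
import numpy as np
from math import erf

def mtf_from_esf(esf, dx=1.0):
    """MTF from a sampled edge spread function (ESF).

    The line spread function (LSF) is the derivative of the ESF; the
    MTF is the magnitude of its Fourier transform, normalized to 1 at
    zero spatial frequency. A Hanning window suppresses edge ringing.
    """
    lsf = np.gradient(esf, dx) * np.hanning(len(esf))
    spectrum = np.abs(np.fft.rfft(lsf))
    freqs = np.fft.rfftfreq(len(esf), d=dx)  # cycles per unit length
    return freqs, spectrum / spectrum[0]

# Synthetic edge blurred by a Gaussian PSF (sigma = 0.5 units).
x = np.linspace(-5.0, 5.0, 512)
sigma = 0.5
esf = np.array([0.5 * (1.0 + erf(xi / (sigma * 2 ** 0.5))) for xi in x])
freqs, mtf = mtf_from_esf(esf, dx=x[1] - x[0])
# For a Gaussian PSF the MTF is approximately exp(-2 * (pi * sigma * f)**2).
```

For a Gaussian blur the computed curve should closely track the analytic Gaussian MTF, which gives a quick sanity check of any measurement pipeline.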
Defence Technology Strategy for the Demands of the 21st Century
2006-10-01
understanding of human capability in the CBM role. Ownership of the intellectual property behind algorithms may be sovereign, but implementation will... Spectrum and bandwidth management: synchronisation schemes; coding schemes; modulation techniques; access schemes; smart spectrum usage; low probability of intercept; cross-layer technologies to...
The Use of Smart Glasses for Surgical Video Streaming.
Hiranaka, Takafumi; Nakanishi, Yuta; Fujishiro, Takaaki; Hida, Yuichi; Tsubosaka, Masanori; Shibata, Yosaku; Okimura, Kenjiro; Uemoto, Harunobu
2017-04-01
Observation of surgical procedures performed by experts is extremely important for acquisition and improvement of surgical skills. Smart glasses are small computers, which comprise a head-mounted monitor and video camera, and can be connected to the internet. They can be used for remote observation of surgeries by video streaming. Although Google Glass is the most commonly used smart glasses for medical purposes, it is still unavailable commercially and has some limitations. This article reports the use of a different type of smart glasses, InfoLinker, for surgical video streaming. InfoLinker has been commercially available in Japan for industrial purposes for more than 2 years. It is connected to a video server via wireless internet directly, and streaming video can be seen anywhere an internet connection is available. We have attempted live video streaming of knee arthroplasty operations that were viewed at several different locations, including foreign countries, on a common web browser. Although the quality of video images depended on the resolution and dynamic range of the video camera, speed of internet connection, and the wearer's attention to minimize image shaking, video streaming could be easily performed throughout the procedure. The wearer could confirm the quality of the video as the video was being shot by the head-mounted display. The time and cost for observation of surgical procedures can be reduced by InfoLinker, and further improvement of hardware as well as the wearer's video shooting technique is expected. We believe that this can be used in other medical settings.
Evaluation of smart video for transit event detection : final report.
DOT National Transportation Integrated Search
2009-06-01
Transit agencies are increasingly using video cameras to fight crime and terrorism. As the volume of video data increases, the existing digital video surveillance systems provide the infrastructure only to capture, store and distribute video, while l...
Lee, Heng Yeong; Cai, Yufeng; Bi, Shuguang; Liang, Yen Nan; Song, Yujie; Hu, Xiao Matthew
2017-02-22
In this work, a novel fully autonomous photothermotropic material made by hybridization of the poly(N-isopropylacrylamide) (PNIPAM) hydrogel and antimony-tin oxide (ATO) is presented. In this photothermotropic system, the near-infrared (NIR)-absorbing ATO acts as a nanoheater to induce the optical switching of the hydrogel. This new passive smart window is characterized by excellent NIR shielding, a photothermally activated switching mechanism, enhanced response speed, and solar modulation ability. Systems with 0, 5, 10, and 15 atom % Sb-doped ATO in PNIPAM were investigated, and it was found that a PNIPAM/ATO nanocomposite can be photothermally activated. The 10 atom % Sb-doped PNIPAM/ATO exhibits the best response speed and solar modulation ability. Different film thicknesses and ATO contents affect the response rate and solar modulation ability. Structural stability tests over 15 cycles under continuous exposure to solar irradiation at 1 sun intensity demonstrated the performance stability of this photothermotropic system. We conclude that this novel photothermotropic hybrid can be used as a new generation of autonomous passive smart windows for climate-adaptable solar modulation.
Vision-based navigation in a dynamic environment for virtual human
NASA Astrophysics Data System (ADS)
Liu, Yan; Sun, Ji-Zhou; Zhang, Jia-Wan; Li, Ming-Chu
2004-06-01
Intelligent virtual humans are widely required in computer games, ergonomics software, virtual environments, and so on. We present a vision-based behavior modeling method to realize smart navigation in a dynamic environment. This behavior model is divided into three modules: vision, global planning, and local planning. Vision is the only channel through which the smart virtual actor gets information from the outside world. The global and local planning modules then use the A* and D* algorithms to find a way for the virtual human in a dynamic environment. Finally, experiments on our test platform (Smart Human System) verify the feasibility of this behavior model.
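The planning modules in this abstract rest on grid search; below is a minimal sketch of A* on a 4-connected occupancy grid. The grid layout, unit step costs, and Manhattan heuristic are illustrative assumptions, and D* replanning is omitted:

```python
import heapq

def a_star(grid, start, goal):
    """A* shortest path on a 4-connected grid (0 = free, 1 = blocked).

    Manhattan distance is an admissible heuristic for unit step costs.
    Returns the list of cells from start to goal, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    h = lambda c: abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    frontier = [(h(start), 0, start, None)]     # (f, g, cell, parent)
    parents, g_best = {}, {start: 0}
    while frontier:
        _, g, cell, parent = heapq.heappop(frontier)
        if cell in parents:
            continue                            # already expanded
        parents[cell] = parent
        if cell == goal:                        # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parents[cell]
            return path[::-1]
        r, c = cell
        for nb in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nb[0] < rows and 0 <= nb[1] < cols
                    and grid[nb[0]][nb[1]] == 0
                    and g + 1 < g_best.get(nb, float("inf"))):
                g_best[nb] = g + 1
                heapq.heappush(frontier, (g + 1 + h(nb), g + 1, nb, cell))
    return None

# A wall forces a detour around the blocked middle row.
grid = [[0, 0, 0],
        [1, 1, 0],
        [0, 0, 0]]
path = a_star(grid, (0, 0), (2, 0))
```

D* extends this idea by repairing the solution incrementally as the environment changes, which is why the paper pairs the two for a dynamic world.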
Moraitou, Marina; Pateli, Adamantia; Fotiou, Sotiris
2017-01-01
As access to health care is important to people's health, especially for vulnerable groups that need nursing over a long period of time, new studies in the human sciences argue that the health of the population depends less on the quality of health care, or on the amount of spending that goes into health care, and more heavily on the quality of everyday life. Smart home applications are designed to "sense" and monitor the health conditions of residents through a wide range of technological components (motion sensors, video cameras, wearable devices, etc.) and web-based services that support their wish to stay at home. In this work, we provide a review of the main technological, psychosocial/ethical, and economic challenges that the implementation of a Smart Health Caring Home raises.
Note: Simple hysteresis parameter inspector for camera module with liquid lens
NASA Astrophysics Data System (ADS)
Chen, Po-Jui; Liao, Tai-Shan; Hwang, Chi-Hung
2010-05-01
A method to inspect the hysteresis parameter is presented in this article. The hysteresis of the whole camera module with a liquid lens can be measured, rather than that of a single lens alone. Because variation in focal length influences image quality, we propose using the sharpness of images captured from the camera module for hysteresis evaluation. Experiments reveal that the profile of the sharpness hysteresis corresponds to the contact-angle characteristic of the liquid lens. It can therefore be inferred that the hysteresis of the camera module is induced by the contact angle of the liquid lens. An inspection takes only 20 s to complete; compared with other instruments, this method is thus better suited for integration into mass production lines for online quality assurance.
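The abstract does not specify which sharpness measure is used; one common focus metric is the variance of the discrete Laplacian, sketched below on a synthetic image. The checkerboard test pattern and 3x3 box blur (standing in for defocus) are illustrative assumptions:

```python
import numpy as np

def laplacian_sharpness(img):
    """Image sharpness as the variance of the 4-neighbour Laplacian.

    A sharper image has stronger high-frequency content, hence a larger
    Laplacian variance. (This is one common choice, not necessarily the
    paper's metric.)
    """
    img = np.asarray(img, dtype=float)
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return lap.var()

def box_blur3(img):
    """3x3 mean filter over the valid region, used to mimic defocus."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    return sum(img[r:r + h - 2, c:c + w - 2]
               for r in range(3) for c in range(3)) / 9.0

# A checkerboard is maximally sharp; blurring must lower the metric.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
blurred = box_blur3(sharp)
```

Sweeping the lens drive voltage up and then down while recording this metric would trace out the sharpness hysteresis loop the paper describes.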
Zhao, Qiaole; Schelen, Ben; Schouten, Raymond; van den Oever, Rein; Leenen, René; van Kuijk, Harry; Peters, Inge; Polderdijk, Frank; Bosiers, Jan; Raspe, Marcel; Jalink, Kees; Geert Sander de Jong, Jan; van Geest, Bert; Stoop, Karel; Young, Ian Ted
2012-12-01
We have built an all-solid-state camera that is directly modulated at the pixel level for frequency-domain fluorescence lifetime imaging microscopy (FLIM) measurements. This novel camera eliminates the need for an image intensifier through the use of an application-specific charge coupled device design in a frequency-domain FLIM system. The first stage of evaluation for the camera has been carried out. Camera characteristics such as noise distribution, dark current influence, camera gain, sampling density, sensitivity, linearity of photometric response, and optical transfer function have been studied through experiments. We are able to do lifetime measurement using our modulated, electron-multiplied fluorescence lifetime imaging microscope (MEM-FLIM) camera for various objects, e.g., fluorescein solution, fixed green fluorescent protein (GFP) cells, and GFP-actin stained live cells. A detailed comparison of a conventional microchannel plate (MCP)-based FLIM system and the MEM-FLIM system is presented. The MEM-FLIM camera shows higher resolution and a better image quality. The MEM-FLIM camera provides a new opportunity for performing frequency-domain FLIM.
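In frequency-domain FLIM as described above, a mono-exponential lifetime is recovered from the measured phase shift and demodulation at the modulation frequency. A minimal sketch of the standard formulas, assuming an illustrative 40 MHz modulation frequency and the well-known ~4 ns lifetime of fluorescein:

```python
import numpy as np

def fd_flim_lifetimes(phase_rad, mod_depth, f_mod_hz):
    """Phase and modulation lifetimes in frequency-domain FLIM.

    For a mono-exponential decay measured at modulation frequency f:
        tau_phi = tan(phi) / omega
        tau_m   = sqrt(1 / m**2 - 1) / omega,   omega = 2 * pi * f
    and the two estimates agree.
    """
    omega = 2.0 * np.pi * f_mod_hz
    tau_phi = np.tan(phase_rad) / omega
    tau_m = np.sqrt(1.0 / mod_depth ** 2 - 1.0) / omega
    return tau_phi, tau_m

# Fluorescein (lifetime ~4 ns) at an assumed 40 MHz modulation frequency:
f_mod, tau = 40e6, 4e-9
omega = 2.0 * np.pi * f_mod
phase = np.arctan(omega * tau)                # ideal measured phase shift
m = 1.0 / np.sqrt(1.0 + (omega * tau) ** 2)   # ideal demodulation
tau_phi, tau_m = fd_flim_lifetimes(phase, m, f_mod)
```

Divergence between the two estimates on real data indicates a multi-exponential decay, which is part of what makes the comparison useful in practice.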
Liang, Xiao; Chen, Mei; Guo, Shumeng; Zhang, Lanying; Li, Fasheng; Yang, Huai
2017-11-22
Smart windows with controllable visible and near-infrared (NIR) light transmittance can significantly improve a building's energy efficiency and inhabitant comfort. However, most current smart window technology cannot achieve ideal solar control. Herein, we present a novel all-solution-processed hybrid micronano composite smart material that has four optical states to separately modulate the visible and NIR light transmittance through voltage and temperature, respectively. This dual-band optical modulation was achieved by constructing a phase-separated polymer framework, which contains microsized liquid crystal domains with a negative dielectric constant and tungsten-doped vanadium dioxide (W-VO2) nanocrystals (NCs). The film with 2.5 wt % W-VO2 NCs exhibits transparency under normal conditions, and the passage of visible light can be reversibly and actively regulated between 60.8% and 1.3% by an external applied voltage. Also, the transmittance of NIR light can be reversibly and passively modulated between 59.4% and 41.2% by temperature. Besides, the film also features easy all-solution processability, fast electro-optical (E-O) response time, high mechanical strength, and long-term stability. The as-prepared film provides new opportunities for next-generation smart window technology, and the proposed strategy is conducive to engineering novel hybrid inorganic-organic functional matter.
The imaging system design of three-line LMCCD mapping camera
NASA Astrophysics Data System (ADS)
Zhou, Huai-de; Liu, Jin-Guo; Wu, Xing-Xing; Lv, Shi-Liang; Zhao, Ying; Yu, Da
2011-08-01
In this paper, the authors first introduce the theory of the LMCCD (line-matrix CCD) mapping camera and the composition of its imaging system. Next, several pivotal designs of the imaging system are presented, such as the design of the focal plane module, video signal processing, the imaging system controller, and synchronous photography among the forward, nadir, and backward cameras and the line-matrix CCD of the nadir camera. Finally, test results of the LMCCD mapping camera imaging system are reported. The results are as follows: the precision of synchronous photography among the forward, nadir, and backward cameras is better than 4 ns, as is that of the line-matrix CCD of the nadir camera; the photography interval of the line-matrix CCD of the nadir camera satisfies the buffer requirements of the LMCCD focal plane module; the SNR measured in the laboratory is better than 95 for each CCD image under typical working conditions (solar incidence angle of 30°, earth-surface reflectivity of 0.3); and the temperature of the focal plane module is kept below 30 °C over a 15-minute working period. These results satisfy the requirements for synchronous photography, focal plane temperature control, and SNR, guaranteeing the precision of satellite photogrammetry.
TOPDAQ Acquisition Utility Beta version 1.0
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moreno, Mario; Barret, Keith
2010-01-07
This TOPDAQ Acquisition Utility uses 5 digital cameras mounted on a vertical pole, maintained in a vertical position using sensors and actuators, to take photographs of an RP-2 or RP-3 module, one camera for each row (4) and one in the center for driving, when the module is at 0 degrees, or facing the eastern horizon. These photographs, and other data collected at the same time the pictures are taken, are analyzed by the TOPAAP Analysis Utility. The TOPCAT system, implemented by the TOPDAQ Acquisition Utility and TOPAAP Analysis Utility programs, optimizes the alignment of each RP in a module on a parabolic trough solar collector array (SCA) to maximize the amount of solar energy intercepted by the solar receiver. The camera fixture and related hardware are mounted on a pickup truck and driven between rows in a parabolic trough solar power plant. An ultrasonic distance meter is used to maintain the correct distance between the cameras and the RP module. Along with the two leveling actuators, a third actuator is used to maintain a proper relative vertical position between the cameras and the RP module. The TOPDAQ Acquisition Utility facilitates file management by keeping track of which RP module's data is being taken and also controls the exposure levels for each camera to maintain a high contrast ratio in the photograph even as the available daylight changes throughout the day. The TOPCAT hardware and software support the current industry standard RP-2 and RP-3 module geometries.
A photoelastic modulator-based birefringence imaging microscope for measuring biological specimens
NASA Astrophysics Data System (ADS)
Freudenthal, John; Leadbetter, Andy; Wolf, Jacob; Wang, Baoliang; Segal, Solomon
2014-11-01
The photoelastic modulator (PEM) has been applied to a variety of polarimetric measurements. However, nearly all such applications use point measurements, where each point (spot) on the sample is measured one at a time. The main challenge in employing the PEM in a camera-based imaging instrument is that the PEM modulates too fast for typical cameras. The PEM modulates at tens of kHz; to capture the polarization information carried on the PEM's modulation frequency, the camera needs to be at least ten times faster, whereas the typical frame rates of common cameras are only tens or hundreds of frames per second. In this paper, we report a PEM-camera birefringence imaging microscope. We use the so-called stroboscopic illumination method to overcome the incompatibility between the high frequency of the PEM and the relatively slow frame rate of a camera: we trigger the LED light source using a field-programmable gate array (FPGA) in synchrony with the modulation of the PEM. We show the measurement results of several standard birefringent samples as part of the instrument calibration. Furthermore, we show results observed in two birefringent biological specimens, a human skin tissue that contains collagen and a slice of mouse brain that contains bundles of myelinated axonal fibers. Novel applications of this PEM-based birefringence imaging microscope in both research communities and industrial applications are being tested.
Thermal Management Architecture for Future Responsive Spacecraft
NASA Astrophysics Data System (ADS)
Bugby, D.; Zimbeck, W.; Kroliczek, E.
2009-03-01
This paper describes a novel thermal design architecture that enables satellites to be conceived, configured, launched, and operationally deployed very quickly. The architecture has been given the acronym SMARTS for Satellite Modular and Reconfigurable Thermal System and it involves four basic design rules: modest radiator oversizing, maximum external insulation, internal isothermalization and radiator heat flow modulation. The SMARTS philosophy is being developed in support of the DoD Operationally Responsive Space (ORS) initiative which seeks to drastically improve small satellite adaptability, deployability, and design flexibility. To illustrate the benefits of the philosophy for a prototypical multi-paneled small satellite, the paper describes a SMARTS thermal control system implementation that uses: panel-to-panel heat conduction, intra-panel heat pipe isothermalization, radiator heat flow modulation via a thermoelectric cooler (TEC) cold-biased loop heat pipe (LHP) and maximum external multi-layer insulation (MLI). Analyses are presented that compare the traditional "cold-biasing plus heater power" passive thermal design approach to the SMARTS approach. Plans for a 3-panel SMARTS thermal test bed are described. Ultimately, the goal is to incorporate SMARTS into the design of future ORS satellites, but it is also possible that some aspects of SMARTS technology could be used to improve the responsiveness of future NASA spacecraft. [22 CFR 125.4(b)(13) applicable]
Design of smart home gateway based on Wi-Fi and ZigBee
NASA Astrophysics Data System (ADS)
Li, Yang
2018-04-01
With the increasing demand for smarter home living, traditional smart home products have been unable to meet users' needs. Aiming at the complex wiring, high cost, and difficult operation of traditional smart home systems, this paper designs a home gateway for a smart home system based on Wi-Fi and ZigBee. The paper first gives a smart home system architecture based on a cloud server, Wi-Fi, and ZigBee. This architecture enables users to access the smart home system remotely over the Internet through the cloud server, or locally at home through Wi-Fi, while ZigBee provides flexible, low-cost wireless networking for home equipment. The paper analyzes the functional requirements of the home gateway and designs a modular hardware architecture based on the RT5350 wireless gateway module and the CC2530 ZigBee coordinator module. It also designs the gateway software, including the gateway master program and the ZigBee coordinator program. Finally, the smart home system and home gateway are tested in two network environments, internal and external. The test results show that the designed home gateway meets the requirements: it supports remote and local access, multiple users, and information security technology, and reports equipment status information in a timely manner.
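One core gateway task implied by this abstract is translating ZigBee sensor frames into messages that the cloud server or a phone app can consume. The frame layout, cluster codes, scaling, and JSON schema below are entirely hypothetical, for illustration only:

```python
import json

# Hypothetical 4-byte sensor frame: [device_id, cluster, value_hi, value_lo].
# The cluster codes and scaling are illustrative, not real ZigBee clusters.
CLUSTERS = {0x02: "temperature_c", 0x06: "on_off"}

def frame_to_json(frame):
    """Translate a ZigBee-style sensor frame into a JSON payload that a
    cloud server or phone app could consume."""
    device_id, cluster, hi, lo = frame
    raw = (hi << 8) | lo                       # big-endian 16-bit value
    value = raw / 100.0 if cluster == 0x02 else bool(raw)
    return json.dumps({"device": device_id,
                       "attribute": CLUSTERS.get(cluster, "unknown"),
                       "value": value})

msg = frame_to_json(bytes([0x01, 0x02, 0x09, 0xC4]))  # 0x09C4 = 2500 -> 25.0
```

In a real gateway this translation would sit between the CC2530 coordinator's serial interface and the cloud uplink handled by the Wi-Fi module.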
Astronaut Charles M. Duke Jr. in shadow of Lunar Module behind ultraviolet camera
1972-04-22
AS16-114-18439 (22 April 1972) --- Astronaut Charles M. Duke Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, commander, during the mission's second extravehicular activity (EVA). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Timing Calibration in PET Using a Time Alignment Probe
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moses, William W.; Thompson, Christopher J.
2006-05-05
We evaluate the Scanwell Time Alignment Probe for performing the timing calibration for the LBNL Prostate-Specific PET Camera. We calibrate the time delay correction factors for each detector module in the camera using two methods--using the Time Alignment Probe (which measures the time difference between the probe and each detector module) and using the conventional method (which measures the timing difference between all module-module combinations in the camera). These correction factors, which are quantized in 2 ns steps, are compared on a module-by-module basis. The values are in excellent agreement--of the 80 correction factors, 62 agree exactly, 17 differ bymore » 1 step, and 1 differs by 2 steps. We also measure on-time and off-time counting rates when the two sets of calibration factors are loaded into the camera and find that they agree within statistical error. We conclude that the performance using the Time Alignment Probe and conventional methods are equivalent.« less
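The conventional all-pairs method in this abstract can be cast as a small least-squares problem: each measured module-module difference constrains t_i - t_j, one module is pinned as the reference, and the solution is quantized to the 2 ns hardware step. The sketch below is a generic illustration under those assumptions, not the camera's actual calibration code:

```python
import numpy as np

def solve_module_delays(pairs, n_modules, step_ns=2.0):
    """Per-module delay corrections from pairwise timing differences.

    Each measurement d_ij ~ t_i - t_j gives one linear equation; module 0
    is pinned as the reference, the system is solved by least squares, and
    the result is quantized to the hardware step (2 ns in the paper).
    """
    rows, rhs = [], []
    for i, j, d in pairs:
        row = np.zeros(n_modules)
        row[i], row[j] = 1.0, -1.0
        rows.append(row)
        rhs.append(d)
    ref = np.zeros(n_modules)
    ref[0] = 1.0                      # t_0 = 0 (reference module)
    rows.append(ref)
    rhs.append(0.0)
    t, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return np.round(t / step_ns) * step_ns

# Noiseless synthetic delays (ns) recovered from all-pairs differences.
true_delays = np.array([0.0, 4.0, -2.0, 6.0])
pairs = [(i, j, true_delays[i] - true_delays[j])
         for i in range(4) for j in range(i + 1, 4)]
est = solve_module_delays(pairs, 4)
```

The probe method replaces the all-pairs measurements with one probe-to-module measurement per module, which shortens the equation system considerably.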
Astronaut Charles M. Duke, Jr., in shadow of Lunar Module behind ultraviolet camera
NASA Technical Reports Server (NTRS)
1972-01-01
Astronaut Charles M. Duke, Jr., lunar module pilot, stands in the shadow of the Lunar Module (LM) behind the ultraviolet (UV) camera which is in operation. This photograph was taken by astronaut John W. Young, mission commander, during the mission's second extravehicular activity (EVA-2). The UV camera's gold surface is designed to maintain the correct temperature. The astronauts set the prescribed angles of azimuth and elevation (here 14 degrees for photography of the large Magellanic Cloud) and pointed the camera. Over 180 photographs and spectra in far-ultraviolet light were obtained showing clouds of hydrogen and other gases and several thousand stars. The United States flag and Lunar Roving Vehicle (LRV) are in the left background. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (lm) 'Orion' to explore the Descartes highlands landing site on the Moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (csm) 'Casper' in lunar orbit.
MS Kavandi with camera in Service Module
2001-07-16
STS104-E-5125 (16 July 2001) --- Astronaut Janet L. Kavandi, STS-104 mission specialist, uses a camera as she floats through the Zvezda service module aboard the International Space Station (ISS). The five STS-104 crew members were visiting the orbital outpost to perform various tasks. The image was recorded with a digital still camera.
Son, Sanghyun; Baek, Yunju
2015-01-01
As society has developed, the number of vehicles has increased and road conditions have become complicated, increasing the risk of crashes. Therefore, a service that provides safe vehicle control and various types of information to the driver is urgently needed. In this study, we designed and implemented a real-time traffic information system and a smart camera device for smart driver assistance systems. We selected a commercial device for the smart driver assistance systems, and applied a computer vision algorithm to perform image recognition. For application to the dynamic region of interest, dynamic frame skip methods were implemented to perform parallel processing in order to enable real-time operation. In addition, we designed and implemented a model to estimate congestion by analyzing traffic information. The performance of the proposed method was evaluated using images of a real road environment. We found that the processing time improved by 15.4 times when all the proposed methods were applied in the application. Further, we found experimentally that there was little or no change in the recognition accuracy when the proposed method was applied. Using the traffic congestion estimation model, we also found that the average error rate of the proposed model was 5.3%. PMID:26295230
Generic Dynamic Environment Perception Using Smart Mobile Devices.
Danescu, Radu; Itu, Razvan; Petrovai, Andra
2016-10-17
The driving environment is complex and dynamic, and the attention of the driver is continuously challenged, so computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection on mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation, and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system.
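The paper's occupancy grid is particle-based and dynamic; as a simpler point of reference, the sketch below shows the classical static log-odds (binary Bayes) update that such grids generalize. The cell coordinates and log-odds increments are arbitrary illustrative choices:

```python
import numpy as np

def update_grid(log_odds, occ_cells, free_cells, l_occ=0.85, l_free=-0.4):
    """One measurement update of a log-odds occupancy grid.

    This is the classical static binary-Bayes update, shown for
    illustration only; the paper's particle-based grid additionally
    tracks cell dynamics (speed) for moving obstacles.
    """
    for r, c in occ_cells:
        log_odds[r, c] += l_occ     # evidence the cell is occupied
    for r, c in free_cells:
        log_odds[r, c] += l_free    # evidence the cell is free
    return log_odds

def occupancy_prob(log_odds):
    """Convert log-odds back to occupancy probability."""
    return 1.0 - 1.0 / (1.0 + np.exp(log_odds))

grid = np.zeros((10, 10))           # log-odds 0 == probability 0.5
for _ in range(3):                  # the same obstacle seen in 3 frames
    update_grid(grid, occ_cells=[(5, 5)], free_cells=[(5, 4)])
probs = occupancy_prob(grid)
```

Accumulating evidence over frames is what lets grid-based detectors reject the spurious single-frame segmentations that the bird's-eye view step can produce.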
NectarCAM, a camera for the medium sized telescopes of the Cherenkov telescope array
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Shayduk, M.
2017-01-01
NectarCAM is a camera proposed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA), which covers the core energy range of 100 GeV to 30 TeV. It has a modular design and is based on the NECTAr chip, at the heart of which is a GHz-sampling switched capacitor array and a 12-bit analog-to-digital converter. The camera will be equipped with 265 seven-photomultiplier modules, covering a field of view of 8 degrees. Each module includes photomultiplier bases, high-voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. The recorded events last between a few nanoseconds and tens of nanoseconds. The expected performance of the camera is discussed. Prototypes of NectarCAM components have been built to validate the design. Preliminary results of a 19-module mini-camera are presented, as well as future plans for building and testing a full-size camera.
Smart Materials and Structures-Smart Wing. Volumes 1, 2, 3 and 4
1998-12-01
…repeatable fashion when heat is applied. Therefore, once the pre-twist is successfully applied and the tube is installed in the model, heating the… …modules were operated and calibrated online by the PSI 8400 Control System. Because the transducer modules are extremely sensitive to temperature, a… …again substantiates that adaptive features tend to support each other, though not necessarily in a completely linear fashion, and essentially provide a…
Smart substrates: Making multi-chip modules smarter
NASA Astrophysics Data System (ADS)
Wunsch, T. F.; Treece, R. K.
1995-05-01
A novel multi-chip module (MCM) design and manufacturing methodology which utilizes active CMOS circuits in what is normally a passive substrate realizes the 'smart substrate' for use in highly testable, high-reliability MCMs. The active devices are used to test the bare substrate, diagnose assembly errors or integrated circuit (IC) failures that require rework, and improve the testability of the final MCM assembly. A static random access memory (SRAM) MCM has been designed and fabricated in the Sandia Microelectronics Development Laboratory in order to demonstrate the technical feasibility of this concept and to examine design and manufacturing issues which will ultimately determine the economic viability of this approach. The smart substrate memory MCM represents a first in MCM packaging. At the time the first modules were fabricated, no other company or MCM vendor had incorporated active devices in the substrate to improve manufacturability and testability, and thereby improve MCM reliability and reduce cost.
A long-range infrared remote-control system based on LPC1114
NASA Astrophysics Data System (ADS)
Ren, Yingjie; Guo, Kai; Xu, Xinni; Sun, Dayu; Wang, Li
2018-05-01
In view of the shortcomings such as the short control distance of traditional air-conditioner remote controllers on the market nowadays, and in keeping with the current "Cloud + Terminal" smart home model, an Internet-based smart home system is designed that makes full use of the simple and reliable LPC1114 chip. The controller is extended with a temperature-control module, a timing module and other modules. In actual tests, the system achieved reliable and stable remote control of the air conditioner, bringing great convenience to people's lives.
Sub-surface defect detection by using active thermography and advanced image edge detection
NASA Astrophysics Data System (ADS)
Tse, Peter W.; Wang, Gaochao
2017-05-01
Active or pulsed thermography is a popular non-destructive testing (NDT) tool for inspecting the integrity and anomaly of industrial equipment. One recent research trend in active thermography is automating the process of detecting hidden defects. To date, human effort is still required to adjust the temperature intensity of the thermo-camera in order to visually observe the difference between the cooling rate of a normal target and that of a target with a sub-surface crack inside it. To avoid tedious human-visual inspection and minimize human-induced error, this paper reports the design of an automatic method capable of detecting sub-surface defects. The method combines active thermography, machine-vision edge detection and a smart algorithm. An infrared thermo-camera captured a series of temporal pictures after the inspected target was slightly heated by flash lamps. The Canny edge detector was then employed to automatically extract defect-related images from the captured pictures. The captured temporal pictures were preprocessed by a bank of Canny edge detectors, and a smart algorithm was used to reconstruct the whole sequence of image signals. During these processes, noise and irrelevant background in the pictures were removed, and consequently the contrast of the edges of defective areas was highlighted. The designed automatic method was verified on real pipe specimens containing sub-surface cracks. With this smart method, the edges of cracks can be revealed visually without manual adjustment of the thermo-camera settings, avoiding the tedious process of manually adjusting the colour contrast and pixel intensity to reveal defects.
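The edge-extraction step on a thermal frame can be sketched with a simple gradient-magnitude detector. This is a simplified stand-in for the Canny detector used in the paper (no hysteresis or non-maximum suppression), and the threshold is an assumption:

```python
import numpy as np

def conv2_valid(img, k):
    """Plain 2-D valid-mode convolution (kernel flipped)."""
    kh, kw = k.shape
    kf = k[::-1, ::-1]
    H, W = img.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = (img[i:i + kh, j:j + kw] * kf).sum()
    return out

def edge_map(img, thresh=1.0):
    """Binary edge map of a thermal frame from Sobel gradient
    magnitude, a simplified stand-in for the Canny detector."""
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)
    gx = conv2_valid(img.astype(float), kx)
    gy = conv2_valid(img.astype(float), kx.T)
    return np.hypot(gx, gy) > thresh
```

A production pipeline would instead call a full Canny implementation (e.g. OpenCV's `Canny`) per frame and combine the per-frame edge maps across the cooling sequence.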
Fringe projection profilometry with portable consumer devices
NASA Astrophysics Data System (ADS)
Liu, Danji; Pan, Zhipeng; Wu, Yuxiang; Yue, Huimin
2018-01-01
A fringe projection profilometry (FPP) system using portable consumer devices is attractive because it can realize optical three-dimensional (3D) measurement for ordinary consumers in their daily lives. We demonstrate an FPP system using the camera in a smart mobile phone and a digital consumer mini projector. In our experiment testing the smartphone (iPhone 7) camera performance, the rear-facing camera of the iPhone 7 gives the FPP a fringe contrast ratio of 0.546, a nonlinear carrier phase aberration value of 0.6 rad, a nonlinear phase error of 0.08 rad and an RMS random phase error of 0.033 rad. In contrast, the FPP using an industrial camera has a fringe contrast ratio of 0.715, a nonlinear carrier phase aberration value of 0.5 rad, a nonlinear phase error of 0.05 rad and an RMS random phase error of 0.011 rad. Good performance is achieved by the FPP composed of an iPhone 7 and a mini projector. 3D information of an adult-sized facemask is also measured using the FPP built from portable consumer devices. After system calibration, the absolute 3D information of the facemask is obtained. The measured results are in good agreement with those obtained in a traditional way. Our results show that it is possible to use portable consumer devices to construct a good FPP system, which is useful for ordinary people to obtain 3D information in their daily lives.
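The abstract does not state which phase-retrieval algorithm was used; a common choice in FPP is standard four-step phase shifting, sketched here as an illustration:

```python
import numpy as np

def make_fringes(phi, a=100.0, b=50.0):
    """Synthesize four fringe images I_k = A + B*cos(phi + k*pi/2)
    for a given phase map phi (background A and modulation B assumed)."""
    return [a + b * np.cos(phi + k * np.pi / 2) for k in range(4)]

def four_step_phase(i1, i2, i3, i4):
    """Wrapped phase from four fringe images shifted by pi/2:
    phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)
```

The recovered phase is wrapped to (-pi, pi]; a real measurement would follow this with phase unwrapping and the phase-to-height calibration mentioned in the abstract.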
A compact 16-module camera using 64-pixel CsI(Tl)/Si p-i-n photodiode imaging modules
NASA Astrophysics Data System (ADS)
Choong, W.-S.; Gruber, G. J.; Moses, W. W.; Derenzo, S. E.; Holland, S. E.; Pedrali-Noy, M.; Krieger, B.; Mandelli, E.; Meddeler, G.; Wang, N. W.; Witt, E. K.
2002-10-01
We present a compact, configurable scintillation camera employing a maximum of 16 individual 64-pixel imaging modules resulting in a 1024-pixel camera covering an area of 9.6 cm/spl times/9.6 cm. The 64-pixel imaging module consists of optically isolated 3 mm/spl times/3 mm/spl times/5 mm CsI(Tl) crystals coupled to a custom array of Si p-i-n photodiodes read out by a custom integrated circuit (IC). Each imaging module plugs into a readout motherboard that controls the modules and interfaces with a data acquisition card inside a computer. For a given event, the motherboard employs a custom winner-take-all IC to identify the module with the largest analog output and to enable the output address bits of the corresponding module's readout IC. These address bits identify the "winner" pixel within the "winner" module. The peak of the largest analog signal is found and held using a peak detect circuit, after which it is acquired by an analog-to-digital converter on the data acquisition card. The camera is currently operated with four imaging modules in order to characterize its performance. At room temperature, the camera demonstrates an average energy resolution of 13.4% full-width at half-maximum (FWHM) for the 140-keV emissions of /sup 99m/Tc. The system spatial resolution is measured using a capillary tube with an inner diameter of 0.7 mm and located 10 cm from the face of the collimator. Images of the line source in air exhibit average system spatial resolutions of 8.7- and 11.2-mm FWHM when using an all-purpose and high-sensitivity parallel hexagonal holes collimator, respectively. These values do not change significantly when an acrylic scattering block is placed between the line source and the camera.
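The winner-take-all selection performed by the motherboard IC can be mimicked in software as a sketch (the array shapes are assumptions; the real system does this in analog hardware before digitization):

```python
import numpy as np

def winner_take_all(module_outputs):
    """Pick the imaging module with the largest analog output and the
    winning pixel within it, mirroring the winner-take-all IC that
    enables only the winning module's address bits.
    module_outputs: (n_modules, n_pixels) array of analog amplitudes."""
    module_outputs = np.asarray(module_outputs, float)
    winner = int(module_outputs.max(axis=1).argmax())
    pixel = int(module_outputs[winner].argmax())
    return winner, pixel, module_outputs[winner, pixel]
```

The returned amplitude corresponds to the peak that the peak-detect circuit holds for the ADC on the data acquisition card.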
LAMOST CCD camera-control system based on RTS2
NASA Astrophysics Data System (ADS)
Tian, Yuan; Wang, Zheng; Li, Jian; Cao, Zi-Huang; Dai, Wei; Wei, Shou-Lin; Zhao, Yong-Heng
2018-05-01
The Large Sky Area Multi-Object Fiber Spectroscopic Telescope (LAMOST) is the largest existing spectroscopic survey telescope, having 32 scientific charge-coupled-device (CCD) cameras for acquiring spectra. Stability and automation of the camera-control software are essential, but cannot be provided by the existing system. The Remote Telescope System 2nd Version (RTS2) is an open-source and automatic observatory-control system. However, all previous RTS2 applications were developed for small telescopes. This paper focuses on implementation of an RTS2-based camera-control system for the 32 CCDs of LAMOST. A virtual camera module inherited from the RTS2 camera module is built as a device component working on the RTS2 framework. To improve the controllability and robustness, a virtualized layer is designed using the master-slave software paradigm, and the virtual camera module is mapped to the 32 real cameras of LAMOST. The new system is deployed in the actual environment and experimentally tested. Finally, multiple observations are conducted using this new RTS2-framework-based control system. The new camera-control system is found to satisfy the requirements for automatic camera control in LAMOST. This is the first time that RTS2 has been applied to a large telescope, and provides a referential solution for full RTS2 introduction to the LAMOST observatory control system.
Seam tracking with adaptive image capture for fine-tuning of a high power laser welding process
NASA Astrophysics Data System (ADS)
Lahdenoja, Olli; Säntti, Tero; Laiho, Mika; Paasio, Ari; Poikonen, Jonne K.
2015-02-01
This paper presents the development of methods for real-time fine-tuning of a high power laser welding process of thick steel by using a compact smart camera system. When performing welding in butt-joint configuration, the laser beam's location needs to be adjusted exactly according to the seam line in order to allow the injected energy to be absorbed uniformly into both steel sheets. In this paper, on-line extraction of seam parameters is targeted by taking advantage of a combination of dynamic image intensity compression, image segmentation with a focal-plane processor ASIC, and Hough transform on an associated FPGA. Additional filtering of Hough line candidates based on temporal windowing is further applied to reduce unrealistic frame-to-frame tracking variations. The proposed methods are implemented in Matlab by using image data captured with adaptive integration time. The simulations are performed in a hardware oriented way to allow real-time implementation of the algorithms on the smart camera system.
Modulated CMOS camera for fluorescence lifetime microscopy.
Chen, Hongtao; Holst, Gerhard; Gratton, Enrico
2015-12-01
Widefield frequency-domain fluorescence lifetime imaging microscopy (FD-FLIM) is a fast and accurate method to measure the fluorescence lifetime of entire images. However, the complexity and high costs involved in construction of such a system limit the extensive use of this technique. PCO AG recently released the first luminescence lifetime imaging camera based on a high-frequency modulated CMOS image sensor, QMFLIM2. Here we tested the camera and provide operational procedures to calibrate it and to improve the accuracy using corrections necessary for image analysis. With its flexible input/output options, we are able to use a modulated laser diode or a 20 MHz pulsed white supercontinuum laser as the light source. The output of the camera consists of a stack of modulated images that can be analyzed by the SimFCS software using the phasor approach. The nonuniform system response across the image sensor must be calibrated at the pixel level. This pixel calibration is crucial and is needed for every camera setting, e.g. modulation frequency and exposure time. A significant dependency of the modulation signal on the intensity was also observed, and hence an additional calibration is needed for each pixel depending on the pixel intensity level. These corrections are important not only for the fundamental frequency, but also for the higher harmonics when using the pulsed supercontinuum laser. With these post-acquisition corrections, the PCO CMOS-FLIM camera can be used for various biomedical applications requiring large frames and high-speed acquisition. © 2015 Wiley Periodicals, Inc.
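The phasor analysis of a modulated image stack can be sketched as follows; this is a minimal numpy version of the first-harmonic phasor computation (not the SimFCS implementation), and the homodyne sampling model used to synthesize test data is an assumption:

```python
import numpy as np

def phasor(stack):
    """First-harmonic phasor coordinates (g, s) per pixel from K images
    sampled at equally spaced modulation phases 2*pi*k/K."""
    stack = np.asarray(stack, float)
    k = stack.shape[0]
    ph = 2 * np.pi * np.arange(k) / k
    total = stack.sum(axis=0)
    g = np.tensordot(np.cos(ph), stack, axes=1) / total
    s = np.tensordot(np.sin(ph), stack, axes=1) / total
    return g, s

def phase_lifetime(g, s, mod_freq_hz):
    """Single-exponential phase lifetime: tau = tan(phi) / omega,
    with phi the phasor angle and omega = 2*pi*f."""
    return np.tan(np.arctan2(s, g)) / (2 * np.pi * mod_freq_hz)
```

On real data, the per-pixel calibration described in the abstract would be applied to (g, s) before converting to lifetimes.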
VizieR Online Data Catalog: Galactic outer disk: a field toward Tombaugh 1 (Carraro+, 2017)
NASA Astrophysics Data System (ADS)
Carraro, G.; Sales Silva, J. V.; Moni Bidin, C.; Vazquez, R. A.
2018-04-01
The region of interest has been observed with the Y4KCAM camera attached to the 1.0 m telescope operated by the SMARTS consortium (http://www.astro.yale.edu/smarts/) and located at the Cerro Tololo Inter-American Observatory (CTIO). This camera is equipped with an STA 4064x4064 CCD with 15 μm pixels, yielding a scale of 0.289"/pixel and a field of view (FOV) of 20'x20' at the Cassegrain focus of the CTIO 1.0 m telescope. The observational data were acquired on the night of 2008 January 30. We observed Landolt's SA 98 UBV(RI)KC standard star area (Landolt 1992AJ....104..372L) to tie our UBVRI instrumental system to the standard system. The average seeing was 1.0". During the nights of 2010 January 5, 6, 9, and 10, we observed 40 stars of the field toward the open cluster Tombaugh 1 (10 stars from boxes A and B, 11 stars from box C, and 9 stars from box D) on Cerro Manqui at the Las Campanas Observatory using the Inamori-Magellan Areal Camera & Spectrograph (IMACS; Dressler et al. 2006SPIE.6269E..0FD), attached to the 6.5 m Magellan Telescope. (7 data files).
Harper, Jason
2018-03-02
Jason Harper, an electrical engineer in Argonne National Laboratory's EV-Smart Grid Interoperability Center, discusses his SpEC Module invention that will enable fast charging of electric vehicles in under 15 minutes. The module has been licensed to BTCPower.
A New Digital Imaging and Analysis System for Plant and Ecosystem Phenological Studies
NASA Astrophysics Data System (ADS)
Ramirez, G.; Ramirez, G. A.; Vargas, S. A., Jr.; Luna, N. R.; Tweedie, C. E.
2015-12-01
Over the past decade, environmental scientists have increasingly used low-cost sensors and custom software to gather and analyze environmental data. Included in this trend has been the use of imagery from field-mounted static digital cameras. Published literature has highlighted the challenges scientists have encountered with poor and problematic camera performance and power consumption, limited data download and wireless communication options, the general ruggedness of off-the-shelf camera solutions, and time-consuming and hard-to-reproduce digital image analysis options. Data loggers and sensors are typically limited to data storage in situ (requiring manual downloading) and/or expensive data streaming options. Here we highlight the features and functionality of a newly invented camera/data logger system and coupled image analysis software suited to plant and ecosystem phenological studies (patent pending). The camera has resulted from several years of development and prototype testing supported by several grants funded by the US NSF. These inventions have several unique features and functions and have been field tested in desert, arctic, and tropical rainforest ecosystems. The system can be used to acquire imagery/data from static and mobile platforms. Data is collected, preprocessed, and streamed to the cloud without the need for an external computer, and the system can run for extended time periods. The camera module is capable of acquiring RGB, IR, and thermal (LWIR) data and storing it in a variety of formats including RAW. The system is fully customizable with a wide variety of passive and smart sensors. The camera can be triggered by state conditions detected by sensors and/or at selected time intervals. The device includes USB, Wi-Fi, Bluetooth, serial, GSM, Ethernet, and Iridium connections and can be connected to commercial cloud servers such as Dropbox. The complementary image analysis software is compatible with all popular operating systems.
Imagery can be viewed and analyzed in RGB, HSV, and l*a*b color space. Users can select a spectral index, which have been derived from published literature and/or choose to have analytical output reported as separate channel strengths for a given color space. Results of the analysis can be viewed in a plot and/or saved as a .csv file for additional analysis and visualization.
Modules to enhance smart lighting education
NASA Astrophysics Data System (ADS)
Bunch, Robert M.; Joenathan, Charles; Connor, Kenneth; Chouikha, Mohamed
2012-10-01
Over the past several years there has been a rapid advancement in solid state lighting applications brought on by the development of high efficiency light emitting diodes. Development of lighting devices, systems and products that meet the demands of the future lighting marketplace requires workers from many disciplines including engineers, scientists, designers and architects. The National Science Foundation has recognized this fact and established the Smart Lighting Engineering Research Center that promotes research leading to smart lighting systems, partners with industry to enhance innovation and educates a diverse, world-class workforce. The lead institution is Rensselaer Polytechnic Institute with core partners Boston University and The University of New Mexico. Outreach partners include Howard University, Morgan State University, and Rose-Hulman Institute of Technology. Because of the multidisciplinary nature of advanced smart lighting systems workers often have little or no formal education in basic optics, lighting and illumination. This paper describes the initial stages of the development of self-contained and universally applicable educational modules that target essential optics topics needed for lighting applications. The modules are intended to be easily incorporated into new and existing courses by a variety of educators and/or to be used in a series of stand-alone, asynchronous training exercises by new graduate students. The ultimate goal of this effort is to produce resources such as video lectures, video presentations of students-teaching-students, classroom activities, assessment tools, student research projects and laboratories integrated into learning modules. Sample modules and resources will be highlighted. Other outreach activities such as plans for coursework, undergraduate research, design projects, and high school enrichment programs will be discussed.
An autonomous sensor module based on a legacy CCTV camera
NASA Astrophysics Data System (ADS)
Kent, P. J.; Faulkner, D. A. A.; Marshall, G. F.
2016-10-01
A UK MoD funded programme into autonomous sensor arrays (SAPIENT) has been developing new, highly capable sensor modules together with a scalable modular architecture for control and communication. As part of this system there is a desire to also utilise existing legacy sensors. This paper reports upon the development of a SAPIENT-compliant sensor module using a legacy Closed-Circuit Television (CCTV) pan-tilt-zoom (PTZ) camera. The PTZ camera sensor provides three modes of operation. In the first mode, the camera is automatically slewed to acquire imagery of a specified scene area, e.g. to provide "eyes-on" confirmation for a human operator or for forensic purposes. In the second mode, the camera is directed to monitor an area of interest, with the zoom level automatically optimized for human detection at the appropriate range. Open source algorithms (using OpenCV) are used to automatically detect pedestrians; their real-world positions are estimated and communicated back to the SAPIENT central fusion system. In the third mode of operation a "follow" mode is implemented, where the camera maintains the detected person within the camera field-of-view without requiring an end-user to directly control the camera with a joystick.
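The real-world position estimation step can be sketched with a flat-ground back-projection of the detected pedestrian's foot pixel; the camera parameters and the flat-ground assumption are illustrative, not from the paper:

```python
import numpy as np

def pixel_to_ground(u, v, f, cx, cy, h, tilt):
    """Back-project a foot pixel (u, v) onto a flat ground plane for a
    camera at height h (m), pitched down by tilt (rad), with focal
    length f and principal point (cx, cy) in pixels.
    Returns (X_lateral, Y_forward) in metres, or None if the ray does
    not hit the ground ahead of the camera."""
    dx = (u - cx) / f
    dy = (v - cy) / f
    denom = dy * np.cos(tilt) + np.sin(tilt)
    if denom <= 0:
        return None  # pixel at or above the horizon
    s = h / denom  # scale along the viewing ray to the ground plane
    return s * dx, s * (np.cos(tilt) - dy * np.sin(tilt))
```

In the SAPIENT module the pedestrian bounding box would come from the OpenCV detector, and the resulting ground coordinates would be reported to the central fusion system.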
Energy monitoring and managing for electromobility purposes
NASA Astrophysics Data System (ADS)
Slanina, Zdenek; Docekal, Tomas
2016-09-01
This paper describes the design and implementation of a smart energy meter for use in charging stations (stands) that support electric vehicle (EV) charging, with possible embedding into current smart-building technology. The article includes the results of a survey of commercial energy-metering devices for buildings available in the Czech Republic, as well as an analysis of an energy meter for the given purpose; the described module, for example, was required to measure the voltage, current and frequency of the power network. Finally, a communication module with a common interface to the energy meter was designed to support standard communication between the charging station and the electric car. The pros and cons of integrating such a solution into smart buildings (home automation, parking garages) are also discussed [1, 2].
A Cloud-Based Architecture for Smart Video Surveillance
NASA Astrophysics Data System (ADS)
Valentín, L.; Serrano, S. A.; Oves García, R.; Andrade, A.; Palacios-Alonso, M. A.; Sucar, L. Enrique
2017-09-01
Turning a city into a smart city has attracted considerable attention. A smart city can be seen as a city that uses digital technology not only to improve the quality of people's lives, but also to have a positive impact on the environment and, at the same time, offer efficient and easy-to-use services. A fundamental aspect to be considered in a smart city is people's safety and welfare; therefore, having a good security system becomes a necessity, because it allows us to detect and identify potential risk situations and then take appropriate decisions to help people or even prevent criminal acts. In this paper we present an architecture for automated video surveillance based on the cloud computing schema, capable of acquiring a video stream from a set of cameras connected to the network, processing that information, detecting, labelling and highlighting security-relevant events automatically, storing the information, and providing situational awareness in order to minimize the response time needed to take the appropriate action.
System Security And Monitoring On Smart Home Using Android
NASA Astrophysics Data System (ADS)
Romadhon, A. S.
2018-01-01
A home security system is needed by homeowners who have many activities and, as a result, often leave the house without locking the door or even with the lights unlit. To overcome this, a system that can control and monitor the state of the various devices in the house, i.e. a smart home system, is urgently required. The working principle of this Android-based smart home is that when the homeowner sends a command using an Android device, the command is forwarded to the microcontroller and then executed according to predetermined parameters; for example, a light can be turned off and on using the Android app. In this study, testing was conducted on a smart home prototype equipped with light bulbs, odour sensors, heat sensors, ultrasonic sensors, an LDR, a buzzer and a camera. The test results indicate that the application is able to control all the home-appliance sensors well.
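The command flow described above (app sends a command, controller executes it against predetermined parameters) can be sketched as follows; the command names and the device-state table are hypothetical, not from the paper:

```python
# Hypothetical command table: (device, action) -> (state key, value).
COMMANDS = {
    ("LIGHT", "ON"): ("light", True),
    ("LIGHT", "OFF"): ("light", False),
    ("BUZZER", "ON"): ("buzzer", True),
    ("BUZZER", "OFF"): ("buzzer", False),
}

def execute(message, state):
    """Parse a text command sent from the Android app and apply it to
    the controller's device-state table; unknown commands are rejected."""
    key = tuple(message.strip().upper().split())
    if key not in COMMANDS:
        return False
    device, value = COMMANDS[key]
    state[device] = value
    return True
```

On the real prototype the microcontroller would additionally read back sensor values (heat, ultrasonic, LDR) for the monitoring side of the system.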
Image quality testing of assembled IR camera modules
NASA Astrophysics Data System (ADS)
Winters, Daniel; Erichsen, Patrik
2013-10-01
Infrared (IR) camera modules for the LWIR (8-12 μm) that combine IR imaging optics with microbolometer focal plane array (FPA) sensors with readout electronics are increasingly becoming a mass-market product. At the same time, steady improvements in sensor resolution in the higher-priced markets raise the requirements on the imaging performance of objectives and on proper alignment between objective and FPA. This puts pressure on camera manufacturers and system integrators to assess the image quality of finished camera modules in a cost-efficient and automated way for quality control or during end-of-line testing. In this paper we present recent development work done in the field of image quality testing of IR camera modules. This technology provides a wealth of additional information in contrast to more traditional test methods like the minimum resolvable temperature difference (MRTD), which give only a subjective overall test result. Parameters that can be measured include image quality via the modulation transfer function (MTF), for broadband or with various bandpass filters, on- and off-axis, and optical parameters such as effective focal length (EFL) and distortion. If the camera module allows for refocusing the optics, additional parameters like best focus plane, image plane tilt, auto-focus quality, chief ray angle etc. can be characterized. Additionally, the homogeneity and response of the sensor with the optics can be characterized in order to calculate the appropriate tables for non-uniformity correction (NUC). The technology can also be used to control active alignment methods during mechanical assembly of optics to high-resolution sensors. Other important points that are discussed are the flexibility of the technology to test IR modules with different form factors and electrical interfaces and, last but not least, its suitability for fully automated measurements in mass production.
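The MTF measurement mentioned above can be sketched with the standard edge-based procedure: differentiate a measured edge spread function (ESF) to get the line spread function (LSF), then take the normalized Fourier magnitude. This is a generic illustration, not the paper's test-station implementation:

```python
import numpy as np

def lsf_from_edge(esf):
    """Line spread function as the discrete derivative of a measured
    edge spread function (one row across a dark/bright edge)."""
    return np.diff(esf)

def mtf(lsf, pixel_pitch_mm=1.0):
    """MTF as the normalized magnitude of the Fourier transform of the
    line spread function; frequencies returned in cycles/mm."""
    m = np.abs(np.fft.rfft(lsf))
    m = m / m[0]
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)
    return freqs, m
```

A perfectly sharp edge yields a flat MTF of 1, while any optical blur pulls the high-frequency response down, which is exactly what the module test uses to judge objective quality and alignment.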
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides for three dimensional viewing and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant speed, or increasing or decreasing. Other parameters include pan, tilt, slide, raise or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses are provided to facilitate three dimensional viewing, hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
Optical smart card using semipassive communication.
Glaser, I; Green, Shlomo; Dimkov, Ilan
2006-03-15
An optical secure short-range communication system is presented. The mobile unit (optical smart card) of this system utilizes a retroreflector with an optical modulator, using light from the stationary unit; this mobile unit has very low power consumption and can be as small as a credit card. Such optical smart cards offer better security than RF-based solutions, yet do not require physical contact. Results from a feasibility study model are included.
Optical smart card using semipassive communication
NASA Astrophysics Data System (ADS)
Glaser, I.; Green, Shlomo; Dimkov, Ilan
2006-03-01
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on how one's credit history will affect one's credit future is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to make participants…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on what homeownership is all about is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to familiarize participants with the process for…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module, an introduction to credit, is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to decide when and how to use…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on how to keep track of one's money is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to prepare a personal…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on knowing what one is borrowing before buying is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to familiarize participants with the…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on why one should save is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to recognize the importance of saving…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module, an introduction to bank services, is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to build a relationship with…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on one's rights as a consumer is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to become familiar with their…
A Ground-Based Near Infrared Camera Array System for UAV Auto-Landing in GPS-Denied Environment.
Yang, Tao; Li, Guangpo; Li, Jing; Zhang, Yanning; Zhang, Xiaoqiang; Zhang, Zhuoyue; Li, Zhi
2016-08-30
This paper proposes a novel infrared camera array guidance system capable of tracking and providing the real-time position and speed of a fixed-wing unmanned aerial vehicle (UAV) during landing. The system includes three novel parts: (1) an infrared camera array and near-infrared laser lamp based cooperative long-range optical imaging module; (2) a large-scale outdoor camera array calibration module; and (3) a laser marker detection and 3D tracking module. Extensive automatic landing experiments with fixed-wing flight demonstrate that our infrared camera array system can guide the UAV to land safely and accurately in real time. Moreover, the measurement and control distance of our system is more than 1000 m. The experimental results also demonstrate that our system can be used for accurate automatic UAV landing in Global Positioning System (GPS)-denied environments.
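The guidance computation at the heart of such a system is multi-view triangulation of the detected laser marker. Below is a minimal linear (DLT) triangulation sketch with an invented two-camera geometry; it is not the paper's calibration or tracking pipeline:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two camera views.

    P1, P2: 3x4 projection matrices; x1, x2: (u, v) pixel coordinates.
    """
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]                     # dehomogenize

# Hypothetical geometry: two cameras 10 m apart looking down the runway.
K = np.array([[1000.0, 0, 640], [0, 1000.0, 480], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-10.0], [0.0], [0.0]])])

X_true = np.array([3.0, -2.0, 50.0])        # marker position in metres
x1 = P1 @ np.append(X_true, 1); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1); x2 = x2[:2] / x2[2]
X_est = triangulate(P1, P2, x1, x2)
```

With noise-free projections the SVD solves the system exactly; in practice the pixel detections carry noise and the calibration module's accuracy dominates.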
Digital micromirror device camera with per-pixel coded exposure for high dynamic range imaging.
Feng, Wei; Zhang, Fumin; Wang, Weijing; Xing, Wei; Qu, Xinghua
2017-05-01
In this paper, we overcome the limited dynamic range of the conventional digital camera and propose a method for realizing high dynamic range imaging (HDRI) with a novel programmable imaging system: a digital micromirror device (DMD) camera. The unique feature of the proposed method is that the spatial and temporal information of the incident light can be flexibly modulated, so that DMD pixel-level modulation always keeps the camera pixels at a reasonable exposure intensity. More importantly, it allows different light intensity control algorithms to be used in our programmable imaging system to achieve HDRI. We implement the optical system prototype, analyze the theory of per-pixel coded exposure for HDRI, and put forward an adaptive light intensity control algorithm that effectively modulates the light intensity to recover high dynamic range images. Experiments on different objects demonstrate the effectiveness of our method.
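The per-pixel recovery principle can be sketched numerically: if each pixel's effective exposure is known, dividing the measurement by it restores scene radiance. The exposure set and radiance values below are invented, and the simple per-pixel selection stands in for the paper's adaptive control algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
FULL_WELL = 255.0                           # sensor saturates here

# Scene radiance spanning roughly four decades (hypothetical values).
radiance = rng.uniform(0.01, 100.0, size=(8, 8))

# Conventional single exposure: the bright pixels clip.
t_fixed = 10.0
conventional = np.minimum(radiance * t_fixed, FULL_WELL)
clipped = int(np.sum(conventional >= FULL_WELL))

# Per-pixel coded exposure: pick, per pixel, the longest exposure from a
# small set that stays below saturation (stands in for DMD duty-cycle control).
exposures = np.array([0.1, 1.0, 10.0, 100.0])
t_pix = np.full_like(radiance, exposures[0])
for t in exposures:
    ok = radiance * t < FULL_WELL
    t_pix[ok] = t                           # longest non-saturating exposure

measured = radiance * t_pix                 # no pixel saturates now
recovered = measured / t_pix                # radiometric recovery
```

The fixed exposure clips a large fraction of pixels, while the coded-exposure measurement stays in range everywhere and the radiance is recovered by dividing out the per-pixel exposure.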
Performance prediction of optical image stabilizer using SVM for shaker-free production line
NASA Astrophysics Data System (ADS)
Kim, HyungKwan; Lee, JungHyun; Hyun, JinWook; Lim, Haekeun; Kim, GyuYeol; Moon, HyukSoo
2016-04-01
Recent smartphones adopt camera modules with an optical image stabilizer (OIS) to enhance imaging quality under handshaking conditions. However, compared to a non-OIS camera module, the cost of implementing the OIS module is still high. One reason is that the production line for the OIS camera module requires a highly precise shaker table in the final test process, which increases the unit cost of production. In this paper, we propose a framework for OIS quality prediction that is trained with a support vector machine on the following module-characterizing features: noise spectral density of the gyroscope, and optically measured linearity and cross-axis movement of the Hall sensor and actuator. The classifier was tested on an actual production line and achieved a recall rate of 88%.
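A pass/fail predictor of this kind can be sketched with scikit-learn; the feature distributions and the labeling rule below are invented stand-ins for real production-line data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# Synthetic module features (hypothetical units): gyro noise spectral
# density, Hall/actuator linearity error, cross-axis movement.
n = 400
X = np.column_stack([
    rng.normal(0.01, 0.004, n),             # gyro NSD
    rng.normal(0.5, 0.2, n),                # linearity error (%)
    rng.normal(0.3, 0.1, n),                # cross-axis movement
])
# Toy ground truth: a module "fails" when the combined normalized
# error exceeds a threshold.
y = (X[:, 0] / 0.01 + X[:, 1] / 0.5 + X[:, 2] / 0.3 > 3.5).astype(int)

# Standardize features, then fit an RBF-kernel SVM on a training split.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X[:300], y[:300])
acc = clf.score(X[300:], y[300:])
```

In the paper's setting the labels would come from shaker-table measurements on a calibration batch, after which new modules could be screened without the shaker.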
Analysis of a dielectric EAP as smart component for a neonatal respiratory simulator.
Tognarelli, S; Deri, L; Cecchi, F; Scaramuzzo, R; Cuttano, A; Laschi, C; Menciassi, A; Dario, P
2013-01-01
Nowadays, respiratory syndromes represent the most common neonatal pathology. Because respiratory assistance in newborns is a great challenge for neonatologists and nurses, simulation-based training is quickly becoming a valid means of clinical education for an optimal therapy outcome. Commercially available simulators are, however, not able to represent complex breathing patterns or to evaluate specific alterations. The purpose of this work has been to develop a smart, lightweight, compliant system with variable rigidity able to replicate the anatomical behavior of the neonatal lung, with the final aim of integrating the system into an innovative mechatronic simulator device. A smart-material-based system has been proposed and validated: Dielectric Electro-Active Polymers (DEAP), coupled to a purposely shaped silicone chamber, have been investigated as the active element of a compliance-changing simulator able to replicate both physiological and pathological lung properties. Two different tests have been performed using a two-component chamber (silicone shape coupled to PolyPower film), both as an isolated system and connected to an infant ventilator. By means of a pressure sensor mounted on the silicone structure, pressure values have been collected and compared for the active and passive PolyPower working configurations. The obtained results confirm a slight pressure decrease in the active configuration, which is in agreement with the film stiffness reduction under activation and demonstrates the real potential of DEAP for active volume changing in the proposed system.
Generic Dynamic Environment Perception Using Smart Mobile Devices
Danescu, Radu; Itu, Razvan; Petrovai, Andra
2016-01-01
The driving environment is complex and dynamic, and the attention of the driver is continuously challenged; therefore, computer-based assistance achieved by processing image and sensor data may increase traffic safety. While active sensors and stereovision have the advantage of obtaining 3D data directly, monocular vision is easy to set up and can benefit from the increasing computational power of smart mobile devices, and from the fact that almost all of them come with an embedded camera. Several driving assistance applications are available for mobile devices, but they are mostly targeted at simple scenarios and a limited range of obstacle shapes and poses. This paper presents a technique for generic, shape-independent, real-time obstacle detection on mobile devices, based on a dynamic, free-form 3D representation of the environment: the particle-based occupancy grid. Images acquired in real time from the smart mobile device's camera are processed by removing the perspective effect and segmenting the resulting bird's-eye view image to identify candidate obstacle areas, which are then used to update the occupancy grid. The tracked occupancy grid cells are grouped into obstacles depicted as cuboids having position, size, orientation and speed. The easy-to-set-up system is able to reliably detect most obstacles in urban traffic, and its measurement accuracy is comparable to a stereovision system. PMID:27763501
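The perspective-removal step amounts to applying a homography that maps the road plane to a bird's-eye view. A numpy-only sketch with invented point correspondences (an OpenCV pipeline would use the equivalent getPerspectiveTransform/warpPerspective calls):

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve the 3x3 homography mapping src -> dst (4 point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))
    return Vt[-1].reshape(3, 3)

def warp_points(H, pts):
    """Apply a homography to an (n, 2) array of points."""
    pts_h = np.column_stack([pts, np.ones(len(pts))])
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]

# Image-plane corners of a road trapezoid (hypothetical pixel coordinates)
# and the rectangle they map to in the bird's-eye view.
src = np.array([[300.0, 700], [980, 700], [560, 420], [720, 420]])
dst = np.array([[200.0, 600], [440, 600], [200, 0], [440, 0]])
H = homography_from_points(src, dst)
```

Warping the whole image with this H produces the bird's-eye view that is then segmented for candidate obstacle areas.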
Image Intensifier Modules For Use With Commercially Available Solid State Cameras
NASA Astrophysics Data System (ADS)
Murphy, Howard; Tyler, Al; Lake, Donald W.
1989-04-01
A modular approach to design has contributed greatly to the success of the family of machine vision video equipment produced by EG&G Reticon during the past several years. Internal modularity allows high-performance area (matrix) and line scan cameras to be assembled with two or three electronic subassemblies with very low labor costs, and permits camera control and interface circuitry to be realized by assemblages of various modules suiting the needs of specific applications. Product modularity benefits equipment users in several ways. Modular matrix and line scan cameras are available in identical enclosures (Fig. 1), which allows enclosure components to be purchased in volume for economies of scale and allows field replacement or exchange of cameras within a customer-designed system to be easily accomplished. The cameras are optically aligned (boresighted) at final test; modularity permits optical adjustments to be made with the same precise test equipment for all camera varieties. The modular cameras contain two, or sometimes three, hybrid microelectronic packages (Fig. 2). These rugged and reliable "submodules" perform all of the electronic operations internal to the camera except for the job of image acquisition performed by the monolithic image sensor. Heat produced by electrical power dissipation in the electronic modules is conducted through low resistance paths to the camera case by the metal plates, which results in a thermally efficient and environmentally tolerant camera with low manufacturing costs. A modular approach has also been followed in design of the camera control, video processor, and computer interface accessory called the Formatter (Fig. 3). This unit can be attached directly onto either a line scan or matrix modular camera to form a self-contained unit, or connected via a cable to retain the advantages inherent in a small, lightweight, and rugged image sensing component.
Available modules permit the bus-structured Formatter to be configured as required by a specific camera application. Modular line and matrix scan cameras incorporating sensors with fiber optic faceplates (Fig. 4) are also available. These units retain the advantages of interchangeability, simple construction, ruggedness, and optical precision offered by the more common lens input units. Fiber optic faceplate cameras are used for a wide variety of applications. A common usage involves mating of the Reticon-supplied camera to a customer-supplied intensifier tube for low light level and/or short exposure time situations.
Smart mobility solution with multiple input Output interface.
Sethi, Aartika; Deb, Sujay; Ranjan, Prabhat; Sardar, Arghya
2017-07-01
Smart wheelchairs are commonly used to provide a solution for mobility impairment. However, their usage is limited, primarily due to the high cost of the sensors required for giving input, a lack of adaptability to different categories of input, and limited functionality. In this paper we propose a smart mobility solution using a smartphone with inbuilt sensors (accelerometer, camera and speaker) as the input interface. An Emotiv EPOC+ is also used for motor-imagery-based input control synced with facial expressions in cases of extreme disability. Apart from traction, additional functions such as home security and automation are provided using the Internet of Things (IoT) and web interfaces. Although preliminary, our results suggest that this system can be used as an integrated and efficient solution for people suffering from mobility impairment, and indicate that decent accuracy is obtained for the overall system.
Yang, Guanci; Yang, Jing; Sheng, Weihua; Junior, Francisco Erivaldo Fernandes; Li, Shaobo
2018-05-12
Recent research has shown that the ubiquitous use of cameras and voice monitoring equipment in a home environment can raise privacy concerns and affect human mental health. This can be a major obstacle to the deployment of smart home systems for elderly or disabled care. This study uses a social robot to detect embarrassing situations. Firstly, we designed an improved neural network structure based on the You Only Look Once (YOLO) model to obtain feature information. By focusing on reducing area redundancy and computation time, we proposed a bounding-box merging algorithm based on region proposal networks (B-RPN), to merge the areas that have similar features and determine the borders of the bounding box. Thereafter, we designed a feature extraction algorithm based on our improved YOLO and B-RPN, called F-YOLO, for our training datasets, and then proposed a real-time object detection algorithm based on F-YOLO (RODA-FY). We implemented RODA-FY and compared models on our MAT social robot. Secondly, we considered six types of situations in smart homes, and developed training and validation datasets, containing 2580 and 360 images, respectively. Meanwhile, we designed three types of experiments with four types of test datasets composed of 960 sample images. Thirdly, we analyzed how a different number of training iterations affects our prediction estimation, and then we explored the relationship between recognition accuracy and learning rates. Our results show that our proposed privacy detection system can recognize designed situations in the smart home with an acceptable recognition accuracy of 94.48%. Finally, we compared the results among RODA-FY, Inception V3, and YOLO, which indicate that our proposed RODA-FY outperforms the other comparison models in recognition accuracy. PMID:29757211
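The bounding-box merging idea can be illustrated with a purely geometric greedy merge; this is a simplified stand-in for the paper's feature-based B-RPN, with invented boxes and threshold:

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def merge_boxes(boxes, thresh=0.3):
    """Greedily merge overlapping boxes into their common bounding box."""
    boxes = [list(b) for b in boxes]
    merged = True
    while merged:
        merged = False
        for i in range(len(boxes)):
            for j in range(i + 1, len(boxes)):
                if iou(boxes[i], boxes[j]) > thresh:
                    a, b = boxes[i], boxes[j]
                    boxes[i] = [min(a[0], b[0]), min(a[1], b[1]),
                                max(a[2], b[2]), max(a[3], b[3])]
                    del boxes[j]
                    merged = True
                    break
            if merged:
                break
    return boxes

# Two overlapping proposals for one object plus one separate object.
proposals = [(10, 10, 50, 50), (12, 8, 55, 48), (200, 200, 240, 260)]
merged = merge_boxes(proposals)
```

The paper's B-RPN additionally merges on region-proposal features rather than on geometry alone, which is what reduces the redundancy and computation time.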
Implementation of smart phone video plethysmography and dependence on lighting parameters.
Fletcher, Richard Ribón; Chamberlain, Daniel; Paggi, Nicholas; Deng, Xinyue
2015-08-01
The remote measurement of heart rate (HR) and heart rate variability (HRV) via a digital camera (video plethysmography) has emerged as an area of great interest for biomedical and health applications. While a few implementations of video plethysmography have been demonstrated on smart phones under controlled lighting conditions, it has been challenging to create a general scalable solution due to the large variability in smart phone hardware performance, software architecture, and the variable response to lighting parameters. In this context, we present a self-contained smart phone implementation of video plethysmography for Android OS, which employs both stochastic and deterministic algorithms, and we use this to study the effect of lighting parameters (illuminance, color spectrum) on the accuracy of the remote HR measurement. Using two different phone models, we present the median HR error for five different video plethysmography algorithms under three different types of lighting (natural sunlight, compact fluorescent, and halogen incandescent) and variations in brightness. For most algorithms, we found the optimum light brightness to be in the range 1000-4000 lux and the optimum lighting types to be compact fluorescent and natural light. Moderate errors were found for most algorithms with some devices under conditions of low brightness (<500 lux) and high brightness (>4000 lux). Our analysis also identified camera frame rate jitter as a major source of variability and error across different phone models, but this can be largely corrected through non-linear resampling. Based on testing with six human subjects, our real-time Android implementation successfully predicted the measured HR with a median error of -0.31 bpm and an inter-quartile range of 2.1 bpm.
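The core HR estimation, including the jitter correction by resampling, can be sketched on synthetic data; the jitter magnitude and pulse amplitude below are invented:

```python
import numpy as np

fs = 30.0                                   # nominal camera frame rate (Hz)
hr_hz = 1.2                                 # simulated pulse: 72 bpm
duration = 20.0

# Simulate jittered frame timestamps (frame-rate jitter is a major error
# source on phones) and the mean green-channel value of a face region.
rng = np.random.default_rng(2)
t_jitter = np.arange(0, duration, 1 / fs) + rng.normal(0, 0.004, int(duration * fs))
t_jitter = np.sort(t_jitter)
green = 0.5 + 0.01 * np.sin(2 * np.pi * hr_hz * t_jitter)

# Correct the jitter by resampling onto a uniform time base.
t_uniform = np.arange(0, duration, 1 / fs)
green_u = np.interp(t_uniform, t_jitter, green)

# Heart rate = dominant frequency in the physiologically plausible band.
spec = np.abs(np.fft.rfft(green_u - green_u.mean()))
freqs = np.fft.rfftfreq(len(green_u), d=1 / fs)
band = (freqs >= 0.7) & (freqs <= 3.0)
hr_bpm = 60.0 * freqs[band][np.argmax(spec[band])]
```

The linear interpolation here is the simplest form of the non-linear resampling the paper uses to suppress jitter-induced spectral leakage.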
NASA Astrophysics Data System (ADS)
Szu, Harold; Hsu, Charles; Landa, Joseph; Cha, Jae H.; Krapels, Keith A.
2015-05-01
How can we design cameras that image selectively in Full Electro-Magnetic (FEM) spectra? Without selective imaging, we cannot use, for example, ordinary tourist cameras to see through fire, smoke, or other obscurants that contribute to creating a Visually Degraded Environment (VDE). This paper addresses a possible new design of selective-imaging cameras at the firmware level. The design is consistent with the physics of the irreversible thermodynamics of Boltzmann's molecular entropy. It enables imaging in appropriate FEM spectra for sensing through the VDE, and display in color spectra for the Human Visual System (HVS). We sense, within those spectra, the largest entropy values of obscurants such as fire and smoke. Then we apply a smart firmware implementation of Blind Source Separation (BSS) to separate all entropy sources associated with specific Kelvin temperatures. Finally, we recompose the scene using specific RGB colors constrained by the HVS, by up/down-shifting the Planck spectra at each pixel and time.
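The BSS step can be illustrated with a generic two-source separation on synthetic signals; this kurtosis-based sketch is not the authors' firmware implementation, and all signals and mixing values are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5000

# Two independent "entropy sources" (stand-ins for emission from two
# different Kelvin temperatures): a deterministic sine and uniform noise.
s1 = np.sin(np.linspace(0, 40 * np.pi, n))
s2 = rng.uniform(-1, 1, n)
S = np.vstack([s1, s2])

A = np.array([[0.8, 0.4], [0.3, 0.9]])      # unknown per-pixel mixing
X = A @ S                                   # observed mixtures

# Whiten the observations.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(X @ X.T / n)
Z = np.diag(d ** -0.5) @ E.T @ X

def kurt(y):
    return np.mean(y ** 4) - 3 * np.mean(y ** 2) ** 2

def rot(a):
    return np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])

# After whitening, only a rotation remains; choose the angle that
# maximizes non-Gaussianity (summed |kurtosis|) of the outputs.
best = max(np.linspace(0, np.pi, 361),
           key=lambda a: sum(abs(kurt(y)) for y in rot(a) @ Z))
Y = rot(best) @ Z                           # recovered sources (up to order/sign)
```

Each recovered signal should correlate strongly with one of the true sources, up to sign and ordering, which is the usual BSS ambiguity.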
Cai, Zhipeng; Luo, Kan; Liu, Chengyu; Li, Jianqing
2017-08-09
A smart electrocardiogram (ECG) garment system was designed for continuous, non-invasive and comfortable ECG monitoring. It mainly consists of four components: conductive textile electrodes, the garment, a flexible printed circuit board (FPCB)-based ECG processing module, and an Android application program. The conductive textile electrode and the FPCB-based ECG processing module (6.8 g, 55 mm × 53 mm × 5 mm) are identified as the two key techniques for improving the system's comfort and flexibility. Preliminary experimental results verified that circular textile electrodes, 40 mm in diameter with a 5 mm thick sponge, are best suited for long-term ECG monitoring applications. Tests on the whole system confirmed that the designed smart garment can obtain long-term ECG recordings with high signal quality.
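An ECG processing module of this kind typically needs QRS (R-peak) detection; below is a simplified Pan-Tompkins-style sketch on synthetic data, with all constants illustrative and not taken from the paper:

```python
import numpy as np

fs = 250                                    # sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(4)

# Synthetic ECG: a narrow Gaussian "R wave" every 0.8 s (75 bpm) plus noise.
peak_times = np.arange(0.5, 10, 0.8)
ecg = sum(np.exp(-((t - p) / 0.01) ** 2) for p in peak_times)
ecg = ecg + 0.02 * rng.normal(size=t.size)

# Simplified Pan-Tompkins-style pipeline: derivative, squaring,
# moving-window integration, threshold with a refractory period.
deriv = np.diff(ecg, prepend=ecg[0])
win = int(0.15 * fs)
energy = np.convolve(deriv ** 2, np.ones(win) / win, mode="same")
above = energy > 0.5 * energy.max()

beats = []
for i in np.flatnonzero(above):
    if not beats or i - beats[-1] > int(0.4 * fs):   # 0.4 s refractory
        beats.append(i)

bpm = 60.0 * (len(beats) - 1) / ((beats[-1] - beats[0]) / fs)
```

The refractory period prevents one QRS complex from being counted twice, a standard trick in beat detectors.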
NASA Tech Briefs, January 2012
NASA Technical Reports Server (NTRS)
2012-01-01
Contents of this issue are: (1) Energy-Based Tetrahedron Sensor for High-Temperature, High-Pressure Environments (2) Handheld Universal Diagnostic Sensor (3) Large-Area Vacuum Ultraviolet Sensors (4) Fiber Bragg Grating Sensor System for Monitoring Smart Composite Aerospace Structures (5) Health-Enabled Smart Sensor Fusion Technology (6) Extended-Range Passive RFID and Sensor Tags (7) Hybrid Collaborative Learning for Classification and Clustering in Sensor Networks (8) Self-Healing, Inflatable, Rigidizable Shelter (9) Improvements in Cold-Plate Fabrication (10) Technique for Radiometer and Antenna Array Calibration - TRAAC (11) Real-Time Cognitive Computing Architecture for Data Fusion in a Dynamic Environment (12) Programmable Digital Controller (13) Use of CCSDS Packets Over SpaceWire to Control Hardware (14) Key Decision Record Creation and Approval Module (15) Enhanced Graphics for Extended Scale Range (16) Debris Examination Using Ballistic and Radar Integrated Software (17) Data Distribution System (DDS) and Solar Dynamic Observatory Ground Station (SDOGS) (18) Integration Manager (19) Eclipse-Free-Time Assessment Tool for IRIS (20) Automated and Manual Rocket Crater Measurement Software (21) MATLAB Stability and Control Toolbox Trim and Static Stability Module (22) Patched Conic Trajectory Code (23) Ring Image Analyzer (24) SureTrak Probability of Impact Display (25) Implementation of a Non-Metallic Barrier in an Electric Motor (26) Multi-Mission Radioisotope Thermoelectric Generator Heat Exchangers for the Mars Science Laboratory Rover (27) Uniform Dust Distributor for Testing Radiative Emittance of Dust-Coated Surfaces (28) MicroProbe Small Unmanned Aerial System (29) Highly Stable and Active Catalyst for Sabatier Reactions (30) Better Proton-Conducting Polymers for Fuel-Cell Membranes (31) CCD Camera Lens Interface for Real-Time Theodolite Alignment (32) Peregrine 100-km Sounding Rocket Project (33) SOFIA Closed- and Open-Door Aerodynamic Analyses (34) 
Sonic Thermometer for High-Altitude Balloons (35) Near-Infrared Photon-Counting Camera for High-Sensitivity Observations (36) Integrated Optics Achromatic Nuller for Stellar Interferometry (37) High-Speed Digital Interferometry (38) Ultra-Miniature Lidar Scanner for Launch Range Data Collection (39) Shape and Color Features for Object Recognition Search (40) Explanation Capabilities for Behavior-Based Robot Control (41) A DNA-Inspired Encryption Methodology for Secure, Mobile Ad Hoc Networks (42) Quality Control Method for a Micro-Nano-Channel Microfabricated Device (43) Corner-Cube Retroreflector Instrument for Advanced Lunar Laser Ranging (44) Electrospray Collection of Lunar Dust (45) Fabrication of a Kilopixel Array of Superconducting Microcalorimeters with Microstripline Wiring; Spacecraft Attitude Tracking and Maneuver Using Combined Magnetic Actuators (46) Coherent Detector for Near-Angle Scattering and Polarization Characterization of Telescope Mirror Coatings
An indoor augmented reality mobile application for simulation of building evacuation
NASA Astrophysics Data System (ADS)
Sharma, Sharad; Jerripothula, Shanmukha
2015-03-01
Augmented reality enables people to remain connected with the physical environment they are in, and invites them to look at the world from new and alternative perspectives. There has been increasing interest in emergency evacuation applications for mobile devices, and nearly all smartphones these days are Wi-Fi and GPS enabled. In this paper, we propose a novel emergency evacuation system that helps people evacuate a building safely in an emergency situation, and further enhances knowledge and understanding of where the exits in the building are and of safe evacuation procedures. We applied mobile augmented reality (mobile AR) to create an application with the Unity 3D gaming engine. We show how the mobile AR application is able to display a 3D model of the building and an animation of people evacuating, using markers and a web camera. The system gives a visual representation of the building in 3D space, allowing people to see where the exits are through a smartphone or tablet. Pilot studies conducted with the system showed partial success and demonstrated the effectiveness of the application in emergency evacuation. Our computer vision methods give good results when the markers are close to the camera, but accuracy decreases when the markers are far away.
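The distance-dependent accuracy has a simple geometric explanation: a marker's apparent size in pixels shrinks with distance, so pixel quantization costs more range accuracy for far-away markers. A pinhole-model sketch with invented camera parameters:

```python
import numpy as np

f_px = 800.0          # focal length in pixels (hypothetical phone camera)
marker_m = 0.20       # printed marker is 20 cm wide

def apparent_width_px(distance_m):
    """Pinhole projection of the marker, quantized to whole pixels."""
    return np.floor(f_px * marker_m / distance_m)

def estimated_distance(width_px):
    """Invert the pinhole model from the measured pixel width."""
    return f_px * marker_m / width_px

# Quantization alone makes the range error grow with distance,
# matching the observed accuracy drop for far-away markers.
errors = []
for true_d in (0.7, 3.0, 9.0):
    w = apparent_width_px(true_d)
    errors.append(abs(estimated_distance(w) - true_d))
```

Real marker trackers also lose corner-localization accuracy at distance, which compounds this quantization effect.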
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on how to choose and keep a checking account is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to open and keep a…
ERIC Educational Resources Information Center
Federal Deposit Insurance Corp., Washington, DC.
This module on managing a credit card is one of ten in the Money Smart curriculum, and includes an instructor guide and a take-home guide. It was developed to help adults outside the financial mainstream enhance their money skills and create positive banking relationships. It is designed to enable participants to describe the costs and benefits of…
NASA Astrophysics Data System (ADS)
Pagnutti, Mary; Ryan, Robert E.; Cazenavette, George; Gold, Maxwell; Harlan, Ryan; Leggett, Edward; Pagnutti, James
2017-01-01
A comprehensive radiometric characterization of raw-data format imagery acquired with the Raspberry Pi 3 and V2.1 camera module is presented. The Raspberry Pi is a high-performance single-board computer designed to educate and solve real-world problems. This small computer supports a camera module that uses a Sony IMX219 8 megapixel CMOS sensor. This paper shows that scientific and engineering-grade imagery can be produced with the Raspberry Pi 3 and its V2.1 camera module. Raw imagery is shown to be linear with exposure and gain (ISO), which is essential for scientific and engineering applications. Dark frame, noise, and exposure stability assessments along with flat fielding results, spectral response measurements, and absolute radiometric calibration results are described. This low-cost imaging sensor, when calibrated to produce scientific quality data, can be used in computer vision, biophotonics, remote sensing, astronomy, high dynamic range imaging, and security applications, to name a few.
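The linearity claim can be checked with an ordinary least-squares fit of mean raw DN against exposure time; the numbers below are simulated stand-ins for measured channel means, not data from the paper:

```python
import numpy as np

# Simulated raw-channel means at several exposure times: a linear response
# plus a small dark pedestal and measurement noise (all values invented).
exposure_ms = np.array([1, 2, 4, 8, 16, 32], dtype=float)
dark_offset = 64.0                          # raw DN pedestal
gain = 20.0                                 # DN per ms at fixed ISO
rng = np.random.default_rng(5)
mean_dn = dark_offset + gain * exposure_ms + rng.normal(0, 0.5, exposure_ms.size)

# Linear least-squares fit; R^2 close to 1 confirms linearity with exposure.
slope, intercept = np.polyfit(exposure_ms, mean_dn, 1)
pred = slope * exposure_ms + intercept
ss_res = np.sum((mean_dn - pred) ** 2)
ss_tot = np.sum((mean_dn - mean_dn.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

In a real characterization, the fitted intercept estimates the dark pedestal and any significant departure of R² from 1 flags nonlinearity or saturation.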
Course Modules on Structural Health Monitoring with Smart Materials
ERIC Educational Resources Information Center
Shih, Hui-Ru; Walters, Wilbur L.; Zheng, Wei; Everett, Jessica
2009-01-01
Structural Health Monitoring (SHM) is an emerging technology that has multiple applications. SHM emerged from the wide field of smart structures, and it also encompasses disciplines such as structural dynamics, materials and structures, nondestructive testing, sensors and actuators, data acquisition, signal processing, and possibly much more. To…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harper, Jason
Jason Harper, an electrical engineer in Argonne National Laboratory's EV-Smart Grid Interoperability Center, discusses his SpEC Module invention that will enable fast charging of electric vehicles in under 15 minutes. The module has been licensed to BTCPower.
Eltoukhy, Moataz; Kelly, Adam; Kim, Chang-Young; Jun, Hyung-Pil; Campbell, Richard; Kuenze, Christopher
2016-01-01
Cost-effective, quantifiable assessment of lower extremity movement represents a potential improvement over standard tools for evaluating injury risk. Ten healthy participants completed three trials each of a drop jump, an overhead squat, and a single leg squat task. Peak hip and knee kinematics were assessed using an 8-camera BTS Smart 7000DX motion analysis system and the Microsoft Kinect® camera system. The agreement and consistency between both uncorrected and corrected Kinect kinematic variables and the BTS camera system were assessed using intraclass correlation coefficients. Peak sagittal plane kinematics measured using the Microsoft Kinect® camera system explained a significant amount of variance [Range(hip) = 43.5-62.8%; Range(knee) = 67.5-89.6%] in peak kinematics measured using the BTS camera system. Across tasks, peak knee flexion angle and peak hip flexion were found to be consistent and in agreement when the Microsoft Kinect® camera system was directly compared to the BTS camera system, and these values improved following application of a corrective factor. The Microsoft Kinect® may not be an appropriate surrogate for traditional motion analysis technology, but it may have potential applications as a real-time feedback tool in pathological or high-injury-risk populations.
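The agreement-and-correction workflow can be sketched as follows; the paired angles are invented, and a simple linear regression stands in for the study's corrective factor:

```python
import numpy as np

# Paired peak knee-flexion angles (degrees) from a reference motion-capture
# system and a low-cost sensor with a hypothetical systematic underestimate.
reference = np.array([95.0, 88.0, 102.0, 110.0, 92.0, 98.0, 105.0, 85.0])
kinect = 0.85 * reference - 4.0 + np.random.default_rng(6).normal(0, 1.5, 8)

# Variance in the reference explained by the low-cost system.
r = np.corrcoef(kinect, reference)[0, 1]
variance_explained = r ** 2

# Corrective factor: regress reference on sensor, then apply the fit.
slope, intercept = np.polyfit(kinect, reference, 1)
corrected = slope * kinect + intercept

rmse_raw = np.sqrt(np.mean((kinect - reference) ** 2))
rmse_corrected = np.sqrt(np.mean((corrected - reference) ** 2))
```

A systematic bias leaves the correlation high while the raw error is large, which is exactly the situation a corrective factor fixes.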
Towards the Development of a Smart Flying Sensor: Illustration in the Field of Precision Agriculture
Hernandez, Andres; Murcia, Harold; Copot, Cosmin; De Keyser, Robin
2015-01-01
Sensing is an important element to quantify productivity, product quality and to make decisions. Applications, such as mapping, surveillance, exploration and precision agriculture, require a reliable platform for remote sensing. This paper presents the first steps towards the development of a smart flying sensor based on an unmanned aerial vehicle (UAV). The concept of smart remote sensing is illustrated and its performance tested for the task of mapping the volume of grain inside a trailer during forage harvesting. Novelty lies in: (1) the development of a position-estimation method with time delay compensation based on inertial measurement unit (IMU) sensors and image processing; (2) a method to build a 3D map using information obtained from a regular camera; and (3) the design and implementation of a path-following control algorithm using model predictive control (MPC). Experimental results on a lab-scale system validate the effectiveness of the proposed methodology. PMID:26184205
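Once a 3D map is available, the grain-volume mapping task reduces to integrating a height map over the trailer footprint. A toy sketch with an invented mound shape:

```python
import numpy as np

cell = 0.1                                   # grid spacing (m)
x = np.arange(0.0, 4.0, cell)                # trailer footprint: 4 m x 2 m
y = np.arange(0.0, 2.0, cell)
X, Y = np.meshgrid(x, y)

# A smooth grain mound with 1 m peak height (invented shape standing in
# for the heights recovered from the camera-based 3D map).
height = np.exp(-(((X - 2.0) / 1.2) ** 2 + ((Y - 1.0) / 0.7) ** 2))

# Volume = sum of column volumes (height times cell area).
volume = np.sum(height) * cell ** 2
```

The accuracy of the result is limited by the height-map resolution and by the reconstruction errors of the camera pipeline, not by the integration itself.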
Smart lighting using a liquid crystal modulator
NASA Astrophysics Data System (ADS)
Baril, Alexandre; Thibault, Simon; Galstian, Tigran
2017-08-01
Now that LEDs have massively invaded the illumination market, a clear trend has emerged toward more efficient and targeted lighting. The project described here is at the leading edge of that trend and aims to develop an evaluation board for testing smart lighting applications. This is made possible by a new liquid crystal light modulator recently developed for broadening LED light beams. The modulator is controlled by electrical signals and is characterized by a linear working zone. This feature allows the implementation of a closed-loop control with sensor feedback. This project shows that the use of computer vision is a promising opportunity for low-cost closed-loop control. The developed evaluation board integrates the liquid crystal modulator, a webcam, an LED light source and all the electronics required to implement closed-loop control with a computer vision algorithm.
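A closed loop of the kind described, with the camera as the feedback sensor, can be sketched as a proportional controller; the linear plant model and gain below are invented for illustration:

```python
# Toy closed-loop brightness control: the webcam measures scene brightness
# and the controller trims the modulator drive signal toward a target.

def measured_brightness(drive):
    """Hypothetical plant: linear response inside the modulator's
    linear working zone."""
    return 20.0 + 1.5 * drive

target = 80.0
drive = 0.0
k_p = 0.4                                    # proportional gain
for _ in range(50):
    error = target - measured_brightness(drive)
    drive += k_p * error                     # camera-in-the-loop correction

final = measured_brightness(drive)
```

The modulator's linear working zone is what makes such a simple proportional loop stable; outside that zone, gain scheduling or a lookup table would be needed.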
Graafland, Maurits; Bok, Kiki; Schreuder, Henk W R; Schijven, Marlies P
2014-06-01
Untrained laparoscopic camera assistants in minimally invasive surgery (MIS) may cause a suboptimal view of the operating field, thereby increasing the risk of errors. Camera navigation is often performed by the least experienced member of the operating team, such as inexperienced surgical residents, operating room nurses, and medical students. Operating room nurses and medical students are currently not included as key user groups in structured laparoscopic training programs. A new virtual reality laparoscopic camera navigation (LCN) module was specifically developed for these key user groups. This multicenter prospective cohort study assesses the face validity and construct validity of the LCN module on the Simendo virtual reality simulator. Face validity was assessed through a questionnaire on resemblance to reality and perceived usability of the instrument among experts and trainees. Construct validity was assessed by comparing the scores of groups with different levels of experience on outcome parameters of speed and movement proficiency. The results show a uniform and positive evaluation of the LCN module among expert users and trainees, signifying face validity. The expert and intermediate-experience groups performed significantly better in task time and camera stability over three repetitions than the less experienced user groups (P < .007). Comparison of learning curves showed significant improvement in time and camera-stability proficiency for all groups over three repetitions (P < .007). These results show the face validity and construct validity of the LCN module. The module is suitable for use in training curricula for operating room nurses and novice surgical trainees, aimed at improving team performance in minimally invasive surgery. © The Author(s) 2013.
Chang, Tianci; Cao, Xun; Li, Ning; Long, Shiwei; Gao, Xiang; Dedon, Liv R; Sun, Guangyao; Luo, Hongjie; Jin, Ping
2017-08-09
In the pursuit of energy-efficient materials, vanadium dioxide (VO2) based smart coatings have gained much attention in recent years. For smart window applications, VO2 thin films should be fabricated at low temperature to reduce cost in commercial fabrication and to solve compatibility problems. Meanwhile, thermochromic performance with high luminous transmittance and solar modulation ability, together with an effective UV-shielding function, has become the most important development target for ideal smart windows. In this work, facile Cr2O3/VO2 bilayer coatings on quartz glass were designed and fabricated by magnetron sputtering at low temperatures ranging from 250 to 350 °C, as compared with typical high growth temperatures (>450 °C). The bottom Cr2O3 layer not only provides a structural template for the growth of VO2(R), but also serves as an antireflection layer that improves the luminous transmittance. It was found that the deposition of the Cr2O3 layer resulted in a dramatic enhancement of the solar modulation ability (56.4%) and an improvement in luminous transmittance (26.4%) compared to a single-layer VO2 coating. According to optical measurements, the Cr2O3/VO2 bilayer structure exhibits excellent optical performance, with an enhanced solar modulation ability (ΔTsol = 12.2%) and a high luminous transmittance (Tlum = 46.0%), striking a good balance between ΔTsol and Tlum for smart window applications. As for UV-shielding properties, more than 95.8% of UV radiation (250-400 nm) can be blocked by the Cr2O3/VO2 structure. In addition, the energy-efficiency effect was visualized by heating a beaker of water and imaging it in the infrared with and without a Cr2O3/VO2-coated glass.
Unattended real-time re-establishment of visibility in high dynamic range video and stills
NASA Astrophysics Data System (ADS)
Abidi, B.
2014-05-01
We describe a portable unattended persistent surveillance system that corrects for harsh illumination conditions, where bright sunlight creates mixed-contrast effects, i.e., heavy shadows and washouts. These effects result in high dynamic range scenes, where illuminance can vary from a few lux to six-figure values. When using regular monitors and cameras, such a wide span of illuminations can only be visualized if the actual range of values is compressed, leading to saturated and/or dark noisy areas and a loss of information in those areas. Images containing extreme mixed contrast cannot be fully enhanced from a single exposure, simply because not all of the information is present in the original data; active intervention in the acquisition process is required. A software package capable of integrating multiple types of COTS and custom cameras, ranging from Unmanned Aerial System (UAS) data links to digital single-lens reflex (DSLR) cameras, is described. Hardware and software are integrated via a novel smart data acquisition algorithm, which communicates to the camera the parameters that maximize information content in the final processed scene. A fusion mechanism is then applied to the smartly acquired data, resulting in an enhanced scene in which information in both dark and bright areas is revealed. Multi-threading and parallel processing are exploited to produce automatic, real-time, full-motion corrected video. A novel enhancement algorithm was also devised to process data from legacy and non-controllable cameras. The software accepts and processes pre-recorded sequences and stills; enhances visible, night vision, and infrared data; and applies successfully to nighttime and dark scenes. Various user options are available, integrating custom functionalities of the application into intuitive and easy-to-use graphical interfaces.
The ensuing increase in visibility in surveillance video and intelligence imagery will improve the performance and timely decision making of the human analyst, as well as that of unmanned systems performing automatic data exploitation, such as target detection and identification.
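The multi-exposure fusion step can be illustrated with a minimal sketch. The well-exposedness weighting and the 8-bit sample values below are assumptions for illustration, not the package's actual algorithm: pixels from several exposures are blended with weights favouring mid-range values, recovering detail in both dark and bright areas.

```python
import math

def well_exposedness(value, mid=128.0, sigma=64.0):
    """Weight peaking at mid-gray; near-saturated or near-black pixels count less."""
    return math.exp(-((value - mid) ** 2) / (2 * sigma ** 2))

def fuse(exposures):
    """Per-pixel weighted average across a list of same-size grayscale images."""
    fused = []
    for pixels in zip(*exposures):
        weights = [well_exposedness(p) for p in pixels]
        total = sum(weights) or 1e-9
        fused.append(sum(w * p for w, p in zip(weights, pixels)) / total)
    return fused

dark = [10, 5, 200]       # underexposed frame: shadows crushed
bright = [120, 250, 255]  # overexposed frame: highlights saturated
print([round(v) for v in fuse([dark, bright])])
```

Each output pixel is pulled toward whichever exposure recorded it closest to mid-gray, which is why information clipped in one frame can be recovered from the other.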
Person and gesture tracking with smart stereo cameras
NASA Astrophysics Data System (ADS)
Gordon, Gaile; Chen, Xiangrong; Buck, Ron
2008-02-01
Physical security increasingly involves sophisticated, real-time visual tracking of a person's location inside a given environment, often in conjunction with biometrics and other security-related technologies. However, demanding real-world conditions like crowded rooms, changes in lighting, and physical obstructions have proved incredibly challenging for 2D computer vision technology. In contrast, 3D imaging technology is not affected by constant changes in lighting and apparent color, and thus allows tracking accuracy to be maintained in dynamically lit environments. In addition, person tracking with a 3D stereo camera can provide the location and movement of each individual very precisely, even in a very crowded environment. 3D vision only requires that the subject be partially visible to a single stereo camera to be correctly tracked; multiple cameras are used to extend the system's operational footprint and to contend with heavy occlusion. A successful person tracking system must not only perform visual analysis robustly, but also be small and cheap and consume relatively little power. The TYZX Embedded 3D Vision systems are well suited to the low power, small footprint, and low cost points required by these types of volume applications. Several security-focused organizations, including the U.S. Government, have deployed TYZX 3D stereo vision systems in security applications. 3D image data is also advantageous in the related application area of gesture tracking. Visual (uninstrumented) tracking of natural hand gestures and movement provides new opportunities for interactive control, including video gaming, location-based entertainment, and interactive displays. 2D images have been used to extract the location of hands within a plane, but 3D hand location enables a much broader range of interactive applications.
In this paper, we provide some background on the TYZX smart stereo camera platform, describe the person tracking and gesture tracking systems implemented on this platform, and discuss some deployed applications.
Construction of a small and lightweight hyperspectral imaging system
NASA Astrophysics Data System (ADS)
Vogel, Britta; Hünniger, Dirk; Bastian, Georg
2014-05-01
The analysis of reflected sunlight offers a great opportunity to gain information about the environment, including vegetation and soil. In the case of plants, the wavelength ratio of the reflected light usually changes when the state of growth or state of health changes, so measuring the reflected light allows conclusions to be drawn about the state of, among other things, the vegetation. Using a hyperspectral imaging system for data acquisition yields a large dataset, which can be evaluated with respect to several different questions to obtain various kinds of information from a single measurement. Based on commercially available plain optical components, we developed a small and lightweight hyperspectral imaging system within the INTERREG IV A project SMART INSPECTORS (Smart Aerial Test Rigs with Infrared Spectrometers and Radar), which deals with the fusion of airborne visible and infrared imaging remote sensing instruments and wireless sensor networks for precision agriculture and environmental research. A high-performance camera was required in terms of good signal, wavelength resolution, and spatial resolution, while severe constraints on size, proportions, and mass had to be met due to the intended use on small unmanned aerial vehicles. The detector was chosen to operate without additional cooling. The refractive and focusing optical components were identified with the support of optical ray-tracing software and a self-developed program. We present details of the design and construction of our camera system, test results confirming the optical simulation predictions, and our first measurements.
Design issues for semi-passive optical communication devices
NASA Astrophysics Data System (ADS)
Glaser, I.
2007-09-01
Optical smart cards are devices containing a retro-reflector, a light modulator, and some computing and data storage capabilities to effect semi-passive communication. They do not produce light; instead they modulate and send back light received from a stationary unit. These devices can replace contact-based smart cards as well as RF-based ones for applications ranging from identification to transmitting and validating data. Since their transmission is essentially focused on the receiving unit, they are harder to eavesdrop on than RF devices, yet need no physical contact or alignment. In this paper we explore the optical design issues of these devices and estimate their optical behavior. Specifically, we analyze how these compact devices can be optimized for selected application profiles. Some of the key parameters addressed are the effective light efficiency (how much modulated signal can be received by the stationary unit given the amount of light it transmits), the range of tilt angles (the angle between the device surface normal and the line connecting the optical smart card with the stationary unit) through which the device remains effective, and the power requirements of the semi-passive unit. In addition, issues concerning compact packaging of the device are discussed. Finally, the results of the analysis are employed to compare the achievable capabilities of these optical smart cards against alternative devices, and to discuss potential applications where they can best be utilized.
An Experimental Concept for Probing Nonlinear Physics in Radiation Belts
NASA Astrophysics Data System (ADS)
Crabtree, C. E.; Ganguli, G.; Tejero, E. M.; Amatucci, B.; Siefring, C. L.
2017-12-01
A sounding rocket experiment, Space Measurement of Rocket-Released Turbulence (SMART), can be used to probe the nonlinear response to a known stimulus injected into the radiation belt. The release of high-speed neutral barium atoms (8-10 km/s) generated by a shaped-charge explosion in the ionosphere can be used as the source of free energy to seed weak turbulence in the ionosphere. The Ba atoms are photo-ionized, forming a ring velocity distribution of heavy Ba+ that is known to generate lower hybrid waves. Induced nonlinear scattering will convert the lower hybrid waves into EM whistler/magnetosonic waves. The escape of the whistlers from the ionospheric region into the radiation belts has been studied and their observable signatures quantified. The novelty of the SMART experiment is to make coordinated measurements of the cause and effect of the turbulence in space plasmas and from that to deduce the role of nonlinear scattering in the radiation belts. The sounding rocket will carry a Ba release module and an instrumented daughter section that includes vector wave magnetic and electric field sensors, Langmuir probes, and energetic particle detectors. The goal of these measurements is to determine the whistler and lower hybrid wave amplitudes and spectra in the ionospheric source region and to look for precipitated particles. The Ba release may occur at 600-700 km near apogee. Ground-based cameras and radio diagnostics can be used to characterize the Ba and Ba+ release. The Van Allen Probes can be used to detect the propagation of the scattering-generated whistler waves and their effects in the radiation belts. By detecting whistlers and measuring their energy density in the radiation belts, the SMART mission will confirm the nonlinear generation of whistlers through scattering of lower hybrid waves, along with other nonlinear responses of the radiation belts and their connection to weak turbulence.
Design and implementation of a smart card based healthcare information system.
Kardas, Geylani; Tunali, E Turhan
2006-01-01
Smart cards are used in information technologies as portable integrated devices with data storage and data processing capabilities. As in other fields, smart card use in health systems has become popular due to their increased capacity and performance. Their efficient use, with easy and fast data access facilities, has led to particularly widespread implementation in security systems. In this paper, a smart card based healthcare information system is developed. The system uses smart cards for personal identification and transfer of health data, and provides data communication via a distributed protocol developed specifically for this study. Two smart card software modules are implemented that run on the patient and healthcare professional smart cards, respectively. In addition to personal information, general health information about the patient is also loaded onto the patient smart card. Healthcare providers use their own smart cards to be authenticated on the system and to access data on patient cards. Encryption keys and digital signature keys stored on the smart cards of the system are used for secure and authenticated data communication between clients and database servers over the distributed object protocol. The system is developed on the Java platform using an object-oriented architecture and design patterns.
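The authenticated-exchange idea can be sketched as follows. This is a hedged illustration, not the paper's actual protocol: a key provisioned on the professional's card (a hypothetical shared secret here, standing in for the stored signature keys) lets a request be signed, and the server verifies the signature before releasing patient data.

```python
import hmac, hashlib

def sign(card_key: bytes, message: bytes) -> bytes:
    """Signature computed with the key held on the professional's card."""
    return hmac.new(card_key, message, hashlib.sha256).digest()

def verify(card_key: bytes, message: bytes, signature: bytes) -> bool:
    """Server-side check before any patient data is released."""
    return hmac.compare_digest(sign(card_key, message), signature)

key = b"key-provisioned-on-card"       # hypothetical shared secret
request = b"GET patient-record 1234"   # hypothetical request format
tag = sign(key, request)
print(verify(key, request, tag), verify(key, b"tampered", tag))  # → True False
```

A tampered request fails verification, which is the property that lets the database server trust that a data request really came from an authenticated card holder.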
MS Lucid places samples in the TEHOF aboard the Spektr module
1997-03-26
STS079-S-082 (16-26 Sept. 1996) --- Cosmonaut guest researcher Shannon W. Lucid and Valeri G. Korzun, her Mir-22 commander, are pictured in the Spektr module aboard Russia's Earth-orbiting Mir Space Station. Korzun was the third of four commanders that Lucid served with during her record-setting 188 consecutive days in space. Later, Lucid returned to Earth with her fourth commander, astronaut William F. Readdy, and five other NASA astronauts to complete the STS-79 mission. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the space shuttle Atlantis and the various Mir modules. A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.
Motion camera based on a custom vision sensor and an FPGA architecture
NASA Astrophysics Data System (ADS)
Arias-Estrada, Miguel
1998-09-01
A digital camera for custom focal plane arrays was developed. The camera allows the test and development of analog or mixed-mode arrays for focal plane processing, and is used with a custom motion-detection sensor to implement a motion computation system. The custom focal plane sensor detects moving edges at the pixel level using analog VLSI techniques and communicates motion events using the event-address protocol associated with a temporal reference. In a second stage, a coprocessing architecture based on a field programmable gate array (FPGA) computes the time-of-travel between adjacent pixels. The FPGA allows rapid prototyping and flexible architecture development. Furthermore, the FPGA interfaces the sensor to a compact PC, which is used for high-level control and data communication to the local network. The camera could be used in applications such as self-guided vehicles, mobile robotics, and smart surveillance systems. The programmability of the FPGA allows the exploration of further signal processing such as spatial edge detection or image segmentation tasks. The article details the motion algorithm, the sensor architecture, the use of the event-address protocol for velocity vector computation, and the FPGA architecture used in the motion camera system.
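The time-of-travel principle the FPGA implements can be sketched on a host in a few lines (an illustrative version, not the actual FPGA architecture; the pixel pitch and event format are assumptions): each moving-edge event carries a pixel address and a timestamp, and velocity is the pixel pitch divided by the time between events at adjacent addresses.

```python
PIXEL_PITCH_UM = 30.0  # hypothetical focal-plane pixel pitch, micrometers

def velocities(events):
    """events: list of (pixel_address, timestamp_us) from the event-address bus."""
    last = {}  # pixel address -> timestamp of its most recent event
    out = []
    for addr, t in events:
        if addr - 1 in last:          # edge previously seen at the left neighbour
            dt = t - last[addr - 1]   # time-of-travel between adjacent pixels
            if dt > 0:
                out.append((addr, PIXEL_PITCH_UM / dt))  # um per microsecond
        last[addr] = t
    return out

# An edge sweeping rightward at one pixel every 100 us:
events = [(10, 0), (11, 100), (12, 200)]
print(velocities(events))  # → [(11, 0.3), (12, 0.3)]
```

A 1D sweep is shown for brevity; a 2D velocity vector follows the same pattern by also timing events between vertically adjacent addresses.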
Literacies in a Participatory, Multimodal World: The Arts and Aesthetics of Web 2.0
ERIC Educational Resources Information Center
Vasudevan, Lalitha
2010-01-01
Communicative and expressive modalities, such as smart phones and video cameras, have become increasingly multifunctional and reflect an evolving digital landscape often referred to as Web 2.0. The "ethos" and "technical" affordances of Web 2.0 have the potential to catalyze the aesthetic creativity of youth. Following a discussion of aesthetics…
NASA Technical Reports Server (NTRS)
Mahajan, Ajay
2007-01-01
An assembly that contains a sensor, sensor-signal-conditioning circuitry, a sensor-readout analog-to-digital converter (ADC), data-storage circuitry, and a microprocessor that runs special-purpose software and communicates with one or more external computer(s) has been developed as a prototype of "smart" sensor modules for monitoring the integrity and functionality (the "health") of engineering systems. Although these modules are now being designed specifically for use on rocket-engine test stands, it is anticipated that they could also readily be designed to be incorporated into health-monitoring subsystems of such diverse engineering systems as spacecraft, aircraft, land vehicles, bridges, buildings, power plants, oilrigs, and defense installations. The figure is a simplified block diagram of the "smart" sensor module. The analog sensor readout signal is processed by the ADC, the digital output of which is fed to the microprocessor. By means of a standard RS-232 cable, the microprocessor is connected to a local personal computer (PC), from which software is downloaded into a random-access memory in the microprocessor. The local PC is also used to debug the software. Once the software is running, the local PC is disconnected and the module is controlled by, and all output data from the module are collected by, a remote PC via an Ethernet bus. Several smart sensor modules like this one could be connected to the same Ethernet bus and controlled by the single remote PC. The software running in the microprocessor includes driver programs for operation of the sensor, programs that implement self-assessment algorithms, programs that implement protocols for communication with the external computer(s), and programs that implement evolutionary methodologies to enable the module to improve its performance over time.
The design of the module and of the health-monitoring system of which it is a part reflects the understanding that the main purpose of a health-monitoring system is to detect damage and, therefore, the health-monitoring system must be able to function effectively in the presence of damage and should be capable of distinguishing between damage to itself and damage to the system being monitored. A major benefit afforded by the self-assessment algorithms is that in the output of the module, the sensor data indicative of the health of the engineering system being monitored are coupled with a confidence factor that quantifies the degree of reliability of the data. Hence, the output includes information on the health of the sensor module itself in addition to information on the health of the engineering system being monitored.
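The coupling of a reading with a confidence factor can be sketched as below. All names, limits, and the scoring rule are hypothetical illustrations of the self-assessment idea, not the module's actual algorithms: each reading is reported together with a confidence derived from simple checks such as range limits and short-term noise.

```python
import statistics

def assess(readings, lo=0.0, hi=5.0, noise_limit=0.05):
    """Return (latest_value, confidence in [0, 1]) for a window of ADC readings."""
    value = readings[-1]
    confidence = 1.0
    if not (lo <= value <= hi):                    # outside the sensor's valid range
        confidence *= 0.1
    if statistics.pstdev(readings) > noise_limit:  # excessive short-term noise
        confidence *= 0.5
    return value, confidence

print(assess([2.49, 2.50, 2.51]))  # quiet, in-range reading: full confidence
print(assess([2.0, 4.9, 0.3]))     # noisy window: confidence reduced
```

Reporting the pair rather than the bare value is what lets downstream software distinguish "the monitored system is degrading" from "the sensor module itself is degrading".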
Smart guidewires for smooth navigation in neurovascular intervention
NASA Astrophysics Data System (ADS)
Chen, Yanfei; Barry, Matthew M.; Shayan, Mahdis; Jankowitz, Brian T.; Duan, Xinjie; Robertson, Anne M.; Chyu, Minking K.; Chun, Youngjae
2015-04-01
A smart guidewire using nitinol materials was designed, manufactured, and evaluated for device functionality, including bending performance, trackability, thermal effects, and thrombogenic response. Two types of nitinol material were used in part of the device to enhance guidewire trackability. The proposed smart guidewire system uses either one-way or two-way shape-memory alloy (1W-SMA, 2W-SMA) nitinol wire (0.015 in, 381 µm). Bending stiffness was measured using an in vitro test system comprising an NI USB-9162 data logger and LabVIEW SignalExpress 2010. Temperature distribution and displacement were evaluated by recording 60 Hz video with an SC325 camera. Hemocompatibility was evaluated by scanning electron microscopy after one heating cycle of the nitinol under Na-citrate porcine whole-blood circulation. The smart guidewire showed 30 degrees of bending upon applying or disconnecting electrical current. While the temperature of the nitinol wires increased to more than 70 °C, the surrounding temperature with commercially available catheter coverings remained below human body temperature, at 30-33 °C. There was no significant platelet attachment or blood coagulation while the guidewire operated. These novel smart guidewires, developed using shape-memory alloy nitinol, may represent a novel alternative to typical commercially available guidewires for interventional procedures.
ARTIST CONCEPT - ASTRONAUT WORDEN'S EXTRAVEHICULAR ACTIVITY (EVA) (APOLLO XV)
1971-07-09
S71-39614 (July 1971) --- An artist's concept of the Apollo 15 Command and Service Modules (CSM), showing two crewmembers performing a new-to-Apollo extravehicular activity (EVA). The figure at left represents astronaut Alfred M. Worden, command module pilot, connected by an umbilical tether to the CM, at right, where a figure representing astronaut James B. Irwin, lunar module pilot, stands at the open CM hatch. Worden is working with the panoramic camera in the Scientific Instrument Module (SIM). Behind Irwin is the 16mm data acquisition camera. Artwork by North American Rockwell.
[Communication subsystem design of tele-screening system for diabetic retinopathy].
Chen, Jian; Pan, Lin; Zheng, Shaohua; Yu, Lun
2013-12-01
A design scheme for a tele-screening system for diabetic retinopathy (DR) is proposed, with emphasis on the communication subsystem. The scheme uses a serial communication module, consisting of an ARM7 microcontroller and relays, to connect the remote computer and the fundus camera, and uses the C++ programming language with MFC to implement the communication software, which consists of a therapy and diagnostic information module, a video/audio surveillance module, and a fundus camera control module. The scheme is generally applicable to similar remote medical treatment systems.
Exploration and design of smart home circuit based on ZigBee
NASA Astrophysics Data System (ADS)
Luo, Huirong
2018-05-01
To apply ZigBee technology to smart home circuit design, TI's CC2530 ZigBee wireless communication chip was used in the hardware design of the ZigBee nodes to implement the ZigBee RF module circuit and peripheral circuitry. In addition, the functional requirements and overall scheme of the smart home system were defined. Finally, the smart home system was built by combining the ZigBee network with an intelligent gateway, and the functionality, reliability, and power consumption of the ZigBee network were tested. The results showed that applying ZigBee technology to the smart home system gives it advantages in flexibility, scalability, power consumption, and indoor aesthetics. In summary, the system has high application value.
Sevrin, Loïc; Noury, Norbert; Abouchi, Nacer; Jumel, Fabrice; Massot, Bertrand; Saraydaryan, Jacques
2015-01-01
An increasing number of systems use indoor positioning for scenarios such as asset tracking, health care, games, manufacturing, logistics, shopping, and security. Many technologies are available, and the use of depth cameras is becoming more and more attractive as these devices become affordable and easy to handle. This paper contributes to the effort of creating an indoor positioning system based on low-cost depth cameras (Kinect). A method is proposed to optimize the calibration of the depth cameras, to describe the multi-camera data fusion, and to specify a global positioning projection that maintains compatibility with outdoor positioning systems. The monitoring of people's trajectories at home is intended for the early detection of a shift in daily activities, which highlights disabilities and loss of autonomy. This system is meant to improve homecare health management at home for a better end of life at a sustainable cost for the community.
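The multi-camera fusion step can be sketched under assumed calibration (the transforms and coordinates below are invented for illustration, not the paper's calibration results): each depth camera reports a person's position in its own frame, a per-camera rigid transform maps it into a common global frame, and overlapping detections are averaged.

```python
import math

def to_global(point, yaw_rad, translation):
    """Apply a 2D rigid transform (camera frame -> global floor-plane frame)."""
    x, y = point
    gx = math.cos(yaw_rad) * x - math.sin(yaw_rad) * y + translation[0]
    gy = math.sin(yaw_rad) * x + math.cos(yaw_rad) * y + translation[1]
    return gx, gy

def fuse_detections(detections):
    """Average one person's position as seen by several calibrated cameras."""
    pts = [to_global(p, yaw, t) for p, yaw, t in detections]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

# Two hypothetical Kinects viewing the same person from different angles:
detections = [((1.0, 2.0), 0.0, (0.0, 0.0)),
              ((2.0, -1.0), math.pi / 2, (0.0, 0.0))]
print(fuse_detections(detections))  # both map to roughly (1.0, 2.0)
```

Expressing every camera in one global frame is also what makes the indoor positions compatible with outdoor positioning systems: the global frame can itself be georeferenced.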
Using the iPhone as a device for a rapid quantitative analysis of trinitrotoluene in soil.
Choodum, Aree; Kanatharana, Proespichaya; Wongniramaikul, Worawit; Daeid, Niamh Nic
2013-10-15
Mobile 'smart' phones have become almost ubiquitous in society and are typically equipped with a high-resolution digital camera which can be used to produce an image very conveniently. In this study, the built-in digital camera of a smart phone (iPhone) was used to capture the results of a rapid quantitative colorimetric test for trinitrotoluene (TNT) in soil, and the results were compared to those from a digital single-lens reflex (DSLR) camera. The colored product of the selective test for TNT was quantified using an innovative application of photography in which the relationships between the Red Green Blue (RGB) values and the concentrations of the colorimetric product were exploited. The iPhone proved capable of being used more conveniently than the DSLR while providing similar analytical results with increased sensitivity. The wide linear range and low detection limits achieved were comparable with those of spectrophotometric quantification methods. Low relative errors in the range of 0.4 to 6.3% were achieved in the analysis of control samples, and 0.4-6.2% for spiked soil extracts, with good precision (2.09-7.43% RSD) over 4 days of analysis. The results demonstrate that the iPhone has the potential to serve as a novel platform for the development of a rapid on-site semi-quantitative field test for the analysis of explosives. © 2013 Elsevier B.V. All rights reserved.
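The RGB-based quantification idea can be sketched as below. The calibration values are invented for illustration, not the paper's data: the mean red-channel value of the photographed colour product is assumed to vary linearly with TNT concentration, so a least-squares line fitted to standards predicts unknowns.

```python
def fit_line(xs, ys):
    """Ordinary least squares for y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Hypothetical standards: concentration (mg/L) vs mean red value from photos.
conc = [0.0, 5.0, 10.0, 20.0]
red = [200.0, 180.0, 160.0, 120.0]

a, b = fit_line(conc, red)          # calibration line
unknown_red = 150.0                 # red value measured for an unknown sample
print((unknown_red - b) / a)        # predicted concentration → 12.5 mg/L
```

In practice the paper's approach would average the channel values over a region of interest in the photo and check that the unknown falls inside the calibrated linear range.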
VLSI 'smart' I/O module development
NASA Astrophysics Data System (ADS)
Kirk, Dan
The developmental history, design, and operation of the MIL-STD-1553A/B discrete and serial module (DSM) for the U.S. Navy AN/AYK-14(V) avionics computer are described and illustrated with diagrams. The ongoing preplanned product improvement for the AN/AYK-14(V) includes five dual-redundant MIL-STD-1553 channels based on DSMs. The DSM is a front-end processor for transferring data to and from a common memory, sharing memory with a host processor to provide improved 'smart' input/output performance. Each DSM comprises three hardware sections: three VLSI-6000 semicustomized CMOS arrays, memory units to support the arrays, and buffers and resynchronization circuits. The DSM hardware module design, VLSI-6000 design tools, controlware and test software, and checkout procedures (using a hardware simulator) are characterized in detail.
Optical wireless communications for micromachines
NASA Astrophysics Data System (ADS)
O'Brien, Dominic C.; Yuan, Wei Wen; Liu, Jing Jing; Faulkner, Grahame E.; Elston, Steve J.; Collins, Steve; Parry-Jones, Lesley A.
2006-08-01
A key challenge for wireless sensor networks is minimizing the energy required for network nodes to communicate with each other, and this becomes acute for self-powered devices such as 'smart dust'. Optical communications is a potentially attractive solution for such devices. The University of Oxford is currently involved in a project to build optical wireless links to smart dust. Retro-reflectors combined with liquid crystal modulators can be integrated with the micro-machine to create a low power transceiver. When illuminated from a base station a modulated beam is returned, transmitting data. Data from the base station can be transmitted using modulation of the illuminating beam and a receiver at the micro-machine. In this paper we outline the energy consumption and link budget considerations in the design of such micro-machines, and report preliminary experimental results.
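The link-budget considerations mentioned above can be roughed out with an idealized geometric model. All parameters here are hypothetical and the model is deliberately crude (uniform illumination spot, diffraction-limited return): the base station's beam spreads over distance, the retro-reflector intercepts a fraction of it, and the modulated return spreads again before reaching the receiver aperture.

```python
import math

def received_power(p_tx, d, beam_div, retro_diam, rx_diam, mod_contrast=0.5):
    """Estimated detected signal power (W) back at the base-station receiver."""
    spot_area = math.pi * (d * math.tan(beam_div)) ** 2   # illuminated spot at range d
    retro_area = math.pi * (retro_diam / 2) ** 2
    p_on_retro = p_tx * retro_area / spot_area            # power intercepted by retro
    # Assume the modulated return spreads by diffraction over the return path:
    return_div = 1.22 * 850e-9 / retro_diam               # assumed 850 nm wavelength
    return_area = math.pi * (d * return_div) ** 2
    rx_area = math.pi * (rx_diam / 2) ** 2
    return p_on_retro * mod_contrast * min(1.0, rx_area / return_area)

p = received_power(p_tx=0.1, d=10.0, beam_div=math.radians(1.0),
                   retro_diam=2e-3, rx_diam=25e-3)
print(f"{p:.2e} W")  # microwatt-scale return in this hypothetical geometry
```

The point of such a sketch is the scaling: because the micro-machine transmits no light of its own, the received signal falls off with both the outbound spreading and the diffraction of the tiny retro-reflected return, which is why retro-reflector size and modulator contrast dominate the energy budget.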
Texture-adaptive hyperspectral video acquisition system with a spatial light modulator
NASA Astrophysics Data System (ADS)
Fang, Xiaojing; Feng, Jiao; Wang, Yongjin
2014-10-01
We present a new hybrid camera system based on a spatial light modulator (SLM) to capture texture-adaptive high-resolution hyperspectral video. The hybrid camera system records a hyperspectral video with low spatial resolution using a grayscale camera and a high-spatial-resolution video using an RGB camera. The hyperspectral video is subsampled by the SLM; the subsampled points can be selected adaptively according to the texture characteristics of the scene by combining digital image analysis with computational processing. In this paper, we propose an adaptive sampling method utilizing texture segmentation and the wavelet transform (WT), and demonstrate the effectiveness of the resulting sampling pattern on the SLM.
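The texture-adaptive allocation can be illustrated with a simplified stand-in for the paper's segmentation-plus-wavelet scheme (block sizes, the one-level Haar measure, and the allocation rule are all assumptions): a Haar detail transform on each image block gives its texture energy, and blocks with more texture receive more SLM sample points.

```python
def haar_detail_energy(block):
    """Sum of squared horizontal Haar detail coefficients over an image block."""
    energy = 0.0
    for row in block:
        for i in range(0, len(row) - 1, 2):
            energy += ((row[i] - row[i + 1]) / 2.0) ** 2
    return energy

def samples_per_block(blocks, total_samples):
    """Distribute sample points proportionally to each block's detail energy."""
    energies = [haar_detail_energy(b) for b in blocks]
    s = sum(energies) or 1.0
    return [round(total_samples * e / s) for e in energies]

flat = [[100, 100, 100, 100]] * 4      # untextured block: smooth region
textured = [[0, 255, 0, 255]] * 4      # strongly textured block
print(samples_per_block([flat, textured], total_samples=64))  # → [0, 64]
```

Smooth regions are well predicted from the high-resolution RGB video, so spending the limited SLM samples on textured regions is where the adaptivity pays off.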
Versatile microsecond movie camera
NASA Astrophysics Data System (ADS)
Dreyfus, R. W.
1980-03-01
A laboratory-type movie camera is described which satisfies many requirements in the range 1 microsec to 1 sec. The camera consists of a He-Ne laser and compatible state-of-the-art components; the primary components are an acoustooptic modulator, an electromechanical beam deflector, and a video tape system. The present camera is distinct in its operation in that submicrosecond laser flashes freeze the image motion while still allowing the simplicity of electromechanical image deflection in the millisecond range. The gating and pulse delay circuits of an oscilloscope synchronize the modulator and scanner relative to the subject being photographed. The optical table construction and electronic control enhance the camera's versatility and adaptability. The instant replay video tape recording allows for easy synchronization and immediate viewing of the results. Economy is achieved by using off-the-shelf components, optical table construction, and short assembly time.
Visible camera cryostat design and performance for the SuMIRe Prime Focus Spectrograph (PFS)
NASA Astrophysics Data System (ADS)
Smee, Stephen A.; Gunn, James E.; Golebiowski, Mirek; Hope, Stephen C.; Madec, Fabrice; Gabriel, Jean-Francois; Loomis, Craig; Le fur, Arnaud; Dohlen, Kjetil; Le Mignant, David; Barkhouser, Robert; Carr, Michael; Hart, Murdock; Tamura, Naoyuki; Shimono, Atsushi; Takato, Naruhisa
2016-08-01
We describe the design and performance of the SuMIRe Prime Focus Spectrograph (PFS) visible camera cryostats. SuMIRe PFS is a massively multi-plexed ground-based spectrograph consisting of four identical spectrograph modules, each receiving roughly 600 fibers from a 2394 fiber robotic positioner at the prime focus. Each spectrograph module has three channels covering wavelength ranges 380 nm - 640 nm, 640 nm - 955 nm, and 955 nm - 1.26 µm, with the dispersed light being imaged in each channel by a f/1.07 vacuum Schmidt camera. The cameras are very large, having a clear aperture of 300 mm at the entrance window, and a mass of 280 kg. In this paper we describe the design of the visible camera cryostats and discuss various aspects of cryostat performance.
NASA Astrophysics Data System (ADS)
Kyrkou, Christos; Theocharides, Theocharis
2016-07-01
Object detection is a major step in several computer vision applications and a requirement for most smart camera systems. Recent advances in hardware acceleration for real-time object detection feature extensive use of reconfigurable hardware [field programmable gate arrays (FPGAs)], and relevant research has produced quite fascinating results, in both the accuracy of the detection algorithms and the performance in terms of frames per second (fps) for use in embedded smart camera systems. Detecting objects in images, however, is a daunting task and often involves hardware-inefficient steps, both in terms of the datapath design and in terms of input/output and memory access patterns. We present how a visual-feature-directed search cascade, composed of motion detection, depth computation, and edge detection, can significantly reduce the data that needs to be examined by the classification engine for the presence of an object of interest. Experimental results on a Spartan-6 FPGA platform for face detection indicate a data search reduction of up to 95%, which enables the system to process up to 50 1024 × 768-pixel images per second with a significantly reduced number of false positives.
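The cascade idea can be sketched in software (an illustrative model, not the FPGA datapath; the window scores and thresholds are invented): candidate windows are dropped by cheap motion, depth, and edge tests so that only a small fraction ever reaches the expensive classifier.

```python
def cascade(windows, motion_ok, depth_ok, edge_ok):
    """Each *_ok is a cheap per-window predicate; returns survivors to classify."""
    survivors = []
    for w in windows:
        if motion_ok(w) and depth_ok(w) and edge_ok(w):
            survivors.append(w)   # only these reach the classification engine
    return survivors

# Hypothetical windows: (id, motion_score, depth_m, edge_density)
windows = [(i, i % 10, 1.0 + i % 5, (i % 20) / 20.0) for i in range(1000)]
survivors = cascade(
    windows,
    motion_ok=lambda w: w[1] > 5,       # enough frame-to-frame change
    depth_ok=lambda w: w[2] < 3.0,      # plausible distance for a face
    edge_ok=lambda w: w[3] > 0.5,       # sufficient edge content
)
reduction = 1 - len(survivors) / len(windows)
print(f"{reduction:.0%} of windows pruned before classification")
```

Ordering the tests from cheapest to most expensive matters in hardware: each stage only sees what the previous stage let through, so the costly classifier runs on a small residue of the frame.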
A Control System and Streaming DAQ Platform with Image-Based Trigger for X-ray Imaging
NASA Astrophysics Data System (ADS)
Stevanovic, Uros; Caselle, Michele; Cecilia, Angelica; Chilingaryan, Suren; Farago, Tomas; Gasilov, Sergey; Herth, Armin; Kopmann, Andreas; Vogelgesang, Matthias; Balzer, Matthias; Baumbach, Tilo; Weber, Marc
2015-06-01
High-speed X-ray imaging applications play a crucial role for non-destructive investigations of the dynamics in material science and biology. On-line data analysis is necessary for quality assurance and data-driven feedback, leading to more efficient use of beam time and increased data quality. In this article we present a smart camera platform with embedded Field Programmable Gate Array (FPGA) processing that is able to stream and process data continuously in real time. The setup consists of a Complementary Metal-Oxide-Semiconductor (CMOS) sensor, an FPGA readout card, and a readout computer. It is seamlessly integrated in a new custom experiment control system called Concert that provides a more efficient way of operating a beamline by integrating device control, experiment process control, and data analysis. The potential of the embedded processing is demonstrated by implementing an image-based trigger. It records the temporal evolution of physical events with increased speed while maintaining the full field of view. The complete data acquisition system, with Concert and the smart camera platform, was successfully integrated and used for fast X-ray imaging experiments at KIT's synchrotron radiation facility ANKA.
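A software analogue of such an image-based trigger can illustrate the principle (a hedged sketch only; the real system implements this in FPGA logic at wire speed, and its trigger metric is not published here): keep a rolling pre-trigger buffer and start retaining frames once a simple frame-difference metric fires.

```python
from collections import deque
import numpy as np

class ImageTrigger:
    """Toy image-based trigger: hold a rolling pre-trigger buffer and start
    saving frames once the mean frame difference exceeds a threshold."""
    def __init__(self, pre_frames=5, thresh=10.0):
        self.buffer = deque(maxlen=pre_frames)
        self.thresh = thresh
        self.prev = None
        self.recording = []
        self.triggered = False

    def push(self, frame):
        if self.triggered:
            self.recording.append(frame)
            return
        if self.prev is not None:
            metric = np.abs(frame.astype(float) - self.prev.astype(float)).mean()
            if metric > self.thresh:
                # event detected: flush the pre-trigger history plus this frame
                self.triggered = True
                self.recording = list(self.buffer) + [frame]
                self.prev = frame
                return
        self.buffer.append(frame)
        self.prev = frame
```

The pre-trigger buffer is what lets the system capture the onset of a physical event that, by definition, is only recognized after it has started.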
Volume reconstruction optimization for tomo-PIV algorithms applied to experimental data
NASA Astrophysics Data System (ADS)
Martins, Fabio J. W. A.; Foucaut, Jean-Marc; Thomas, Lionel; Azevedo, Luis F. A.; Stanislas, Michel
2015-08-01
Tomographic PIV is a three-component volumetric velocity measurement technique based on the tomographic reconstruction of a particle distribution imaged by multiple camera views. In essence, the performance and accuracy of this technique are highly dependent on the parametric adjustment and the reconstruction algorithm used. Although synthetic data have been widely employed to optimize experiments, the resulting reconstructed volumes might not have optimal quality. The purpose of the present study is to offer quality indicators that can be applied to data samples in order to improve the quality of velocity results obtained by the tomo-PIV technique. The proposed methodology can potentially lead to a significant reduction in the time required to optimize a tomo-PIV reconstruction, while also leading to better-quality velocity results. Tomo-PIV data provided by a six-camera turbulent boundary-layer experiment were used to optimize the reconstruction algorithms according to this methodology. Velocity statistics measurements obtained by optimized BIMART, SMART and MART algorithms were compared with hot-wire anemometer data, and velocity measurement uncertainties were computed. Results indicated that the BIMART and SMART algorithms produced reconstructed volumes of quality equivalent to standard MART, with the benefit of reduced computational time.
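The core of the MART family of algorithms compared above is a multiplicative update that scales each voxel by the ratio of measured to reprojected pixel intensity. A minimal dense-matrix sketch of plain MART (illustrative only, far from a production tomo-PIV solver, which works with sparse weights and millions of voxels):

```python
import numpy as np

def mart(W, I, n_iter=50, mu=0.9):
    """Minimal MART: multiplicatively update voxel intensities E so that the
    reprojections W @ E approach the measured pixel intensities I.
    W[i, j] is the weight of voxel j in pixel i; mu is the relaxation factor."""
    E = np.ones(W.shape[1])
    for _ in range(n_iter):
        for i in range(W.shape[0]):
            proj = W[i] @ E
            if proj > 0 and I[i] > 0:
                # each voxel is corrected in proportion to its weight in pixel i
                E *= (I[i] / proj) ** (mu * W[i])
    return E
```

BIMART and SMART differ in how the updates are blocked and in which voxel/pixel pairs are visited, which is where the reported computational savings come from.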
Knowledge-based imaging-sensor fusion system
NASA Technical Reports Server (NTRS)
Westrom, George
1989-01-01
An imaging system which applies knowledge-based technology to supervise and control both sensor hardware and computation in the imaging system is described. It includes the development of an imaging system breadboard which brings together into one system work that we and others have pursued for LaRC for several years. The goal is to combine Digital Signal Processing (DSP) with Knowledge-Based Processing and also include Neural Net processing. The system is considered a smart camera. Imagine that there is a microgravity experiment on board Space Station Freedom with a high frame rate, high resolution camera. All the data cannot possibly be acquired by a laboratory on Earth; in fact, only a small fraction of the data will be received. Again, imagine being responsible for some experiments on Mars with the Mars Rover: the data rate is a few kilobits per second for data from several sensors and instruments. Would it not be preferable to have a smart system which would have some human knowledge, yet follow instructions and attempt to make the best use of the limited transmission bandwidth? The system concept, current status of the breadboard system, and some recent experiments at the Mars-like Amboy Lava Fields in California are discussed.
Bioinspired architecture approach for a one-billion transistor smart CMOS camera chip
NASA Astrophysics Data System (ADS)
Fey, Dietmar; Komann, Marcus
2007-05-01
In the paper we present a massively parallel VLSI architecture for future smart CMOS camera chips with up to one billion transistors. Traditional parallel architectures oriented on central structures, based on MIMD or SIMD approaches, will fail to exploit efficiently the potential offered by future micro- or nanoelectronic devices: they require too many, and too long, global interconnects for the distribution of code and for access to common memory. Nature, on the other hand, has developed self-organising and emergent principles that successfully manage complex structures built from many interacting simple elements. We therefore developed a new emergent computing paradigm, denoted Marching Pixels, based on a mixture of bio-inspired computing models such as cellular automata and artificial ants. In the paper we present different Marching Pixels algorithms and the corresponding VLSI array architecture. A detailed synthesis result for a 0.18 μm CMOS process shows that a 256×256 pixel image is processed in less than 10 ms, assuming a moderate 100 MHz clock rate for the processor array. Future higher integration densities and 3D chip-stacking technology will allow mega-pixel images to be processed within the same time, since our architecture is fully scalable.
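Although the paper's Marching Pixels agents are more elaborate, the architectural point (every processing element updating from purely local neighbours, with no global interconnect) is exactly a synchronous cellular-automaton step. A toy version, with erosion as an example local rule (the rule and grid are illustrative, not taken from the paper):

```python
import numpy as np

def ca_step(grid, rule):
    """One synchronous step of a 2-D cellular automaton: every cell updates
    from its own state and its four neighbours only, mirroring a
    local-interconnect processor array."""
    padded = np.pad(grid, 1)
    n = padded[:-2, 1:-1]; s = padded[2:, 1:-1]
    w = padded[1:-1, :-2]; e = padded[1:-1, 2:]
    return rule(grid, n, s, w, e)

# Example local rule: erosion -- a pixel survives only if all neighbours are set.
erode = lambda c, n, s, w, e: c & n & s & w & e
```

In hardware, each cell of such an array maps to a tiny processing element next to its pixel, which is why the architecture scales with integration density.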
78 FR 4192 - Petition for Exemption From the Vehicle Theft Prevention Standard; Ford Motor Company
Federal Register 2010, 2011, 2012, 2013, 2014
2013-01-18
... Smart Power Distribution Junction Box (SPDJB), the PEPS/RFA module, the power train control module and a... listed in Sec. 543.6(a)(3): promoting activation; attracting attention to the efforts of unauthorized...
DOE Office of Scientific and Technical Information (OSTI.GOV)
SmartImport.py is a Python source-code file that implements a replacement for the standard Python module importer. The code is derived from knee.py, a file in the standard Python distribution, and adds functionality to improve the performance of Python module imports in massively parallel contexts.
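The hook point such a replacement importer plugs into is Python's meta path. A minimal sketch (logging only; it omits the caching or broadcast logic a parallel-filesystem importer like SmartImport.py would add):

```python
import importlib.abc
import sys

class LoggingFinder(importlib.abc.MetaPathFinder):
    """Toy meta-path finder: records every module name the interpreter asks
    for, then defers to the normal finders by returning None."""
    def __init__(self):
        self.requested = []

    def find_spec(self, fullname, path=None, target=None):
        self.requested.append(fullname)
        return None  # let the remaining finders on sys.meta_path handle it

finder = LoggingFinder()
sys.meta_path.insert(0, finder)
sys.modules.pop("colorsys", None)  # force a fresh import for demonstration
import colorsys
sys.meta_path.remove(finder)
```

A performance-oriented importer would, at this interception point, serve module sources from a cache instead of letting thousands of ranks stat the same files on a shared filesystem.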
Space telescope phase B definition study. Volume 2A: Science instruments, f48/96 planetary camera
NASA Technical Reports Server (NTRS)
Grosso, R. P.; Mccarthy, D. J.
1976-01-01
The analysis and preliminary design of the f48/96 planetary camera for the space telescope are discussed. The camera design is for application to the axial module position of the optical telescope assembly.
Camera Development for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Moncada, Roberto Jose
2017-01-01
With the Cherenkov Telescope Array (CTA), the very-high-energy gamma-ray universe, between 30 GeV and 300 TeV, will be probed at an unprecedented resolution, allowing deeper studies of known gamma-ray emitters and the possible discovery of new ones. This exciting project could also confirm the particle nature of dark matter by looking for the gamma rays produced by self-annihilating weakly interacting massive particles (WIMPs). The telescopes will use the imaging atmospheric Cherenkov technique (IACT) to record Cherenkov photons that are produced by the gamma-ray induced extensive air shower. One telescope design features dual-mirror Schwarzschild-Couder (SC) optics that allows the light to be finely focused on the high-resolution silicon photomultipliers of the camera modules starting from a 9.5-meter primary mirror. Each camera module will consist of a focal plane module and front-end electronics, and will have four TeV Array Readout with GSa/s Sampling and Event Trigger (TARGET) chips, giving them 64 parallel input channels. The TARGET chip has a self-trigger functionality for readout that can be used in higher logic across camera modules as well as across individual telescopes, which will each have 177 camera modules. There will be two sites, one in the northern and the other in the southern hemisphere, for full sky coverage, each spanning at least one square kilometer. A prototype SC telescope is currently under construction at the Fred Lawrence Whipple Observatory in Arizona. This work was supported by the National Science Foundation's REU program through NSF award AST-1560016.
MS Lucid and Blaha with MGBX aboard the Mir space station Priroda module
1997-03-26
STS079-S-092 (16-26 Sept. 1996) --- Astronauts Shannon W. Lucid and John E. Blaha work at a microgravity glove box on the Priroda Module aboard Russia's Mir Space Station complex. Blaha, who flew into Earth-orbit with the STS-79 crew, and Lucid are the first participants in a series of ongoing exchanges of NASA astronauts serving time as cosmonaut guest researchers onboard Mir. Lucid went on to spend a total of 188 days in space before returning to Earth with the STS-79 crew. During the STS-79 mission, the crew used an IMAX camera to document activities aboard the Space Shuttle Atlantis and the various Mir modules, with the cooperation of the Russian Space Agency (RSA). A hand-held version of the 65mm camera system accompanied the STS-79 crew into space in Atlantis' crew cabin. NASA has flown IMAX camera systems on many Shuttle missions, including a special cargo bay camera's coverage of other recent Shuttle-Mir rendezvous and/or docking missions.
NASA Technical Reports Server (NTRS)
Schwartz, Daniel A.; Allured, Ryan; Bookbinder, Jay A.; Cotroneo, Vincenzo; Forman, William R.; Freeman, Mark D.; McMuldroch, Stuart; Reid, Paul B.; Tananbaum, Harvey; Vikhlinin, Alexey A.;
2014-01-01
Addressing the astrophysical problems of the 2020s requires sub-arcsecond x-ray imaging with square meter effective area. Such requirements can be derived, for example, by considering deep x-ray surveys to find the young black holes in the early universe (large redshifts) which will grow into the first super-massive black holes. We have envisioned a mission, the Square Meter Arcsecond Resolution Telescope for X-rays (SMART-X), based on adjustable x-ray optics technology, incorporating mirrors with the required small ratio of mass to collecting area. We are pursuing technology which achieves sub-arcsecond resolution by on-orbit adjustment via thin film piezoelectric "cells" deposited directly on the non-reflecting sides of thin, slumped glass. While SMART-X will also incorporate state-of-the-art x-ray cameras, the remaining spacecraft systems have no requirements more stringent than those which are well understood and proven on the current Chandra X-ray Observatory.
APOLLO 16 ASTRONAUTS JOHN YOUNG AND CHARLES DUKE EXAMINE FAR ULTRAVIOLET CAMERA
NASA Technical Reports Server (NTRS)
1971-01-01
Apollo 16 Lunar Module Pilot Charles M. Duke, Jr., left and Mission Commander John W. Young examine Far Ultraviolet Camera they will take to the Moon in March. They will measure the universe's ultraviolet spectrum. They will be launched to the Moon no earlier than March 17, 1972, with Command Module Pilot Thomas K. Mattingly, II.
Optical correlator method and apparatus for particle image velocimetry processing
NASA Technical Reports Server (NTRS)
Farrell, Patrick V. (Inventor)
1991-01-01
Young's fringes are produced from a double exposure image of particles in a flowing fluid by passing laser light through the film and projecting the light onto a screen. A video camera receives the image from the screen and controls a spatial light modulator. The spatial modulator has a two-dimensional array of cells whose transmissivity is controlled in relation to the brightness of the corresponding pixel of the video camera image of the screen. A collimated beam of laser light is passed through the spatial light modulator to produce a diffraction pattern which is focused onto another video camera, with the output of the camera being digitized and provided to a microcomputer. The diffraction pattern formed when the laser light is passed through the spatial light modulator and is focused to a point corresponds to the two-dimensional Fourier transform of the Young's fringe pattern projected onto the screen. This invention was made with U.S. Government support awarded by the Department of the Army (DOD) and NASA, grant number(s): DOD #DAAL03-86-K0174 and NASA #NAG3-718. The U.S. Government has certain rights in this invention.
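The optical chain has a direct digital analogue: the power spectrum of the double-exposure image is the Young's-fringe pattern, and Fourier-transforming the fringes again gives the autocorrelation, whose off-center peak is the particle displacement. A numerical sketch (a simplification of the optical correlator, using point particles):

```python
import numpy as np

def displacement_from_double_exposure(img):
    """Estimate the particle shift between the two exposures of a
    double-exposure PIV image via the FFT-based autocorrelation."""
    F = np.fft.fft2(img)
    fringes = np.abs(F) ** 2            # Young's fringe pattern (power spectrum)
    autocorr = np.real(np.fft.ifft2(fringes))
    autocorr[0, 0] = 0                  # suppress the zero-shift peak
    dy, dx = np.unravel_index(np.argmax(autocorr), autocorr.shape)
    h, w = img.shape                    # fold wrap-around shifts to signed offsets
    if dy > h // 2: dy -= h
    if dx > w // 2: dx -= w
    return dy, dx
```

Because the autocorrelation is symmetric, the displacement is recovered up to a sign, exactly as in the optical fringe analysis.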
DOE Office of Scientific and Technical Information (OSTI.GOV)
Birch, Gabriel Carisle; Griffin, John Clark
2015-01-01
The horizontal television lines (HTVL) metric has been the primary quantity used by division 6000 related to camera resolution for high consequence security systems. This document shows HTVL measurements are fundamentally insufficient as a metric to determine camera resolution, and proposes a quantitative, standards-based methodology: measuring the camera system modulation transfer function (MTF), the most common and accepted metric of resolution in the optical science community. Because HTVL calculations are easily misinterpreted or poorly defined, we present several scenarios in which HTVL is frequently reported, and discuss their problems. The MTF metric is discussed, and scenarios are presented with calculations showing the application of such a metric.
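The standard edge-based route to the MTF can be sketched numerically: differentiate the measured edge-spread function into a line-spread function, then take the normalized Fourier magnitude. This 1-D toy ignores the slanted-edge oversampling a full standards-based measurement (e.g. ISO 12233-style) would use:

```python
import numpy as np

def mtf_from_edge(esf):
    """Edge-based MTF estimate: ESF -> LSF by differentiation,
    then normalized FFT magnitude of the LSF."""
    lsf = np.diff(esf)
    mtf = np.abs(np.fft.rfft(lsf))
    return mtf / mtf[0]
```

A perfect step edge gives a flat MTF of 1 at all frequencies, while any blur lowers the curve at higher spatial frequencies, which is the quantitative behavior the HTVL number fails to capture.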
Dynamic Human Body Modeling Using a Single RGB Camera.
Zhu, Haiyu; Yu, Yao; Zhou, Yu; Du, Sidan
2016-03-18
In this paper, we present a novel automatic pipeline to build personalized parametric models of dynamic people using a single RGB camera. Compared to previous approaches that use monocular RGB images, our system can model a 3D human body automatically and incrementally, taking advantage of human motion. Based on coarse 2D and 3D poses estimated from image sequences, we first perform a kinematic classification of human body parts to refine the poses and obtain reconstructed body parts. Next, a personalized parametric human model is generated by driving a general template to fit the body parts and calculating the non-rigid deformation. Experimental results show that our shape estimation method achieves comparable accuracy with reconstructed models using depth cameras, yet requires neither user interaction nor any dedicated devices, leading to the feasibility of using this method on widely available smart phones.
A Self-Assessment Stereo Capture Model Applicable to the Internet of Things
Lin, Yancong; Yang, Jiachen; Lv, Zhihan; Wei, Wei; Song, Houbing
2015-01-01
The realization of the Internet of Things greatly depends on the information communication among physical terminal devices and informationalized platforms, such as smart sensors, embedded systems and intelligent networks. Playing an important role in information acquisition, sensors for stereo capture have gained extensive attention in various fields. In this paper, we concentrate on promoting such sensors in an intelligent system with self-assessment capability to deal with the distortion and impairment in long-distance shooting applications. The core design is the establishment of the objective evaluation criteria that can reliably predict shooting quality with different camera configurations. Two types of stereo capture systems—toed-in camera configuration and parallel camera configuration—are taken into consideration respectively. The experimental results show that the proposed evaluation criteria can effectively predict the visual perception of stereo capture quality for long-distance shooting. PMID:26308004
Digital Earth Watch: Investigating the World with Digital Cameras
NASA Astrophysics Data System (ADS)
Gould, A. D.; Schloss, A. L.; Beaudry, J.; Pickle, J.
2015-12-01
Every digital camera, including the smartphone camera, can be a scientific tool. Pictures contain millions of color intensity measurements organized spatially, allowing us to measure properties of objects in the images. This presentation will demonstrate how digital pictures can be used for a variety of studies, with a special emphasis on using repeat digital photographs to study change over time in outdoor settings with a Picture Post. Demonstrations will include using inexpensive color filters to take pictures that enhance features in images, such as unhealthy leaves on plants or clouds in the sky. Software available at no cost from the Digital Earth Watch (DEW) website that lets students explore light, color and pixels, manipulate color in images, and make measurements will be demonstrated. DEW and Picture Post were developed with support from NASA. Please visit our websites: DEW: http://dew.globalsystemsscience.org and Picture Post: http://picturepost.unh.edu
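As a concrete example of treating pixels as measurements, the green chromatic coordinate, a greenness index widely used with repeat photography of vegetation (an illustrative choice, not necessarily the exact quantity computed by the DEW software), reduces an image to a single number that can be tracked over time:

```python
import numpy as np

def green_chromatic_coordinate(rgb):
    """Mean greenness index G / (R + G + B) over an RGB image
    (or a region of interest cropped from a Picture Post photograph)."""
    rgb = rgb.astype(float)
    total = rgb.sum(axis=-1)
    gcc = np.where(total > 0, rgb[..., 1] / np.maximum(total, 1e-9), 0.0)
    return gcc.mean()
```

Plotting this value for the same Picture Post view across seasons turns a stack of snapshots into a quantitative phenology record.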
Khandoobhai, Anand; Leadon, Kim
2012-01-01
Objective. To determine whether a 2-year continuing professional development (CPD) training program improved first-year (P1) and second-year (P2) pharmacy students’ ability to write SMART (specific, measurable, achievable, relevant, and timed) learning objectives. Design. First-year students completed live or online CPD training, including creating portfolios and writing SMART objectives prior to their summer introductory pharmacy practice experience (IPPE). In year 2, P1 and P2 students were included. SMART learning objectives were graded and analyzed. Assessment. On several objectives, the 2011 P1 students (n = 130) scored higher than did the P2 cohort (n = 105). In 2011, P2 students outscored their own performance in 2010. In 2011, P1 students who had been trained in online modules performed the same as did live-session trainees with respect to SMART objectives. Conclusion. With focused online or live training, students are capable of incorporating principles of CPD by writing SMART learning objectives. PMID:22611277
Septic safe interactions with smart glasses in health care.
Czuszynski, K; Ruminski, J; Kocejko, T; Wtorek, J
2015-08-01
In this paper, septic-safe methods of interaction with smart glasses are presented, motivated by health care environment applications. The main focus is on the capabilities of an optical, proximity-based gesture sensor and eye-tracker input systems. The design of both interfaces is being adapted to the open smart glasses platform that is being developed under the eGlasses project. Preliminary results obtained from the proximity sensor show that the recognition of different static and dynamic hand gestures is promising. The experiments performed for the eye-tracker module showed the possibility of interaction with a simple Graphical User Interface provided by the near-to-eye display. The research leads to the conclusion that collaborative interfaces are attractive for interaction with smart glasses.
Chien, Ying-Ren; Chen, Yu-Xian
2018-01-01
This study designed a radio-frequency identification (RFID)-based Internet of Things (IoT) platform to create the core of a smart nest box. At the sensing level, we have deployed RFID-based sensors and egg-detection sensors. A low-frequency RFID reader is installed in the bottom of the nest box and a foot ring RFID tag is worn on the leg of individual hens. The RFID-based sensors detect when a hen enters or exits the nest box. The egg-detection sensors are implemented with a resistance strain gauge pressure sensor, which weighs the egg in the egg-collection tube. Thus, the smart nest box makes it possible to analyze the laying performance and behavior of individual hens. An evaluative experiment was performed using an enriched cage, a smart nest box, web camera, and monitoring console. The hens were allowed 14 days to become accustomed to the experimental environment before monitoring began. The proposed IoT platform makes it possible to analyze the egg yield of individual hens in real time, thereby enabling the replacement of hens with egg yield below a pre-defined level in order to meet the overall target egg yield rate. The results of this experiment demonstrate the efficacy of the proposed RFID-based smart nest box in monitoring the egg yield and laying behavior of individual hens. PMID:29538334
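The attribution logic such a platform needs, matching strain-gauge egg events to the RFID-identified occupant of the box, can be sketched as a fold over the event stream (the event format is hypothetical; the paper does not publish its software):

```python
def attribute_eggs(events):
    """Attribute each egg to the hen occupying the nest box when it was laid.
    `events` is a time-ordered list of tuples:
      ("enter", hen_id) / ("exit", hen_id) from the RFID reader,
      ("egg", weight_g) from the strain-gauge sensor in the collection tube."""
    occupant = None
    yield_per_hen = {}
    for kind, value in events:
        if kind == "enter":
            occupant = value
        elif kind == "exit" and occupant == value:
            occupant = None
        elif kind == "egg" and occupant is not None:
            yield_per_hen.setdefault(occupant, []).append(value)
    return yield_per_hen
```

Per-hen totals from this table are what would drive the replacement decision for hens whose yield falls below the pre-defined level.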
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Farhadi, Hamed
2017-04-01
This study reports on the analysis of appropriately designed fluvial experiments investigating the transport of coarse bed material using two approaches: particle tracking velocimetry (PTV) to extract bulk transport parameters and inertia sensor data (via the use of "smart-pebbles") to obtain refined statistics for the transport of the particle. The purpose of this study is to provide further insight on the use of technologies (optical techniques and inertial sensors) that are complementary to one another, towards producing improved estimates of bedload transport in natural rivers. The experiments are conducted in the Water Engineering Lab at the University of Glasgow in a 90 cm wide tilting recirculating flume. Ten different discharges have been implemented in this study. A couple of fake beds, made of well-packed beads of three different sizes, have been set up in the flume. The particle motion is captured by two high-speed commercial cameras, responsible for recording the top view covering the full length of the fake beds over which the "smart-pebble" is allowed to be transported. "Smart-pebbles" of four different densities are initially located at the upstream end of the configuration, fully exposed to the instream flow. These are instrumented with appropriate inertial sensors that allow recording the particle's motion, in the Lagrangian frame, at high resolution. Specifically, the "smart-pebble" employs a tri-axial gyroscope, magnetometer and accelerometer, which are utilized to obtain minute linear and angular displacements at high frequency (up to 200 Hz). However, these are not enough to accurately reconstruct the full trajectory of the particles rolling downstream. To that end, optical methods are used. In particular, by using particle tracking velocimetry data and image processing techniques, the location, orientation and velocities of the "smart-pebble" are derived.
Specific consideration is given to appropriately preprocessing the obtained video, as the captured frames need to be flattened and calibrated to account for lens distortion. Special effort is made to ensure the center of mass of the "smart-pebble" in each frame is well identified (using image thresholding techniques to improve colour contrast), so that its trajectory, comprising consecutive displacements, is accurately defined. It is sensible to follow a probabilistic analytical approach, considering the stochastic nature of particle transport at low transport rates. By using the output data from the camera and inertial sensor, particle transport velocity and acceleration time-series are produced for each fluvial transport experiment. To that end, empirical probability distribution functions (PDFs) are derived for the particle's motion features from both techniques and best fits for these are estimated. The parameters of the probability distribution functions are plotted against the particle Reynolds number for all the transport experiments, to identify any trends. Such information can help calibrate the "smart-pebble" for sediment transport studies and can also offer novel insights on the mechanisms of particle transport, from a Lagrangian perspective.
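One simple way to carry out the distribution fitting described above is the method of moments; here with a gamma model, an illustrative choice often used for low-rate bedload particle velocities rather than necessarily the fit the study adopts:

```python
import numpy as np

def fit_gamma_moments(samples):
    """Method-of-moments fit of a gamma distribution to a 1-D sample:
    shape k = mean^2 / var, scale theta = var / mean."""
    m = np.mean(samples)
    v = np.var(samples)
    return m * m / v, v / m  # (k, theta)
```

Fitting the same model to the camera-derived and sensor-derived velocity samples, and comparing the fitted parameters across particle Reynolds numbers, is one concrete way to realize the trend analysis the abstract describes.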
Data rate enhancement of optical camera communications by compensating inter-frame gaps
NASA Astrophysics Data System (ADS)
Nguyen, Duy Thong; Park, Youngil
2017-07-01
Optical camera communications (OCC) is a convenient way of transmitting data between LED lamps and the image sensors that are included in most smart devices. Although many schemes have been suggested to increase the data rate of the OCC system, it is still much lower than that of the photodiode-based LiFi system. This low data rate is largely attributed to the inter-frame gap (IFG) of the image sensor system, that is, the time gap between consecutive image frames. In this paper, we propose a way to compensate for this IFG efficiently by an interleaved Hamming coding scheme. The proposed scheme is implemented and its performance is measured.
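The combination can be illustrated with the classic Hamming(7,4) code and a bit-level interleaver that spreads each codeword across seven frames, so that one frame's worth of bits swallowed by the inter-frame gap costs every codeword at most one bit, which the code can correct. A sketch in the spirit of the scheme, not its published detail:

```python
import numpy as np

# Hamming(7,4): G = [I | P] maps a 4-bit nibble to a 7-bit codeword that can
# correct any single bit error; H = [P^T | I] is the parity-check matrix.
G = np.array([[1, 0, 0, 0, 1, 1, 0],
              [0, 1, 0, 0, 1, 0, 1],
              [0, 0, 1, 0, 0, 1, 1],
              [0, 0, 0, 1, 1, 1, 1]])
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def encode(nibbles):
    return [list((np.array(n) @ G) % 2) for n in nibbles]

def decode(word):
    r = np.array(word)
    s = (H @ r) % 2
    if s.any():  # nonzero syndrome points at the single corrupted bit
        err = int(np.where((H.T == s).all(axis=1))[0][0])
        r[err] ^= 1
    return [int(b) for b in r[:4]]

def interleave(codewords):
    # frame j carries bit j of every codeword
    return [[cw[j] for cw in codewords] for j in range(7)]

def deinterleave(frames):
    return [[frames[j][i] for j in range(7)] for i in range(len(frames[0]))]
```

Without the interleaver, a lost frame would wipe out whole codewords; with it, the loss is diluted to one bit per codeword and fully recovered.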
Optical stereo video signal processor
NASA Technical Reports Server (NTRS)
Craig, G. D. (Inventor)
1985-01-01
An optical video signal processor is described which produces a two-dimensional cross-correlation in real time of images received by a stereo camera system. The optical image of each camera is projected on a respective liquid crystal light valve. The images on the liquid crystal valves modulate light produced by an extended light source. This modulated light output becomes the two-dimensional cross-correlation when focused onto a video detector and is a function of the range of a target with respect to the stereo camera. Alternate embodiments utilize the two-dimensional cross-correlation to determine target movement and target identification.
Neutronics Analysis of SMART Small Modular Reactor using SRAC 2006 Code
NASA Astrophysics Data System (ADS)
Ramdhani, Rahmi N.; Prastyo, Puguh A.; Waris, Abdul; Widayani; Kurniadi, Rizal
2017-07-01
Small modular reactors (SMRs) are part of a new generation of nuclear reactor being developed worldwide. One of the advantages of SMRs is the flexibility to adopt advanced design concepts and technology. SMART (System integrated Modular Advanced ReacTor) is a small-sized integral-type PWR with a thermal power of 330 MW that has been developed by KAERI (Korea Atomic Energy Research Institute). The SMART core consists of 57 fuel assemblies which are based on the well-proven 17×17 array that has been used in Korean commercial PWRs. SMART is soluble-boron free, and the high initial reactivity is mainly controlled by burnable absorbers. The goal of this study is to perform a neutronics evaluation of the SMART core with UO2 as the main fuel. The neutronics calculation was performed by using the PIJ and CITATION modules of the SRAC 2006 code with JENDL 3.3 as the nuclear data library.
Simultaneous modulated accelerated radiation therapy for esophageal cancer: a feasibility study.
Zhang, Wu-Zhe; Chen, Jian-Zhou; Li, De-Rui; Chen, Zhi-Jian; Guo, Hong; Zhuang, Ting-Ting; Li, Dong-Sheng; Zhou, Ming-Zhen; Chen, Chuang-Zhen
2014-10-14
To establish the feasibility of simultaneous modulated accelerated radiation therapy (SMART) in esophageal cancer (EC). Computed tomography (CT) datasets of 10 patients with upper or middle thoracic squamous cell EC undergoing chemoradiotherapy were used to generate SMART, conventionally-fractionated three-dimensional conformal radiotherapy (3DCRT) and intensity-modulated radiation therapy (cf-IMRT) plans, respectively. The gross target volume (GTV) of the esophagus, positive regional lymph nodes (LN), and suspected lymph nodes (LN ±) were contoured for each patient. The clinical target volume (CTV) was delineated with 2-cm longitudinal and 0.5- to 1.0-cm radial margins with respect to the GTV and with 0.5-cm uniform margins for LN and LN(±). For the SMART plans, there were two planning target volumes (PTVs): PTV66 = (GTV + LN) + 0.5 cm and PTV54 = CTV + 0.5 cm. For the 3DCRT and cf-IMRT plans, there was only a single PTV: PTV60 = CTV + 0.5 cm. The prescribed dose for the SMART plans was 66 Gy/30 F to PTV66 and 54 Gy/30 F to PTV54. The dose prescription to the PTV60 for both the 3DCRT and cf-IMRT plans was set to 60 Gy/30 F. All the plans were generated on the Eclipse 10.0 treatment planning system. Fulfillment of the dose criteria for the PTVs received the highest priority, followed by the spinal cord, heart, and lungs. The dose-volume histograms were compared. Clinically acceptable plans were achieved for all the SMART, cf-IMRT, and 3DCRT plans. Compared with the 3DCRT plans, the SMART plans increased the dose delivered to the primary tumor (66 Gy vs 60 Gy), with improved sparing of normal tissues in all patients. The Dmax of the spinal cord, V20 of the lungs, and Dmean and V50 of the heart for the SMART and 3DCRT plans were as follows: 38.5 ± 2.0 vs 44.7 ± 0.8 (P = 0.002), 17.1 ± 4.0 vs 25.8 ± 5.0 (P = 0.000), 14.4 ± 7.5 vs 21.4 ± 11.1 (P = 0.000), and 4.9 ± 3.4 vs 12.9 ± 7.6 (P = 0.000), respectively. 
In contrast to the cf-IMRT plans, the SMART plans permitted a simultaneous dose escalation (6 Gy) to the primary tumor while demonstrating a significant trend of a lower irradiation dose to all organs at risk except the spinal cord, for which no significant difference was found. SMART offers the potential for a 6 Gy simultaneous escalation in the irradiation dose delivered to the primary tumor of EC and improves the sparing of normal tissues.
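The dose-volume quantities compared here (V20, V50, Dmean, Dmax) all derive from the cumulative dose-volume histogram, which is straightforward to compute from a structure's per-voxel doses (a schematic version, not the Eclipse 10.0 implementation):

```python
import numpy as np

def dvh(dose_voxels, max_dose=70.0, step=1.0):
    """Cumulative dose-volume histogram: for each dose level D, the fraction
    of the structure's voxels receiving at least D (e.g. V20 = fraction
    receiving >= 20 Gy)."""
    levels = np.arange(0.0, max_dose + step, step)
    volume = np.array([(dose_voxels >= d).mean() for d in levels])
    return levels, volume
```

Reading V20 off the lung curve and Dmax off the spinal-cord curve of such histograms is exactly the comparison reported between the SMART, cf-IMRT, and 3DCRT plans.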
Generic Module for Collecting Data in Smart Cities
NASA Astrophysics Data System (ADS)
Martinez, A.; Ramirez, F.; Estrada, H.; Torres, L. A.
2017-09-01
The Future Internet brings new technologies into the everyday life of people, such as the Internet of Things, Cloud Computing and Big Data. All these technologies have changed the way people communicate, and also the way devices interact with their context, giving rise to new paradigms such as the smart city. Currently, mobile devices represent one of the main sources of information for new applications that take the user context into account, such as apps for mobility, health, or security. Several platforms have been proposed for the development of Future Internet applications; however, no generic modules can be found that implement the collection of context data from smartphones. In this research work we present a generic module that collects data from the different sensors of a mobile device and sends this data, in a standard manner, to the Open FIWARE Cloud to be stored or analyzed by software tools. The proposed module enables the human-as-a-sensor approach for the FIWARE Platform.
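As a concrete illustration of "sending this data in a standard manner", the sketch below builds an NGSI-v2 entity payload of the kind such a module could POST to a FIWARE Orion Context Broker at `/v2/entities`. The device ID and attribute names are hypothetical, not taken from the paper.

```python
import json

def ngsi_entity(device_id, lat, lon, readings):
    """Build an NGSI-v2 entity payload for the FIWARE Orion Context Broker.
    Attribute names here are illustrative, not from the paper."""
    entity = {
        "id": f"urn:ngsi-ld:Device:{device_id}",
        "type": "Device",
        "location": {
            "type": "geo:json",
            # GeoJSON uses [longitude, latitude] ordering
            "value": {"type": "Point", "coordinates": [lon, lat]},
        },
    }
    for name, value in readings.items():
        entity[name] = {"type": "Number", "value": value}
    return entity

# Example: a smartphone reporting temperature and noise level
payload = ngsi_entity("phone001", 19.43, -99.13,
                      {"temperature": 21.5, "noiseLevel": 48.0})
print(json.dumps(payload, indent=2))
```

In a real deployment the payload would be sent with an HTTP POST, together with whatever `Fiware-Service` headers the platform instance requires.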
Color correction pipeline optimization for digital cameras
NASA Astrophysics Data System (ADS)
Bianco, Simone; Bruna, Arcangelo R.; Naccari, Filippo; Schettini, Raimondo
2013-04-01
The processing pipeline of a digital camera converts the RAW image acquired by the sensor to a representation of the original scene that should be as faithful as possible. There are mainly two modules responsible for the color-rendering accuracy of a digital camera: the former is the illuminant estimation and correction module, and the latter is the color matrix transformation aimed at adapting the color response of the sensor to a standard color space. These two modules together form what may be called the color correction pipeline. We design and test new color correction pipelines that exploit different illuminant estimation and correction algorithms that are tuned and automatically selected on the basis of the image content. Since illuminant estimation is an ill-posed problem, illuminant correction is not error-free. An adaptive color matrix transformation module is optimized, taking into account the behavior of the first module in order to alleviate the amplification of color errors. The proposed pipelines are tested on a publicly available dataset of RAW images. Experimental results show that exploiting the cross-talk between the modules of the pipeline can lead to a higher color-rendition accuracy.
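A toy version of the two-module pipeline can be sketched in pure Python. The example below uses gray-world illuminant estimation, one of the simplest of the many algorithms the authors tune and select among, followed by a 3 × 3 color matrix; the identity matrix stands in for a real calibrated sensor-to-sRGB transform.

```python
def gray_world_gains(img):
    """Module 1: gray-world illuminant estimation.
    img is a list of rows of (r, g, b) tuples with values in [0, 1]."""
    n = sum(len(row) for row in img)
    means = [sum(px[c] for row in img for px in row) / n for c in range(3)]
    gray = sum(means) / 3.0
    return [gray / m for m in means]   # per-channel correction gains

def apply_pipeline(img, matrix):
    """Illuminant correction followed by a 3x3 color matrix transform."""
    gains = gray_world_gains(img)
    out = []
    for row in img:
        out_row = []
        for px in row:
            bal = [px[c] * gains[c] for c in range(3)]       # module 1
            out_row.append(tuple(                             # module 2
                sum(matrix[i][j] * bal[j] for j in range(3)) for i in range(3)))
        out.append(out_row)
    return out

IDENTITY = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]  # placeholder color matrix
scene = [[(0.4, 0.2, 0.1), (0.8, 0.4, 0.2)]]  # two pixels with a reddish cast
balanced = apply_pipeline(scene, IDENTITY)
```

After gray-world correction, this toy scene (whose pixels all share the same cast) comes out neutral, which is exactly the assumption gray-world makes; the paper's point is precisely that a single fixed algorithm like this is not error-free, so the matrix module must be adapted to it.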
SMART-1 - the lunar adventure begins
NASA Astrophysics Data System (ADS)
2003-08-01
On the one hand, SMART-1 will test new state-of-the-art instruments and techniques essential to ambitious future interplanetary missions, such as a solar-electric primary propulsion system. On the other, SMART-1 will answer pending scientific questions, addressing key issues such as the Moon's formation, its precise mineralogical composition, and the presence and quantity of water. These data will help scientists to understand the Earth-Moon system and Earth-like planets, and will also provide invaluable information when considering a long-lasting human presence on the Moon. On 15 July 2003, SMART-1 was shipped to the European launch base in Kourou, French Guiana, where it is being prepared for its launch, due to take place on an Ariane-5 rocket on 29 August 2003 (Central European Summer Time). For the first time, SMART-1 will combine the power obtained by solar-electric propulsion - never used before by Europe as a main propulsion system - with lunar gravity. It will not follow a direct path to cover the 400 000-kilometre distance between the Earth and the Moon. Instead, from an elliptical orbit around the Earth where it is placed by the rocket, SMART-1 will gradually expand the orbit in a spiral pathway that will bring it closer to the Moon every month. Finally, the Moon’s gravitational field will capture the spacecraft. SMART-1 will not land on the Moon, but will make its observations from orbit, obtaining a global view. When it reaches its destination, in December 2004, it will enter orbit around the Moon and make measurements for a period of six months, possibly extended to one year. Why the Moon? Water, minerals, and a violent origin “Our knowledge of the Moon is still surprisingly incomplete,” says Bernard Foing, ESA’s SMART-1 Project Scientist. “We still want to know how the Earth-Moon system formed and evolved, as well as the role of geophysical processes such as volcanism, tectonics, cratering, or erosion in shaping the Moon.
And, of course, in preparation for future lunar and planetary exploration, we need to find resources and landing sites.” So, there are many unsolved questions about the Moon, even though six NASA Apollo missions and three unmanned Soviet spacecraft have landed on it and brought back rock samples. The far side of the Moon (the one that never faces Earth) and the polar regions remain fairly unexplored. The existence of water on the Moon has also never been confirmed, although two orbiters in the 1990s found indirect evidence. We are not even sure how the Moon was formed. According to the most accepted theory, 4500 million years ago an asteroid the size of Mars collided with our planet, and the vapourised debris that went into space condensed to form the Moon. SMART-1 will map the Moon's topography, as well as the surface distribution of minerals such as pyroxenes, olivines, and feldspars. Also, an X-ray detector will identify key chemical elements in the lunar surface. These data will allow scientists to reconstruct the geological evolution of the Moon, and to search for traces of the impact with the giant asteroid. If the collision theory is right, the Moon should contain less iron than the Earth, in proportion to lighter elements such as magnesium and aluminium. By gauging the relative amounts of chemical elements comprehensively for the very first time, SMART-1 can make a significant contribution to resolving this issue. As for water, if it exists, it must be in the form of ice in places always hidden from the Sun. In such places, the temperature will never rise above -170°C. Dark places like that could exist in the bottoms of small craters in the polar regions. Peering into these craters is maybe the trickiest task that the SMART-1 scientists have set themselves. They will look for the infrared signature of water ice.
It will be difficult because no direct light falls in those areas, but rays from nearby crater rims, catching the sunshine, may light the ice sufficiently for SMART-1 instruments to see it. New technologies to prepare for future interplanetary missions Future scientific missions will greatly profit from the technologies being tested on SMART-1. Solar-electric primary propulsion is a new propulsion technique based on so-called 'ion engines' that feed on electricity derived from solar panels. It is a technique that has only ever been used once before. These engines provide a very gentle thrust, but they work for years while conventional, more powerful chemical rockets burn for only a few minutes. Ion engines offer key advantages. They need considerably less propellant than chemical propulsion, which means less weight at launch and more mass available for scientific instruments and payload. Ion engines open the door to truly deep space exploration. They slash the time for interplanetary flight: although they provide less thrust they can last for years. The ion tortoise will therefore eventually overtake the chemical hare. Moreover, another application of the gentle thrust provided by electric propulsion allows very accurate spacecraft attitude control, a skill that will be useful for scientific missions that require highly precise and undisturbed pointing. Future ESA science missions will rely on ion engines. SMART-1 will also test new miniaturisation techniques that save space and economise on mass: in space, less mass per instrument enables scientists to have more instruments on board, so more science. The SMART-1 payload consists of a dozen technological and scientific investigations performed by seven instruments weighing only 19 kilograms in total. For example, the X-ray telescope D-CIXS consists of a cube just 15 centimetres wide and weighing less than 5 kilograms. The ultra-compact electronic camera, AMIE, weighs no more than an amateur’s camera.
New navigation and space-communication techniques will also be tested. An experiment called OBAN, based on images from the miniature camera AMIE and the star trackers, is the first step towards future 'autonomous' spacecraft. In a not-too-distant future, scientific satellites will be able to 'find their way' with a minimum of ground control, just by using stars and other celestial objects to guide themselves along predefined paths. As for communications, engineers need to develop new and efficient ways to communicate with Earth from deep space, for interplanetary missions that are long or go far. SMART-1 will test both very short radio waves (called Ka band, with the instrument KaTE) and a laser experiment to try to communicate with the Earth using a laser beam, instead of traditional radio frequencies. ESA already has laser links with telecommunications satellites from an optical ground station on Tenerife, in Spain’s Canary Islands. Aiming the beam becomes much more difficult if, like SMART-1, the spacecraft is far away and moving rapidly. Scientists hope that the on-board camera AMIE will see Tenerife aglow with laser light.
AMIE SMART-1: review of results and legacy 10 years after launch
NASA Astrophysics Data System (ADS)
Josset, Jean-Luc; Souchon, Audrey; Josset, Marie; Foing, Bernard
2014-05-01
The Advanced Moon micro-Imager Experiment (AMIE) camera was launched in September 2003 onboard the ESA SMART-1 spacecraft. We review the technical characteristics, scientific objectives and results of the instrument, 10 years after its launch. The AMIE camera is an ultra-compact imaging system that includes a tele-objective with a 5.3° x 5.3° field of view and an imaging sensor of 1024 x 1024 pixels. It is dedicated to spectral imaging with three spectral filters (750, 915 and 960 nm), photometric measurements (filter-free CCD area), and the laser-link experiment (laser filter at 847 nm). The AMIE camera was designed to acquire high-resolution images of the lunar surface, in white light and for specific spectral bands, under a number of different viewing conditions and geometries. Specifically, its main scientific objectives included: (i) imaging of high latitude regions in the southern hemisphere, in particular the South Pole Aitken basin and the permanently shadowed regions close to the South Pole; (ii) determination of the photometric properties of the lunar surface from observations at different phase angles (physical properties of the regolith); (iii) multi-band imaging for constraining the chemical and mineral composition of the surface; (iv) detection and characterisation of lunar non-mare volcanic units; (v) study of lithological variations from impact craters and implications for crustal heterogeneity. The study of AMIE images enhanced the knowledge of the lunar surface, in particular regarding photometric modelling and surface physical properties of localized lunar areas and geological units. References: http://scholar.google.nl/scholar?q=smart-1+amie We acknowledge ESA, member states, industry and institutes for their contribution, and the members of the AMIE Team: J.-L. Josset, P. Plancke, Y. Langevin, P. Cerroni, M. C. De Sanctis, P. Pinet, S. Chevrel, S. Beauvivre, B.A. Hofmann, M. Josset, D. Koschny, M. Almeida, K. Muinonen, J. Piironen, M. A.
Barucci, P. Ehrenfreund, Yu. Shkuratov, V. Shevchenko, Z. Sodnik, S. Mancuso, F. Ankersen, B.H. Foing, and other associated scientists, collaborators, students and colleagues.
Characterization of the LBNL PEM Camera
NASA Astrophysics Data System (ADS)
Wang, G.-C.; Huber, J. S.; Moses, W. W.; Qi, J.; Choong, W.-S.
2006-06-01
We present the tomographic images and performance measurements of the LBNL positron emission mammography (PEM) camera, a specially designed positron emission tomography (PET) camera that utilizes PET detector modules with depth of interaction measurement capability to achieve both high sensitivity and high resolution for breast cancer detection. The camera currently consists of 24 detector modules positioned as four detector banks to cover a rectangular patient port that is 8.2 × 6 cm² with a 5 cm axial extent. Each LBNL PEM detector module consists of 64 3 × 3 × 30 mm³ LSO crystals coupled to a single photomultiplier tube (PMT) and an 8 × 8 silicon photodiode array (PD). The PMT provides accurate timing, the PD identifies the crystal of interaction, the sum of the PD and PMT signals (PD+PMT) provides the total energy, and the PD/(PD+PMT) ratio determines the depth of interaction. The performance of the camera has been evaluated by imaging various phantoms. The full-width-at-half-maximum (FWHM) spatial resolution changes slightly from 1.9 mm to 2.1 mm when measured at the center and corner of the field of view, respectively, using a 6 ns coincidence timing window and a 300-750 keV energy window. With the same setup, the peak sensitivity of the camera is 1.83 kcps/µCi.
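The depth-of-interaction arithmetic described above is simple to sketch (pure Python; the signal values are made up, and real detector calibration is considerably more involved):

```python
def crystal_of_interaction(pd_array):
    """Return the flat index of the crystal with the largest signal
    in the 8 x 8 photodiode array."""
    flat = [v for row in pd_array for v in row]
    return flat.index(max(flat))

def depth_ratio(pd, pmt):
    """Total energy is PD + PMT; the PD/(PD + PMT) ratio estimates
    the depth of interaction along the 30 mm crystal."""
    total = pd + pmt
    return pd / total, total

pd_array = [[0.0] * 8 for _ in range(8)]
pd_array[3][5] = 42.0                       # hypothetical hit: row 3, column 5
crystal = crystal_of_interaction(pd_array)  # flat index 3*8 + 5
ratio, energy = depth_ratio(120.0, 280.0)   # illustrative PD and PMT signals
```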
NASA Astrophysics Data System (ADS)
Scianna, A.; La Guardia, M.
2018-05-01
Recently, the diffusion of knowledge on Cultural Heritage (CH) has become an element of primary importance for its valorization. At the same time, the spread of surveys based on Unmanned Aerial Vehicle (UAV) technologies and new methods of photogrammetric reconstruction have opened new possibilities for 3D CH representation. Furthermore, the recent development of faster and more stable internet connections leads people to make increasing use of mobile devices. In the light of all this, the development of Virtual Reality (VR) environments applied to CH is strategic for the diffusion of knowledge in a smart solution. In particular, the present work shows how, starting from a basic survey and the subsequent photogrammetric reconstruction of a cultural good, it is possible to build a 3D CH interactive information system usable on desktop and mobile devices. For this experimentation, the Arab-Norman church of the Trinity of Delia (in Castelvetrano, Sicily, Italy) has been adopted as a case study. The survey operations have been carried out considering different rapid methods of acquisition (UAV camera, SLR camera and smartphone camera). The web platform to publish the 3D information has been built using the HTML5 markup language and WebGL JavaScript libraries (Three.js). This work presents the construction of a 3D navigation system for web-browsing of a virtual CH environment, with the integration of first-person controls and 3D popup links. This contribution adds a further step to enrich the possibilities of open-source technologies applied to the world of CH valorization on the web.
Unstructured Facility Navigation by Applying the NIST 4D/RCS Architecture
2006-07-01
(Fragmentary full-text excerpt; block-diagram residue removed.) The recoverable content lists the vehicle's hardware: wireless data and emergency-stop radios, a GPS receiver with antenna, an inertial navigation unit, two pairs of stereo color cameras, infrared bumper sensors and a physical bumper, and wheel-motor actuators with camera controls; the stereo cameras and bumper sensors feed the sensory processing module of the 4D/RCS architecture.
Pipe inspection and repair system
NASA Technical Reports Server (NTRS)
Schempf, Hagen (Inventor); Mutschler, Edward (Inventor); Chemel, Brian (Inventor); Boehmke, Scott (Inventor); Crowley, William (Inventor)
2004-01-01
A multi-module pipe inspection and repair device. The device includes a base module, a camera module, a sensor module, an MFL module, a brush module, a patch set/test module, and a marker module. The modules may be interconnected to construct an inspection device, a preparation device, a marking device, or a repair device.
Multi-channel measurement for hetero-core optical fiber sensor by using CMOS camera
NASA Astrophysics Data System (ADS)
Koyama, Yuya; Nishiyama, Michiko; Watanabe, Kazuhiro
2015-07-01
Fiber optic smart structures have been developed over several decades alongside advances in fiber optic sensor technology. Optical intensity-based sensors, which use LDs or LEDs, keep the monitoring system simple and cost-effective. In this paper, a novel fiber optic smart structure with human-like perception is demonstrated using an intensity-based hetero-core optical fiber sensor system with a CMOS detector. The optical intensity from each hetero-core optical fiber bend sensor is obtained as a luminance spot indicating its optical power. A number of optical intensity spots are simultaneously read out by taking a picture of the luminance pattern. To recognize the state of the fiber optic smart structure with the hetero-core optical fibers, a template matching process is employed using the Sum of Absolute Differences (SAD). A fiber optic smart glove with five optical fiber nerves has been employed to monitor hand postures; three kinds of hand postures have been recognized by means of the template matching process. A body posture monitoring system has also been developed by placing wearable hetero-core optical fiber bend sensors on the body segments. In order for the CMOS system to act in a human brain-like manner, the luminescent spots in the captured picture were arranged into a pattern corresponding to the positions of the body segments. As a result, it was successfully demonstrated that the proposed fiber optic smart structure could recognize eight kinds of body postures. The developed system will give a capability of human brain-like processing to existing fiber optic smart structures.
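The SAD-based template matching step can be sketched generically; this is a plain exhaustive search over a toy intensity image, not the authors' implementation:

```python
def sad(a, b):
    """Sum of Absolute Differences between two equally sized 2-D windows."""
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb))

def match_template(image, template):
    """Slide the template over the image and return ((row, col), score)
    of the window with the minimum SAD."""
    th, tw = len(template), len(template[0])
    best, best_pos = None, None
    for r in range(len(image) - th + 1):
        for c in range(len(image[0]) - tw + 1):
            window = [row[c:c + tw] for row in image[r:r + th]]
            score = sad(window, template)
            if best is None or score < best:
                best, best_pos = score, (r, c)
    return best_pos, best

# Toy luminance picture with the template embedded at row 2, column 3
image = [[0] * 6 for _ in range(6)]
template = [[9, 8], [7, 6]]
for dr in range(2):
    for dc in range(2):
        image[2 + dr][3 + dc] = template[dr][dc]
pos, score = match_template(image, template)
```

In the paper's setting, each stored posture would have its own reference luminance pattern, and the posture whose template scores the lowest SAD against the captured picture is reported.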
NASA Technical Reports Server (NTRS)
2002-01-01
Goddard Space Flight Center and Triangle Research & Development Corporation collaborated to create "Smart Eyes," a charge coupled device camera that, for the first time, could read and measure bar codes without the use of lasers. The camera operated in conjunction with software and algorithms created by Goddard and Triangle R&D that could track bar code position and direction with speed and precision, as well as with software that could control robotic actions based on vision system input. This accomplishment was intended for robotic assembly of the International Space Station, helping NASA to increase production while using less manpower. After successfully completing the two-phase SBIR project with Goddard, Triangle R&D was awarded a separate contract from the U.S. Department of Transportation (DOT), which was interested in using the newly developed NASA camera technology to heighten automotive safety standards. In 1990, Triangle R&D and the DOT developed a mask made from a synthetic, plastic skin covering to measure facial lacerations resulting from automobile accidents. By pairing NASA's camera technology with Triangle R&D's and the DOT's newly developed mask, a system that could provide repeatable, computerized evaluations of laceration injury was born.
View of Scientific Instrument Module to be flown on Apollo 15
1971-06-27
S71-2250X (June 1971) --- A close-up view of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 lunar landing mission. Mounted in a previously vacant sector of the Apollo Service Module (SM), the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data. SIM equipment includes a laser altimeter for accurate measurement of height above the lunar surface; a large-format panoramic camera for mapping, correlated with a metric camera and the laser altimeter for surface mapping; a gamma ray spectrometer on a 25-foot extendible boom; a mass spectrometer on a 21-foot extendible boom; X-ray and alpha particle spectrometers; and a subsatellite which will be injected into lunar orbit carrying particle and magnetometer experiments and the S-band transponder.
Laser guide star pointing camera for ESO LGS Facilities
NASA Astrophysics Data System (ADS)
Bonaccini Calia, D.; Centrone, M.; Pedichini, F.; Ricciardi, A.; Cerruto, A.; Ambrosino, F.
2014-08-01
Every observatory using LGS-AO routinely experiences the long time needed to bring and acquire the laser guide star in the wavefront sensor field of view. This is mostly due to the difficulty of creating LGS pointing models, because of the opto-mechanical flexures and hysteresis in the launch and receiver telescope structures. The launch telescopes normally sit on the mechanical structure of the larger receiver telescope, and the LGS acquisition time is even longer for multiple-LGS systems. In this framework, optimizing the absolute pointing accuracy of LGS systems is relevant to boost the time efficiency of both science and technical observations. In this paper we show the rationale, the design and the feasibility tests of an LGS Pointing Camera (LPC), which has been conceived for the VLT Adaptive Optics Facility 4LGSF project. The LPC would assist in pointing the four LGS while the VLT performs the initial active optics cycles to adjust its own optics on a natural star target after a preset. The LPC minimizes the accuracy needed for LGS pointing model calibrations, while allowing sub-arcsecond LGS absolute pointing accuracy to be reached. This considerably reduces the LGS acquisition time and observation operation overheads. The LPC is a smart CCD camera, fed by a 150 mm diameter aperture Maksutov telescope mounted on the top ring of the VLT UT4, running Linux and acting as a server for the 4LGSF client. The smart camera is able to recognize the sky field within a few seconds using astrometric software, determining the absolute positions of the stars and the LGS. Upon request, it returns the offsets to apply to the LGS to position them at the required sky coordinates. As a byproduct goal, once calibrated, the LPC can calculate upon request, for each LGS, its return flux, its FWHM and the uplink beam scattering levels.
Development of SPIES (Space Intelligent Eyeing System) for smart vehicle tracing and tracking
NASA Astrophysics Data System (ADS)
Abdullah, Suzanah; Ariffin Osoman, Muhammad; Guan Liyong, Chua; Zulfadhli Mohd Noor, Mohd; Mohamed, Ikhwan
2016-06-01
SPIES, or Space-based Intelligent Eyeing System, is an intelligent technology which can be utilized for various applications, such as gathering spatial information on features on Earth, tracking the movement of an object, tracing historical information, monitoring driving behavior, and acting as a real-time observer for security and alarm systems, among many others. SPIES will be developed and supplied modularly, which will encourage usage based on the needs and affordability of users. SPIES is a complete system with camera, GSM, GPS/GNSS and G-Sensor modules with intelligent functions and capabilities. Mainly, the camera is used to capture pictures and video, sometimes with audio, of an event. Its usage is not limited to nostalgic purposes: it can serve as a reference for security and as material evidence when an undesirable event such as a crime occurs. When integrated with the space-based technology of the Global Navigation Satellite System (GNSS), photos and videos can be recorded together with positioning information. A product of the integration of these technologies, when combined with Information and Communication Technology (ICT) and Geographic Information Systems (GIS), will produce an innovative method of gathering still pictures or video with positioning information that can be conveyed in real time via the web to display location on a map, hence creating an intelligent eyeing system based on space technology. Providing global positioning information is a challenge, but SPIES overcomes it even in areas without GNSS signal reception, for the purpose of continuous tracking and tracing capability.
NASA Astrophysics Data System (ADS)
Li, Xingfeng; Gan, Chaoqin; Liu, Zongkang; Yan, Yuqi; Qiao, HuBao
2018-01-01
In this paper, a novel architecture of hybrid PON for smart grid is proposed by introducing a wavelength-routing module (WRM). Using conventional passive optical components, a WRM with M ports is designed. The symmetry and passivity of the WRM allow it to be easily integrated and make it very cheap in practice. Via the WRM, two types of network, based on different ONU-interconnection manners, can realize online access. Using optical switches and interconnecting fibers, full-fiber-fault protection and dynamic bandwidth allocation are realized in these networks. With the help of amplitude modulation, DPSK modulation and RSOA technology, wavelength triple-reuse is achieved. By injecting signals into the left and right branches of the access ring simultaneously, the transmission delay is decreased. Finally, the performance analysis and simulation of the network verify the feasibility of the proposed architecture.
Android platform based smartphones for a logistical remote association repair framework.
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-06-25
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use.
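The abstract does not spell out the Linear Projective Transform it uses for QR-code calibration, but the standard projective mapping used to rectify a detected quadrilateral (such as a skewed QR code) can be sketched with Heckbert's closed form for mapping the unit square onto an arbitrary quadrilateral; the corner coordinates below are illustrative only:

```python
def square_to_quad(q):
    """Projective map taking the unit square's corners (0,0),(1,0),(1,1),(0,1)
    to the four detected corners q = [(x0,y0), (x1,y1), (x2,y2), (x3,y3)].
    Inverting this map rectifies the skewed code back to a square."""
    (x0, y0), (x1, y1), (x2, y2), (x3, y3) = q
    dx1, dx2, dx3 = x1 - x2, x3 - x2, x0 - x1 + x2 - x3
    dy1, dy2, dy3 = y1 - y2, y3 - y2, y0 - y1 + y2 - y3
    if dx3 == 0 and dy3 == 0:              # degenerate perspective: affine case
        g = h = 0.0
    else:
        den = dx1 * dy2 - dy1 * dx2
        g = (dx3 * dy2 - dy3 * dx2) / den
        h = (dx1 * dy3 - dy1 * dx3) / den
    a, b, c = x1 - x0 + g * x1, x3 - x0 + h * x3, x0
    d, e, f = y1 - y0 + g * y1, y3 - y0 + h * y3, y0

    def warp(u, v):
        w = g * u + h * v + 1.0            # projective denominator
        return (a * u + b * v + c) / w, (d * u + e * v + f) / w
    return warp

# Hypothetical detected QR corners in camera coordinates
warp = square_to_quad([(0, 0), (4, 0), (3, 3), (0, 4)])
```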
Surveyor 3: Bacterium isolated from lunar retrieved television camera
NASA Technical Reports Server (NTRS)
Mitchell, F. J.; Ellis, W. L.
1972-01-01
Microbial analysis was the first of several studies of the retrieved camera and was performed immediately after the camera was opened. The emphasis of the analysis was placed upon isolating microorganisms that could be potentially pathogenic for man. Every step in the retrieval of the Surveyor 3 television camera was analyzed for possible contamination sources, including camera contact by the astronauts, ingassing in the lunar module and command module during the mission or at splashdown, and handling during quarantine, disassembly, and analysis at the Lunar Receiving Laboratory.
Apollo 17 Command/Service modules photographed from lunar module in orbit
1972-12-14
AS17-145-22254 (14 Dec. 1972) --- An excellent view of the Apollo 17 Command and Service Modules (CSM) photographed from the Lunar Module (LM) "Challenger" during rendezvous and docking maneuvers in lunar orbit. The LM ascent stage, with astronauts Eugene A. Cernan and Harrison H. Schmitt aboard, had just returned from the Taurus-Littrow landing site on the lunar surface. Astronaut Ronald E. Evans remained with the CSM in lunar orbit. Note the exposed Scientific Instrument Module (SIM) Bay in Sector 1 of the Service Module (SM). Three experiments are carried in the SIM bay: S-209 lunar sounder, S-171 infrared scanning spectrometer, and the S-169 far-ultraviolet spectrometer. Also mounted in the SIM bay are the panoramic camera, mapping camera and laser altimeter used in service module photographic tasks. A portion of the LM is on the right.
Lab-on-a-chip for the isolation and characterization of circulating tumor cells.
Stakenborg, Tim; Liu, Chengxu; Henry, Olivier; O'Sullivan, Ciara K; Fermer, Christian; Roeser, Tina; Ritzi-Lehnert, Marion; Hauch, Sigfried; Borgen, Elin; Laddach, Nadja; Lagae, Liesbet
2010-01-01
A smart miniaturized system is proposed for the isolation and characterization of circulating tumor cells (CTCs) directly from blood. Different microfluidic modules have been designed for cell enrichment and counting, multiplex mRNA amplification, and DNA detection. With the different modules at hand, future effort will focus on their integration into a fully automated, single platform.
An Open Source "Smart Lamp" for the Optimization of Plant Systems and Thermal Comfort of Offices.
Salamone, Francesco; Belussi, Lorenzo; Danza, Ludovico; Ghellere, Matteo; Meroni, Italo
2016-03-07
The article describes the design, development and practical application of a smart object integrated into a desk lamp, called the "Smart Lamp", used to optimize indoor thermal comfort and energy savings: two important workplace issues, since the comfort of the workers and the consumption of the building strongly affect the economic balance of a company. The Smart Lamp was built using a microcontroller, an integrated temperature and relative humidity sensor, some other modules and a 3D printer. This smart device is similar to the desk lamps usually found in offices, but it allows one to adjust the indoor thermal comfort by interacting directly with the air conditioner. After the construction phase, the Smart Lamp was installed in an office normally occupied by four workers to evaluate the indoor thermal comfort and the cooling consumption in summer. The results showed how the application of the Smart Lamp effectively reduced the energy consumption while optimizing the thermal comfort. The use of a DIY approach, combined with the read-write functionality of websites, blogs and social platforms, also allowed the technology to be customized, improved, shared, reproduced and interconnected, so that anybody could use it in any occupied environment.
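The paper does not publish its control law, but in its simplest form the comfort logic reduces to a setpoint with hysteresis, so the air conditioner does not chatter on and off around the threshold. A minimal sketch with illustrative values (the 25 °C setpoint and 1 °C deadband are assumptions, not taken from the article):

```python
class ComfortController:
    """Setpoint-with-hysteresis sketch of a Smart-Lamp-style comfort loop.
    Thresholds are illustrative only."""

    def __init__(self, setpoint=25.0, deadband=1.0):
        self.setpoint = setpoint
        self.deadband = deadband
        self.cooling = False

    def update(self, temperature):
        """Return the command to send to the air conditioner."""
        if temperature > self.setpoint + self.deadband:
            self.cooling = True       # too warm: request cooling
        elif temperature < self.setpoint - self.deadband:
            self.cooling = False      # cool enough: stop, save energy
        # inside the deadband the previous state is kept (hysteresis)
        return "cool" if self.cooling else "off"

ctrl = ComfortController()
states = [ctrl.update(t) for t in (27.0, 25.0, 23.5)]
```

The middle reading (25.0 °C) sits inside the deadband, so the controller keeps cooling rather than toggling, which is the energy-saving behavior hysteresis buys.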
Automated visual inspection system based on HAVNET architecture
NASA Astrophysics Data System (ADS)
Burkett, K.; Ozbayoglu, Murat A.; Dagli, Cihan H.
1994-10-01
In this study, the HAusdorff-Voronoi NETwork (HAVNET) developed at the UMR Smart Engineering Systems Lab is tested in the recognition of mounted circuit components commonly used in printed circuit board assembly systems. The automated visual inspection system used consists of a CCD camera, neural-network-based image processing software and a data acquisition card connected to a PC. The experiments were run in the Smart Engineering Systems Lab in the Engineering Management Dept. of the University of Missouri-Rolla. The performance analysis shows that the vision system is capable of recognizing different components under uncontrolled lighting conditions without being affected by rotation or scale differences. The results obtained are promising, and the system can be used in real manufacturing environments. Currently, the system is being customized for a specific manufacturing application.
Li, Tianlong; Chang, Xiaocong; Wu, Zhiguang; Li, Jinxing; Shao, Guangbin; Deng, Xinghong; Qiu, Jianbin; Guo, Bin; Zhang, Guangyu; He, Qiang; Li, Longqiu; Wang, Joseph
2017-09-26
Self-propelled micro- and nanoscale robots represent a rapidly emerging and fascinating robotics research area. However, designing autonomous and adaptive control systems for operating micro/nanorobots in complex and dynamically changing environments, a highly demanding capability, is still an unmet challenge. Here we describe a smart microvehicle for precise autonomous navigation in complicated environments and traffic scenarios. The fully autonomous navigation system of the smart microvehicle is composed of a microscope-coupled CCD camera, an artificial intelligence planner, and a magnetic field generator. The microscope-coupled CCD camera provides real-time localization of the chemically powered Janus microsphere vehicle and environmental detection for path planning to generate optimal collision-free routes, while the moving direction of the microrobot toward a reference position is determined by the external electromagnetic torque. Real-time object detection offers adaptive path planning in response to dynamically changing environments. We demonstrate that the autonomous navigation system can guide the vehicle movement in complex patterns, in the presence of dynamically changing obstacles, and in complex biological environments. Such a navigation system for micro/nanoscale vehicles, relying on vision-based closed-loop control and path planning, is highly promising for autonomous operation in the complex dynamic settings and unpredictable scenarios expected in a variety of realistic nanoscale applications.
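The abstract does not detail the planner's algorithm; as a minimal sketch of how collision-free routes can be generated on an occupancy grid built from the camera's obstacle detections, a breadth-first search suffices:

```python
from collections import deque

def plan_path(grid, start, goal):
    """Breadth-first search for a shortest collision-free route on a
    4-connected occupancy grid (0 = free cell, 1 = obstacle).
    A generic stand-in for the paper's AI planner, not their method."""
    rows, cols = len(grid), len(grid[0])
    prev = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:                 # reconstruct route back to start
            path = []
            while cell is not None:
                path.append(cell)
                cell = prev[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in prev:
                prev[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                          # no collision-free route exists
```

Re-planning against the latest obstacle map each frame gives the adaptive behavior described above.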
Power smart in-door optical wireless link design
NASA Astrophysics Data System (ADS)
Marraccini, P. J.; Riza, N. A.
2011-12-01
Presented for the first time, to the best of the authors' knowledge, is the design of a power smart in-door optical wireless link that provides lossless beam propagation between Transmitter (T) and Receiver (R) for changing link distances. Each T/R unit uses a combination of fixed and variable focal length optics to smartly adjust the laser beam propagation parameters of minimum beam waist size and its location to produce the optimal zero-propagation-loss coupling condition at the R for that link distance. An Electronically Controlled Variable Focus Lens (ECVFL) is used to form the wide field-of-view search beam and change the beam size at R to form a low loss beam. The T/R unit can also deploy camera optics and thermal energy harvesting electronics to improve link operational smartness and efficiency. To demonstrate the principles of the beam-conditioned low loss indoor link, a visible 633 nm laser link using an electro-wetting technology liquid ECVFL is demonstrated for a variable 1 to 4 m link range. Measurements indicate a 53% improvement over an unconditioned laser link at 4 m. Applications for this power efficient wireless link include mobile computer platform communications and agile server rack interconnections in data centres.
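The beam conditioning follows standard Gaussian beam propagation: the spot size at the receiver depends on the waist chosen at the transmitter, and an ECVFL lets the waist be retuned per link distance. A small sketch, assuming the waist is placed at the transmitter (not necessarily the authors' exact configuration):

```python
import math

def spot_radius(w0_m, link_m, wavelength_m=633e-9):
    """Gaussian beam 1/e^2 radius at the receiver when the waist w0
    sits at the transmitter (standard free-space propagation)."""
    z_r = math.pi * w0_m ** 2 / wavelength_m      # Rayleigh range
    return w0_m * math.sqrt(1.0 + (link_m / z_r) ** 2)

def optimal_waist(link_m, wavelength_m=633e-9):
    """Waist that minimizes the receiver spot for a given link length:
    w0* = sqrt(lambda * L / pi), giving a spot of sqrt(2) * w0*."""
    return math.sqrt(wavelength_m * link_m / math.pi)
```

At 633 nm over 4 m the optimal waist is about 0.9 mm, so a lens that can retune the waist as the range changes keeps the receiver spot, and hence the coupling loss, minimal.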
A universal data access and protocol integration mechanism for smart home
NASA Astrophysics Data System (ADS)
Shao, Pengfei; Yang, Qi; Zhang, Xuan
2013-03-01
Because communication interfaces in home electronics are either nonstandardized or missing altogether, no existing protocol or technology offers a complete solution for every aspect of the smart home. In addition, a central control unit (CCU) that works point-to-point between the multiple application interfaces and the underlying hardware interfaces suffers from a complicated architecture and poor performance. A flexible data access and protocol integration mechanism is therefore required. The current paper offers a universal, comprehensive data access and protocol integration mechanism for the smart home. The universal mechanism works as a middleware adapter with unified agreements on the communication interfaces and protocols; it abstracts the application level from hardware specifics and decouples the hardware interface modules from the application level. Further abstraction of the application interfaces and the underlying hardware interfaces is carried out in an adaptation layer that provides unified interfaces for more flexible user applications and hardware protocol integration. This new universal mechanism fundamentally changes the architecture of the smart home and better meets its practical requirements, making the system more flexible and desirable.
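A minimal sketch of the middleware-adapter idea, with a hypothetical mock protocol; class names and interfaces here are illustrative, not the paper's API:

```python
class DeviceAdapter:
    """Unified interface the CCU sees, regardless of wire protocol."""
    def read(self, device_id):
        raise NotImplementedError
    def write(self, device_id, value):
        raise NotImplementedError

class MockZigBeeAdapter(DeviceAdapter):
    """Stand-in for a real protocol driver behind the adaptation layer."""
    def __init__(self):
        self.state = {}
    def read(self, device_id):
        return self.state.get(device_id)
    def write(self, device_id, value):
        self.state[device_id] = value

class CentralControlUnit:
    """The CCU talks only to the unified interface, never point-to-point
    to each protocol, so adding a protocol means adding one adapter."""
    def __init__(self):
        self.adapters = {}
    def register(self, protocol, adapter):
        self.adapters[protocol] = adapter
    def set_device(self, protocol, device_id, value):
        self.adapters[protocol].write(device_id, value)
    def get_device(self, protocol, device_id):
        return self.adapters[protocol].read(device_id)
```

The design choice is the classic adapter pattern: the application level depends on `DeviceAdapter` alone, so hardware modules can be swapped without touching application code.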
NASA Astrophysics Data System (ADS)
Smith, C. W.; Broad, L.; Chen, L.; Farrugia, C. J.; Frederick-Frost, K.; Goelzer, S.; Kucharek, H.; Messeder, R.; Moebius, E.; Puhl-Quinn, P. A.; Torbert, R. B.
2009-12-01
For the past 19 years the University of New Hampshire has offered a unique research and education opportunity to motivated high-school students called Project SMART (Science and Mathematics Achievement through Research Training). The Space Science module is strongly research based. Students work in teams of two on real research projects carved from the research programs of the faculty. The projects are carefully chosen to match the abilities of the students. The students receive classes in basic physics as well as lectures in space science to help them with their work. This year the research included the analysis of magnetic reconnection observations and Crater FTE observation, both by the CLUSTER spacecraft, the building of Faraday cups for thermal ion measurements in our thermal vacuum facility, and analysis of the IBEX star sensor. In addition to this, the students work on one combined project and for the past several years this project has been the building of a payload for a high-altitude balloon. The students learn to integrate telemetry and GPS location hardware while they build several small experiments that they then fly to the upper reaches of the Earth's atmosphere. This year the payload included a small video camera and the payload flew to 96,000 feet, capturing images of weather patterns as well as the curvature of the Earth, thickness of the atmosphere, and black space. In addition to still photos, we will be showing 2- and 7-minute versions of the 90-minute flight video that include footage from peak altitude, the bursting of the balloon, and initial descent.
Qualification Tests of Micro-camera Modules for Space Applications
NASA Astrophysics Data System (ADS)
Kimura, Shinichi; Miyasaka, Akira
Visual capability is very important for space-based activities, for which small, low-cost space cameras are desired. Although cameras for terrestrial applications are continually being improved, little progress has been made on cameras used in space, which must be extremely robust to withstand harsh environments. This study focuses on commercial off-the-shelf (COTS) CMOS digital cameras because they are very small and are based on an established mass-market technology. Radiation and ultrahigh-vacuum tests were conducted on a small COTS camera that weighs less than 100 mg (including optics). This paper presents the results of the qualification tests for COTS cameras and for a small, low-cost COTS-based space camera.
Electronic recording of holograms with applications to holographic displays
NASA Technical Reports Server (NTRS)
Claspy, P. C.; Merat, F. L.
1979-01-01
The paper describes an electronic heterodyne recording technique which uses electro-optic modulation to introduce a sinusoidal phase shift between the object and reference waves. The resulting temporally modulated holographic interference pattern is scanned by a commercial image dissector camera, and the rejection of the self-interference terms is accomplished by heterodyne detection at the camera output. The electrical signal representing this processed hologram can then be used to modify the properties of a liquid crystal light valve or a similar device. Such display devices transform the displayed interference pattern into a phase-modulated wavefront, rendering a three-dimensional image.
Lu, Ji-Yun; Liang, Da-Kai; Zhang, Xiao-Li; Zhu, Zhu
2009-12-01
A scheme in which the spectrum of a fiber Bragg grating (FBG) sensor is modulated by a double long-period fiber grating (LPFG) is proposed in this paper. The double LPFG consists of two LPFGs whose center wavelengths are the same, and the reflection spectrum of the FBG sensor is located in the linear range of the double LPFG transmission spectrum. Based on spectral analysis of the FBG and the double LPFG, the reflection spectrum of the FBG modulated by the double LPFG is obtained and studied using the band-rejection filter characteristics of the double LPFG. An FBG sensor is attached to the surface of a thin steel beam, which is strained by bending so that the center wavelength of the FBG sensor shifts. The spectral peak of the FBG sensor modulated by the double LPFG changes correspondingly, and this spectral change leads to a variation in the light intensity exiting the double LPFG. Experiments demonstrate that the relation between the filtered light intensity from the double LPFG, monitored by an optical power meter, and the center wavelength shift of the FBG sensor is linear, and that the minimum strain of the material (steel beam) detected by the modulation and demodulation system is 1.05 microstrain. This solution is applied to impact monitoring of optical fiber smart structures: the FBG sensor monitors the impulse response signal induced by low-velocity impact when an impact pendulum strikes carbon fiber-reinforced plastics (CFP). The acquired impact response signal and its fast Fourier transform, as detected by the FBG sensor, agree with the measurements of an eddy current displacement meter attached to the FBG sensor. From these results, the present method using an FBG sensor is found to be effective for monitoring impacts. The research provides a practical reference for dynamic monitoring in the optical fiber smart structure field.
Smart Cameras for Remote Science Survey
NASA Technical Reports Server (NTRS)
Thompson, David R.; Abbey, William; Allwood, Abigail; Bekker, Dmitriy; Bornstein, Benjamin; Cabrol, Nathalie A.; Castano, Rebecca; Estlin, Tara; Fuchs, Thomas; Wagstaff, Kiri L.
2012-01-01
Communication with remote exploration spacecraft is often intermittent and bandwidth is highly constrained. Future missions could use onboard science data understanding to prioritize downlink of critical features [1], draft summary maps of visited terrain [2], or identify targets of opportunity for follow-up measurements [3]. We describe a generic approach to classifying geologic surfaces for autonomous science operations, suitable for parallelized implementations in FPGA hardware. We map these surfaces with texture channels: distinctive numerical signatures that differentiate properties such as roughness, pavement coatings, regolith characteristics, sedimentary fabrics, and differential outcrop weathering. This work describes our basic image analysis approach and reports an initial performance evaluation using surface images from the Mars Exploration Rovers. Future work will incorporate these methods into camera hardware for real-time processing.
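The paper's texture channels are its own signatures; as a generic stand-in, simple local neighborhood statistics already behave like texture channels that separate smooth from rough surfaces:

```python
import numpy as np

def texture_channels(img, win=3):
    """Two generic texture channels for a grayscale image: local mean
    and local standard deviation over a win x win neighborhood.
    (An illustrative stand-in for the paper's texture signatures,
    not the authors' actual features.)"""
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    # Stack shifted views so each pixel sees its whole neighborhood.
    stack = np.stack([padded[r:r + img.shape[0], c:c + img.shape[1]]
                      for r in range(win) for c in range(win)])
    return stack.mean(axis=0), stack.std(axis=0)
```

The local standard deviation channel responds to roughness and weathering textures while staying flat on homogeneous surfaces, which is the kind of per-pixel signature a classifier (or an FPGA pipeline) can threshold cheaply.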
Apollo 12 crew assisted with egressing command module after landing
1969-11-24
S69-22271 (24 Nov. 1969) --- A United States Navy Underwater Demolition Team swimmer assists the Apollo 12 crew during recovery operations in the Pacific Ocean. In the life raft are astronauts Charles Conrad Jr. (facing camera), commander; Richard F. Gordon Jr. (middle), command module pilot; and Alan L. Bean (nearest camera), lunar module pilot. The three crew men of the second lunar landing mission were picked up by helicopter and flown to the prime recovery ship, USS Hornet. Apollo 12 splashed down at 2:58 p.m. (CST), Nov. 24, 1969, near American Samoa. While astronauts Conrad and Bean descended in the Lunar Module (LM) "Intrepid" to explore the Ocean of Storms region of the moon, astronaut Gordon remained with the Command and Service Modules (CSM) "Yankee Clipper" in lunar orbit.
Li, Jin; Liu, Zilong; Liu, Si
2017-02-20
In the on-board photographing process of satellite cameras, platform vibration can generate image motion, distortion, and smear, which seriously affect image quality and image positioning. In this paper, we create a mathematical model of the vibration modulation transfer function (VMTF) for a remote-sensing camera. The total MTF of a camera is reduced by the VMTF, which means the image quality is degraded. In order to avoid this degradation of the total MTF caused by vibrations, we use an Mn-20Cu-5Ni-2Fe (M2052) manganese-copper alloy to fabricate a vibration-isolation mechanism (VIM). The VIM transforms platform vibration energy into irreversible thermal energy through its internal twin-crystal structure. Our experiment shows that the M2052 manganese-copper alloy suppresses image motion below 125 Hz, which is the vibration frequency range of satellite platforms. The camera optical system has a higher MTF with the M2052 vibration suppression in place than without it.
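The paper's VMTF model is not reproduced here; a standard textbook model for sinusoidal image motion of amplitude A, when the exposure is much longer than the vibration period, is MTF(f) = |J0(2*pi*f*A)|. A sketch evaluating it by direct numerical integration of the Bessel function (the amplitude and frequency values are illustrative):

```python
import numpy as np

def vibration_mtf(freq_cyc_per_mm, amp_mm):
    """MTF loss from sinusoidal image motion of amplitude amp_mm for an
    exposure much longer than the vibration period: |J0(2*pi*f*A)|.
    J0 is evaluated via its integral form, J0(x) = mean of
    cos(x*sin(theta)) over theta in [0, pi]."""
    x = 2.0 * np.pi * freq_cyc_per_mm * amp_mm
    theta = np.linspace(0.0, np.pi, 4001)
    return float(abs(np.cos(x * np.sin(theta)).mean()))
```

The model makes the paper's point quantitative: any nonzero motion amplitude pulls the vibration MTF below 1 at finite spatial frequency, multiplying down the camera's total MTF, which is why damping the platform vibration directly raises the measured MTF.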
NASA Astrophysics Data System (ADS)
Duan, Yaxuan; Xu, Songbo; Yuan, Suochao; Chen, Yongquan; Li, Hongguang; Da, Zhengshang; Gao, Limin
2018-01-01
The ISO 12233 slanted-edge method suffers errors when the fast Fourier transform (FFT) is used in camera modulation transfer function (MTF) measurement, because tilt-angle errors in the knife edge result in nonuniform sampling of the edge spread function (ESF). In order to resolve this problem, a modified slanted-edge method using the nonuniform fast Fourier transform (NUFFT) for camera MTF measurement is proposed. Theoretical simulations for noisy images at different nonuniform sampling rates of the ESF are performed using the proposed modified slanted-edge method. It is shown that the proposed method successfully eliminates the error due to the nonuniform sampling of the ESF. An experimental setup for camera MTF measurement is established to verify the accuracy of the proposed method. The experimental results show that, under different nonuniform sampling rates of the ESF, the proposed modified slanted-edge method has improved accuracy for camera MTF measurement compared to the ISO 12233 slanted-edge method.
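For orientation, the baseline uniform-sampling pipeline that the paper modifies can be sketched as follows: differentiate the ESF into the line spread function (LSF), Fourier transform, and normalize. The paper's contribution is to replace the FFT step with a NUFFT so that nonuniformly sampled ESF bins need no resampling; the synthetic edge below is illustrative:

```python
import numpy as np

def mtf_from_esf(esf):
    """Baseline slanted-edge pipeline on a *uniformly* sampled ESF:
    ESF -> derivative (LSF) -> |FFT| -> normalize to unity at DC.
    With a tilted knife edge the ESF samples are nonuniform, which is
    where the proposed NUFFT variant replaces the FFT step."""
    lsf = np.diff(esf)                  # line spread function
    mtf = np.abs(np.fft.rfft(lsf))      # magnitude spectrum
    return mtf / mtf[0]                 # normalized MTF

# Synthetic smooth edge standing in for a measured ESF.
x = np.linspace(-5, 5, 256)
esf = 0.5 * (1 + np.tanh(x))
mtf = mtf_from_esf(esf)
```

Because the LSF of a real edge is a nonnegative pulse, the normalized MTF peaks at 1 at zero frequency and rolls off, so any sampling-induced distortion of the ESF propagates directly into the reported roll-off.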
Polymorphic robotic system controlled by an observing camera
NASA Astrophysics Data System (ADS)
Koçer, Bilge; Yüksel, Tugçe; Yümer, M. Ersin; Özen, C. Alper; Yaman, Ulas
2010-02-01
Polymorphic robotic systems, which are composed of many modular robots that act in coordination to achieve a goal defined on the system level, have been drawing attention of industrial and research communities since they bring additional flexibility in many applications. This paper introduces a new polymorphic robotic system, in which the detection and control of the modules are attained by a stationary observing camera. The modules do not have any sensory equipment for positioning or detecting each other. They are self-powered, geared with means of wireless communication and locking mechanisms, and are marked to enable the image processing algorithm detect the position and orientation of each of them in a two dimensional space. Since the system does not depend on the modules for positioning and commanding others, in a circumstance where one or more of the modules malfunction, the system will be able to continue operating with the rest of the modules. Moreover, to enhance the compatibility and robustness of the system under different illumination conditions, stationary reference markers are employed together with global positioning markers, and an adaptive filtering parameter decision methodology is enclosed. To the best of authors' knowledge, this is the first study to introduce a remote camera observer to control modules of a polymorphic robotic system.
Design of CMOS imaging system based on FPGA
NASA Astrophysics Data System (ADS)
Hu, Bo; Chen, Xiaolai
2017-10-01
In order to meet the needs of engineering applications for a high-dynamic-range CMOS camera operating in rolling shutter mode, a complete imaging system is designed based on the CMOS image sensor NSC1105. The paper adopts a CMOS + ADC + FPGA + Camera Link processing architecture and introduces the design and implementation of the hardware system. The camera software system, which consists of a CMOS timing drive module, an image acquisition module, and a transmission control module, is designed in Verilog and runs on a Xilinx FPGA. Signals are simulated with ISim, the simulator of ISE 14.6. The imaging experimental results show that the system delivers a 1280 x 1024 pixel resolution at a frame rate of 25 fps with a dynamic range of more than 120 dB. The imaging quality of the system satisfies the design requirements.
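Two quick back-of-the-envelope helpers for the reported figures; the formulas are standard definitions, not taken from the paper:

```python
import math

def dynamic_range_db(v_max, v_min):
    """Dynamic range expressed in decibels: 20 * log10 of the ratio
    between the largest and smallest resolvable signal."""
    return 20.0 * math.log10(v_max / v_min)

def pixel_throughput(width, height, fps):
    """Pixels per second the acquisition and Camera Link path must move."""
    return width * height * fps
```

A 120 dB figure corresponds to a 10^6 : 1 signal ratio, and 1280 x 1024 pixels at 25 fps is about 32.8 Mpixel/s of sustained throughput for the FPGA-to-Camera Link chain.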
Ultrafast Imaging using Spectral Resonance Modulation
NASA Astrophysics Data System (ADS)
Huang, Eric; Ma, Qian; Liu, Zhaowei
2016-04-01
CCD cameras are ubiquitous in research labs, industry, and hospitals for a huge variety of applications, but there are many dynamic processes in nature that unfold too quickly to be captured. Although tradeoffs can be made between exposure time, sensitivity, and area of interest, ultimately the speed limit of a CCD camera is constrained by the electronic readout rate of the sensors. One potential way to improve the imaging speed is with compressive sensing (CS), a technique that allows for a reduction in the number of measurements needed to record an image. However, most CS imaging methods require spatial light modulators (SLMs), which are subject to mechanical speed limitations. Here, we demonstrate an etalon array based SLM without any moving elements that is unconstrained by either mechanical or electronic speed limitations. This novel spectral resonance modulator (SRM) shows great potential in an ultrafast compressive single pixel camera.
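A single-pixel compressive camera can be sketched in a few lines: random masks (the role the SRM plays, switched far faster than a mechanical SLM could) compress a sparse scene into a handful of detector readings, and a greedy solver recovers it. The problem sizes and the orthogonal matching pursuit reconstruction below are illustrative, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 64, 40, 3                    # scene size, measurements, sparsity

# A k-sparse "scene" and random measurement masks.
x = np.zeros(n)
x[rng.choice(n, k, replace=False)] = rng.uniform(1.0, 2.0, k)
phi = rng.standard_normal((m, n))      # one row per modulator pattern
y = phi @ x                            # single-pixel detector readings

# Orthogonal matching pursuit: greedily pick the mask column that best
# explains the residual, then re-fit on the chosen support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(phi.T @ residual))))
    coef, *_ = np.linalg.lstsq(phi[:, support], y, rcond=None)
    residual = y - phi[:, support] @ coef
x_hat = np.zeros(n)
x_hat[support] = coef
```

The point of CS in this context is that m < n measurements suffice for a sparse scene, so the frame rate is set by how fast the modulator can cycle masks rather than by a full sensor readout.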
Smart manufacturing of complex shaped pipe components
NASA Astrophysics Data System (ADS)
Salchak, Y. A.; Kotelnikov, A. A.; Sednev, D. A.; Borikov, V. N.
2018-03-01
Manufacturing industry is constantly improving, and the most relevant current trend is widespread automation and optimization of the production process. This paper presents a novel approach to smart manufacturing of steel pipe valves. The system includes two main parts: a mechanical treatment unit and a quality assurance unit. Mechanical treatment is performed by a milling machine under computerized numerical control, while the quality assurance unit contains three testing modules for different tasks: an X-ray testing module, an optical scanning module, and an ultrasound testing module. Together they provide reliable results that reveal any failures of the technological process and any deviations in the geometrical parameters of the valves. The system also allows defects to be detected on the surface or in the inner structure of the component.
Gestion de stockage d'energie thermique d'un parc de chauffe-eaux par une commande a champ moyen
NASA Astrophysics Data System (ADS)
Bourdel, Benoit
In today's energy transition, smart grids and electrical load control are very active research fields. This master's thesis is an offshoot of the smartDESC project, which aims at using the energy storage capability of electric household appliances, such as water heaters and electric heaters, to mitigate the fluctuations of system loads and renewable generation. The smartDESC project aims at demonstrating that mean field game (MFG) theory, a new mathematical theory, can be used to convert water heaters (and possibly space heaters) into smart, controllable thermal capacities. To this end, a set of "modules" has been developed. These modules are used to generate the optimal control and interpret it locally, to simulate the water-heater thermophysics and water draw events, and to virtualize a telecommunication mesh network. The different aspects of the project were first studied and developed separately. During the course of this master's research, the modules were integrated, tested, interfaced, and tuned in a common simulator. This simulator is designed to run complete electrical network simulations with a multi-scale approach, from the individual water heater to the global electric load and production. First, the modules are described precisely, both theoretically and practically. Then, different types of control are applied to a uniform population of houses fitted with water heaters and controllers. The results of these controls are analysed and compared in order to understand their strengths and weaknesses. Finally, a study was conducted to analyse the resilience of a mean field control. This report demonstrates that mean field game theory, in coordination with a system-level aggregate-model-based optimization program, is able to effectively control a large population of water heaters to smooth the overall electrical load. This control offers good resilience to unforeseen circumstances that can disrupt the network.
It is also demonstrated that a mean field control is able to absorb fluctuations due to wind power production. Thus, by reducing the variability of the residential sector's electrical load, the mean field control helps increase power system stability in the face of high levels of renewable energy penetration. The next stage of the smartDESC project is to set up an intelligent electric water heater prototype. This prototype, in progress since January 2016 at Ecole Polytechnique in Montreal, is aimed at concretely proving the theories developed in the project.
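As a toy illustration of the load-smoothing idea (not the smartDESC modules), a population of hysteresis-thermostat tanks can be steered by a broadcast setpoint offset that stands in for the mean field control signal; every physical constant below is illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
n, steps = 200, 500
temp = rng.uniform(50.0, 60.0, n)      # tank temperatures, deg C
on = np.zeros(n, dtype=bool)           # heating element state per tank
setpoint, deadband = 55.0, 2.0         # thermostat parameters (assumed)
heat, loss = 0.5, 0.05                 # deg C per step: heating, losses
load = []                              # fraction of tanks drawing power
for t in range(steps):
    # Broadcast offset: a crude stand-in for the mean field signal that
    # shifts every setpoint slightly to reshape the aggregate load.
    offset = 0.5 * np.sin(2 * np.pi * t / steps)
    on[temp < setpoint + offset - deadband] = True    # too cold: heat
    on[temp > setpoint + offset + deadband] = False   # hot enough: idle
    temp += heat * on - loss           # heating minus standing losses
    load.append(on.mean())
```

Because each tank stores energy in its deadband, shifting setpoints moves consumption in time without violating comfort, which is exactly the flexibility the mean field controller exploits at scale.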
Deployable Soft Composite Structures.
Wang, Wei; Rodrigue, Hugo; Ahn, Sung-Hoon
2016-02-19
Deployable structures composed of smart-material-based actuators can reconcile the inherently conflicting requirements of low mass, good shape adaptability, and high load-bearing capability. This work describes the fabrication of deployable structures using smart soft composite actuators that combine a soft matrix having variable stiffness properties with hinge-like movement through a rigid skeleton. The hinge actuator has the advantage of being simple to fabricate, inexpensive, lightweight, and simple to actuate. This basic actuator can then be used to form modules capable of different types of deformations, which can in turn be assembled into deployable structures. The design of deployable structures is based on three principles: design of basic hinge actuators, assembly of modules, and assembly of modules into large-scale deployable structures. Various deployable structures, such as a segmented triangular mast, a planar structure comprised of single-loop hexagonal modules, and a ring structure comprised of single-loop quadrilateral modules, were designed and fabricated to verify this approach. Finally, a prototype for a deployable mirror was developed by attaching a foldable reflective membrane to the designed ring structure, and its functionality was tested by using it to reflect sunlight onto a small-scale solar panel.
2017-01-01
Background EDUCERE ("Ubiquitous Detection Ecosystem to Care and Early Stimulation for Children with Developmental Disorders") is an ecosystem for ubiquitous detection, care, and early stimulation of children with developmental disorders. The objectives of this Spanish government-funded research and development project are to investigate, develop, and evaluate innovative solutions to detect changes in psychomotor development through the natural interaction of children with toys and everyday objects, and to perform stimulation and early attention activities in real environments such as home and school. Thirty multidisciplinary professionals and three nursery schools worked on the EDUCERE project between 2014 and 2017 and obtained satisfactory results. Related to EDUCERE, we found studies based on providing networks of connected smart objects and on the interaction between toys and social networks. Objective This research includes the design, implementation, and validation of an EDUCERE smart toy aimed at automatically detecting delays in psychomotor development. The results from initial tests led to enhancements of the effectiveness of the original design and deployment. The smart toy, based on stackable cubes, has a data collector module and a smart system for the detection of developmental delays, called the EDUCERE developmental delay screening system (DDSS). Methods The pilot study involved 65 toddlers aged between 23 and 37 months (mean 29.02, SD 3.81) who built a tower with five stackable cubes designed by following the EDUCERE smart toy model. As the toddlers built the tower, sensors in the cubes sent data to a collector module through a wireless connection. All trials were video-recorded for further analysis by child development experts. After watching the videos, the experts scored each trial's performance in order to compare with, and fine-tune the interpretation of, the data automatically gathered by the toy-embedded sensors.
Results Judges were highly reliable in an interrater agreement analysis (intraclass correlation 0.961, 95% CI 0.937-0.967), suggesting that the process successfully separated different levels of performance. A factor analysis of the collected data showed that three factors, trembling, speed, and accuracy, accounted for 76.79% of the total variance, but only two of them were predictors of performance in a regression analysis: accuracy (P=.001) and speed (P=.002). The other factor, trembling (P=.79), did not have a significant effect on this dependent variable. Conclusions The EDUCERE DDSS is ready to use the regression equation obtained for the dependent variable "performance" as an algorithm for the automatic detection of psychomotor developmental delays. The results of the factor analysis are valuable for simplifying the design of the smart toy by taking into account only the significant variables in the collector module. The fine-tuning of the toy's processing module will be carried out by following the specifications resulting from the data analysis, improving the efficiency and effectiveness of the product. PMID:28526666
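The regression-based screening step can be sketched as follows, with synthetic, noiseless data and made-up coefficients standing in for the study's fitted equation:

```python
import numpy as np

# Hypothetical calibration: the abstract reports that accuracy and speed
# (but not trembling) predict expert-rated performance. Fit the same
# two-predictor linear model on synthetic, noiseless scores to show the
# shape of the screening algorithm; all numbers here are made up.
rng = np.random.default_rng(1)
accuracy = rng.uniform(0, 1, 65)
speed = rng.uniform(0, 1, 65)
true_b = np.array([0.2, 0.5, 0.3])          # intercept, accuracy, speed
X = np.column_stack([np.ones(65), accuracy, speed])
performance = X @ true_b                    # noiseless expert scores

coef, *_ = np.linalg.lstsq(X, performance, rcond=None)

def screen(acc, spd, threshold=0.4):
    """Flag a trial whose predicted performance falls below threshold
    (the threshold is an illustrative assumption, not the study's)."""
    predicted = coef @ np.array([1.0, acc, spd])
    return predicted < threshold
```

In deployment, the toy's collector module would feed its sensor-derived accuracy and speed values into such an equation to flag trials for expert review.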
Design and construction of smart cane using infrared laser-based tracking system
NASA Astrophysics Data System (ADS)
Wong, Chi Fung; Phitagragsakul, Narikorn; Jornsamer, Patcharaporn; Kaewmeesri, Pimsin; Jantakot, Pimsunan; Locharoenrat, Kitsakorn
2018-06-01
Our work aims to design and construct a smart cane. An infrared laser-based sensor was used as a distance detector, and an Arduino board was used as the microcontroller. Bluetooth was used as a wireless communicator, and an MP3 module together with a headset was used as a voice alert player. Our smart cane is a very effective device for users under indoor guidance: obstacles were detectable up to 3,000 cm away from the blind user. The white cane, assembled with the laser distance sensor and the distance alert sensor, serves as a compact and lightweight device. Distance detection was very fast and precise when the smart cane was tested against different obstacles, such as a human, a wall, and a wooden table, in an indoor area.
Welsh, Christopher
2016-01-01
As part of a comprehensive plan to minimize the diversion of prescribed controlled substances, many professional organizations and licensing boards are recommending the use of "pill counts." This study sought to evaluate the acceptability of using cellular phone and computer pictures/video for "pill counts" by patients in buprenorphine maintenance treatment. Patients prescribed buprenorphine/naloxone were asked a series of questions about the type(s) of electronic communication to which they had access, as well as their willingness to use these for the purpose of performing a "pill/film count." Of the 80 patients, 4 (5 percent) did not have a phone at all. Only 28 (35 percent) had a "smart phone" with some sort of data plan and Internet access. Forty (50 percent) of the patients had a phone with no camera, and 10 (12.5 percent) had a phone with a camera but no video capability. All patients said that they would be willing to periodically use the video or camera on their phone or computer to have buprenorphine/naloxone pills or film counted, as long as the communication was protected from electronic tampering. With the advent of smart phone applications that allow Health Insurance Portability and Accountability Act of 1996-compliant picture/video communication, a number of things can now be done to enhance patient care and reduce the chances of misuse/diversion of prescribed medications. This could be used in settings where a larger proportion of controlled substances are prescribed, including medication-assisted therapy for opioid use disorders and pain management programs.
Multi-camera synchronization core implemented on USB3 based FPGA platform
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Centered on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a new technique to synchronize up to 8 individual self-timed cameras with minimal error. Small form factor self-timed camera modules of 1 mm x 1 mm or smaller do not normally allow external synchronization. However, for stereo vision or 3D reconstruction with multiple cameras, as well as for applications requiring pulsed illumination, it is necessary to synchronize multiple cameras. In this work, the challenge of synchronizing multiple self-timed cameras with only a 4-wire interface has been solved by adaptively regulating the power supply of each of the cameras. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame based on a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between frames, a Master-Slave interface was implemented. A single camera is defined as the Master, with its operating frequency being controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables them to monitor the Master's line and frame period and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the implementation of smaller than 3 mm diameter 3D stereo vision equipment in the medical endoscopic context, such as endoscopic surgical robotics or minimally invasive surgery.
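The adaptive voltage loop can be illustrated with a toy linear camera model in which a higher supply voltage makes the self-timed sensor run faster (shorter line period); the sensitivity constants below are assumptions for illustration, not NanEye characterization data:

```python
# Toy model of the adaptive supply-voltage control loop: the controller
# trims each slave camera's supply until its measured line period
# matches the master's target. All constants are illustrative.

PERIOD_AT_NOMINAL_US = 28.0   # line period at nominal supply (assumed)
US_PER_VOLT = 10.0            # period sensitivity to voltage (assumed)

def line_period(voltage, nominal_v=1.8):
    """Assumed linear camera model: higher supply -> shorter period."""
    return PERIOD_AT_NOMINAL_US - US_PER_VOLT * (voltage - nominal_v)

def synchronize(target_us, voltage=1.8, gain=0.05, steps=200):
    """Proportional loop: nudge the supply by gain * period error."""
    for _ in range(steps):
        error = line_period(voltage) - target_us
        voltage += gain * error        # running too slow: raise voltage
    return voltage

v = synchronize(25.0)                  # lock onto a 25 us line period
```

With the linear model, the period error shrinks by the factor (1 - gain * US_PER_VOLT) each step, so a small gain converges smoothly, which mirrors how the FPGA core nulls the error between measured and desired line periods.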
Image synchronization for 3D application using the NanEye sensor
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Dias, Morgado
2015-03-01
Based on Awaiba's NanEye CMOS image sensor family and an FPGA platform with a USB3 interface, the aim of this paper is to demonstrate a novel technique to perfectly synchronize up to 8 individual self-timed cameras. Minimal-form-factor self-timed camera modules of 1 mm x 1 mm or smaller do not generally allow external synchronization. However, stereo vision or 3D reconstruction with multiple cameras, as well as applications requiring pulsed illumination, require multiple cameras to be synchronized. In this work, the challenge of synchronizing multiple self-timed cameras over only a 4-wire interface has been solved by adaptively regulating the power supply of each camera to synchronize their frame rate and frame phase. To that effect, a control core was created to constantly monitor the operating frequency of each camera by measuring the line period in each frame against a well-defined sampling signal. The frequency is adjusted by varying the voltage level applied to the sensor based on the error between the measured line period and the desired line period. To ensure phase synchronization between the frames of multiple cameras, a Master-Slave interface was implemented. A single camera is defined as the Master entity, with its operating frequency controlled directly through a PC-based interface. The remaining cameras are set up in Slave mode and are interfaced directly with the Master camera control module. This enables the remaining cameras to monitor the Master's line and frame periods and adjust their own to achieve phase and frequency synchronization. The result of this work will allow the realization of 3D stereo vision equipment smaller than 3 mm in diameter in medical endoscopic contexts, such as endoscopic surgical robotics or minimally invasive surgery.
1972-04-07
S72-35971 (21 April 1972) --- A 360-degree field of view of the Apollo 16 Descartes landing site area composed of individual scenes taken from color transmission made by the color RCA TV camera mounted on the Lunar Roving Vehicle (LRV). This panorama was made while the LRV was parked at the rim of North Ray Crater (Stations 11 & 12) during the third Apollo 16 lunar surface extravehicular activity (EVA) by astronauts John W. Young and Charles M. Duke Jr. The overlay identifies the directions and the key lunar terrain features. The camera panned across the rear portion of the LRV in its 360-degree sweep. Note Young and Duke walking along the edge of the crater in one of the scenes. The TV camera was remotely controlled from a console in the Mission Control Center (MCC). Astronauts Young, commander; and Duke, lunar module pilot; descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon. Astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
Close-up view of RCA color television camera mounted on the LRV
1972-04-23
AS16-117-18754 (23 April 1972) --- A view of the smooth terrain in the general area of the North Ray Crater geological site, photographed by the Apollo 16 crew from the Lunar Roving Vehicle (LRV) shortly after leaving the immediate area of the geology site. The RCA color television camera is mounted on the front of the LRV and can be seen in the foreground, along with a small part of the high gain antenna, upper left. The tracks were made on the earlier trip to the North Ray Crater site. Astronaut Charles M. Duke Jr., lunar module pilot, exposed this view with his 70mm Hasselblad camera. Astronaut John W. Young, commander, said that this area was much smoother than the region around South Ray Crater. While astronauts Young and Duke descended in the Apollo 16 Lunar Module (LM) "Orion" to explore the Descartes highlands landing site on the moon, astronaut Thomas K. Mattingly II, command module pilot, remained with the Command and Service Modules (CSM) "Casper" in lunar orbit.
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-01-01
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots. PMID:27023556
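The "simple analytic geometry" step, turning a touchscreen click into a 3D position with a calibrated camera, can be illustrated by back-projecting the pixel onto the ground plane. This is a hedged sketch, not the authors' method: the camera model (pinhole, optical axis parallel to the floor) and all parameter names are assumptions.

```python
def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
    """Back-project an image pixel (u, v) onto the ground plane.

    Camera model: pinhole with intrinsics (fx, fy, cx, cy); axes are
    x right, y down, z forward, with the optical axis parallel to the
    floor, which lies at y = cam_height below the camera.
    Returns (X, Z): lateral offset and forward distance, in metres.
    """
    dx = (u - cx) / fx          # normalized ray direction, x component
    dy = (v - cy) / fy          # y component (positive = below the axis)
    if dy <= 0:
        raise ValueError("pixel is at or above the horizon")
    t = cam_height / dy         # ray scale at which it meets the floor
    return t * dx, t            # (X, Z) on the ground plane

# Example with illustrative intrinsics and a 0.3 m camera height.
X, Z = pixel_to_ground(480, 340, fx=800, fy=800, cx=320, cy=240,
                       cam_height=0.3)
```

The same closed-form intersection works for any known plane, which is why a calibrated camera plus one click suffices without a full stereo algorithm.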
A Simple Interface for 3D Position Estimation of a Mobile Robot with Single Camera.
Chao, Chun-Tang; Chung, Ming-Hsuan; Chiou, Juing-Shian; Wang, Chi-Jo
2016-03-25
In recent years, there has been an increase in the number of mobile robots controlled by a smart phone or tablet. This paper proposes a visual control interface for a mobile robot with a single camera to easily control the robot actions and estimate the 3D position of a target. In this proposal, the mobile robot employed an Arduino Yun as the core processor and was remote-controlled by a tablet with an Android operating system. In addition, the robot was fitted with a three-axis robotic arm for grasping. Both the real-time control signal and video transmission are transmitted via Wi-Fi. We show that with a properly calibrated camera and the proposed prototype procedures, users can click on a desired position or object on the touchscreen and estimate its 3D coordinates in the real world by simple analytic geometry instead of a complicated algorithm. The results of the measurement verification demonstrate that this approach has great potential for mobile robots.
Study on real-time images compounded using spatial light modulator
NASA Astrophysics Data System (ADS)
Xu, Jin; Chen, Zhebo; Ni, Xuxiang; Lu, Zukang
2007-01-01
Image compositing technology is often used in film and film production. Conventionally, composite images are produced by image-processing algorithms that first extract the useful objects, details, background, or other content from the source images and then merge this information into one image. With this approach the film system needs a powerful processor, and because the processing is complex, the composite image is only obtained after some delay. In this paper, we introduce a new method for real-time image compositing that performs the composite at the same time as the shot is taken. The system is made up of two camera lenses, a spatial light modulator array, and an image sensor. The spatial light modulator can be a liquid crystal display (LCD), liquid crystal on silicon (LCoS), thin-film-transistor liquid crystal display (TFT-LCD), deformable micro-mirror device (DMD), or similar. First, one camera lens (the first imaging lens) images the object onto the panel of the spatial light modulator. Second, an image is output to the panel of the spatial light modulator, so that the image of the object and the image displayed by the modulator are spatially combined on the panel. Third, the other camera lens (the second imaging lens) relays the combined image onto the image sensor. After these three steps, the composite image is captured by the image sensor. Because the spatial light modulator can output images continuously, the compositing also proceeds continuously, and the procedure is completed in real time. To place a real object into a virtual background, the virtual background scene is displayed on the spatial light modulator while the real object is imaged by the first imaging lens; the composite images are then captured by the image sensor in real time. 
In the same way, to place a virtual object into a real background, the virtual object is displayed on the spatial light modulator while the real background is imaged by the first imaging lens, and the composite images are again obtained in real time. Most spatial light modulators modulate only light intensity, so with a single panel without a color filter only black-and-white composites can be produced; obtaining color composites requires a system similar to a three-panel spatial light modulator projector. The paper presents the optical framework of the system. In all experiments, the spatial light modulator was a liquid crystal on silicon (LCoS) device, and original and composite pictures are given at the end of the paper. Although the system has a few shortcomings, we conclude that compositing images with this system involves no delay for mathematical compositing, making it a truly real-time image compositing system.
NASA Astrophysics Data System (ADS)
Harvey, Nate
2016-08-01
Extending results from previous work by Bandikova et al. (2012) and Inacio et al. (2015), this paper analyzes Gravity Recovery and Climate Experiment (GRACE) star camera attitude measurement noise by processing inter-camera quaternions from 2003 to 2015. We describe a correction to star camera data, which will eliminate a several-arcsec twice-per-rev error with daily modulation, currently visible in the auto-covariance function of the inter-camera quaternion, from future GRACE Level-1B product releases. We also present evidence supporting the argument that thermal conditions/settings affect long-term inter-camera attitude biases by at least tens of arcsecs, and that several-to-tens-of-arcsecs per-rev star camera errors depend largely on the field of view.
Android Platform Based Smartphones for a Logistical Remote Association Repair Framework
Lien, Shao-Fan; Wang, Chun-Chieh; Su, Juhng-Perng; Chen, Hong-Ming; Wu, Chein-Hsing
2014-01-01
The maintenance of large-scale systems is an important issue for logistics support planning. In this paper, we developed a Logistical Remote Association Repair Framework (LRARF) to aid repairmen in keeping the system available. LRARF includes four subsystems: smart mobile phones, a Database Management System (DBMS), a Maintenance Support Center (MSC) and wireless networks. The repairman uses smart mobile phones to capture QR-codes and the images of faulty circuit boards. The captured QR-codes and images are transmitted to the DBMS so the invalid modules can be recognized via the proposed algorithm. In this paper, the Linear Projective Transform (LPT) is employed for fast QR-code calibration. Moreover, the ANFIS-based data mining system is used for module identification and searching automatically for the maintenance manual corresponding to the invalid modules. The inputs of the ANFIS-based data mining system are the QR-codes and image features; the output is the module ID. DBMS also transmits the maintenance manual back to the maintenance staff. If modules are not recognizable, the repairmen and center engineers can obtain the relevant information about the invalid modules through live video. The experimental results validate the applicability of the Android-based platform in the recognition of invalid modules. In addition, the live video can also be recorded synchronously on the MSC for later use. PMID:24967603
2007-08-03
KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Commander Pamela A. Melroy, Pilot George D. Zamka, Mission Specialist Scott E. Parazynski (back to camera), Mission Specialist Douglas H. Wheelock and Mission Specialist Paolo A. Nespoli (holding camera), a European Space Agency astronaut from Italy, are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton
NASA Technical Reports Server (NTRS)
Abou-Khousa, M. A.
2009-01-01
A novel modulated slot design has been proposed and tested. The proposed slot is aimed to replace the inefficient small dipoles used in conventional MST-based imaging systems. The developed slot is very attractive as an MST array element due to its small size and high efficiency/modulation depth. In fact, the developed slot has been successfully used to implement the first prototype of a microwave camera operating at 24 GHz. It is also being used in the design of the second generation of the camera. Finally, the designed elliptical slot can be used as an electronically controlled waveguide iris for many other purposes (for instance, in constructing waveguide reflective phase shifters and multiplexers/switches).
Electronic heterodyne recording of interference patterns
NASA Technical Reports Server (NTRS)
Merat, F. L.; Claspy, P. C.
1979-01-01
An electronic heterodyne technique is being investigated for video (i.e., television rate and format) recording of interference patterns. In the heterodyne technique electro-optic modulation is used to introduce a sinusoidal phase shift between the beams of an interferometer. For phase modulation frequencies between 0.1 and 15 MHz an image dissector camera may be used to scan the resulting temporally modulated interference pattern. Heterodyne detection of the camera output is used to selectively record the interference pattern. An advantage of such synchronous recording is that it permits recording of low-contrast fringes in high ambient light conditions. The application of this technique to the recording of holograms is discussed.
An Open Source “Smart Lamp” for the Optimization of Plant Systems and Thermal Comfort of Offices
Salamone, Francesco; Belussi, Lorenzo; Danza, Ludovico; Ghellere, Matteo; Meroni, Italo
2016-01-01
The article describes the design phase, development, and practical application of a smart object integrated in a desk lamp, called the "Smart Lamp", which is useful for optimizing indoor thermal comfort and energy savings: two important workplace issues, since the comfort of the workers and the consumption of the building strongly affect the economic balance of a company. The Smart Lamp was built using a microcontroller, an integrated temperature and relative humidity sensor, some other modules, and a 3D printer. This smart device is similar to the desk lamps usually found in offices, but it allows one to adjust the indoor thermal comfort by interacting directly with the air conditioner. After the construction phase, the Smart Lamp was installed in an office normally occupied by four workers to evaluate the indoor thermal comfort and the cooling consumption in summer. The results showed how the application of the Smart Lamp effectively reduced the energy consumption while optimizing the thermal comfort. The DIY approach, combined with the read-write functionality of websites, blogs, and social platforms, also makes it possible to customize, improve, share, reproduce, and interconnect such technologies so that anybody can use them in any occupied environment. PMID:26959035
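The abstract does not publish the device's firmware, but the kind of decision logic such a lamp could run, reading temperature and relative humidity and commanding the air conditioner, can be sketched as follows. All thresholds and names here are illustrative assumptions, not the authors' implementation:

```python
def ac_command(temp_c, rh_percent, comfort_low=23.0, comfort_high=26.0):
    """Return 'cool', 'heat' or 'off' for the air conditioner, given
    the sensed air temperature (deg C) and relative humidity (%)."""
    # Humid air feels warmer, so tighten the upper comfort bound a little.
    if rh_percent > 60:
        comfort_high -= 1.0
    if temp_c > comfort_high:
        return "cool"
    if temp_c < comfort_low:
        return "heat"
    return "off"

# Example: a warm, humid office triggers cooling.
cmd = ac_command(temp_c=27.0, rh_percent=65)
```

On the real device this decision would be taken periodically by the microcontroller, with the command sent to the air conditioner over its control interface.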
Privacy versus autonomy: a tradeoff model for smart home monitoring technologies.
Townsend, Daphne; Knoefel, Frank; Goubran, Rafik
2011-01-01
Smart homes are proposed as a new location for the delivery of healthcare services. They provide healthcare monitoring and communication services, by using integrated sensor network technologies. We validate a hypothesis regarding older adults' adoption of home monitoring technologies by conducting a literature review of articles studying older adults' attitudes and perceptions of sensor technologies. Using current literature to support the hypothesis, this paper applies the tradeoff model to decisions about sensor acceptance. Older adults are willing to trade privacy (by accepting a monitoring technology), for autonomy. As the information captured by the sensor becomes more intrusive and the infringement on privacy increases, sensors are accepted if the loss in privacy is traded for autonomy. Even video cameras, the most intrusive sensor type were accepted in exchange for the height of autonomy which is to remain in the home.
Improved Feature Matching for Mobile Devices with IMU.
Masiero, Andrea; Vettore, Antonio
2016-08-05
Thanks to the recent diffusion of low-cost high-resolution digital cameras and to the development of mostly automated procedures for image-based 3D reconstruction, the popularity of photogrammetry for environment surveys has been constantly increasing in recent years. Automatic feature matching is an important step in successfully completing the photogrammetric 3D reconstruction: this step is the fundamental basis for the subsequent estimation of the geometry of the scene. This paper reconsiders the feature matching problem when dealing with smart mobile devices (e.g., when using the standard camera embedded in a smartphone as the imaging sensor). More specifically, this paper aims at exploiting the information on camera movements provided by the inertial navigation system (INS) in order to make the feature matching step more robust and, possibly, computationally more efficient. First, a revised version of the affine scale-invariant feature transform (ASIFT) is considered: this version reduces the computational complexity of the original ASIFT, while still ensuring an increase of correct feature matches with respect to SIFT. Furthermore, a new two-step procedure for the estimation of the essential matrix E (and the camera pose) is proposed in order to increase its estimation robustness and computational efficiency.
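One standard way an INS pose prediction can constrain matching (sketched here as background, not as the paper's specific two-step procedure) is through the essential matrix: given a predicted relative rotation R and translation t between the two views, E = [t]_x R, and any correct match of normalized image points x1, x2 must satisfy the epipolar constraint x2^T E x1 = 0. Candidate matches that violate it by a large margin can be rejected cheaply.

```python
import numpy as np

def essential_from_pose(R, t):
    """Compose the essential matrix E = [t]_x R from a relative camera
    rotation R (3x3) and translation t (length-3), e.g. as predicted
    by an inertial navigation system."""
    # [t]_x is the skew-symmetric cross-product matrix of t.
    tx = np.array([[0.0,  -t[2],  t[1]],
                   [t[2],  0.0,  -t[0]],
                   [-t[1], t[0],  0.0]])
    return tx @ R

# Example: pure sideways translation, no rotation. A pair of normalized
# points lying on corresponding epipolar lines satisfies x2^T E x1 = 0.
E = essential_from_pose(np.eye(3), np.array([1.0, 0.0, 0.0]))
x1 = np.array([0.0, 0.0, 1.0])
x2 = np.array([0.5, 0.0, 1.0])
residual = x2 @ E @ x1
```

In practice the INS-predicted E is only approximate, so the residual threshold must be generous; the paper's contribution lies in how E is then re-estimated robustly from the surviving matches.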
Development of a mini-mobile digital radiography system by using wireless smart devices.
Jeong, Chang-Won; Joo, Su-Chong; Ryu, Jong-Hyun; Lee, Jinseok; Kim, Kyong-Woo; Yoon, Kwon-Ha
2014-08-01
Current trends in digital radiography (DR) are toward systems that use portable smart mobile devices for patient-centered care. We aimed to develop a mini-mobile DR system that uses smart devices for wireless connection to medical information systems. We developed a mini-mobile DR system consisting of an X-ray source and a Complementary Metal-Oxide Semiconductor (CMOS) sensor based on a flat panel detector for small-field diagnostics in patients. It can be used for examinations that are difficult to perform with a fixed traditional device. We also designed a method for embedded systems in the development of portable DR systems. The external interface used the fast and stable IEEE 802.11n wireless protocol, and we adapted the device for connections with the Picture Archiving and Communication System (PACS) and smart devices. The smart device could display images on an external monitor other than the monitor in the DR system. The communication modules, main control board, and external interface supporting smart devices were implemented. Further, a smart viewer based on the external interface was developed to display image files on various smart devices. In addition, operators benefit from a reduced radiation dose when using remote smart devices. The system is integrated with smart devices and can provide X-ray imaging services anywhere; by connecting to the external interface, it permits image observation on a smart device from a remote location. We evaluated the response time of the mini-mobile DR system in comparison with mobile PACS. The experimental results show that our system outperforms conventional mobile PACS in this regard.
Smart Electrospun Nanofibers for Controlled Drug Release: Recent Advances and New Perspectives
Weng, Lin; Xie, Jingwei
2017-01-01
In biological systems, chemical molecules or ions often release upon certain conditions, at a specific location, and over a desired period of time. Electrospun nanofibers that undergo alterations in the physicochemical characteristics corresponding to environmental changes have gained considerable interest for various applications. Inspired by biological systems, therapeutic molecules have been integrated with these smart electrospun nanofibers, presenting activation-modulated or feedback-regulated control of drug release. Compared to other materials like smart hydrogels, environment-responsive nanofiber-based drug delivery systems are relatively new but possess incomparable advantages due to their greater permeability, which allows shorter response time and more precise control over the release rate. In this article, we review the mechanisms of various environmental parameters functioning as stimuli to tailor the release rates of smart electrospun nanofibers. We also illustrate several typical examples in specific applications. We conclude this article with a discussion on perspectives and future possibilities in this field. PMID:25732665
Smart electrospun nanofibers for controlled drug release: recent advances and new perspectives.
Weng, Lin; Xie, Jingwei
2015-01-01
In biological systems, chemical molecules or ions often release upon certain conditions, at a specific location, and over a desired period of time. Electrospun nanofibers that undergo alterations in the physicochemical characteristics corresponding to environmental changes have gained considerable interest for various applications. Inspired by biological systems, therapeutic molecules have been integrated with these smart electrospun nanofibers, presenting activation-modulated or feedback-regulated control of drug release. Compared to other materials like smart hydrogels, environment-responsive nanofiber-based drug delivery systems are relatively new but possess incomparable advantages due to their greater permeability, which allows shorter response time and more precise control over the release rate. In this article, we review the mechanisms of various environmental parameters functioning as stimuli to tailor the release rates of smart electrospun nanofibers. We also illustrate several typical examples in specific applications. We conclude this article with a discussion on perspectives and future possibilities in this field.
NASA Astrophysics Data System (ADS)
Lu, Yuan; Xiao, Xiudi; Cao, Ziyi; Zhan, Yongjun; Cheng, Haoliang; Xu, Gang
2017-12-01
Monoclinic-phase vanadium dioxide VO2 (M) based transparent thermochromic smart films were fabricated for the first time by heat treatment of opaque VO2-based composite nanofibrous mats, which were deposited on glass substrates via the electrospinning technique. Notably, the anti-oxidation property of the VO2 smart film was improved because the VO2 was distributed inside the polymethylmethacrylate (PMMA) nanofibers, and the composite mats, with a water contact angle of 165°, exhibited good superhydrophobicity. Besides, PMMA nanofibrous mats prepared with different polymer concentrations demonstrated changes in morphology and fiber diameter. The VO2 nanoparticles, 30-50 nm in diameter, aggregated into ellipse-like or belt-like structures. Additionally, the solar modulation ability of the PMMA-VO2 composite smart film was 6.88% according to UV-Vis-NIR spectra. This research offers a new approach to fabricating transparent VO2 thermochromic materials.
ERIC Educational Resources Information Center
Kinoshita, Sachiko; Forster, Kenneth I.; Mozer, Michael C.
2008-01-01
Masked repetition primes produce greater facilitation in naming in a block containing a high, rather than low proportion of repetition trials. [Bodner, G. E., & Masson, M. E. J. (2004). "Beyond binary judgments: Prime-validity modulates masked repetition priming in the naming task". "Memory & Cognition", 32, 1-11] suggested this phenomenon…
Integrating Fingerprint Verification into the Smart Card-Based Healthcare Information System
NASA Astrophysics Data System (ADS)
Moon, Daesung; Chung, Yongwha; Pan, Sung Bum; Park, Jin-Won
2009-12-01
As VLSI technology has improved, smart cards employing 32-bit processors have been released, and more personal information, such as medical and financial data, can be stored on the card. Thus, it becomes important to protect the personal information stored on the card. Verification of the card holder's identity using a fingerprint has advantages over the present practices of Personal Identification Numbers (PINs) and passwords. However, the computational workload of fingerprint verification is much heavier than that of the typical PIN-based solution. In this paper, we consider three strategies for implementing fingerprint verification in a smart card environment, differing in how the modules of fingerprint verification are distributed between the smart card and the card reader. We first evaluate the number of instructions in each step of a typical fingerprint verification algorithm, and estimate the execution time of several cryptographic algorithms that guarantee the security/privacy of the fingerprint data transmitted in the smart card with the client-server environment. Based on the evaluation results, we analyze each scenario with respect to the security level and the real-time execution requirements in order to implement fingerprint verification in the smart card with the client-server environment.
Nair, Akshay Gopinathan; Potdar, Nayana A; Dadia, Suchit; Aulakh, Simranjeet; Ali, Mohammad Javed; Shinde, Chhaya A
2018-03-06
To assess patient perceptions regarding medical photography and the use of smart devices, namely mobile phones and tablets for medical photography. A questionnaire-based survey was conducted among 280 consecutive adult patients who presented to the oculoplastics clinic at a tertiary eye care centre. The responses were tabulated and analysed. Of the 280 patients surveyed, 68% felt that medical photography had a positive impact on their understanding of their illnesses and 72% felt that the use of smartphones for medical photography was acceptable. Respondents below the age of 40 years were more likely to approve of the use of mobile phones for photography as compared to those over 40. Most patients (74%) preferred a doctor to be the person photographing them. While a majority approved of doctors and trainee physicians having access to their photographs, they felt non-physician healthcare personnel should not have access to clinical photographs. Also, 72% of the respondents felt that the patient's consent should be taken before using their photographs. It was noted that patient identification and breach of confidentiality could be some of the potential issues with using smart devices as cameras in the clinic. Clinical photography in general and, specifically, using smart devices for clinical photographs have gained acceptance among patients. The outcomes of this study may be utilized to create policy guidelines for the use of smart devices as photography tools in the clinics. The findings of this survey can also help to create standardized, uniform patient consent forms for clinical photography.
Parallel phase-sensitive three-dimensional imaging camera
Smithpeter, Colin L.; Hoover, Eddie R.; Pain, Bedabrata; Hancock, Bruce R.; Nellums, Robert O.
2007-09-25
An apparatus is disclosed for generating a three-dimensional (3-D) image of a scene illuminated by a pulsed light source (e.g., a laser or light-emitting diode). The apparatus, referred to as a phase-sensitive 3-D imaging camera, utilizes a two-dimensional (2-D) array of photodetectors to receive light that is reflected or scattered from the scene, and processes the electrical output signal from each photodetector in the 2-D array in parallel using multiple modulators. Each modulator takes as inputs the photodetector output signal and a reference signal, with the reference signal provided to each modulator having a different phase delay. The output from each modulator is provided to a computational unit, which can be used to generate intensity and range information for use in generating a 3-D image of the scene. The 3-D camera is capable of generating a 3-D image using a single pulse of light, or alternately can be used to generate subsequent 3-D images with each additional pulse of light.
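The per-pixel computation behind such phase-delayed modulator outputs can be illustrated with the standard four-phase demodulation scheme (references delayed by 0°, 90°, 180° and 270°). The abstract does not specify the camera's actual number of modulators or phase set, so this sketch is an assumption about one common realization:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def range_from_quadrature(s0, s90, s180, s270, f_mod_hz):
    """Recover range (m) for one pixel from four modulator outputs whose
    reference signals are delayed by 0, 90, 180 and 270 degrees."""
    i = s0 - s180                      # in-phase component
    q = s90 - s270                     # quadrature component
    phase = math.atan2(q, i) % (2 * math.pi)
    # The round-trip delay maps phase linearly onto range over an
    # unambiguous interval of c / (2 * f_mod).
    return C * phase / (4 * math.pi * f_mod_hz)

# Example: a quarter-cycle phase shift at a 10 MHz modulation frequency.
r = range_from_quadrature(0.0, 1.0, 0.0, -1.0, f_mod_hz=10e6)
```

Intensity follows from the same samples as the magnitude sqrt(i**2 + q**2), which is why a single light pulse can yield both a brightness and a range image.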
SMART-1 Technology and Science Experiments in Preparation of Future Missions and ESA Cornerstones
NASA Astrophysics Data System (ADS)
Marini, A. E.; Racca, G. D.; Foing, B. H.; SMART-1 Project
1999-12-01
SMART-1 is the first ESA Small Mission for Advanced Research in Technology, aimed at the demonstration of enabling technologies for future scientific missions. SMART-1's prime technology objective is the demonstration of solar primary electric propulsion, a key for future interplanetary missions. SMART-1 will use a Stationary Plasma Thruster engine, cruising 15 months to capture a polar orbit around the Moon. A gallery of images of the spacecraft is available at the web site: http://www.estec.esa.nl/spdwww/smart1/html/11742.html The SMART-1 payload aims at monitoring the electric propulsion and its spacecraft environment and at testing novel instrument technologies. The diagnostic instruments include SPEDE, a spacecraft-potential, plasma and charged-particle detector, to characterise both the spacecraft and planetary environments, together with EPDP, a suite of sensors monitoring secondary thrust ions, charging and deposition effects. Innovative spacecraft technologies will be tested on SMART-1: lithium batteries and KATE, an experimental X/Ka-band deep-space transponder, to support radio science, to monitor the accelerations of the electric propulsion and to test the turbo-code technique, enhancing the return of scientific data. The scientific instruments for imaging and spectrometry are: D-CIXS, a compact X-ray spectrometer based on novel SCD detectors and micro-structure optics, to observe celestial X-ray objects and to perform lunar chemistry measurements; SIR, a miniaturised quasi-monolithic point spectrometer, operating in the near-IR (0.9-2.4 micron), to survey the lunar crust in previously uncovered optical regions; and AMIE, a miniature camera based on 3-D integrated electronics, imaging the Moon and other bodies and supporting LASER-LINK and RSIS. 
RSIS and LASER-LINK are investigations performed with the SMART-1 payload. RSIS is a radio-science experiment to validate in-orbit determination of the libration of the celestial target, based on high-accuracy tracking in Ka-band and imaging of a surface landmark. LASER-LINK is a demonstration of the acquisition of a deep-space laser link from the ESA Optical Ground Station at Tenerife, also validating the novel sub-apertured telescope designed for the mitigation of atmospheric scintillation disturbances.
VizieR Online Data Catalog: Spectroscopic and photometric properties of Tombaugh 1 (Sales+, 2016)
NASA Astrophysics Data System (ADS)
Sales Silva, J. V.; Carraro, G.; Anthony-Twarog, B. J.; Moni Bidin, C.; Costa, E.; Twarog, B. A.
2018-03-01
Photometry for Tombaugh 1 was secured in 2010 December during a five-night run using the Cerro Tololo Inter-American Observatory 1.0 m telescope operated by the SMARTS consortium (http://www.astro.yale.edu/smarts). The telescope is equipped with an STA 4064x4064 CCD camera (http://www.astronomy.ohio-state.edu/Y4KCam/detector) with 15 μm pixels, yielding a scale of 0.289"/pixel and a field of view (FOV) of 20'x20' at the Cassegrain focus of the telescope. On the night of 2010 January 5, we observed 10 potential cluster stars (nine clump stars and one Cepheid; see Section 4.1) with the Inamori-Magellan Areal Camera & Spectrograph (IMACS; Dressler et al. 2006SPIE.6269E..0FD) attached to the Magellan telescope (6.5 m) located at Las Campanas, Chile. The spectra were obtained using the Multi-Object Echelle (MOE) mode with two exposures, one of 900 s and the other of 1200 s. Our spectra have a resolution of R~20000, while the spectral coverage depends on the location of the star on the multislit mask, but generally runs from 4200 to 9100 Å. The detector consists of a mosaic of eight CCDs with gaps of about 0.93 mm between them, causing small gaps in the stellar spectra. (7 data files).
Medically relevant assays with a simple smartphone and tablet based fluorescence detection system.
Wargocki, Piotr; Deng, Wei; Anwer, Ayad G; Goldys, Ewa M
2015-05-20
Cell phones and smartphones can be reconfigured as biomedical sensor devices, but this requires specialized add-ons. In this paper we present a simple cell-phone-based portable bioassay platform, which can be used with fluorescent assays in solution. The system consists of a tablet, a polarizer, a smartphone (camera) and a box that provides dark readout conditions. The assay in a well plate is placed on the tablet screen, which acts as an excitation source. A polarizer on top of the well plate separates excitation light from assay fluorescence emission, enabling assay readout with a smartphone camera. The assay result is obtained by analysing the intensity of image pixels in an appropriate colour channel. With this device we carried out two assays, for collagenase and trypsin, using fluorescein as the detected fluorophore. The results of the collagenase assay, with a lowest measured concentration of 3.75 µg/mL (0.938 µg in total in the sample), were comparable to those obtained by a microplate reader. The lowest measured amount of trypsin was 930 pg, which is comparable to the low detection limit of 400 pg for this assay obtained in a microplate reader. The device is sensitive enough to be used in point-of-care medical diagnostics of clinically relevant conditions, including arthritis, cystic fibrosis and acute pancreatitis.
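The readout step described above (mean pixel intensity in one colour channel of the well image, corrected against a blank well) can be sketched in a few lines of numpy. The function names and the rectangular ROI convention are illustrative assumptions, not the authors' code:

```python
import numpy as np

def channel_intensity(rgb_image, channel, roi):
    """Mean pixel intensity of one colour channel inside a well's ROI.

    rgb_image : H x W x 3 uint8 array (e.g. a decoded smartphone photo)
    channel   : 0 = red, 1 = green, 2 = blue
    roi       : (row0, row1, col0, col1) bounding box of the well
    """
    r0, r1, c0, c1 = roi
    return float(rgb_image[r0:r1, c0:c1, channel].mean())

def net_signal(sample_img, blank_img, channel, roi):
    """Background-corrected assay readout: sample well minus blank well."""
    return (channel_intensity(sample_img, channel, roi)
            - channel_intensity(blank_img, channel, roi))
```

The net signal would then be mapped to analyte concentration through a calibration curve measured with known standards.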
NASA Astrophysics Data System (ADS)
Brocks, Sebastian; Bendig, Juliane; Bareth, Georg
2016-10-01
Crop surface models (CSMs) representing plant height above ground level are a useful tool for monitoring in-field crop growth variability and enabling precision agriculture applications. A semiautomated system for generating CSMs was implemented. It combines an Android application running on a set of smart cameras for image acquisition and transmission with a set of Python scripts automating the structure-from-motion (SfM) software package Agisoft Photoscan and ArcGIS. Only ground-control-point (GCP) marking was performed manually. This system was set up on a barley field experiment with nine different barley cultivars in the growing period of 2014. Images were acquired three times a day for a period of two months. CSMs were successfully generated for 95 out of 98 acquisitions between May 2 and June 30. The best linear regressions of the CSM-derived, plot-wise averaged plant heights against manual plant height measurements taken at four dates resulted in a coefficient of determination R2 of 0.87 and a root-mean-square error (RMSE) of 0.08 m, with Willmott's refined index of model performance dr equaling 0.78. In total, 103 mean plot heights were used in the regression based on the noon acquisition time. The presented system succeeded in semiautomatically monitoring crop height from plot to field scale.
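The plot-wise evaluation described above (averaging CSM heights per plot, then regressing against manual measurements to obtain R2 and RMSE) might look roughly like this in numpy. The function names and the boolean-mask plot representation are assumptions for illustration, not the authors' Python scripts:

```python
import numpy as np

def plot_mean_height(csm, plot_mask):
    """Average CSM height (crop surface minus ground) over one plot's pixels."""
    return float(csm[plot_mask].mean())

def regression_metrics(csm_heights, manual_heights):
    """Linear fit of manually measured vs. CSM-derived plant heights,
    returning slope, intercept, R^2 and RMSE."""
    x = np.asarray(csm_heights, dtype=float)
    y = np.asarray(manual_heights, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    pred = slope * x + intercept
    ss_res = np.sum((y - pred) ** 2)           # residual sum of squares
    ss_tot = np.sum((y - y.mean()) ** 2)       # total sum of squares
    r2 = 1.0 - ss_res / ss_tot
    rmse = float(np.sqrt(np.mean((y - pred) ** 2)))
    return float(slope), float(intercept), float(r2), rmse
```

Each of the 103 mean plot heights mentioned above would be one (x, y) pair in such a regression.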
Li, Jin; Liu, Zilong
2017-07-24
Remote sensing cameras in the visible/near-infrared range are essential tools in Earth observation, deep-space exploration, and celestial navigation. Their imaging performance, i.e. image quality, directly determines the target-observation performance of a spacecraft, and even the successful completion of a space mission. Unfortunately, the camera itself (its optical system, image sensor, and electronic system) limits the on-orbit imaging performance. Here, we demonstrate an on-orbit high-resolution imaging method based on the invariable modulation transfer function (IMTF) of cameras. The IMTF, which depends only on the camera itself and is stable and invariant to changing ground targets, atmosphere, and environment on orbit or on the ground, is extracted using a pixel optical focal plane (PFP). The PFP produces multiple spatial-frequency targets, which are used to calculate the IMTF at different frequencies. The resulting IMTF, in combination with a constrained least-squares filter, compensates for the IMTF, which amounts to removing the imaging effects imposed by the camera itself. This method is experimentally confirmed. Experiments on an on-orbit panchromatic camera indicate that the proposed method increases the average gradient 6.5 times, the edge intensity 3.3 times, and the MTF value 1.56 times compared to the case when the IMTF is not used. This opens a door to pushing past the limitations of the camera itself, enabling high-resolution on-orbit optical imaging.
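A constrained least-squares restoration of the kind the abstract invokes can be sketched in the frequency domain. This is the generic textbook formulation (a Laplacian smoothness constraint with regularisation weight gamma), not the authors' implementation; the `mtf` argument stands in for the measured IMTF sampled on the image's frequency grid:

```python
import numpy as np

def cls_restore(blurred, mtf, gamma=0.01):
    """Constrained least-squares restoration in the frequency domain.

    blurred : degraded image (2-D array)
    mtf     : the camera's transfer function sampled on the same
              frequency grid as np.fft.fft2 of the image
    gamma   : regularisation weight on a Laplacian smoothness constraint
    """
    # Laplacian kernel padded to image size acts as the constraint operator
    lap = np.zeros(blurred.shape, dtype=float)
    lap[0, 0] = 4.0
    lap[0, 1] = lap[1, 0] = lap[0, -1] = lap[-1, 0] = -1.0
    P = np.fft.fft2(lap)
    H = mtf
    G = np.fft.fft2(blurred)
    # F = H* G / (|H|^2 + gamma |P|^2): inverse filter, regularised
    F = np.conj(H) * G / (np.abs(H) ** 2 + gamma * np.abs(P) ** 2)
    return np.real(np.fft.ifft2(F))
```

With gamma = 0 and a flat transfer function this reduces to the identity, i.e. no correction is applied; a realistic measured MTF rolls off at high spatial frequencies, and gamma keeps the division from amplifying noise there.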
Gutiérrez García, María Angeles; Martín Ruiz, María Luisa; Rivera, Diego; Vadillo, Laura; Valero Duboy, Miguel Angel
2017-05-19
EDUCERE ("Ubiquitous Detection Ecosystem to Care and Early Stimulation for Children with Developmental Disorders") is an ecosystem for ubiquitous detection, care, and early stimulation of children with developmental disorders. The objectives of this Spanish government-funded research and development project are to investigate, develop, and evaluate innovative solutions to detect changes in psychomotor development through the natural interaction of children with toys and everyday objects, and perform stimulation and early attention activities in real environments such as home and school. Thirty multidisciplinary professionals and three nursery schools worked in the EDUCERE project between 2014 and 2017 and they obtained satisfactory results. Related to EDUCERE, we found studies based on providing networks of connected smart objects and the interaction between toys and social networks. This research includes the design, implementation, and validation of an EDUCERE smart toy aimed to automatically detect delays in psychomotor development. The results from initial tests led to enhancing the effectiveness of the original design and deployment. The smart toy, based on stackable cubes, has a data collector module and a smart system for detection of developmental delays, called the EDUCERE developmental delay screening system (DDSS). The pilot study involved 65 toddlers aged between 23 and 37 months (mean=29.02, SD 3.81) who built a tower with five stackable cubes, designed by following the EDUCERE smart toy model. As toddlers made the tower, sensors in the cubes sent data to a collector module through a wireless connection. All trials were video-recorded for further analysis by child development experts. After watching the videos, experts scored the performance of the trials to compare and fine-tune the interpretation of the data automatically gathered by the toy-embedded sensors. 
Judges were highly reliable in an interrater agreement analysis (intraclass correlation 0.961, 95% CI 0.937-0.967), suggesting that the process was successful in separating different levels of performance. A factor analysis of the collected data showed that three factors, trembling, speed, and accuracy, accounted for 76.79% of the total variance, but only two of them were predictors of performance in a regression analysis: accuracy (P=.001) and speed (P=.002). The other factor, trembling (P=.79), did not have a significant effect on this dependent variable. The EDUCERE DDSS is ready to use the regression equation obtained for the dependent variable "performance" as an algorithm for the automatic detection of psychomotor developmental delays. The results of the factor analysis are valuable for simplifying the design of the smart toy by taking into account only the significant variables in the collector module. The fine-tuning of the toy's processing module will be carried out by following the specifications resulting from the analysis of the data, to improve the efficiency and effectiveness of the product. ©María Angeles Gutiérrez García, María Luisa Martín Ruiz, Diego Rivera, Laura Vadillo, Miguel Angel Valero Duboy. Originally published in the Journal of Medical Internet Research (http://www.jmir.org), 19.05.2017.
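The regression model described (performance predicted from the two significant factors, accuracy and speed) can be illustrated with an ordinary least-squares fit. The two-predictor form comes from the abstract; the function names and data layout are an illustrative sketch, not the DDSS code:

```python
import numpy as np

def fit_performance_model(accuracy, speed, performance):
    """Least-squares fit of performance ~ b0 + b1*accuracy + b2*speed."""
    X = np.column_stack([np.ones(len(accuracy)), accuracy, speed])
    coef, *_ = np.linalg.lstsq(X, performance, rcond=None)
    return coef

def predict_performance(coef, accuracy, speed):
    """Apply the fitted regression equation to a new trial's factor scores."""
    return coef[0] + coef[1] * accuracy + coef[2] * speed
```

A predicted performance score below some clinically validated cut-off would then flag a trial for expert review.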
Development of the SEASIS instrument for SEDSAT
NASA Technical Reports Server (NTRS)
Maier, Mark W.
1996-01-01
Two SEASIS experiment objectives are key: take images that allow three-axis attitude determination and take multi-spectral images of the earth. During the tether mission it is also desirable to capture images of the recoiling tether from the endmass perspective (which has never been observed). SEASIS must store all its imagery taken during the tether mission until the earth downlink can be established. SEASIS determines attitude with a panoramic camera and performs earth observation with a telephoto-lens camera. Camera video is digitized, compressed, and stored in solid-state memory. These objectives are addressed through the following architectural choices: (1) A camera system using a Panoramic Annular Lens (PAL). This lens has a 360 deg. azimuthal field of view by a +45 degree vertical field measured from a plane normal to the lens boresight axis. It has been shown in Mr. Mark Steadham's UAH M.S. thesis that this camera can determine three-axis attitude anytime the earth and one other recognizable celestial object (for example, the sun) is in the field of view. This will be essentially all the time during tether deployment. (2) A second camera system using a telephoto lens and filter wheel. The camera is a black-and-white standard video camera. The filters are chosen to cover the visible spectral bands of remote sensing interest. (3) A processor and mass memory arrangement linked to the cameras. Video signals from the cameras are digitized, compressed in the processor, and stored in a large static RAM bank. The processor is a multi-chip module consisting of a T800 Transputer and three Zoran floating-point Digital Signal Processors. This processor module was supplied under ARPA contract by the Space Computer Corporation to demonstrate its use in space.
Foale in Base Block with camera
1997-11-03
STS086-405-008 (25 Sept-6 Oct 1997) --- Astronaut C. Michael Foale, sporting attire representing the STS-86 crew after four months aboard Russia's Mir Space Station, operates a video camera in Mir's Base Block Module. Photo credit: NASA
Wakata and Barratt with cameras at SM window
2009-04-19
ISS019-E-008935 (19 April 2009) --- Japan Aerospace Exploration Agency (JAXA) astronaut Koichi Wakata (left) and NASA astronaut Michael Barratt, both Expedition 19/20 flight engineers, use still cameras at a window in the Zvezda Service Module of the International Space Station.
Line drawing Scientific Instrument Module and lunar orbital science package
NASA Technical Reports Server (NTRS)
1970-01-01
A line drawing of the Scientific Instrument Module (SIM) with its lunar orbital science package. The SIM will be mounted in a previously vacant sector of the Apollo Service Module. It will carry specialized cameras and instrumentation for gathering lunar orbit scientific data.
Radar based autonomous sensor module
NASA Astrophysics Data System (ADS)
Styles, Tim
2016-10-01
Most surveillance systems combine camera sensors with other detection sensors that trigger an alert to a human operator when an object is detected. The detection sensors typically require careful installation and configuration for each application and there is a significant burden on the operator to react to each alert by viewing camera video feeds. A demonstration system known as Sensing for Asset Protection with Integrated Electronic Networked Technology (SAPIENT) has been developed to address these issues using Autonomous Sensor Modules (ASM) and a central High Level Decision Making Module (HLDMM) that can fuse the detections from multiple sensors. This paper describes the 24 GHz radar based ASM, which provides an all-weather, low power and license exempt solution to the problem of wide area surveillance. The radar module autonomously configures itself in response to tasks provided by the HLDMM, steering the transmit beam and setting range resolution and power levels for optimum performance. The results show the detection and classification performance for pedestrians and vehicles in an area of interest, which can be modified by the HLDMM without physical adjustment. The module uses range-Doppler processing for reliable detection of moving objects and combines Radar Cross Section and micro-Doppler characteristics for object classification. Objects are classified as pedestrian or vehicle, with vehicle sub classes based on size. Detections are reported only if the object is detected in a task coverage area and it is classified as an object of interest. The system was shown in a perimeter protection scenario using multiple radar ASMs, laser scanners, thermal cameras and visible band cameras. This combination of sensors enabled the HLDMM to generate reliable alerts with improved discrimination of objects and behaviours of interest.
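The range-Doppler processing mentioned above is, in its generic form, a pair of FFTs over the fast-time and slow-time axes of a pulse matrix: range compression along each echo, then a Doppler transform across pulses. This sketch shows that generic form only, not the module's actual signal chain:

```python
import numpy as np

def range_doppler_map(echoes):
    """Form a range-Doppler magnitude map from a pulse matrix.

    echoes : 2-D complex array, shape (n_pulses, n_range_samples);
             rows are slow time (pulse index), columns fast time.
    """
    rng = np.fft.fft(echoes, axis=1)                       # range bins (fast time)
    rd = np.fft.fftshift(np.fft.fft(rng, axis=0), axes=0)  # Doppler bins (slow time)
    return np.abs(rd)
```

A moving target shows up as a peak away from the zero-Doppler row, which is what makes detection of moving objects against stationary clutter reliable; micro-Doppler classification would then examine the time variation of the Doppler signature around that peak.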
Smart radio: spectrum access for first responders
NASA Astrophysics Data System (ADS)
Silvius, Mark D.; Ge, Feng; Young, Alex; MacKenzie, Allen B.; Bostian, Charles W.
2008-04-01
This paper details the Wireless at Virginia Tech Center for Wireless Telecommunications' (CWT) design and implementation of its Smart Radio (SR) communication platform. The CWT SR can identify available spectrum within a pre-defined band, rendezvous with an intended receiver, and transmit voice and data using a selected quality of service (QoS). This system builds upon previous cognitive technologies developed by CWT for the public safety community, with the goal of providing a prototype mobile communications package for military and public safety First Responders. A master control (MC) enables spectrum awareness by characterizing the radio environment with a power spectrum sensor and an innovative signal detection and classification module. The MC also enables spectrum and signal memory by storing sensor results in a knowledge database. By utilizing a family radio service (FRS) waveform database, the CWT SR can create a new communication link on any designated FRS channel frequency using FM, BPSK, QPSK, or 8PSK modulations. With FM, it supports analog voice communications with legacy hand-held FRS radios. With digital modulations, it supports IP data services, including a CWT developed CVSD-based VoIP protocol. The CWT SR coordinates spectrum sharing between analog primary users and digital secondary users by applying a simple but effective channel-change protocol. It also demonstrates a novel rendezvous protocol to facilitate the detection and initialization of communications links with neighboring SR nodes through the transmission of frequency-hopped rendezvous beacons. By leveraging the GNU Radio toolkit, writing key modules entirely in Python, and utilizing the USRP hardware front-end, the CWT SR provides a dynamic spectrum test bed for future smart and cognitive radio research.
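The spectrum-awareness step (finding an idle FRS channel from a power-spectrum sensor) can be illustrated with a simple periodogram-based energy detector. The function names, threshold logic, and channel list below are assumptions for illustration, not the CWT SR's actual sensing module:

```python
import numpy as np

def channel_powers(samples, fs, channel_freqs, bandwidth):
    """Estimate in-band power for each candidate channel from a block of
    complex baseband samples via a periodogram."""
    spectrum = np.abs(np.fft.fftshift(np.fft.fft(samples))) ** 2 / len(samples)
    freqs = np.fft.fftshift(np.fft.fftfreq(len(samples), d=1.0 / fs))
    powers = []
    for fc in channel_freqs:
        band = (freqs >= fc - bandwidth / 2) & (freqs < fc + bandwidth / 2)
        powers.append(float(spectrum[band].sum()))
    return powers

def pick_idle_channel(powers, threshold):
    """Return the index of the quietest channel below the occupancy
    threshold, or None if every channel is busy."""
    idx = int(np.argmin(powers))
    return idx if powers[idx] < threshold else None
```

In a cognitive radio, the chosen channel would then feed the rendezvous protocol so that both nodes retune to the same idle frequency.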
ERIC Educational Resources Information Center
Kindle, Joan
Information and exercises are provided in this learning module to increase students' awareness of and effectiveness in their role as consumers. The module, which is written at an elementary level, covers eight topics related to consumer affairs: (1) finding an apartment through newspaper classified advertisements and other sources and signing a…
Banisadr, Seyedali; Chen, Jian
2017-12-13
Cephalopods, such as cuttlefish, demonstrate remarkable adaptability to the coloration and texture of their surroundings by modulating their skin color and surface morphology simultaneously, for the purpose of adaptive camouflage and signal communication. Inspired by this unique feature of cuttlefish skins, we present a general approach to remote-controlled, smart films that undergo simultaneous changes of surface color and morphology upon infrared (IR) actuation. The smart film has a reconfigurable laminated structure that comprises an IR-responsive nanocomposite actuator layer and a mechanochromic elastomeric photonic crystal layer. Upon global or localized IR irradiation, the actuator layer exhibits fast, large, and reversible strain in the irradiated region, which causes a synergistically coupled change in the shape of the laminated film and color of the mechanochromic elastomeric photonic crystal layer in the same region. Bending and twisting deformations can be created under IR irradiation, through modulating the strain direction in the actuator layer of the laminated film. Furthermore, the laminated film has been used in a remote-controlled inchworm walker that can directly couple a color-changing skin with the robotic movements. Such remote-controlled, smart films may open up new application possibilities in soft robotics and wearable devices.
A direct-view customer-oriented digital holographic camera
NASA Astrophysics Data System (ADS)
Besaga, Vira R.; Gerhardt, Nils C.; Maksimyak, Peter P.; Hofmann, Martin R.
2018-01-01
In this paper, we propose a direct-view digital holographic camera system consisting mostly of customer-oriented components. The camera system is based on standard photographic units such as camera sensor and objective and is adapted to operate under off-axis external white-light illumination. The common-path geometry of the holographic module of the system ensures direct-view operation. The system can operate in both self-reference and self-interference modes. As a proof of system operability, we present reconstructed amplitude and phase information of a test sample.
Vanadium dioxide nanogrid films for high transparency smart architectural window applications.
Liu, Chang; Balin, Igal; Magdassi, Shlomo; Abdulhalim, Ibrahim; Long, Yi
2015-02-09
This study presents a novel approach towards achieving high luminous transmittance (T(lum)) for vanadium dioxide (VO(2)) thermochromic nanogrid films whilst maintaining the solar modulation ability (ΔT(sol)). The perforated VO(2)-based films employ orderly patterned nano-holes, which dramatically increase visible transmission while retaining large near-infrared modulation, thereby enhancing ΔT(sol). Numerical optimizations using parameter search algorithms have been implemented through a series of Finite Difference Time Domain (FDTD) simulations by varying film thickness, cell periodicity, grid dimensions and variations of grid arrangement. The best performing results of T(lum) (76.5%) and ΔT(sol) (14.0%) are comparable, if not superior, to the results calculated from nanothermochromism, nanoporosity and biomimetic nanostructuring. It opens up a new approach for thermochromic smart window applications.
Fabrication of multi-focal microlens array on curved surface for wide-angle camera module
NASA Astrophysics Data System (ADS)
Pan, Jun-Gu; Su, Guo-Dung J.
2017-08-01
In this paper, we present a wide-angle and compact camera module that consists of a microlens array with different focal lengths on a curved surface. The design integrates the principles of an insect's compound eye and the human eye. It contains a curved hexagonal microlens array and a spherical lens. Normal mobile phone cameras usually need no fewer than four lenses, but our proposed system uses only one. Furthermore, the thickness of our proposed system is only 2.08 mm and the diagonal full field of view is about 100 degrees. In order to fabricate the critical microlens array, we used inkjet printing to control the surface shape of each microlens, thereby achieving different focal lengths, and a replication method to form the curved hexagonal microlens array.
1998-12-07
S88-E-5057 (12-07-98) --- Astronaut James H. Newman waves at the camera as he holds onto one of the hand rails on the Unity connecting module during the early stages of a 7-hour, 21-minute spacewalk. Astronauts Newman and Jerry L. Ross, both mission specialists, went on to mate 40 cables and connectors running 76 feet from the Zarya control module to Unity, with the 35-ton complex towering over Endeavour's cargo bay. This photo was taken with an electronic still camera (ESC) at 23:37:40 GMT, Dec. 7.
SMART-1 Results and Lessons for Future Exploration
NASA Astrophysics Data System (ADS)
Foing, B. H.
2009-04-01
We summarise SMART-1 lunar highlights relevant for future lunar exploration. SMART-1 has been useful in the preparation of Selene Kaguya, the Indian lunar mission Chandrayaan-1, the Chinese Chang'E 1, the US Lunar Reconnaissance Orbiter, LCROSS, and subsequent lunar landers (Google Lunar X-prize, International Lunar Network, Moon-NEXT, cargo and manned landers). SMART-1 is contributing to preparing the next steps for exploration: survey of resources, search for ice, monitoring polar illumination, and mapping of sites for potential landings, international robotic villages and future human activities and lunar bases. Overview of SMART-1 mission and payload: SMART-1 is the first in the programme of ESA's Small Missions for Advanced Research and Technology [1,2,3]. Its first objective, to demonstrate Solar Electric Primary Propulsion (SEP) for future Cornerstones (such as Bepi-Colombo) and to test new technologies for spacecraft and instruments, has been achieved. The SMART-1 spacecraft was launched on 27 Sept. 2003 as an Ariane-5 auxiliary passenger and injected into GTO (Geostationary Transfer Orbit). The spacecraft reached a 400-3000 km lunar orbit on 15 March 2005 for a nominal science period of six months, with a one-year extension until impact on 3 September 2006. The SMART-1 science payload, with a total mass of some 19 kg, featured many innovative instruments and advanced technologies [1]: a miniaturised high-resolution camera (AMIE) for lunar surface imaging, a near-infrared point spectrometer (SIR) for lunar mineralogy investigation, and a very compact X-ray spectrometer (D-CIXS) [4-6] for fluorescence spectroscopy and imagery of the Moon's surface elemental composition.
The payload also included two plasma experiments: SPEDE (Spacecraft Potential, Electron and Dust Experiment) and EPDP (Electric Propulsion Diagnostic Package); an experiment (KaTE) that demonstrated deep-space telemetry and telecommand communications in the X and Ka bands; a radio-science experiment (RSIS); a deep-space optical link (Laser-Link Experiment), using the ESA Optical Ground Station in Tenerife; and the validation of a system of autonomous navigation (OBAN) based on image processing. SMART-1 lunar science and exploration results: A package of three multiband mapping instruments has performed science and exploration at the Moon. AMIE (Advanced Moon micro-Imager Experiment): AMIE is a miniature high-resolution (35 m/pixel at 350 km perilune height) camera, equipped with a fixed panchromatic and 3-colour filter, for Moon topography and imaging support to other experiments [7,10,11]. The micro-camera AMIE has provided high-resolution CCD images of selected lunar areas. It included filters deposited on the CCD: white light plus three filters for colour analyses, with bands at 750 nm, 900 nm and 950 nm (measuring the absorption of pyroxene and olivine). Lunar north polar maps and repeated high-resolution images of the south pole have been obtained, providing a monitoring of illumination to map potential sites relevant for future exploration. AMIE images provided a geological context for SIR and D-CIXS data, and a colour or multi-phase-angle complement. AMIE has been used to map sites of interest in the South Pole-Aitken basin relevant to the study of cataclysm bombardment, and to preview future sites for sample return. SMART-1 also studied volcanic processes, and the coupling between impacts and volcanism. D-CIXS (Demonstration of a Compact Imaging X-ray Spectrometer).
D-CIXS is based on novel detector and filter/collimator technologies, and has performed the first global lunar X-ray fluorescence mapping in the 0.5-10 keV range [4,5,9], in order to map the lunar elemental composition. It was supported in its operation by XSM (X-ray Solar Monitor), which also monitored coronal X-ray emission and solar flares [6]. For instance, D-CIXS measurements of the Mg, Al, Si, Ca & Fe lines at 1.25, 1.49, 1.74, 3.7 & 6.4 keV were made over the region north of Mare Crisium during the 15 Jan 2005 solar flare, permitting the first detection of calcium from lunar orbit [9]. Bulk crustal composition has bearing on theories of the origin and evolution of the Moon. D-CIXS produced the first global measurements of the lunar surface in X-ray fluorescence (XRF): elemental abundances of Mg, Al and Si (and Fe when solar activity permitted) across the whole Moon. The South Pole-Aitken Basin (SPA) and large lunar impact basins have also been measured with D-CIXS. D-CIXS has been improved for the C1XS instrument adapted to ISRO's Chandrayaan-1. SIR (SMART-1 Infra-Red Spectrometer): SIR operated in the 0.9-2.6 μm wavelength range and carried out a mineralogical survey of the lunar crust. SIR had high enough spectral resolution to separate the pyroxene and olivine signatures in lunar soils. SIR data with spatial resolution as good as 400 m made it possible to distinguish units on central peaks, walls, rims and ejecta blankets of large impact craters, allowing for stratigraphic studies of the lunar crust. SIR has been improved for the Chandrayaan-1 SIR2 instrument. SMART-1 overall planetary science: SMART-1 science investigations included studies of the chemical composition of the Moon, of geophysical processes (volcanism, tectonics, cratering, erosion, deposition of ices and volatiles) for comparative planetology, and high-resolution studies in preparation for future steps of lunar exploration.
The mission addressed several topics such as the accretional processes that led to the formation of rocky planets, and the origin and evolution of the Earth-Moon system [8]. SMART-1 operations and coordination: The experiments were run according to illumination and altitude conditions during the 6-month nominal science phase and the 1-year extension, in elliptical Moon orbit. The planning and coordination of the technology and science experiment operations was carried out at ESA/ESTEC (SMART-1 STOC). The data archiving is based on the PDS (Planetary Data System) standard. The SMART-1 observations have been coordinated with follow-up missions. References: [1] Foing, B. et al. (2001) Earth Moon Planets, 85, 523. [2] Racca, G.D. et al. (2002) Earth Moon Planets, 85, 379. [3] Racca, G.D. et al. (2002) P&SS, 50, 1323. [4] Grande, M. et al. (2003) P&SS, 51, 427. [5] Dunkin, S. et al. (2003) P&SS, 51, 435. [6] Huovelin, J. et al. (2002) P&SS, 50, 1345. [7] Shkuratov, Y. et al. (2003) JGRE, 108, E4, 1. [8] Foing, B.H. et al. (2003) Adv. Space Res., 31, 2323. [9] Grande, M. et al. (2007) P&SS, 55, 494. [10] Pinet, P. et al. (2005) P&SS, 53, 1309. [11] Josset, J.L. et al. (2006) Adv. Space Res., 37, 14. [12] Foing, B.H. et al. (2006) Adv. Space Res., 37, 6. Links: http://sci.esa.int/smart-1/, http://sci.esa.int/ilewg/
Photometric Studies of Orbital Debris at GEO
NASA Technical Reports Server (NTRS)
Seitzer, Patrick; Cowardin, Heather M.; Barker, Ed; Abercromby, Kira J.; Foreman, Gary; Hortsman, Matt
2009-01-01
Orbital debris represents a significant and increasing risk to operational spacecraft. Here we report on photometric observations made in standard BVRI filters at the Cerro Tololo Inter-American Observatory (CTIO) in an effort to determine the physical characteristics of optically faint debris at geosynchronous Earth orbit (GEO). Our sample is taken from GEO objects discovered in a survey with the University of Michigan's 0.6-m Curtis-Schmidt telescope (known as MODEST, for Michigan Orbital DEbris Survey Telescope), and then followed up in real time with the CTIO/SMARTS 0.9-m for orbits and photometry. For a sample of 50 objects, calibrated sequences in R-B-V-I-R filters have been obtained with the CTIO/SMARTS 0.9-m. For objects that do not show large brightness variations, the colors are largely redder than solar in both B-R and R-I. The width of the color distribution may be intrinsic to the nature of the surfaces, but could also imply that we are seeing irregularly shaped objects and measuring the colors at different times with just one telescope. For irregularly shaped objects tumbling at unknown orientations and rates, such sequential filter measurements using one telescope are subject to large errors of interpretation. If all observations in all filters in a particular sequence are of the same surface at the same solar and viewing angles, then the colors are meaningful. Where this is not the case, interpretation of the observed colors is impossible. For a smaller sample of objects we have observed with synchronized CCD cameras on the two telescopes. The CTIO/SMARTS 0.9-m observes in B, and the Schmidt in R. The CCD cameras are electronically linked together so that the start time and duration of observations are both the same to better than 50 milliseconds. Now the observed B-R color is a true measure of the scattered illuminated area of the debris piece for that observation.
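The B-R color itself reduces to a difference of instrumental magnitudes computed from the simultaneously measured B and R fluxes. This is the standard photometric relation; the zero points below are hypothetical placeholders, not the calibration actually used:

```python
import math

def instrumental_mag(flux, zero_point=25.0):
    """Instrumental magnitude from a measured flux (e.g. counts/s).
    The zero point here is a hypothetical placeholder value."""
    return zero_point - 2.5 * math.log10(flux)

def color_index(flux_b, flux_r, zp_b=25.0, zp_r=25.0):
    """B-R colour from fluxes measured simultaneously in the two bands,
    as with the synchronized two-telescope setup described above."""
    return instrumental_mag(flux_b, zp_b) - instrumental_mag(flux_r, zp_r)
```

Because both fluxes refer to the same instant, the color is meaningful even for a tumbling object, which is exactly the point of electronically linking the two cameras.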
Duque Domingo, Jaime; Cerrada, Carlos; Valero, Enrique; Cerrada, Jose A
2017-10-20
This work presents an Indoor Positioning System to estimate the location of people navigating in complex indoor environments. The developed technique combines WiFi Positioning Systems and depth maps, delivering promising results in complex inhabited environments consisting of various connected rooms where people are freely moving. This is a non-intrusive system in which personal information about subjects is not needed and, although RGB-D cameras are installed in the sensing area, users are only required to carry their smartphones. In this article, the methods developed to combine the above-mentioned technologies and the experiments performed to test the system are detailed. The obtained results show a significant improvement in terms of accuracy and performance with respect to previous WiFi-based solutions, as well as an extension of the range of operation.
View of Scientific Instrument Module to be flown on Apollo 15
NASA Technical Reports Server (NTRS)
1971-01-01
Close-up view of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 mission. Mounted in a previously vacant sector of the Apollo Service Module, the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data.
A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.
Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C
2017-02-07
The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.
Film dosimetry using a smart device camera: a feasibility study for point dose measurements
NASA Astrophysics Data System (ADS)
Aland, Trent; Jhala, Ekta; Kairn, Tanya; Trapp, Jamie
2017-10-01
In this work, a methodology for using a smartphone camera, in conjunction with a light-tight box operating in reflective transmission mode, is investigated as a proof of concept for use as a film dosimetry system. An imaging system was designed to allow the camera of a smartphone to be used as a pseudo-densitometer. Ten pieces of Gafchromic EBT3 film were irradiated to doses of up to 16.89 Gy and used to evaluate the effects of reproducibility and orientation, as well as the ability to create an accurate dose response curve for the smartphone-based dosimetry system, using all three colour channels. Results were compared to a flatbed scanner system. Overall uncertainty was found to be best for the red channel, with an uncertainty of 2.4% identified for film irradiated to 2.5 Gy and digitised using the smartphone system. This proof-of-concept exercise showed that although uncertainties still exceed those of a flatbed scanner system, the smartphone system may be useful for providing point dose measurements in situations where conventional flatbed scanners (or other dosimetry systems) are unavailable or unaffordable.
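The dose response curve mentioned above maps film darkening to dose; a minimal sketch of that calibration step, assuming a standard net-optical-density formulation and made-up calibration points rather than the study's measured data:

```python
import numpy as np

def net_od(pv_exposed, pv_unexposed):
    """Net optical density from red-channel pixel values."""
    return np.log10(pv_unexposed / pv_exposed)

# Hypothetical calibration data: doses (Gy) vs. mean red-channel pixel value.
doses = np.array([0.0, 0.5, 1.0, 2.5, 5.0, 10.0, 16.89])
pv = np.array([230.0, 210.0, 195.0, 165.0, 135.0, 105.0, 85.0])

nod = net_od(pv, pv[0])
coeffs = np.polyfit(nod, doses, 3)   # cubic dose-vs-netOD fit, common for EBT3

def dose_from_pv(pixel_value):
    """Dose estimate (Gy) for a measured red-channel pixel value."""
    return np.polyval(coeffs, net_od(pixel_value, pv[0]))
```

Darker film (lower pixel value) yields higher net OD and therefore higher estimated dose.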
Qi, Liming; Xia, Yong; Qi, Wenjing; Gao, Wenyue; Wu, Fengxia; Xu, Guobao
2016-01-19
We report, for the first time, both a wireless electrochemiluminescence (ECL) electrode microarray chip and a dramatic increase in ECL achieved by embedding a diode in an electromagnetic receiver coil. The newly designed device consists of a chip and a transmitter. The chip has an electromagnetic receiver coil, a mini-diode, and a gold electrode array. The mini-diode rectifies alternating current into direct current and thus enhances ECL intensities 18,000-fold, enabling sensitive visual detection using common cameras or smartphones as low-cost detectors. The detection limit of hydrogen peroxide using a digital camera is comparable to that using photomultiplier tube (PMT)-based detectors. Coupled with a PMT-based detector, the device can detect luminol with higher sensitivity, with a linear range from 10 nM to 1 mM. Because of its advantages, including high sensitivity, high throughput, low cost, high portability, and simplicity, this device is promising for point-of-care testing, drug screening, and high-throughput analysis.
Voss in hatch at aft end of Service module
2001-03-22
ISS002-E-5702 (22 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, translates through the forward hatch of the Zvezda Service Module. The image was recorded with a digital still camera.
Voss in Service module with cycle ergometer
2001-03-23
ISS002-E-5732 (23 March 2001) --- James S. Voss, Expedition Two flight engineer, prepares to exercise on the cycle ergometer in the Zvezda Service Module. The image was taken with a digital still camera.
Usachev on cycle ergometer in Service Module
2001-04-27
ISS002-E-6136 (27 April 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, exercises on the cycle ergometer in the Zvezda Service Module. The image was taken with a digital still camera.
Usachev tests Vozdukh in Service module
2001-05-11
ISS002-E-6111 (11 May 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, tests the Vozdukh Air Purification System in the Zvezda Service Module. The image was taken with a digital still camera.
Recent technology and usage of plastic lenses in image taking objectives
NASA Astrophysics Data System (ADS)
Yamaguchi, Susumu; Sato, Hiroshi; Mori, Nobuyoshi; Kiriki, Toshihiko
2005-09-01
Recently, plastic lenses produced by injection molding have become widely used in image-taking objectives for digital cameras, camcorders, and mobile phone cameras, because of their suitability for volume production and the ease with which aspherical surfaces can be obtained. For digital camera and camcorder objectives, it is desirable that the image point does not vary with temperature change despite the use of several plastic lenses. At the same time, owing to the shrinking pixel size of solid-state image sensors, lenses must now be assembled with high accuracy. To satisfy these requirements, we have developed a 16x compact zoom objective for camcorders and 3x-class folded zoom objectives for digital cameras, incorporating a cemented plastic doublet consisting of a positive lens and a negative lens. Over the last few years, production volumes of camera-equipped mobile phones have increased substantially; for mobile phone cameras, productivity is therefore more important than ever. For this application, we have developed a 1.3-megapixel compact camera module with a macro function, exploiting the fact that a plastic lens can incorporate mechanically functional shapes in its outer flange. Its objective consists of three plastic lenses, and all critical dimensions related to optical performance are determined by highly precise optical elements. This camera module is therefore manufactured without optical adjustment on an automatic assembly line, achieving both high productivity and high performance. Reported here are the constructions of the image-taking objectives described above and the associated technical topics.
A system for tracking and recognizing pedestrian faces using a network of loosely coupled cameras
NASA Astrophysics Data System (ADS)
Gagnon, L.; Laliberté, F.; Foucher, S.; Branzan Albu, A.; Laurendeau, D.
2006-05-01
A face recognition module has been developed for an intelligent multi-camera video surveillance system. The module can recognize a pedestrian face in terms of six basic emotions and the neutral state. Face and facial features detection (eyes, nasal root, nose and mouth) are first performed using cascades of boosted classifiers. These features are used to normalize the pose and dimension of the face image. Gabor filters are then sampled on a regular grid covering the face image to build a facial feature vector that feeds a nearest neighbor classifier with a cosine distance similarity measure for facial expression interpretation and face model construction. A graphical user interface allows the user to adjust the module parameters.
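The nearest-neighbour step described above can be sketched as follows; the four-dimensional vectors stand in for real grid-sampled Gabor feature vectors and are purely illustrative:

```python
import numpy as np

def cosine_distance(a, b):
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

def classify(feature, models):
    """Return the expression label whose stored feature vector is the
    nearest neighbour of `feature` under cosine distance."""
    return min(models, key=lambda label: cosine_distance(feature, models[label]))

# Hypothetical 4-D stand-ins for grid-sampled Gabor feature vectors.
models = {
    "neutral": np.array([1.0, 1.0, 1.0, 1.0]),
    "happy":   np.array([2.0, 0.5, 1.5, 0.2]),
}
label = classify(np.array([1.9, 0.6, 1.4, 0.3]), models)  # -> "happy"
```

Cosine distance compares the direction rather than the magnitude of the feature vectors, which makes the match insensitive to uniform scaling of the Gabor responses.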
Smarter Instruments, Smarter Archives: Machine Learning for Tactical Science
NASA Astrophysics Data System (ADS)
Thompson, D. R.; Kiran, R.; Allwood, A.; Altinok, A.; Estlin, T.; Flannery, D.
2014-12-01
There has been a growing interest by Earth and Planetary Sciences in machine learning, visualization and cyberinfrastructure to interpret ever-increasing volumes of instrument data. Such tools are commonly used to analyze archival datasets, but they can also play a valuable real-time role during missions. Here we discuss ways that machine learning can benefit tactical science decisions during Earth and Planetary Exploration. Machine learning's potential begins at the instrument itself. Smart instruments endowed with pattern recognition can immediately recognize science features of interest. This allows robotic explorers to optimize their limited communications bandwidth, triaging science products and prioritizing the most relevant data. Smart instruments can also target their data collection on the fly, using principles of experimental design to reduce redundancy and generally improve sampling efficiency for time-limited operations. Moreover, smart instruments can respond immediately to transient or unexpected phenomena. Examples include detections of cometary plumes, terrestrial floods, or volcanism. We show recent examples of smart instruments from 2014 tests including: aircraft and spacecraft remote sensing instruments that recognize cloud contamination, field tests of a "smart camera" for robotic surface geology, and adaptive data collection by X-Ray fluorescence spectrometers. Machine learning can also assist human operators when tactical decision making is required. Terrestrial scenarios include airborne remote sensing, where the decision to re-fly a transect must be made immediately. Planetary scenarios include deep space encounters or planetary surface exploration, where the number of command cycles is limited and operators make rapid daily decisions about where next to collect measurements. Visualization and modeling can reveal trends, clusters, and outliers in new data. This can help operators recognize instrument artifacts or spot anomalies in real time. 
We show recent examples from science data pipelines deployed onboard aircraft as well as tactical visualizations for non-image instrument data.
Design and development of a smart aerial platform for surface hydrological measurements
NASA Astrophysics Data System (ADS)
Tauro, F.; Pagano, C.; Porfiri, M.; Grimaldi, S.
2013-12-01
Currently available experimental methodologies for surface hydrological monitoring rely on intrusive sensing technologies which tend to provide local rather than distributed information on the flow physics. In this context, the drawbacks of invasive instrumentation are partially alleviated by Large Scale Particle Image Velocimetry (LSPIV). LSPIV is based on cameras mounted on masts along river banks which capture images of artificial tracers or naturally occurring objects floating on water surfaces. Images are then georeferenced and the displacement of groups of floating tracers statistically analyzed to reconstruct flow velocity maps at specific river cross-sections. In this work, we mitigate LSPIV's spatial limitations and inaccuracies due to image calibration by designing and developing a smart platform which integrates a digital acquisition system and laser calibration units onboard a custom-built quadricopter. The quadricopter is designed to be lightweight, low cost compared to kits available on the market, highly customizable, and stable, to guarantee minimal vibrations during image acquisition. The onboard digital system includes an encased GoPro Hero 3 camera whose axis is constantly kept orthogonal to the water surface by means of an in-house developed gimbal. The gimbal is connected to the quadricopter through a shock-absorbing damping device which further reduces residual vibrations. Image calibration is performed through laser units mounted at known distances on the quadricopter landing apparatus. The vehicle can be remotely controlled by the open-source Ardupilot microcontroller. Calibration tests and field experiments are conducted in outdoor environments to assess the feasibility of using the smart platform to acquire high quality images of natural streams. Captured images are processed by LSPIV algorithms and average flow velocities are compared to independently acquired flow estimates.
Further, videos are presented in which the smart platform captures the motion of environmentally friendly buoyant fluorescent particle tracers floating on the surface of water bodies. These fluorescent particles are synthesized in-house, and their visibility and accuracy in tracing complex flows have previously been tested in laboratory and outdoor settings. Experimental results demonstrate the potential of the methodology for monitoring poorly accessible and spatially extended environments. Improved accuracy in flow monitoring is accomplished by minimizing image orthorectification and introducing highly visible particle tracers. Future developments will aim at full vehicle autonomy through machine learning procedures for unmanned environmental monitoring.
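The core LSPIV computation, estimating tracer displacement between consecutive frames, can be sketched via cross-correlation; the synthetic frames and the FFT-based formulation below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def displacement(frame_a, frame_b):
    """Integer-pixel shift of frame_b relative to frame_a, found at the
    peak of the FFT-based circular cross-correlation."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    corr = np.fft.ifft2(np.conj(np.fft.fft2(a)) * np.fft.fft2(b)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped indices back to signed shifts.
    if dy > a.shape[0] // 2:
        dy -= a.shape[0]
    if dx > a.shape[1] // 2:
        dx -= a.shape[1]
    return dy, dx

# Synthetic tracer pattern shifted by (2, 3) pixels between frames.
rng = np.random.default_rng(0)
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, (2, 3), axis=(0, 1))
dy, dx = displacement(frame_a, frame_b)
# With frame interval dt and pixel size p, velocity = (dx * p / dt, dy * p / dt).
```

Dividing the recovered pixel shift by the frame interval, after georeferencing, yields the surface velocity field that LSPIV reports at each cross-section.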
Astronaut Vance Brand seen in hatchway leading to Apollo Docking module
NASA Technical Reports Server (NTRS)
1975-01-01
Astronaut Vance D. Brand, command module pilot of the American Apollo Soyuz Test Project (ASTP) crew, is seen in the hatchway leading from the Apollo Command Module (CM) into the Apollo Docking Module (DM) during joint U.S.-USSR ASTP docking in Earth orbit mission. The 35mm camera is looking from the DM into the CM.
Perspective and potential of smart optical materials
NASA Astrophysics Data System (ADS)
Choi, Sang H.; Duzik, Adam J.; Kim, Hyun-Jung; Park, Yeonjoon; Kim, Jaehwan; Ko, Hyun-U.; Kim, Hyun-Chan; Yun, Sungryul; Kyung, Ki-Uk
2017-09-01
The increasing requirements of hyperspectral imaging optics, electro/photo-chromic materials, negative refractive index metamaterial optics, and miniaturized optical components from micro-scale to quantum-scale optics have all contributed to new features and advancements in optics technology. The development of multifunctional optics has pushed the boundaries of the field into new areas that require new disciplines and materials to maximize the potential benefits. The purpose of this study is to understand and show the fundamental materials and fabrication technology for field-controlled spectrally active optics (referred to as smart optics) that are essential for future industrial, scientific, military, and space applications, such as membrane optics, filters, windows for sensors and probes, telescopes, spectroscopes, cameras, light valves, light switches, and flat-panel displays. The proposed smart optics are based on the Stark and Zeeman effects in materials tailored with quantum dot arrays and thin films made from readily polarizable materials via ferroelectricity or ferromagnetism. Bound excitonic states of organic crystals are also capable of optical adaptability, tunability, and reconfigurability. To show the benefits of smart optics, this paper reviews the spectral characteristics of smart optical materials and device technology. Experiments on the quantum-confined Stark effect arising from rare earth element doping in semiconductors, and on the effects of applied electric fields on spectra and refractive index, are discussed. Other bulk and dopant materials were also found to exhibit similar shifts in spectrum and refractive index. Other efforts focus on materials for creating field-controlled spectrally smart active optics over a selected spectral range. Surface plasmon polariton transmission of light through apertures is also discussed, along with potential applications.
New breakthroughs in micro-scale multiple zone plate optics acting as micro convex lenses are reviewed, along with the newly discovered pseudo-focal point not predicted by conventional optics modeling. Micron-sized solid-state beam scanner chips for laser waveguides are reviewed as well.
A DS-UWB Cognitive Radio System Based on Bridge Function Smart Codes
NASA Astrophysics Data System (ADS)
Xu, Yafei; Hong, Sheng; Zhao, Guodong; Zhang, Fengyuan; di, Jinshan; Zhang, Qishan
This paper proposes a direct-sequence UWB (DS-UWB) cognitive radio system based on a bridge function smart sequence matrix and the Gaussian pulse. Because the system uses bridge function smart code sequences as its spreading codes, the zero correlation zones (ZCZs) of the bridge function sequences' auto-correlation functions reduce multipath fading interference on the pulse. The modulated signal was sent over the IEEE 802.15.3a UWB channel. We analyze how the ZCZs suppress multipath interference (MPI), one of the main sources of interference in the system. The simulation in SIMULINK/MATLAB is described in detail. The results show that the system performs better than one employing a Walsh sequence square matrix, which was verified in principle by the analytical formulas.
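The zero-correlation-zone property the abstract relies on can be checked numerically; the sketch below uses the length-4 perfect binary sequence as a stand-in, since the bridge function construction itself is not reproduced here:

```python
import numpy as np

def periodic_autocorrelation(seq):
    s = np.asarray(seq, dtype=float)
    return np.array([np.dot(s, np.roll(s, k)) for k in range(len(s))])

def zcz_width(seq):
    """Number of consecutive shifts (starting at shift 1) with zero
    periodic autocorrelation -- the width of the zero correlation zone."""
    r = periodic_autocorrelation(seq)
    width = 0
    for k in range(1, len(r)):
        if abs(r[k]) > 1e-9:
            break
        width += 1
    return width

# The length-4 perfect binary sequence: zero autocorrelation at every
# nonzero shift, i.e. an ideal ZCZ (a stand-in for the bridge function
# sequences, whose construction is not reproduced here).
seq = [1, 1, 1, -1]
```

A receiver correlating against a delayed multipath echo that falls inside the ZCZ sees zero contribution, which is why such codes suppress MPI.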
Combining Sense and Intelligence for Smart Structures
NASA Technical Reports Server (NTRS)
2002-01-01
IFOS developed the I*Sense technology with assistance from a NASA Langley Research Center SBIR contract. NASA and IFOS collaborated to create sensing network designs that have high sensitivity, low power consumption, and significant potential for mass production. The joint research effort led to the development of a module that is rugged, compact, lightweight, and immune to electromagnetic interference. These features make the I*Sense multisensor arrays favorable for smart structure applications, including smart buildings, bridges, highways, dams, power plants, ships, and oil tankers, as well as space vehicles, space stations, and other space structures. For instance, the system can be used as an early warning and detection device, with alarms set to monitor the maximum allowable strain and stress values at various points of a given structure.
Space missions for automation and robotics technologies (SMART) program
NASA Technical Reports Server (NTRS)
Ciffone, D. L.; Lum, H., Jr.
1985-01-01
The motivations, features and expected benefits and applications of the NASA SMART program are summarized. SMART is intended to push the state of the art in automation and robotics, a goal that Public Law 98-371 mandated be an inherent part of the Space Station program. The effort would first require tests of sensors, manipulators, computers and other subsystems as seeds for the evolution of flight-qualified subsystems. Consideration is currently being given to robotics systems as add-ons to the RMS, MMU and OMV and a self-contained automation and robotics module which would be tended by astronaut visits. Probable experimentation and development paths that would be pursued with the equipment are discussed, along with the management structure and procedures for the program. The first hardware flight is projected for 1989.
Smart Rehabilitation Garment for posture monitoring.
Wang, Q; Chen, W; Timmermans, A A A; Karachristos, C; Martens, J B; Markopoulos, P
2015-08-01
Posture monitoring and correction technologies can support the prevention and treatment of spinal pain, and can help detect and avoid compensatory movements during neurological rehabilitation of the upper extremities, which can be very important for ensuring its effectiveness. We describe the design and development of the Smart Rehabilitation Garment (SRG), a wearable system designed to support posture correction. The SRG combines a number of inertial measurement units (IMUs) controlled by an Arduino processor. It provides feedback through vibration on the garment, audible alarm signals, and visual instructions on a Bluetooth-connected smartphone. We discuss the placement of the sensing modules, the garment design, the feedback design, and the integration of smart textiles and wearable electronics aimed at achieving wearability and ease of use. We report on the system's accuracy as compared to an optical tracking method.
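Trunk-tilt estimation from an IMU, as used by posture monitors of this kind, is often done with a complementary filter; the sketch below is a generic illustration under assumed axis conventions, not the SRG's actual fusion algorithm:

```python
import math

def complementary_filter(angle, accel, gyro_rate, dt, alpha=0.98):
    """One update of a trunk-tilt estimate (degrees), fusing the
    integrated gyro rate (deg/s) with the accelerometer's gravity
    direction. Axis conventions here are assumed for illustration."""
    accel_angle = math.degrees(math.atan2(accel[0], accel[1]))
    return alpha * (angle + gyro_rate * dt) + (1 - alpha) * accel_angle

# Static upright posture: accelerometer reads (0 g, 1 g), gyro silent.
angle = 10.0                       # deliberately wrong initial estimate
for _ in range(200):
    angle = complementary_filter(angle, (0.0, 1.0), 0.0, 0.01)
# The estimate decays toward the accelerometer's 0-degree reading.
```

The gyro term tracks fast movements while the accelerometer term slowly corrects drift; the fused angle can then be thresholded to trigger the garment's vibration feedback.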
Concept of electro-optical sensor module for sniper detection system
NASA Astrophysics Data System (ADS)
Trzaskawka, Piotr; Dulski, Rafal; Kastek, Mariusz
2010-10-01
The paper presents an initial concept of an electro-optical sensor unit for sniper detection. This unit, comprising thermal and daylight cameras, can operate as a standalone device, but its primary application is in a multi-sensor sniper and shot detection system. As part of a larger system, it should contribute to greater overall system efficiency and a lower false alarm rate thanks to data and sensor fusion techniques. Additionally, it is expected to provide some pre-shot detection capability. Acoustic (or radar) systems used for shot detection generally offer only "after-the-shot" information and cannot prevent an enemy attack, which in the case of a skilled sniper opponent usually means trouble. The passive imaging sensors presented in this paper, together with active systems that detect pointed optics, are capable of detecting specific shooter signatures, or at least the presence of suspect objects in the vicinity. The proposed sensor unit uses a thermal camera as the primary sniper and shot detection tool. The basic camera parameters, such as focal plane array size and type, focal length, and aperture, were chosen on the basis of the assumed tactical characteristics of the system (mainly detection range) and the current technology level. To provide a cost-effective solution, commercially available daylight camera modules and infrared focal plane arrays were tested, including fast cooled infrared array modules capable of a 1000 fps image acquisition rate. The daylight camera operates in a supporting role, providing a corresponding visual image that is easier for a human operator to comprehend. The initial assumptions concerning sensor operation were verified during laboratory and field tests, and some example shot recording sequences are presented.
Kinematic control of male Allen's Hummingbird wing trill over a range of flight speeds.
Clark, Christopher J; Mistick, Emily A
2018-05-18
Wing trills are pulsed sounds produced by modified wing feathers at one or more specific points in time during a wingbeat. Male Allen's Hummingbirds (Selasphorus sasin) produce a sexually dimorphic 9 kHz wing trill in flight. Here we investigate the kinematic basis for trill production. The wingtip velocity hypothesis posits that trill production is modulated by the airspeed of the wingtip at some point during the wingbeat, whereas the wing rotation hypothesis posits that trill production is instead modulated by wing rotation kinematics. To test these hypotheses, we flew six male Allen's Hummingbirds in an open-jet wind tunnel at flight speeds of 0, 3, 6, 9, 12 and 14 m s-1, and recorded their flight with two 'acoustic cameras' placed below and behind, or below and lateral to, the flying bird. The acoustic cameras are phased arrays of 40 microphones that use beamforming to spatially locate sound sources within a camera image. Trill sound pressure level (SPL) exhibited a U-shaped relationship with flight speed in all three camera positions. SPL was greatest perpendicular to the stroke plane. Acoustic camera videos suggest that the trill is produced during supination. The trill was up to 20 dB louder during maneuvers than during steady-state flight in the wind tunnel, across all airspeeds tested. These data provide partial support for the wing rotation hypothesis. Altered wing rotation kinematics could allow male Allen's Hummingbirds to modulate trill production in social contexts such as courtship displays. © 2018. Published by The Company of Biologists Ltd.
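Delay-and-sum beamforming, the technique the acoustic cameras use to localize sound, can be illustrated with a one-dimensional array and integer-sample delays (a deliberate simplification of the real 40-microphone 2-D arrays):

```python
import numpy as np

def delay_and_sum_power(signals, delays):
    """Mean power of the delay-and-sum output for integer-sample
    steering delays (one delay per microphone)."""
    out = sum(np.roll(sig, -d) for sig, d in zip(signals, delays))
    return float(np.mean(out ** 2))

# Synthetic demo: 4 microphones on a line receive the same noise-like
# signal with per-mic delays of 0, 1, 2, 3 samples (a plane wave
# sweeping along the array).
rng = np.random.default_rng(1)
src = rng.standard_normal(256)
signals = [np.roll(src, m) for m in range(4)]

# Steer with per-mic delay increment d; output power peaks at the true d = 1.
powers = {d: delay_and_sum_power(signals, [d * m for m in range(4)])
          for d in range(-3, 4)}
best = max(powers, key=powers.get)
```

Scanning the steering delay over candidate directions and mapping output power onto the camera image is, in essence, how the phased array places a sound source within the picture.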
2007-08-03
KENNEDY SPACE CENTER, FLA. - The STS-120 crew is at Kennedy for a crew equipment interface test, or CEIT. In Orbiter Processing Facility bay 3, from left in blue flight suits, STS-120 Mission Specialist Stephanie D. Wilson, Pilot George D. Zamka, Commander Pamela A. Melroy, Mission Specialist Scott E. Parazynski (holding camera) and Mission Specialist Douglas H. Wheelock are given the opportunity to operate the cameras that will fly on their mission. Among the activities standard to a CEIT are harness training, inspection of the thermal protection system and camera operation for planned extravehicular activities, or EVAs. The STS-120 mission will deliver the Harmony module, christened after a school contest, which will provide attachment points for European and Japanese laboratory modules on the International Space Station. Known in technical circles as Node 2, it is similar to the six-sided Unity module that links the U.S. and Russian sections of the station. Built in Italy for the United States, Harmony will be the first new U.S. pressurized component to be added. The STS-120 mission is targeted to launch on Oct. 20. Photo credit: NASA/George Shelton
Integrated Rapid-Diagnostic-Test Reader Platform on a Cellphone
Mudanyali, Onur; Dimitrov, Stoyan; Sikora, Uzair; Padmanabhan, Swati; Navruz, Isa; Ozcan, Aydogan
2012-01-01
We demonstrate a cellphone based Rapid-Diagnostic-Test (RDT) reader platform that can work with various lateral flow immuno-chromatographic assays and similar tests to sense the presence of a target analyte in a sample. This compact and cost-effective digital RDT reader, weighing only ~65 grams, mechanically attaches to the existing camera unit of a cellphone, where various types of RDTs can be inserted to be imaged in reflection or transmission modes under light-emitting-diode (LED) based illumination. Captured raw images of these tests are then digitally processed (within less than 0.2 sec/image) through a smart application running on the cellphone for validation of the RDT as well as for automated reading of its diagnostic result. The same smart application running on the cellphone then transmits the resulting data, together with the RDT images and other related information (e.g., demographic data) to a central server, which presents the diagnostic results on a world-map through geo-tagging. This dynamic spatio-temporal map of various RDT results can then be viewed and shared using internet browsers or through the same cellphone application. We tested this platform using malaria, tuberculosis (TB) as well as HIV RDTs by installing it on both Android based smartphones as well as an iPhone. Providing real-time spatio-temporal statistics for the prevalence of various infectious diseases, this smart RDT reader platform running on cellphones might assist health-care professionals and policy makers to track emerging epidemics worldwide and help epidemic preparedness. PMID:22596243
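Automated reading of an RDT amounts to comparing line-region intensity against the strip background; a minimal sketch under assumed image conventions (bright background, dark lines), not the platform's actual processing pipeline:

```python
import numpy as np

def read_rdt(strip, control_cols, test_cols, threshold=0.15):
    """Classify an RDT strip image (2-D array, bright background ~1.0,
    darker line regions). Returns 'invalid', 'negative' or 'positive'."""
    background = np.median(strip)
    def line_present(cols):
        region = strip[:, cols[0]:cols[1]]
        return background - region.mean() > threshold
    if not line_present(control_cols):
        return "invalid"   # no control line: the test did not run properly
    return "positive" if line_present(test_cols) else "negative"

# Synthetic strip: bright background with a dark control line only.
strip = np.ones((20, 100))
strip[:, 30:34] = 0.4      # control line
result = read_rdt(strip, (30, 34), (60, 64))   # -> "negative"
```

Checking the control line first mirrors how real readers validate the assay before reporting a result.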
Providing IoT Services in Smart Cities through Dynamic Augmented Reality Markers.
Chaves-Diéguez, David; Pellitero-Rivero, Alexandre; García-Coego, Daniel; González-Castaño, Francisco Javier; Rodríguez-Hernández, Pedro Salvador; Piñeiro-Gómez, Óscar; Gil-Castiñeira, Felipe; Costa-Montenegro, Enrique
2015-07-03
Smart cities are expected to improve the quality of life of citizens by relying on new paradigms, such as the Internet of Things (IoT) and its capacity to manage and interconnect thousands of sensors and actuators scattered across the city. At the same time, mobile devices widely assist professional and personal everyday activities. A very good example of the potential of these devices for smart cities is their powerful support for intuitive service interfaces (such as those based on augmented reality (AR)) for non-expert users. In our work, we consider a scenario that combines IoT and AR within a smart city maintenance service to improve the accessibility of sensor and actuator devices in the field, where responsiveness is crucial. In it, depending on the location and needs of each service, data and commands will be transported by an urban communications network or consulted on the spot. Direct AR interaction with urban objects has already been described; it usually relies on 2D visual codes to deliver object identifiers (IDs) to the rendering device to identify object resources. These IDs allow information about the objects to be retrieved from a remote server. In this work, we present a novel solution that replaces static AR markers with dynamic markers based on LED communication, which can be decoded through cameras embedded in smartphones. These dynamic markers can directly deliver sensor information to the rendering device, on top of the object ID, without further network interaction.
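Decoding such an LED-based dynamic marker reduces, in the simplest case, to thresholding per-frame brightness into on-off-keyed bits; a toy sketch with hypothetical brightness values (a real camera link must also handle synchronization and rolling-shutter effects):

```python
def decode_ook(brightness, threshold=None):
    """Decode on-off-keyed bits from per-frame LED brightness samples,
    assuming one camera frame per bit period (an idealization)."""
    if threshold is None:
        threshold = (max(brightness) + min(brightness)) / 2
    return [1 if b > threshold else 0 for b in brightness]

# Hypothetical per-frame brightness trace: LED on ~200, off ~40.
samples = [198, 42, 205, 201, 39, 44, 197, 40]
bits = decode_ook(samples)  # -> [1, 0, 1, 1, 0, 0, 1, 0]
```

The decoded bit stream would carry the object ID plus the sensor payload that the abstract describes delivering directly to the rendering device.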
Usachev takes notes in Service Module
2001-03-26
ISS002-E-5773 (28 March 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, scribbles down some notes at the computer workstation in the Zvezda Service Module. The image was taken with a digital still camera.
Method to implement the CCD timing generator based on FPGA
NASA Astrophysics Data System (ADS)
Li, Binhua; Song, Qian; He, Chun; Jin, Jianhui; He, Lin
2010-07-01
With the advance of FPGA technology, the design methodology of digital systems is changing. In recent years we have developed a method to implement the CCD timing generator based on an FPGA and VHDL. This paper presents the principles and implementation details of the method. Taking a developed camera as an example, we introduce the structure and the input and output clocks/signals of the timing generator implemented in the camera. The generator is composed of a top module and a bottom module. The bottom one is made up of 4 sub-modules which correspond to 4 different operation modes. The modules are implemented in 5 VHDL programs. Frame charts of the architecture of these programs are shown in the paper. We also describe the implementation steps of the timing generator in Quartus II, and the interconnections between the generator and a Nios soft-core processor which serves as the controller of this generator. Some test results are presented at the end.
SMART-on-FHIR implemented over i2b2
Mandel, Joshua C; Klann, Jeffery G; Wattanasin, Nich; Mendis, Michael; Chute, Christopher G; Mandl, Kenneth D; Murphy, Shawn N
2017-01-01
We have developed an interface to serve patient data from Informatics for Integrating Biology and the Bedside (i2b2) repositories in the Fast Healthcare Interoperability Resources (FHIR) format, referred to as a SMART-on-FHIR cell. The cell serves FHIR resources on a per-patient basis, and supports the “substitutable” modular third-party applications (SMART) OAuth2 specification for authorization of client applications. It is implemented as an i2b2 server plug-in, consisting of 6 modules: authentication, REST, i2b2-to-FHIR converter, resource enrichment, query engine, and cache. The source code is freely available as open source. We tested the cell by accessing resources from a test i2b2 installation, demonstrating that a SMART app can be launched from the cell that accesses patient data stored in i2b2. We successfully retrieved demographics, medications, labs, and diagnoses for test patients. The SMART-on-FHIR cell will enable i2b2 sites to provide simplified but secure data access in FHIR format, and will spur innovation and interoperability. Further, it transforms i2b2 into an apps platform. PMID:27274012
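A SMART-on-FHIR client reads resources over plain HTTP with an OAuth2 bearer token; a minimal sketch that only builds the request, using a placeholder endpoint and token rather than a real i2b2/FHIR deployment:

```python
import urllib.request

def fhir_read_request(base_url, resource_type, resource_id, token):
    """Build an authorized FHIR read request (SMART OAuth2 bearer token).
    The base URL and token here are placeholders, not a real endpoint."""
    url = f"{base_url}/{resource_type}/{resource_id}"
    return urllib.request.Request(url, headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/fhir+json",
    })

req = fhir_read_request("https://example.org/fhir", "Patient", "123", "TOKEN")
# urllib.request.urlopen(req) would return the Patient resource as FHIR JSON;
# the SMART-on-FHIR cell serves such resources on a per-patient basis.
```

The same pattern, with different resource types (Observation, MedicationRequest, Condition), covers the demographics, labs, medications, and diagnoses the cell was tested against.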
Human tracking over camera networks: a review
NASA Astrophysics Data System (ADS)
Hou, Li; Wan, Wanggen; Hwang, Jenq-Neng; Muhammad, Rizwan; Yang, Mingyang; Han, Kang
2017-12-01
In recent years, automated human tracking over camera networks has become essential for video surveillance. Tracking humans over camera networks is not only inherently challenging due to changing human appearance, but also has enormous potential for a wide range of practical applications, ranging from security surveillance to retail and health care. This review paper surveys the most widely used techniques and recent advances for human tracking over camera networks. Two important functional modules are addressed: human tracking within a camera and human tracking across non-overlapping cameras. The core techniques of human tracking within a camera are discussed from two aspects, i.e., generative trackers and discriminative trackers. The core techniques of human tracking across non-overlapping cameras are then discussed in terms of human re-identification, camera-link model-based tracking, and graph model-based tracking. Our survey aims to address existing problems, challenges, and future research directions based on analyses of the current progress made toward human tracking techniques over camera networks.
View of model of Scientific Instrument Module to be flown on Apollo 15
NASA Technical Reports Server (NTRS)
1970-01-01
Close-up view of a scale model of the Scientific Instrument Module (SIM) to be flown for the first time on the Apollo 15 mission. Mounted in a previously vacant sector of the Apollo service module, the SIM carries specialized cameras and instrumentation for gathering lunar orbit scientific data.
Development of a custom-made "smart-sphere" to assess incipient entrainment by rolling
NASA Astrophysics Data System (ADS)
Valyrakis, Manousos; Kitsikoudis, Vasileios; Alexakis, Athanasios; Trinder, Jon
2017-04-01
The most widely applied criterion for sediment incipient motion in engineering applications is the time- and space-averaged approach of the critical Shields shear stress. Nonetheless, research published in recent years has highlighted the importance of turbulence fluctuations in sediment incipient motion and its stochastic character. The present experimental study statistically investigates the link between the response of a "smart-pebble" and the hydrodynamics in near-critical flow conditions, and discusses how such a device can be utilized in engineering design. A set of specifically designed fluvial experiments monitoring the entrainment conditions of a "smart-pebble" was carried out in a tilting, recirculating flume under turbulent flow conditions, while three-dimensional flow measurements were obtained with an acoustic Doppler velocimeter. The "smart-pebble" employed herein is a custom-made instrumented sphere of 7 cm diameter, which has a number of sensors embedded within its waterproof 3D-printed plastic shell. Specifically, the "smart-pebble" is equipped with miniaturized, off-the-shelf, low-cost, three-dimensional acceleration, orientation and angular displacement sensors. A 3D-printed local micro-topography of known geometry was installed in the flume's test section and the "smart-pebble" was placed there in order to facilitate the analysis. Every time the "smart-pebble" is displaced by the flow, a pin located downstream blocks its full entrainment. This allows continuous recording of entrainment events due to the passage of energetic flow structures, after which the "smart-pebble" returns to its resting pocket. Under such a configuration the "smart-pebble" device allows the recording of normally indiscernible (to the naked eye) vibrations, twitching motions, and full entrainments of the studied particle, allowing its analysis from a Lagrangian framework.
During the incipient motion experiments the retrieved data are stored in an internal memory unit or transferred online via short-range Wi-Fi antennas. In addition, two high-speed commercial cameras are used to monitor the process and provide additional information. The hydrodynamic force that the "smart-pebble" is subjected to is expressed with the recently proposed impulse and energy criteria, which imply that a sufficiently energetic turbulent flow structure requires not only a hydrodynamic force above a certain threshold, but also that this force be exerted for sufficient time for momentum transfer to occur efficiently. It is found that the probability of entrainment of the "smart-pebble" is linked to the number of energetic flow events above a threshold level. The findings of this experimental study aim to shed more light on coarse sediment incipient motion and pave the way for the utilization of such devices in the field in actual engineering applications.
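The impulse criterion described above can be sketched in a few lines: an entrainment candidate is a flow event whose force exceeds a threshold, and whose excess impulse (force above threshold, integrated over the exceedance duration) exceeds a minimum. The thresholds and the force series below are illustrative, not values from the experiments.

```python
# Hedged sketch of the impulse criterion: detect exceedance events in a
# sampled force series and keep those whose accumulated excess impulse
# passes a minimum. All numbers are illustrative.

def impulse_events(force, dt, f_crit, i_crit):
    """Return impulses of events where force > f_crit and impulse > i_crit."""
    events, acc = [], 0.0
    for f in force:
        if f > f_crit:
            acc += (f - f_crit) * dt      # accumulate excess impulse
        elif acc > 0.0:
            if acc > i_crit:
                events.append(acc)        # event ended with enough impulse
            acc = 0.0
    if acc > i_crit:                      # event still running at series end
        events.append(acc)
    return events

# Two exceedance events; only the sustained one carries enough impulse.
series = [0.1, 0.9, 0.2, 0.8, 0.9, 0.85, 0.2]
print(impulse_events(series, dt=0.1, f_crit=0.5, i_crit=0.05))
```

The count of such events per unit time is the quantity the study links to the probability of entrainment.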
NASA Astrophysics Data System (ADS)
Gupta, S.; Lohani, B.
2014-05-01
Mobile augmented reality is the next-generation technology for visualising the 3D real world intelligently. The technology is expanding at a fast pace, upgrading the status of a smart phone to an intelligent device. The research problem identified and presented in the current work is to view the actual dimensions of objects captured by a smart phone in real time. The proposed methodology first establishes correspondence between a LiDAR point cloud, stored on a server, and the image captured by the mobile. This correspondence is established using the exterior and interior orientation parameters of the mobile camera and the coordinates of the LiDAR points which lie in the viewshed of the mobile camera. A pseudo-intensity image is generated using the LiDAR points and their intensity. The mobile image and the pseudo-intensity image are then registered using the SIFT image registration method, thereby generating a pipeline to locate the point in the point cloud corresponding to a point (pixel) on the mobile image. The second part of the method uses the point cloud data to compute dimensional information corresponding to pairs of points selected on the mobile image and overlays the dimensions on top of the image. This paper describes all steps of the proposed method. The paper uses an experimental setup to mimic the mobile phone and server system and presents some initial but encouraging results.
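The second stage of the pipeline reduces to a lookup: once each image pixel is linked to a LiDAR point, the dimension between two picked pixels is the Euclidean distance between their 3-D points. The sketch below uses a bare pinhole camera model (focal length in pixels, principal point) as an assumption for illustration; it is not the paper's full orientation pipeline.

```python
# Hedged sketch: project LiDAR points through a pinhole camera model,
# build a pixel -> 3-D point lookup, and measure the distance between
# the 3-D points behind two chosen pixels. Camera parameters are
# illustrative assumptions.
import math

def project(point, f=1000.0, cx=320.0, cy=240.0):
    """Project a camera-frame 3-D point to pixel coordinates (pinhole model)."""
    x, y, z = point
    return (round(f * x / z + cx), round(f * y / z + cy))

def measure(points, pix_a, pix_b):
    """Distance between the 3-D points that project onto two chosen pixels."""
    lookup = {project(p): p for p in points}        # pixel -> LiDAR point
    pa, pb = lookup[pix_a], lookup[pix_b]
    return math.dist(pa, pb)

cloud = [(0.0, 0.0, 5.0), (1.0, 0.0, 5.0)]          # two points 1 m apart
a, b = project(cloud[0]), project(cloud[1])
print(measure(cloud, a, b))                          # 1.0
```

In the real system the pixel-to-point link comes from SIFT registration against the pseudo-intensity image rather than from a known projection.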
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array.
Navruz, Isa; Coskun, Ahmet F; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-10-21
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ~9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ~3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears.
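The shift-and-add fusion step described above can be reduced to a short sketch: each frame is shifted back by its known offset and the aligned frames are averaged, suppressing the non-uniform sampling artefacts. Offsets are assumed known (in the paper they follow from the rotation angle); 1-D signals stand in for images to keep the sketch short.

```python
# Hedged sketch of shift-and-add fusion: undo each frame's integer
# shift, then average the aligned frames. 1-D signals stand in for
# 2-D images.

def shift_and_add(frames, shifts):
    """Average frames after undoing each frame's integer shift."""
    n = len(frames[0])
    fused = []
    for i in range(n):
        samples = [f[(i + s) % n] for f, s in zip(frames, shifts)]
        fused.append(sum(samples) / len(samples))
    return fused

signal = [0, 1, 4, 1, 0, 0]                    # "true" object
frames = [signal, signal[1:] + signal[:1]]     # second frame shifted left by 1
print(shift_and_add(frames, [0, -1]))          # recovers the original signal
```

With noisy frames the same averaging also raises the signal-to-noise ratio, which is why capturing many rotated frames pays off.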
Smart-phone based computational microscopy using multi-frame contact imaging on a fiber-optic array
Navruz, Isa; Coskun, Ahmet F.; Wong, Justin; Mohammad, Saqib; Tseng, Derek; Nagi, Richie; Phillips, Stephen; Ozcan, Aydogan
2013-01-01
We demonstrate a cellphone based contact microscopy platform, termed Contact Scope, which can image highly dense or connected samples in transmission mode. Weighing approximately 76 grams, this portable and compact microscope is installed on the existing camera unit of a cellphone using an opto-mechanical add-on, where planar samples of interest are placed in contact with the top facet of a tapered fiber-optic array. This glass-based tapered fiber array has ∼9 fold higher density of fiber optic cables on its top facet compared to the bottom one and is illuminated by an incoherent light source, e.g., a simple light-emitting-diode (LED). The transmitted light pattern through the object is then sampled by this array of fiber optic cables, delivering a transmission image of the sample onto the other side of the taper, with ∼3× magnification in each direction. This magnified image of the object, located at the bottom facet of the fiber array, is then projected onto the CMOS image sensor of the cellphone using two lenses. While keeping the sample and the cellphone camera at a fixed position, the fiber-optic array is then manually rotated with discrete angular increments of e.g., 1-2 degrees. At each angular position of the fiber-optic array, contact images are captured using the cellphone camera, creating a sequence of transmission images for the same sample. These multi-frame images are digitally fused together based on a shift-and-add algorithm through a custom-developed Android application running on the smart-phone, providing the final microscopic image of the sample, visualized through the screen of the phone. This final computation step improves the resolution and also removes spatial artefacts that arise due to non-uniform sampling of the transmission intensity at the fiber optic array surface. We validated the performance of this cellphone based Contact Scope by imaging resolution test charts and blood smears. PMID:23939637
Gidzenko in Service Module with laptop computers
2001-03-30
ISS-01-E-5070 (December 2000) --- Astronaut Yuri P. Gidzenko, Expedition One Soyuz commander, works with computers in the Zvezda or Service Module aboard the Earth-orbiting International Space Station (ISS). The picture was taken with a digital still camera.
Floor Identification with Commercial Smartphones in Wifi-Based Indoor Localization System
NASA Astrophysics Data System (ADS)
Ai, H. J.; Liu, M. Y.; Shi, Y. M.; Zhao, J. Q.
2016-06-01
In this paper, we utilize the novel sensors built into commercial smart devices to propose a scheme which can identify floors with high accuracy and efficiency. This scheme can be divided into two modules: floor identification and floor change detection. The floor identification module starts at the initial phase of positioning, and is responsible for determining the floor on which positioning starts. We have evaluated two methods to identify the initial floor, based on K-Nearest Neighbors (KNN) and a BP Neural Network, respectively. In order to improve the performance of the KNN algorithm, we propose a novel method based on weighting the signal strength, which can identify floors robustly and quickly. The floor change detection module turns on after entering the continuous positioning procedure. In this module, sensors of smart devices (such as the accelerometer and barometer) are used to determine whether the user is going up or down stairs or taking an elevator. This method fuses different kinds of sensor data and can adapt to various motion patterns of users. We conducted our experiment with a mobile client on an Android phone (Nexus 5) in a four-floor building with an open area between the second and third floors. The results demonstrate that our scheme can achieve an accuracy of 99% in identifying floors and 97% in detecting floor changes as a whole.
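The weighted-KNN idea can be sketched briefly: fingerprints are per-floor Wi-Fi RSSI vectors, the nearest neighbours are weighted by inverse distance, and the floor with the largest accumulated weight wins. The fingerprint values below are invented for illustration; the paper's actual weighting scheme may differ.

```python
# Hedged sketch of inverse-distance-weighted KNN floor identification
# from Wi-Fi RSSI fingerprints. All RSSI values are invented.
import math
from collections import defaultdict

def knn_floor(fingerprints, observed, k=3):
    """fingerprints: list of (floor, rssi_vector); observed: rssi_vector."""
    ranked = sorted(fingerprints, key=lambda fp: math.dist(fp[1], observed))[:k]
    votes = defaultdict(float)
    for floor, vec in ranked:
        votes[floor] += 1.0 / (math.dist(vec, observed) + 1e-9)
    return max(votes, key=votes.get)

db = [
    (1, [-40, -70, -80]), (1, [-42, -68, -79]),
    (2, [-70, -45, -60]), (2, [-72, -43, -62]),
]
print(knn_floor(db, [-41, -69, -80]))   # 1
```

The inverse-distance weights keep one distant outlier among the k neighbours from outvoting two close matches, which is one way the weighting makes the identification more robust.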
Image quality evaluation of color displays using a Foveon color camera
NASA Astrophysics Data System (ADS)
Roehrig, Hans; Dallas, William J.; Fan, Jiahua; Krupinski, Elizabeth A.; Redford, Gary R.; Yoneda, Takahiro
2007-03-01
This paper presents preliminary data on the use of a color camera for Quality Control (QC) and Quality Analysis (QA) of a color LCD in comparison with a monochrome LCD. The color camera is a CMOS camera with a pixel size of 9 µm and a pixel matrix of 2268 × 1512 × 3. The camera uses a sensor that has co-located pixels for all three primary colors. The imaging geometry used mostly was 12 × 12 camera pixels per display pixel, even though it appears that an imaging geometry of 17.6 might provide more accurate results. The color camera is used as an imaging colorimeter, where each camera pixel is calibrated to serve as a colorimeter. This capability permits the camera to determine the chromaticity of the color LCD at different sections of the display. After color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found with the CS-200. Only the color coordinates of the display's white point were in error. The Modulation Transfer Function (MTF) as well as the noise in terms of the Noise Power Spectrum (NPS) of both LCDs were evaluated. The horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulation at the Nyquist frequency seems lower for the color LCD than for the monochrome LCD. These results contradict simulations regarding MTFs in the vertical direction. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to separate the total noise into spatial and temporal noise by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects
Lambers, Martin; Kolb, Andreas
2017-01-01
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data.
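The AMCW ranging principle behind such simulators is compact: four samples of the correlation function at 0°, 90°, 180° and 270° give the phase of the returning modulated signal, and phase maps to distance through the modulation frequency. The sketch below synthesizes ideal buckets for a known distance; the values are synthetic, not simulator output, and real cameras add the very error sources (multipath, motion) the paper simulates.

```python
# Hedged sketch of four-bucket AMCW ToF ranging: phase from four
# correlation samples, then phase -> one-way distance.
import math

C = 299_792_458.0          # speed of light, m/s

def distance_from_phases(a0, a90, a180, a270, f_mod):
    """Four-bucket phase estimate -> one-way distance in metres."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return (C * phase) / (4 * math.pi * f_mod)

# Synthesize ideal buckets for a target at 2.5 m with 20 MHz modulation.
f_mod = 20e6
true_phase = 4 * math.pi * f_mod * 2.5 / C
buckets = [math.cos(true_phase - k * math.pi / 2) for k in range(4)]
print(round(distance_from_phases(*buckets, f_mod), 3))   # 2.5
```

The modulo on the phase also shows where the unambiguous range comes from: at 20 MHz, distances repeat every C / (2 f_mod) ≈ 7.5 m.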
NASA Astrophysics Data System (ADS)
Javh, Jaka; Slavič, Janko; Boltežar, Miha
2018-02-01
Instantaneous full-field displacement fields can be measured using cameras; in fact, using high-speed cameras, full-field spectral information up to a couple of kHz can be measured. The trouble is that high-speed cameras capable of recording high-resolution fields of view at high frame rates are very expensive (from tens to hundreds of thousands of euros per camera). This paper introduces a measurement set-up capable of measuring high-frequency vibrations using slow cameras such as DSLRs, mirrorless cameras and others. The high-frequency displacements are measured by harmonically blinking the lights at specified frequencies. This harmonic blinking of the lights modulates the intensity changes of the filmed scene, and the camera's image acquisition performs the integration over time, thereby producing full-field Fourier coefficients of the filmed structure's displacements.
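The underlying operation is a lock-in measurement: multiplying a recorded intensity by sine and cosine at the blink frequency and summing isolates the Fourier coefficient of the motion at that frequency. The sketch below does this demodulation in software on a synthetic signal; in the paper the multiplication is done optically by the blinking lights and the summation by the camera's exposure.

```python
# Hedged sketch of single-frequency demodulation (a software lock-in):
# extract the Fourier coefficient of a sampled signal at frequency f.
# The signal here is synthetic.
import math

def fourier_coefficient(samples, f, fs):
    """Single-frequency Fourier coefficient of a sampled signal."""
    n = len(samples)
    re = sum(s * math.cos(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    im = sum(s * math.sin(2 * math.pi * f * i / fs) for i, s in enumerate(samples))
    return (2.0 / n) * complex(re, -im)

fs, f = 1000.0, 50.0                      # sample rate and blink frequency, Hz
sig = [0.3 * math.cos(2 * math.pi * f * i / fs) for i in range(1000)]
coeff = fourier_coefficient(sig, f, fs)
print(round(abs(coeff), 3))               # 0.3
```

Sweeping the blink frequency then builds up the full-field spectrum one frequency at a time, which is how a slow camera can resolve vibrations far above its frame rate.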
Dual beam optical interferometer
NASA Technical Reports Server (NTRS)
Gutierrez, Roman C. (Inventor)
2003-01-01
A dual beam interferometer device is disclosed that enables moving an optics module in a direction that changes the path lengths of two beams of light. The two beams reflect off a surface of an object and generate different speckle patterns detected by an element, such as a camera. The camera detects a characteristic of the surface.
Status of the NectarCAM camera project
NASA Astrophysics Data System (ADS)
Glicenstein, J.-F.; Barcelo, M.; Barrio, J.-A.; Blanch, O.; Boix, J.; Bolmont, J.; Boutonnet, C.; Brun, P.; Chabanne, E.; Champion, C.; Colonges, S.; Corona, P.; Courty, B.; Delagnes, E.; Delgado, C.; Diaz, C.; Ernenwein, J.-P.; Fegan, S.; Ferreira, O.; Fesquet, M.; Fontaine, G.; Fouque, N.; Henault, F.; Gascón, D.; Giebels, B.; Herranz, D.; Hermel, R.; Hoffmann, D.; Horan, D.; Houles, J.; Jean, P.; Karkar, S.; Knödlseder, J.; Martinez, G.; Lamanna, G.; LeFlour, T.; Lévêque, A.; Lopez-Coto, R.; Louis, F.; Moudden, Y.; Moulin, E.; Nayman, P.; Nunio, F.; Olive, J.-F.; Panazol, J.-L.; Pavy, S.; Petrucci, P.-O.; Punch, M.; Prast, Julie; Ramon, P.; Rateau, S.; Ribó, M.; Rosier-Lees, S.; Sanuy, A.; Sizun, P.; Sieiro, J.; Sulanke, K.-H.; Tavernet, J.-P.; Tejedor, L. A.; Toussenel, F.; Vasileiadis, G.; Voisin, V.; Waegebert, V.; Zurbach, C.
2014-07-01
NectarCAM is a camera designed for the medium-sized telescopes of the Cherenkov Telescope Array (CTA) covering the central energy range 100 GeV to 30 TeV. It has a modular design based on the NECTAr chip, at the heart of which is a GHz sampling Switched Capacitor Array and a 12-bit Analog to Digital converter. The camera will be equipped with 265 7-photomultiplier modules, covering a field of view of 7 to 8 degrees. Each module includes the photomultiplier bases, High Voltage supply, pre-amplifier, trigger, readout and Ethernet transceiver. Recorded events last between a few nanoseconds and tens of nanoseconds. A flexible trigger scheme allows very long events to be read out. NectarCAM can sustain a data rate of 10 kHz. The camera concept, the design and tests of the various subcomponents, and results of thermal and electrical prototypes are presented. The design includes the mechanical structure, the cooling of electronics, read-out, clock distribution, slow control, data-acquisition, trigger, monitoring and services. A 133-pixel prototype with full-scale mechanics, cooling, data acquisition and slow control will be built at the end of 2014.
From "Cellular" RNA to "Smart" RNA: Multiple Roles of RNA in Genome Stability and Beyond.
Michelini, Flavia; Jalihal, Ameya P; Francia, Sofia; Meers, Chance; Neeb, Zachary T; Rossiello, Francesca; Gioia, Ubaldo; Aguado, Julio; Jones-Weinert, Corey; Luke, Brian; Biamonti, Giuseppe; Nowacki, Mariusz; Storici, Francesca; Carninci, Piero; Walter, Nils G; Fagagna, Fabrizio d'Adda di
2018-04-25
Coding for proteins has been considered the main function of RNA since the "central dogma" of biology was proposed. The discovery of noncoding transcripts shed light on additional roles of RNA, ranging from the support of polypeptide synthesis, to the assembly of subnuclear structures, to gene expression modulation. Cellular RNA has therefore been recognized as a central player in often unanticipated biological processes, including genomic stability. This ever-expanding list of functions inspired us to think of RNA as a "smart" phone, which has replaced the older obsolete "cellular" phone. In this review, we summarize the last two decades of advances in research on the interface between RNA biology and genome stability. We start with an account of the emergence of noncoding RNA, and then we discuss the involvement of RNA in DNA damage signaling and repair, telomere maintenance, and genomic rearrangements. We continue with the depiction of single-molecule RNA detection techniques, and we conclude by illustrating the possibilities of RNA modulation in hopes of creating or improving new therapies. The widespread biological functions of RNA have made this molecule a recurring theme in basic and translational research, warranting its transcendence from classically studied "cellular" RNA to "smart" RNA.
Voss on TVIS equipment in Zvezda module
2001-05-15
ISS002-E-06677 (15 May 2001) --- James S. Voss, Expedition Two flight engineer, wearing a safety harness, exercises on the Treadmill Vibration Isolation System (TVIS) equipment in the Zvezda Service Module. This image was taken with a digital still camera.
Usachev performs maintenance on TVIS in Zvezda module
2001-04-26
ISS002-E-7015 (26 April 2001) --- Cosmonaut Yury V. Usachev, Expedition Two commander representing Rosaviakosmos, conducts maintenance on the Treadmill Vibration Isolation System (TVIS) in the Zvezda/Service Module. A digital still camera was used to record this image.
Usachev with Solid Waste Container in Service Module
2001-04-10
ISS002-E-5336 (10 April 2001) --- As part of routine procedures, cosmonaut Yury V. Usachev, Expedition Two mission commander, changes out a solid waste container in the Zvezda / Service Module. This image was recorded with a digital still camera.
Using Lunar Module Shadows To Scale the Effects of Rocket Exhaust Plumes
NASA Technical Reports Server (NTRS)
2008-01-01
Excavating granular materials beneath a vertical jet of gas involves several physical mechanisms. These occur, for example, beneath the exhaust plume of a rocket landing on the soil of the Moon or Mars. We performed a series of experiments and simulations (Figure 1) to provide a detailed view of the complex gas-soil interactions. Measurements taken from the Apollo lunar landing videos (Figure 2) and from photographs of the resulting terrain helped demonstrate how the interactions extrapolate into the lunar environment. It is important to understand these processes at a fundamental level to support the ongoing design of higher fidelity numerical simulations and larger-scale experiments. These are needed to enable future lunar exploration wherein multiple hardware assets will be placed on the Moon within short distances of one another. The high-velocity spray of soil from the landing spacecraft must be accurately predicted and controlled or it could erode the surfaces of nearby hardware. This analysis indicated that the lunar dust is ejected at an angle of less than 3 degrees above the surface, the results of which can be mitigated by a modest berm of lunar soil. These results assume that future lunar landers will use a single engine. The analysis would need to be adjusted for a multiengine lander. Figure 3 is a detailed schematic of the Lunar Module camera calibration math model. In this chart, formulas relating the known quantities, such as sun angle and Lunar Module dimensions, to the unknown quantities are depicted. The camera angle PSI is determined by measurement of the imaged aspect ratio of a crater, where the crater is assumed to be circular. The final solution is the determination of the camera calibration factor, alpha. Figure 4 is a detailed schematic of the dust angle math model, which again relates known to unknown parameters. The known parameters now include the camera calibration factor and Lunar Module dimensions. 
The final computation is the ejected dust angle, as a function of Lunar Module altitude.
Apollo 12 stereo view of lunar surface upon which astronaut had stepped
1969-11-20
AS12-57-8448 (19-20 Nov. 1969) --- An Apollo 12 stereo view showing a three-inch square of the lunar surface upon which an astronaut had stepped. Taken during extravehicular activity of astronauts Charles Conrad Jr. and Alan L. Bean, the exposure of the boot imprint was made with an Apollo 35mm stereo close-up camera. The camera was developed to get the highest possible resolution of a small area. The three-inch square is photographed with a flash illumination and at a fixed distance. The camera is mounted on a walking stick, and the astronauts use it by holding it up against the object to be photographed and pulling the trigger. While astronauts Conrad and Bean descended in their Apollo 12 Lunar Module to explore the lunar surface, astronaut Richard F. Gordon Jr. remained with the Command and Service Modules in lunar orbit.
Maestre-Rendon, J. Rodolfo; Sierra-Hernandez, Juan M.; Contreras-Medina, Luis M.; Fernandez-Jaramillo, Arturo A.
2017-01-01
Manual measurements of foot anthropometry can lead to errors, since this task depends on the experience of the specialist who performs it, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low-computational-cost analysis of the image, and the interpretation of the results through a quantitative evaluation. The implemented smart sensor uses a camera (Logitech C920) connected to a Raspberry Pi 3, with a graphical interface for capturing and processing the image, and was fitted to a podoscope of the kind conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive, and correlated at 0.99 with measurements from the digitalized image of the ink mat. PMID:29165397
Maestre-Rendon, J Rodolfo; Rivera-Roman, Tomas A; Sierra-Hernandez, Juan M; Cruz-Aceves, Ivan; Contreras-Medina, Luis M; Duarte-Galvan, Carlos; Fernandez-Jaramillo, Arturo A
2017-11-22
Manual measurements of foot anthropometry can lead to errors, since this task depends on the experience of the specialist who performs it, resulting in different subjective measures from the same footprint. Moreover, some of the diagnoses given to classify a footprint deformity are based on a qualitative interpretation by the physician; there is no quantitative interpretation of the footprint. The importance of providing a correct and accurate diagnosis lies in the need to ensure that an appropriate treatment is provided for the improvement of the patient without risking his or her health. Therefore, this article presents a smart sensor that integrates the capture of the footprint, a low-computational-cost analysis of the image, and the interpretation of the results through a quantitative evaluation. The implemented smart sensor uses a camera (Logitech C920) connected to a Raspberry Pi 3, with a graphical interface for capturing and processing the image, and was fitted to a podoscope of the kind conventionally used by specialists such as orthopedists, physiotherapists and podiatrists. The footprint diagnosis smart sensor (FPDSS) has proven to be robust to different types of deformity, precise, sensitive, and correlated at 0.99 with measurements from the digitalized image of the ink mat.
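One common quantitative footprint measure such a sensor could compute is the arch index (toes excluded, footprint split lengthwise into three equal regions; the index is midfoot area over total area). The binary "image" below is a toy grid, not podoscope data, and the real sensor's segmentation step is omitted; the arch index is a standard measure from the literature, not necessarily the paper's exact metric.

```python
# Hedged sketch of the arch index on a binary footprint mask.
# mask: list of rows (1 = footprint pixel), heel at top, toes cut off.

def arch_index(mask):
    """Midfoot area divided by total footprint area (toes excluded)."""
    rows = len(mask)
    third = rows // 3
    areas = [
        sum(sum(r) for r in mask[i * third:(i + 1) * third])
        for i in range(3)                 # rearfoot, midfoot, forefoot
    ]
    return areas[1] / sum(areas)

foot = (
    [[1, 1, 1, 1]] * 3 +                  # rearfoot
    [[1, 1, 0, 0]] * 3 +                  # narrow midfoot
    [[1, 1, 1, 1]] * 3                    # forefoot
)
print(round(arch_index(foot), 3))         # 0.2
```

Reporting a number like this, rather than a visual impression, is exactly the kind of quantitative interpretation the abstract argues for.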
Performance benefits and limitations of a camera network
NASA Astrophysics Data System (ADS)
Carr, Peter; Thomas, Paul J.; Hornsey, Richard
2005-06-01
Visual information is of vital significance to both animals and artificial systems. The majority of mammals rely on two images, each with a resolution of 10^7-10^8 'pixels' per image. At the other extreme are insect eyes, where the field of view is segmented into 10^3-10^5 images, each comprising effectively one pixel/image. The great majority of artificial imaging systems lie nearer to the mammalian characteristics in this parameter space, although electronic compound eyes have been developed in this laboratory and elsewhere. If the definition of a vision system is expanded to include networks or swarms of sensor elements, then schools of fish, flocks of birds and ant or termite colonies occupy a region where the number of images and the pixels/image may be comparable. A useful system might then have 10^5 imagers, each with about 10^4-10^5 pixels. Artificial analogs to these situations include sensor webs, smart dust and co-ordinated robot clusters. As an extreme example, we might consider the collective vision system represented by the imminent existence of ~10^9 cellular telephones, each with a one-megapixel camera. Unoccupied regions in this resolution-segmentation parameter space suggest opportunities for innovative artificial sensor network systems. Essential for the full exploitation of these opportunities is the availability of custom CMOS image sensor chips whose characteristics can be tailored to the application. Key attributes of such a chip set might include integrated image processing and control, low cost, and low power. This paper compares selected experimentally determined system specifications for an inward-looking array of 12 cameras with the aid of a camera-network model developed to explore the tradeoff between camera resolution and the number of cameras.
Optical smart packaging to reduce transmitted information.
Cabezas, Luisa; Tebaldi, Myrian; Barrera, John Fredy; Bolognini, Néstor; Torroba, Roberto
2012-01-02
We demonstrate a smart image-packaging optical technique that uses what we believe is a new concept to save byte space when transmitting data. The technique supports a large set of images mapped into modulated speckle patterns, which are then multiplexed into a single package. This operation results in a substantial decrease in the final byte size of the package with respect to the size resulting from adding the images without the method. Besides, there are no requirements on the type of images to be processed. We present results that prove the potential of the technique.
Embedded Systems and TensorFlow Frameworks as Assistive Technology Solutions.
Mulfari, Davide; Palla, Alessandro; Fanucci, Luca
2017-01-01
This paper presents the design of a deep-learning-based wearable computer vision system for visually impaired users. The Assistive Technology solution exploits a powerful single-board computer and smart glasses with a camera to allow its user to explore the objects within the surrounding environment, while it employs the Google TensorFlow machine learning framework to classify the acquired stills in real time. The proposed aid can therefore increase the user's awareness of the explored environment, and it interacts with the user by means of audio messages.
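The capture-classify-announce loop described above can be sketched minimally. The classifier below is a stub standing in for the TensorFlow model, and the frame and audio interfaces are hypothetical; the point is only the structure of the pipeline.

```python
# Minimal sketch of the aid's loop: acquire a still, classify it, and
# announce the label by audio. `classify` is a stub standing in for a
# TensorFlow image classifier; `announce` is injected (e.g. a TTS call).
def classify(frame):
    # Stand-in for a trained model: returns a label for the frame.
    return "cup" if sum(frame) > 10 else "unknown"

def explore(frames, announce):
    """Classify each acquired still and report it via audio messages."""
    for frame in frames:
        announce(classify(frame))

spoken = []
explore([[5, 9], [0, 1]], spoken.append)
# spoken == ["cup", "unknown"]
```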
Culbertson cuts his hair in the Service Module during Expedition Three
2001-09-22
ISS003-E-6104 (22 September 2001) --- Astronaut Frank L. Culbertson, Jr., Expedition Three mission commander, cuts his hair in the Zvezda Service Module on the International Space Station (ISS). This picture was taken with a digital still camera.
Krikalev in Service module with tools
2001-03-30
ISS01-E-5150 (December 2000) --- Cosmonaut Sergei K. Krikalev, Expedition One flight engineer, retrieves a tool during an installation and set-up session in the Zvezda service module aboard the International Space Station (ISS). The picture was recorded with a digital still camera.
Usachev performs maintenance on TVIS system in Service module
2001-04-01
ISS002-E-5137 (April 2001) --- Cosmonaut Yury V. Usachev, Expedition Two mission commander, performs routine maintenance on the International Space Station's (ISS) Treadmill Vibration Isolation System (TVIS) in the Zvezda / Service Module. This image was recorded with a digital still camera.
Usachev in sleep station in Service Module
2001-04-22
ISS002-E-5360 (22 April 2001) --- Cosmonaut Yury V. Usachev, Expedition Two mission commander, writes down some notes in his sleeping compartment in the Zvezda / Service Module of the International Space Station (ISS). This image was recorded with a digital still camera.
Usachev at sleep station in Service Module
2001-04-28
ISS002-E-6337 (28 April 2001) --- Cosmonaut Yury V. Usachev, Expedition Two mission commander, writes down some notes in his sleeping compartment in the Zvezda / Service Module of the International Space Station (ISS). The image was taken with a digital still camera.
2001-04-07
ISS002-E-5511 (07 April 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, pauses from moving through the Node 1 / Unity module of the International Space Station (ISS) to pose for a photograph. This image was recorded with a digital still camera.
Voss with soldering tool in Service Module
2001-03-28
ISS002-E-5069 (28 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, uses a soldering tool for a maintenance task in the Zvezda Service Module onboard the International Space Station (ISS). The image was recorded with a digital still camera.
Radiometric calibration of an ultra-compact microbolometer thermal imaging module
NASA Astrophysics Data System (ADS)
Riesland, David W.; Nugent, Paul W.; Laurie, Seth; Shaw, Joseph A.
2017-05-01
As microbolometer focal plane array formats steadily decrease, new challenges arise in correcting for thermal drift in the calibration coefficients. As the thermal mass of the cameras decreases, the focal plane becomes more sensitive to external thermal inputs. This paper shows results from a temperature compensation algorithm for characterizing and radiometrically calibrating a FLIR Lepton camera.
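One simple form such a temperature compensation can take is sketched below: subtract a linear function of the focal-plane temperature from the raw response before applying the radiometric gain and offset. The linear model and all coefficients here are assumptions for illustration; the paper's actual algorithm may differ.

```python
# Hedged sketch of FPA-temperature compensation. Coefficients a, b,
# gain, offset are hypothetical illustration values.
def compensate(raw, t_fpa, a=0.5, b=-10.0):
    """Remove an assumed linear thermal-drift term from a raw pixel value."""
    return raw - (a * t_fpa + b)

def radiance(raw, t_fpa, gain=0.01, offset=2.0):
    """Calibrated radiance from a drift-compensated pixel value."""
    return gain * compensate(raw, t_fpa) + offset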
Trained neurons-based motion detection in optical camera communications
NASA Astrophysics Data System (ADS)
Teli, Shivani; Cahyadi, Willy Anugrah; Chung, Yeon Ho
2018-04-01
A concept of trained neurons-based motion detection (TNMD) in optical camera communications (OCC) is proposed. The proposed TNMD is based on neurons present in a neural network that perform repetitive analysis in order to provide efficient and reliable motion detection in OCC. This efficient motion detection can be considered another functionality of OCC in addition to the two traditional functionalities of illumination and communication. To verify the proposed TNMD, experiments were conducted in an indoor static downlink OCC setup, where a mobile phone front camera is employed as the receiver and an 8 × 8 red, green, and blue (RGB) light-emitting diode array as the transmitter. The motion is detected by observing the user's finger movement in the form of a centroid through the OCC link via a camera. Unlike conventional trained neurons approaches, the proposed TNMD is trained not with motion itself but with centroid data samples, thus providing more accurate detection and a far less complex detection algorithm. The experimental results demonstrate that the TNMD can detect all considered motions accurately with acceptable bit error rate (BER) performances at a transmission distance of up to 175 cm. In addition, while the TNMD is performed, a maximum data rate of 3.759 kbps over the OCC link is obtained. The OCC with the proposed TNMD combined can be considered an efficient indoor OCC system that provides illumination, communication, and motion detection in a convenient smart home environment.
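The centroid samples the neurons are trained on can be produced by a simple threshold-and-average step over the camera frame, as sketched here; the threshold value is an assumption, and the paper's exact extraction step is not specified.

```python
# Hedged sketch: reduce the fingertip's bright pixels to one (x, y)
# centroid, the quantity the TNMD neurons are trained on.
def centroid(image, threshold=128):
    """Centroid of all pixels at or above `threshold` in a 2-D grid."""
    pts = [(x, y) for y, row in enumerate(image)
                  for x, v in enumerate(row) if v >= threshold]
    if not pts:
        return None  # no bright region in this frame
    n = len(pts)
    return (sum(x for x, _ in pts) / n, sum(y for _, y in pts) / n)
```

Tracking this centroid across frames then yields the motion trajectory fed to the detector.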
Lock-in imaging with synchronous digital mirror demodulation
NASA Astrophysics Data System (ADS)
Bush, Michael G.
2010-04-01
Lock-in imaging enables high contrast imaging in adverse conditions by exploiting a modulated light source and homodyne detection. We report results on a patent-pending lock-in imaging system fabricated from commercial-off-the-shelf parts utilizing standard cameras and a spatial light modulator. By leveraging the capabilities of standard parts we are able to present a low cost, high resolution, high sensitivity camera with applications in search and rescue, friend or foe identification (IFF), and covert surveillance. Different operating modes allow the same instrument to be utilized for dual band multispectral imaging or high dynamic range imaging, increasing the flexibility in different operational settings.
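The homodyne detection at the heart of lock-in imaging can be sketched for a single pixel: multiply the sampled signal by in-phase and quadrature references at the modulation frequency and average, so the modulated component's amplitude and phase are recovered while unmodulated background light averages toward zero. This is a generic textbook sketch, not the instrument's implementation.

```python
import math

# Single-pixel lock-in (homodyne) demodulation sketch.
def lock_in(samples, f_mod, f_sample):
    i = q = 0.0
    for n, s in enumerate(samples):
        w = 2 * math.pi * f_mod * n / f_sample
        i += s * math.cos(w)   # in-phase correlation
        q += s * math.sin(w)   # quadrature correlation
    i, q = 2 * i / len(samples), 2 * q / len(samples)
    return math.hypot(i, q), math.atan2(q, i)  # amplitude, phase
```

For a signal 3 + 1.5·cos(2πft) sampled over whole periods, the recovered amplitude is 1.5 and the DC background (the 3) is rejected.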
NASA Astrophysics Data System (ADS)
Whyte, Refael; Streeter, Lee; Cree, Michael J.; Dorrington, Adrian A.
2015-11-01
Time of flight (ToF) range cameras illuminate the scene with an amplitude-modulated continuous wave light source and measure the returning modulation envelopes: phase and amplitude. The phase change of the modulation envelope encodes the distance travelled. This technology suffers from measurement errors caused by multiple propagation paths from the light source to the receiving pixel. The multiple paths can be represented as the summation of a direct return, which is the return from the shortest path length, and a global return, which includes all other returns. We develop the use of a sinusoidal pattern from which a closed form solution for the direct and global returns can be computed in nine frames with the constraint that the global return is of spatially lower frequency than the illuminated pattern. In a demonstration on a scene constructed to have strong multipath interference, we find the direct return is not significantly different from the ground truth in 33/136 pixels tested, whereas for the full-field measurement it is significantly different for every pixel tested. The variance in the estimated direct phase and amplitude increases by a factor of eight compared with the standard time of flight range camera technique.
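The nine-frame direct/global separation builds on the standard amplitude-modulated ToF measurement. A minimal sketch of the conventional four-phase version (not the paper's nine-frame solution) is given below; the sign convention of the phase steps is an assumption.

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_distance(f0, f90, f180, f270, f_mod_hz):
    """Distance of one pixel from four phase-stepped ToF frames.

    The modulation-envelope phase is recovered from frames taken at
    0/90/180/270-degree reference shifts, then scaled by the modulation
    wavelength (halved, since the light travels out and back).
    """
    phase = math.atan2(f270 - f90, f0 - f180) % (2 * math.pi)
    return phase * C / (4 * math.pi * f_mod_hz)
```

Multipath interference corrupts exactly this step: the measured phase becomes that of the direct plus global returns summed as phasors, which is what the paper's sinusoidal-pattern method untangles.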
NASA Astrophysics Data System (ADS)
Melnikov, Alexander; Chen, Liangjie; Ramirez Venegas, Diego; Sivagurunathan, Koneswaran; Sun, Qiming; Mandelis, Andreas; Rodriguez, Ignacio Rojas
2018-04-01
Single-Frequency Thermal Wave Radar Imaging (SF-TWRI) was introduced and used to obtain quantitative thickness images of coatings on an aluminum block and on polyetherketone, and to image blind subsurface holes in a steel block. In SF-TWRI, the starting and ending frequencies of a linear frequency modulation sweep are chosen to coincide. Using the highest available camera frame rate, SF-TWRI leads to a higher number of sampled points along the modulation waveform than conventional lock-in thermography imaging because it is not limited by conventional undersampling at high frequencies due to camera frame-rate limitations. This property leads to a large reduction in measurement time, better image quality, and a higher signal-to-noise ratio across wide frequency ranges. For quantitative thin-coating imaging applications, a two-layer photothermal model with lumped parameters was used to reconstruct the layer thickness from multi-frequency SF-TWRI images. SF-TWRI represents a next-generation thermography method with superior features for imaging important classes of thin layers, materials, and components that require high-frequency thermal-wave probing well above today's available infrared camera technology frame rates.
Raspberry Pi camera with intervalometer used as crescograph
NASA Astrophysics Data System (ADS)
Albert, Stefan; Surducan, Vasile
2017-12-01
The intervalometer is an attachment or facility on a photo-camera that operates the shutter regularly at set intervals over a period. Professional cameras with built-in intervalometers are expensive and quite difficult to find. The Canon CHDK open-source operating system allows intervalometer implementation on Canon cameras only; however, finding a Canon camera with a near-infrared (NIR) photographic lens at an affordable price is impossible. On experiments requiring several cameras (used to measure growth in plants - the crescographs - but also for coarse evaluation of the water content of leaves), the costs of the equipment are often over budget. Using two Raspberry Pi modules, each equipped with a low-cost NIR camera and a WIFI adapter (for downloading pictures stored on the SD card), and some freely available software, we have implemented two low-budget intervalometer cameras. The shooting interval, the number of pictures to be taken, the image resolution and some other parameters can be fully programmed. The cameras have been in continuous use for three months (July-October 2017) in a relevant environment (outdoors), proving the functionality of the concept.
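The programmable intervalometer logic amounts to a timed capture loop like the sketch below. The camera-specific capture call is injected, so the same loop could drive a Raspberry Pi camera (e.g. through the picamera library or a raspistill subprocess) or a test stub; the parameter names are hypothetical.

```python
import time

# Programmable intervalometer sketch: shoot at a fixed interval until
# the requested number of pictures has been taken.
def intervalometer(capture, interval_s, n_pictures, sleep=time.sleep):
    for shot in range(n_pictures):
        capture(shot)                 # e.g. save frame `shot` to the SD card
        if shot < n_pictures - 1:
            sleep(interval_s)         # wait before the next exposure
```

Injecting `sleep` also makes the scheduling testable without real delays.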
Real-time improvement of continuous glucose monitoring accuracy: the smart sensor concept.
Facchinetti, Andrea; Sparacino, Giovanni; Guerra, Stefania; Luijf, Yoeri M; DeVries, J Hans; Mader, Julia K; Ellmerer, Martin; Benesch, Carsten; Heinemann, Lutz; Bruttomesso, Daniela; Avogaro, Angelo; Cobelli, Claudio
2013-04-01
Reliability of continuous glucose monitoring (CGM) sensors is key in several applications. In this work we demonstrate that real-time algorithms can render CGM sensors smarter by reducing their uncertainty and inaccuracy and improving their ability to alert for hypo- and hyperglycemic events. The smart CGM (sCGM) sensor concept consists of a commercial CGM sensor whose output enters three software modules, able to work in real time, for denoising, enhancement, and prediction. These three software modules were recently presented in the CGM literature, and here we apply them to the Dexcom SEVEN Plus continuous glucose monitor. We assessed the performance of the sCGM on data collected in two trials, each containing 12 patients with type 1 diabetes. The denoising module improves the smoothness of the CGM time series by an average of ∼57%; the enhancement module reduces the mean absolute relative difference from 15.1 to 10.3% and increases the proportion of value pairs falling in the A-zone of the Clarke error grid by 12.6%; and finally, the prediction module forecasts hypo- and hyperglycemic events an average of 14 min ahead of time. We have introduced and implemented the sCGM sensor concept. Analysis of data from 24 patients demonstrates that incorporation of suitable real-time signal processing algorithms for denoising, enhancement, and prediction can significantly improve the performance of CGM applications. This can be of great clinical impact for hypo- and hyperglycemic alert generation as well as in artificial pancreas devices.
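To make the module pipeline concrete, here are deliberately simple stand-ins for two of the three modules: a causal moving-average denoiser and a linear-extrapolation predictor. The published modules are more sophisticated (model-based filtering and prediction); the window length and prediction horizon below are assumptions.

```python
# Illustrative stand-ins, not the published sCGM algorithms.
def denoise(cgm, window=3):
    """Causal moving average over the last `window` CGM readings."""
    out = []
    for i in range(len(cgm)):
        chunk = cgm[max(0, i - window + 1):i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def predict(cgm, horizon_steps=3):
    """Linear extrapolation from the last two readings."""
    slope = cgm[-1] - cgm[-2]
    return cgm[-1] + slope * horizon_steps
```

A rising trace such as [100, 110] mg/dL extrapolated three steps ahead already crosses a hypothetical hyperglycemia threshold, which is the kind of early alert the prediction module provides.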
NASA Astrophysics Data System (ADS)
Hardikar, Kedar Y.; Liu, Bill J. J.; Bheemreddy, Venkata
2016-09-01
Gaining an understanding of degradation mechanisms and their characterization is critical in developing relevant accelerated tests to ensure PV module performance warranty over a typical lifetime of 25 years. As newer technologies are adapted for PV, including new PV cell technologies, new packaging materials, and newer product designs, the availability of field data over extended periods of time for product performance assessment cannot be expected within the typical timeframe for business decisions. In this work, to enable product design decisions and product performance assessment for PV modules utilizing newer technologies, Simulation and Mechanism based Accelerated Reliability Testing (SMART) methodology and empirical approaches to predict field performance from accelerated test results are presented. The method is demonstrated for field life assessment of flexible PV modules based on degradation mechanisms observed in two accelerated tests, namely, Damp Heat and Thermal Cycling. The method is based on the design of an accelerated testing scheme with the intent to develop relevant acceleration factor models. The acceleration factor model is validated by extensive reliability testing under different conditions going beyond the established certification standards. Once the acceleration factor model is validated for the test matrix, a modeling scheme is developed to predict field performance from results of accelerated testing for particular failure modes of interest. Further refinement of the model can continue as more field data becomes available. While the demonstration of the method in this work is for thin film flexible PV modules, the framework and methodology can be adapted to other PV products.
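A common way to formalize an acceleration factor for temperature-driven degradation is the Arrhenius model sketched here. Whether the paper uses this exact form is not stated, and the activation energy below is illustrative only.

```python
import math

K_BOLTZMANN_EV = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_af(t_use_c, t_test_c, ea_ev=0.7):
    """Arrhenius acceleration factor of a test at t_test_c (deg C)
    relative to field use at t_use_c (deg C); ea_ev is illustrative."""
    t_use, t_test = t_use_c + 273.15, t_test_c + 273.15
    return math.exp(ea_ev / K_BOLTZMANN_EV * (1 / t_use - 1 / t_test))
```

With these assumed numbers, an 85 °C damp-heat test accelerates a 25 °C field condition by roughly two orders of magnitude, which is how hours of testing are mapped to years of field exposure.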
NASA Technical Reports Server (NTRS)
1999-01-01
The Parking Garage Automation System (PGAS) is based on a technology developed by a NASA-sponsored project called Robot sensorSkin(TM). Merritt Systems, Inc., of Orlando, Florida, teamed up with NASA to improve robots working with critical flight hardware at Kennedy Space Center in Florida. The system, containing smart sensor modules and a flexible printed circuit board skin, helps robots steer clear of obstacles using a proximity sensing system. Advancements in the sensor designs are being applied to various commercial applications, including the PGAS. The system includes a smartSensor(TM) network installed around and within public parking garages to autonomously guide motorists to open facilities, and once within, to free parking spaces. The sensors use non-invasive reflective-ultrasonic technology for high accuracy, high reliability, and low maintenance. The system is remotely programmable: it can be tuned to site-specific requirements, has variable range capability, and allows remote configuration, monitoring, and diagnostics. The sensors are immune to interference from metallic construction materials, such as rebar and steel beams. Inside the garage, smart routing signs mounted overhead or on poles in front of each row of parking spots guide the motorist precisely to free spaces.
Smart Vest: wearable multi-parameter remote physiological monitoring system.
Pandian, P S; Mohanavelu, K; Safeer, K P; Kotresh, T M; Shakunthala, D T; Gopal, Parvati; Padaki, V C
2008-05-01
The wearable physiological monitoring system is a washable shirt, which uses an array of sensors connected to a central processing unit with firmware for continuously monitoring physiological signals. The data collected can be correlated to produce an overall picture of the wearer's health. In this paper, we discuss the wearable physiological monitoring system called 'Smart Vest'. The Smart Vest consists of a comfortable-to-wear vest with integrated sensors for monitoring physiological parameters, wearable data acquisition and processing hardware, and a remote monitoring station. The wearable data acquisition system is designed using a microcontroller and interfaced with wireless communication and global positioning system (GPS) modules. The physiological signals monitored are electrocardiogram (ECG), photoplethysmogram (PPG), body temperature, blood pressure, galvanic skin response (GSR) and heart rate. The acquired physiological signals are sampled at 250 samples/s, digitized at 12-bit resolution and transmitted wirelessly to a remote physiological monitoring station along with the geo-location of the wearer. The paper describes a prototype Smart Vest system used for remote monitoring of physiological parameters, and the clinical validation of the data is also presented.
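The quoted acquisition settings can be checked with simple arithmetic: 250 samples/s at 12 bits is 3 kbit/s of raw data per channel. The channel count and ADC reference voltage in the sketch below are assumptions, not values from the paper.

```python
# Back-of-the-envelope helpers for the quoted acquisition settings.
def raw_bitrate_bps(sample_rate_hz=250, bits=12, channels=1):
    """Raw (uncompressed) data rate for the given acquisition settings."""
    return sample_rate_hz * bits * channels

def quantize(volts, v_ref=3.3, bits=12):
    """Map a 0..v_ref voltage to a 12-bit ADC code (v_ref assumed)."""
    code = int(volts / v_ref * ((1 << bits) - 1))
    return max(0, min(code, (1 << bits) - 1))
```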
NASA Astrophysics Data System (ADS)
Tiwari, Samrat Vikramaditya; Sewaiwar, Atul; Chung, Yeon-Ho
2015-10-01
In optical wireless communications, multiple channel transmission is an attractive solution to enhancing capacity and system performance. A new modulation scheme called color coded multiple access (CCMA) for bidirectional multiuser visible light communications (VLC) is presented for smart home applications. The proposed scheme uses red, green and blue (RGB) light emitting diodes (LED) for downlink and phosphor based white LED (P-LED) for uplink to establish a bidirectional VLC and also employs orthogonal codes to support multiple users and devices. The downlink transmission for data user devices and smart home devices is provided using red and green colors from the RGB LEDs, respectively, while uplink transmission from both types of devices is performed using the blue color from P-LEDs. Simulations are conducted to verify the performance of the proposed scheme. It is found that the proposed bidirectional multiuser scheme is efficient in terms of data rate and performance. In addition, since the proposed scheme uses RGB signals for downlink data transmission, it provides flicker-free illumination that would lend itself to multiuser VLC systems for smart home applications.
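The orthogonal-code idea behind CCMA can be sketched as classic code-division multiplexing: each user spreads its bits with a distinct Walsh code, the optical channel sums the transmitted chips, and each receiver recovers its own bits by correlating with its own code. The codes and bit values below are illustrative, not the scheme's actual parameters.

```python
# Orthogonal-code multiple access sketch with 4-chip Walsh codes.
def spread(bits, code):
    """Spread +1/-1 bits into chips with the user's code."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Recover one user's bits by correlating with its own code."""
    n, out = len(code), []
    for i in range(0, len(chips), n):
        corr = sum(c * k for c, k in zip(chips[i:i + n], code))
        out.append(1 if corr > 0 else -1)
    return out

w1, w2 = [1, 1, -1, -1], [1, -1, 1, -1]  # orthogonal Walsh codes
channel = [a + b for a, b in zip(spread([1, -1], w1), spread([-1, 1], w2))]
```

Despreading `channel` with `w1` recovers `[1, -1]` and with `w2` recovers `[-1, 1]`: the two users share the medium without interfering, which is the property CCMA exploits per color channel.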
Smart Cup: A Minimally-Instrumented, Smartphone-Based Point-of-Care Molecular Diagnostic Device.
Liao, Shih-Chuan; Peng, Jing; Mauk, Michael G; Awasthi, Sita; Song, Jinzhao; Friedman, Harvey; Bau, Haim H; Liu, Changchun
2016-06-28
Nucleic acid amplification-based diagnostics offer rapid, sensitive, and specific means for detecting and monitoring the progression of infectious diseases. However, this method typically requires extensive sample preparation, expensive instruments, and trained personnel, all of which hinder its use in resource-limited settings, where many infectious diseases are endemic. Here, we report on a simple, inexpensive, minimally-instrumented, smart cup platform for rapid, quantitative molecular diagnostics of pathogens at the point of care. Our smart cup takes advantage of a water-triggered, exothermic chemical reaction to supply heat for the nucleic acid-based, isothermal amplification. The amplification temperature is regulated with a phase-change material (PCM). The PCM maintains the amplification reactor at a constant temperature, typically 60-65°C, when ambient temperatures range from 12 to 35°C. To eliminate the need for an optical detector and minimize cost, we use the smartphone's flashlight to excite the fluorescent dye and the phone camera to record real-time fluorescence emission during the amplification process. The smartphone can concurrently monitor multiple amplification reactors and analyze the recorded data. Our smart cup's utility was demonstrated by amplifying and quantifying herpes simplex virus type 2 (HSV-2) with a LAMP assay in our custom-made microfluidic diagnostic chip. We have consistently detected as few as 100 copies of HSV-2 viral DNA per sample. Our system does not require any lab facilities and is suitable for use at home, in the field, and in the clinic, as well as in resource-poor settings, where access to sophisticated laboratories is impractical, unaffordable, or nonexistent.
Zheng, Z. Q.; Yao, J. D.; Wang, B.; Yang, G. W.
2015-01-01
In recent years, owing to the significant applications of health monitoring, wearable electronic devices such as smart watches, smart glasses and wearable cameras have been growing rapidly. The gas sensor is an important part of wearable electronic devices for detecting pollutant, toxic, and combustible gases. However, for use in wearable electronic devices, a gas sensor needs to be flexible, transparent, and able to work at room temperature, requirements that traditional gas sensors cannot meet. Here, we fabricate for the first time a light-controlled, flexible, transparent, room-temperature ethanol gas sensor using commercial ZnO nanoparticles. The fabricated sensor not only exhibits a fast and excellent photoresponse, but also shows a high sensing response to ethanol under UV irradiation. Meanwhile, its transmittance exceeds 62% in the visible spectral range, and the sensing performance remains the same even when bent at a curvature angle of 90°. Additionally, using commercial ZnO nanoparticles provides a facile and low-cost route to fabricate wearable electronic devices. PMID:26076705
Smart mobile robot system for rubbish collection
NASA Astrophysics Data System (ADS)
Ali, Mohammed A. H.; Sien Siang, Tan
2018-03-01
This paper records the research and procedures of developing a smart mobile robot with a detection system to collect rubbish. The objective of this paper is to design a mobile robot that can detect and recognize medium-size rubbish such as drink cans. Besides that, the objective is also to design a mobile robot with the ability to estimate the position of the rubbish relative to the robot. In addition, the mobile robot is able to approach the rubbish based on that estimated position. This paper explains the types of image processing, detection and recognition methods, and image filters. This project implements the RGB subtraction method as the primary detection system. In addition, an algorithm for distance measurement based on the image plane is implemented. This project is limited to using a computer webcam as the sensor; moreover, the robot is only able to approach the nearest rubbish within the camera's field of view, and only rubbish with RGB colour components on its body.
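The RGB subtraction idea can be sketched as follows: a pixel is flagged as belonging to a coloured target (say, a red drink can) when its red channel exceeds the other two by a margin. The margin value is an assumption; the paper's exact thresholds are not given.

```python
# RGB-subtraction detection sketch for a red target.
def is_red(pixel, margin=50):
    """True when the red channel dominates green and blue by `margin`."""
    r, g, b = pixel
    return r - g > margin and r - b > margin

def red_mask(image, margin=50):
    """Boolean mask of red-dominant pixels over a 2-D image of (r,g,b)."""
    return [[is_red(p, margin) for p in row] for row in image]
```

The resulting mask is what a centroid or blob step would consume to locate the nearest can in the frame.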
An Efficient Framework for Development of Task-Oriented Dialog Systems in a Smart Home Environment.
Park, Youngmin; Kang, Sangwoo; Seo, Jungyun
2018-05-16
In recent times, with the increasing interest in conversational agents for smart homes, task-oriented dialog systems are being actively researched. However, most of these studies are focused on the individual modules of such a system, and there is an evident lack of research on a dialog framework that can integrate and manage the entire dialog system. Therefore, in this study, we propose a framework that enables the user to effectively develop an intelligent dialog system. The proposed framework ontologically expresses the knowledge required for the task-oriented dialog system's process and can build a dialog system by editing the dialog knowledge. In addition, the framework provides a module router that can indirectly run externally developed modules. Further, it enables a more intelligent conversation by providing a hierarchical argument structure (HAS) to manage the various argument representations included in natural language sentences. To verify the practicality of the framework, an experiment was conducted in which developers without any previous experience in developing a dialog system developed task-oriented dialog systems using the proposed framework. The experimental results show that even beginner dialog system developers can develop a high-level task-oriented dialog system.
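The module-router idea described above can be sketched as a small dispatch registry: externally developed modules are registered against dialog acts and run indirectly through the router. The class and method names here are hypothetical, not the framework's actual API.

```python
# Minimal module-router sketch: dispatch dialog acts to registered
# external handler modules, with a fallback for unknown acts.
class ModuleRouter:
    def __init__(self):
        self._modules = {}

    def register(self, act, handler):
        """Attach an externally developed module to a dialog act."""
        self._modules[act] = handler

    def route(self, act, *args):
        """Run the module for `act` indirectly; fall back if unknown."""
        if act not in self._modules:
            return "fallback"
        return self._modules[act](*args)
```

Usage: `router.register("weather", weather_module)` lets the dialog manager call `router.route("weather", "Seoul")` without depending on the module's implementation.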
Rapid Prototyping of a Smart Device-based Wireless Reflectance Photoplethysmograph
Ghamari, M.; Aguilar, C.; Soltanpur, C.; Nazeran, H.
2017-01-01
This paper presents the design, fabrication, and testing of a wireless heart rate (HR) monitoring device based on photoplethysmography (PPG) and smart devices. PPG sensors use infrared (IR) light to obtain vital information to assess cardiac health and other physiologic conditions. The PPG data that are transferred to a computer undergo further processing to derive the Heart Rate Variability (HRV) signal, which is analyzed to generate quantitative markers of the Autonomic Nervous System (ANS). The HRV signal has numerous monitoring and diagnostic applications. To this end, wireless connectivity plays an important role in such biomedical instruments. The photoplethysmograph consists of an optical sensor to detect the changes in the light intensity reflected from the illuminated tissue, a signal conditioning unit to prepare the reflected light for further signal conditioning through amplification and filtering, a low-power microcontroller to control and digitize the analog PPG signal, and a Bluetooth module to transmit the digital data to a Bluetooth-based smart device such as a tablet. An Android app is then used to enable the smart device to acquire and digitally display the received analog PPG signal in real-time on the smart device. The article concludes with the prototyping of the wireless PPG, followed by the verification procedures of the PPG and HRV signals acquired in a laboratory environment. PMID:28959119
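The step from a PPG waveform to a heart rate figure can be sketched as peak detection followed by conversion of the mean peak-to-peak interval to beats per minute. Real HRV analysis operates on the interval series itself; the peak threshold below is an assumption.

```python
# PPG -> heart rate sketch: threshold-based peak picking, then
# mean beat-to-beat interval converted to beats per minute.
def peak_indices(ppg, threshold=0.5):
    return [i for i in range(1, len(ppg) - 1)
            if ppg[i] > threshold
            and ppg[i] > ppg[i - 1] and ppg[i] >= ppg[i + 1]]

def heart_rate_bpm(ppg, f_sample_hz, threshold=0.5):
    peaks = peak_indices(ppg, threshold)
    intervals = [(b - a) / f_sample_hz for a, b in zip(peaks, peaks[1:])]
    return 60.0 / (sum(intervals) / len(intervals))
```

The interval list fed into the average is precisely the RR-like series a downstream HRV module would analyze.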
Astronaut Russell Schweickart photographed during EVA
NASA Technical Reports Server (NTRS)
1969-01-01
Astronaut Russell L. Schweickart, lunar module pilot, operates a 70mm Hasselblad camera during his extravehicular activity on the fourth day of the Apollo 9 earth-orbital mission. The Command/Service Module and the Lunar Module 3 'Spider' are docked. This view was taken from the Command Module 'Gumdrop'. Schweickart, wearing an Extravehicular Mobility Unit (EMU), is standing in 'golden slippers' on the Lunar Module porch. On his back, partially visible, are a Portable Life Support System (PLSS) and an Oxygen Purge System (OPS).
Voss with coffee and snack in Service Module
2001-04-12
ISS002-E-5532 (12 April 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, has a coffee and a snack at the table in the Zvezda / Service Module of the International Space Station (ISS). This image was recorded with a digital still camera.
Usachev typing while in sleep station in the Service Module
2001-03-23
ISS002-E-5730 (23 March 2001) --- Cosmonaut Yury V. Usachev, Expedition Two commander, works at a laptop computer in his crew compartment in the Zvezda Service Module aboard the International Space Station (ISS). The image was recorded with a digital still camera.
Usachev with IRED hardware in Node 1/Unity module
2001-04-07
ISS002-E-5507 (07 April 2001) --- Cosmonaut Yury V. Usachev, Expedition Two mission commander, wears a harness while conducting resistance exercises in the Node 1 / Unity module of the International Space Station (ISS). This image was recorded with a digital still camera.
Helms and Voss in Service Module
2001-04-10
ISS002-E-5335 (10 April 2001) --- Astronaut Susan J. Helms (left) and astronaut James S. Voss, both Expedition Two flight engineers, pose for a photograph aboard the Zvezda/Service Module of the International Space Station (ISS). This image was recorded with a digital still camera.
View of damaged Apollo 13 Service Module from the Lunar/Command Modules
1970-04-17
AS13-58-8464 (17 April 1970) --- This view of the severely damaged Apollo 13 Service Module (SM) was photographed from the Lunar Module/Command Module (LM/CM) following SM jettisoning. Nearest the camera is the Service Propulsion System (SPS) engine and nozzle. An entire SM panel was blown away by the apparent explosion of oxygen tank number two located in Sector 4 of the SM. The apparent rupture of the oxygen tank caused the Apollo 13 crewmen to use the Lunar Module (LM) as a "lifeboat".
Smart SfM: Salinas Archaeological Museum
NASA Astrophysics Data System (ADS)
Inzerillo, L.
2017-08-01
In recent years, Structure from Motion (SfM) techniques have been increasingly applied to Cultural Heritage. The accessibility of SfM software is especially advantageous to users in non-technical fields or with limited resources: with nothing more than a digital camera, anyone can build a 3D model of a cultural heritage object, a physical environment, a work of art, and so on. One particularly interesting and useful application is the digitalization of museum collections. In recent years, a social experiment has engaged the younger generation in experiencing a "social museum," using their own cameras to take pictures and videos. Students from the universities of Catania and Palermo took part in the national event #digitalinvasion (2015 and 2016 editions), offering their personal contribution: they produced 3D models of museum collections using SfM techniques. In particular, at the Salinas National Archaeological Museum in Palermo, an organized survey was conducted to document the most important part of the archaeological collection. It was a success: across the 2015 and 2016 #digitalinvasion events, engineering students produced, with Agisoft PhotoScan, more than one hundred 3D models, some captured with phone cameras, others with reflex cameras, and others with compact cameras. The museum director was very impressed by these results, and a national project is now planned to use young-generation crowdsourcing to build a semi-automated monitoring system at the Salinas Archaeological Museum.
NASA Astrophysics Data System (ADS)
Zhang, Shaojun; Xu, Xiping
2015-10-01
The 360-degree all-round looking camera, being well suited to automatic analysis of the carrier's surroundings by image recognition algorithms, is commonly applied in the opto-electronic radar of robots and smart cars. To ensure stable and consistent image processing results in mass production, the centers of the image planes of different cameras must coincide, which requires calibrating the position of each image plane's center. The traditional mechanical calibration method, and the electronic adjustment mode in which offsets are input manually, both rely on human vision, are inefficient, and leave a wide distribution of error. This paper presents an approach for auto-calibration of the image plane of such a camera. The camera produces a ring-shaped image bounded by two concentric circles, with the smaller circle at the center and the larger circle outside; the technique exploits exactly this property. The two circles are recognized with the Hough transform and their center position is computed, giving the accurate image center, i.e., the deviation between the optical axis and the center of the image sensor. The program then configures the image sensor chip automatically over the I2C bus, so the center of the image plane is adjusted automatically and accurately. The technique has been applied in practice, where it improves productivity and guarantees consistent product quality.
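The circle-center recovery step can be illustrated with a classic Hough transform for a known radius: each edge pixel votes for every candidate center lying one radius away, and the accumulator peak is the circle center. This is a simplified pure-Python sketch on a synthetic ring (production code would typically use a library routine such as OpenCV's HoughCircles, and the radius would itself be searched):

```python
import math
from collections import Counter

def hough_circle_center(edge_points, radius, n_theta=90):
    """Hough transform with known radius: each edge pixel votes for all
    integer centers at distance `radius`; the accumulator peak is the center."""
    votes = Counter()
    for (x, y) in edge_points:
        for k in range(n_theta):
            t = 2 * math.pi * k / n_theta
            votes[(round(x - radius * math.cos(t)),
                   round(y - radius * math.sin(t)))] += 1
    return votes.most_common(1)[0][0]

# Synthetic ring: edge pixels of a circle centered at (40, 45), radius 20
true_center, r = (40, 45), 20
edges = [(round(true_center[0] + r * math.cos(a)),
          round(true_center[1] + r * math.sin(a)))
         for a in [2 * math.pi * i / 120 for i in range(120)]]

center = hough_circle_center(edges, r)
offset = (center[0] - 40, center[1] - 45)   # deviation from the sensor's optical center
print(center, offset)
```

The `offset` is the quantity that would be written to the sensor chip over I2C to re-center the image plane.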
NASA Astrophysics Data System (ADS)
Godoy Simões, Marcelo; Davi Curi Busarello, Tiago; Saad Bubshait, Abdullah; Harirchi, Farnaz; Antenor Pomilio, José; Blaabjerg, Frede
2016-04-01
This paper presents an interactive smart battery-based storage (BBS) system for wind generator (WG) and photovoltaic (PV) systems. The BBS is composed of an asymmetric cascaded H-bridge multilevel inverter (ACMI) with staircase modulation. The structure is connected in parallel with the WG and PV systems, allowing the ACMI to reduce power losses compared with the usual solution of storage connected at the DC link of the WG or PV converter. Moreover, the BBS embeds a decision algorithm that incorporates real-time energy costs, a battery state-of-charge manager, and power quality capabilities, making the system interactive, smart, and multifunctional. The paper describes how the BBS interacts with the WG and PV systems and how its performance is improved. Experimental results demonstrate the efficacy of this BBS for renewable energy applications.
Astronaut Ronald Evans photographed during transearth coast EVA
NASA Technical Reports Server (NTRS)
1972-01-01
Astronaut Ronald E. Evans is photographed performing extravehicular activity (EVA) during the Apollo 17 spacecraft's transearth coast. During his EVA, Command Module pilot Evans retrieved film cassettes from the Lunar Sounder, Mapping Camera, and Panoramic Camera. The cylindrical object at Evans' left side is the mapping camera cassette. The total time for the transearth EVA was one hour, seven minutes, 19 seconds, starting at ground elapsed time of 257:25 (2:28 p.m.) and ending at ground elapsed time of 258:42 (3:35 p.m.) on Sunday, December 17, 1972.
Performance measurement of commercial electronic still picture cameras
NASA Astrophysics Data System (ADS)
Hsu, Wei-Feng; Tseng, Shinn-Yih; Chiang, Hwang-Cheng; Cheng, Jui-His; Liu, Yuan-Te
1998-06-01
Commercial electronic still picture cameras need a low-cost, systematic method for evaluating their performance. In this paper, we present a measurement method for evaluating the dynamic range and sensitivity by constructing the opto-electronic conversion function (OECF), the fixed pattern noise by the peak S/N ratio (PSNR) and the image shading function (ISF), and the spatial resolution by the modulation transfer function (MTF). Evaluation results for the individual color components and the luminance signal of a PC camera using a SONY interlaced CCD array as the image sensor are then presented.
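The PSNR metric used here for fixed-pattern noise can be sketched as follows. This is a generic illustration, assuming PSNR is computed between a flat-field frame and its mean level (the paper's exact reference frame may differ), with hypothetical pixel values:

```python
import math

def psnr(reference, test, max_val=255.0):
    """Peak signal-to-noise ratio (dB) between two equal-size images (flat pixel lists)."""
    mse = sum((r - t) ** 2 for r, t in zip(reference, test)) / len(reference)
    return float("inf") if mse == 0 else 10 * math.log10(max_val ** 2 / mse)

# Hypothetical flat-field exposure: deviations from the mean level are the
# fixed-pattern noise that the PSNR metric quantifies
flat_field = [128, 130, 127, 129, 128, 131, 126, 129]
mean_level = sum(flat_field) / len(flat_field)
psnr_db = psnr([mean_level] * len(flat_field), flat_field)
print(round(psnr_db, 1))   # 44.6
```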
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38 μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5' x 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective system, covering the wide wavelength range of 5-38 μm without chromatic aberration and minimizing problems due to changes in the shapes and refractive indices of materials from room temperature to cryogenic temperature. In order to achieve the demanding requirements of wide field of view, small F-number, and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.
Usachev in hatch at aft end of Service module
2001-03-22
ISS002-E-5705 (22 March 2001) --- Cosmonaut Yury V. Usachev of Rosaviakosmos drifts through the forward hatch of the Zvezda Service Module during early days of his tour of duty aboard the International Space Station (ISS). The image was recorded with a digital still camera.
Voss in Service module with cycle ergometer
2001-03-23
ISS002-E-5734 (23 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, gives his arms and upper body a workout with the bicycle ergometer facility in the Zvezda Service Module aboard the International Space Station (ISS). The image was recorded with a digital still camera.
View in the Node 1/Unity module after docking
1998-12-10
S88-E-5111 (12-10-98) --- Astronaut Robert D. Cabana, mission commander, totes a notebook while checking on the progress of readiness tasks onboard the Unity connecting module. The photo was taken with an electronic still camera (ESC) at 20:25:57 GMT, Dec. 10.
Expedition Two crew share dessert in Zvezda module
2001-06-10
ISS002-E-6534 (10 June 2001) --- Expedition Two crewmembers Yury V. Usachev (left), mission commander, James S. Voss, flight engineer, and Susan J. Helms, flight engineer, share a dessert in the Zvezda Service Module. Usachev represents Rosaviakosmos. The image was recorded with a digital still camera.
Helms and Usachev with checklist in Service Module
2001-05-16
ISS002-E-7605 (16 May 2001) --- Susan J. Helms, flight engineer, and Yury V. Usachev of Rosaviakosmos, mission commander, read over procedures at the computer workstation in the Zvezda Service Module during the Expedition Two mission. The image was taken with a digital still camera.
NASA Astrophysics Data System (ADS)
Katsuta, Junichiro; Edahiro, Ikumi; Watanabe, Shin; Odaka, Hirokazu; Uchida, Yusuke; Uchida, Nagomi; Mizuno, Tsunefumi; Fukazawa, Yasushi; Hayashi, Katsuhiro; Habata, Sho; Ichinohe, Yuto; Kitaguchi, Takao; Ohno, Masanori; Ohta, Masayuki; Takahashi, Hiromitsu; Takahashi, Tadayuki; Takeda, Shin'ichiro; Tajima, Hiroyasu; Yuasa, Takayuki; Itou, Masayoshi; SGD Team
2016-12-01
Gamma-ray polarization offers a unique probe into the geometry of the γ-ray emission process in celestial objects. The Soft Gamma-ray Detector (SGD) onboard the X-ray observatory Hitomi is a Si/CdTe Compton camera and is expected to be an excellent polarimeter, as well as a highly sensitive spectrometer due to its good angular coverage and resolution for Compton scattering. A beam test of the final-prototype for the SGD Compton camera was conducted to demonstrate its polarimetric capability and to verify and calibrate the Monte Carlo simulation of the instrument. The modulation factor of the SGD prototype camera, evaluated for the inner and outer parts of the CdTe sensors as absorbers, was measured to be 0.649-0.701 (inner part) and 0.637-0.653 (outer part) at 122.2 keV and 0.610-0.651 (inner part) and 0.564-0.592 (outer part) at 194.5 keV at varying polarization angles with respect to the detector. This indicates that the relative systematic uncertainty of the modulation factor is as small as ∼ 3 % .
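The modulation factor reported above can be illustrated with a toy calculation. In a Compton polarimeter the azimuthal scattering-angle distribution follows N(φ) ≈ A − B·cos 2(φ − φ₀), and the modulation factor is M = (N_max − N_min)/(N_max + N_min). The sketch below uses a hypothetical sinusoidal count profile chosen to give M = 0.65, matching the scale of the SGD results; a real analysis fits the modulation curve rather than taking raw extrema:

```python
import math

def modulation_factor(counts):
    """M = (N_max - N_min) / (N_max + N_min) from azimuthal scattering counts."""
    return (max(counts) - min(counts)) / (max(counts) + min(counts))

# Hypothetical azimuthal count profile N(phi) = A - B*cos(2*phi), sampled every 30 deg
phis = [math.radians(d) for d in range(0, 360, 30)]
counts = [1000 - 650 * math.cos(2 * p) for p in phis]

mf = modulation_factor(counts)
print(round(mf, 3))   # 0.65
```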
Characterization results from several commercial soft X-ray streak cameras
NASA Astrophysics Data System (ADS)
Stradling, G. L.; Studebaker, J. K.; Cavailler, C.; Launspach, J.; Planes, J.
The spatio-temporal performance of four soft X-ray streak cameras has been characterized. The objective in evaluating the performance of these instruments is to enable optimized experiment designs, to encourage quantitative analysis of streak data, and to educate the ultra-high-speed photography and photonics community about the X-ray detector performance that is available. The measurements were made collaboratively over the space of two years at the Forge pulsed X-ray source at Los Alamos and at the Ketjak laser facility at CEA Limeil-Valenton, with X-ray pulse lengths of 150 psec and 50 psec, respectively. The results are presented as dynamically measured modulation transfer functions; limiting temporal resolution values were also calculated. Emphasis is placed upon shot-noise statistical limitations in the analysis of the data. Space-charge repulsion in the streak tube limits the peak flux at ultrashort experiment durations; this limit reduces the total signal and decreases the signal-to-noise ratio in the streak image. All four cameras perform well, with 20 lp/mm resolution discernible in data from the French C650X, the Hadland X-Chron 540, and the Hamamatsu C1936X streak cameras. The Kentech X-ray streak camera has lower modulation and does not resolve below 10 lp/mm, but it has a longer photocathode.
Testing of a "smart-pebble" for measuring particle transport statistics
NASA Astrophysics Data System (ADS)
Kitsikoudis, Vasileios; Avgeris, Loukas; Valyrakis, Manousos
2017-04-01
This paper presents preliminary results from novel experiments aiming to assess coarse sediment transport statistics over a range of transport conditions, via the use of an innovative "smart-pebble" device. The device is a waterproof sphere with a diameter of 7 cm, equipped with a number of sensors that provide information about the velocity, acceleration, and positioning of the "smart-pebble" within the flow field. A series of specifically designed experiments monitor the entrainment of the "smart-pebble" under fully developed, uniform, turbulent flow conditions over a hydraulically rough bed. Specifically, the bed surface is configured in three sections, each consisting of well-packed glass beads of slightly increasing size in the downstream direction. The first section has a streamwise length of L1 = 150 cm and bead size D1 = 15 mm, the second has a length of L2 = 85 cm and bead size D2 = 22 mm, and the third has a length of L3 = 55 cm and bead size D3 = 25.4 mm. Two cameras monitor the area of interest to provide additional information on the "smart-pebble" movement. Three-dimensional flow measurements are obtained with an acoustic Doppler velocimeter along a measurement grid to assess the flow forcing field. A wide range of flow rates near and above the threshold of entrainment is tested, using four distinct densities for the "smart-pebble", which affect its transport speed and total momentum. The acquired data are analyzed to derive Lagrangian transport statistics, and the implications of such an experiment for the transport of particles by rolling are discussed.
The flow conditions for the initiation of motion, particle accelerations and equilibrium particle velocities (translating into transport rates), statistics of particle impact and its motion, can be extracted from the acquired data, which can be further compared to develop meaningful insights for sediment transport mechanics from a Lagrangian perspective and at unprecedented temporal detail and accuracy.
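Extracting particle velocities and accelerations from the smart-pebble's sampled positions is, at its simplest, a finite-difference computation. The sketch below uses hypothetical position samples and a central-difference scheme; the actual device fuses accelerometer and positioning data, which is more involved:

```python
def central_diff(samples, dt):
    """Central-difference derivative of a uniformly sampled 1-D signal."""
    return [(samples[i + 1] - samples[i - 1]) / (2 * dt)
            for i in range(1, len(samples) - 1)]

# Hypothetical streamwise positions (m) of the smart-pebble, sampled at 10 Hz:
# x(t) = t^2, i.e. the pebble accelerates downstream at 2 m/s^2
dt = 0.1
x = [0.00, 0.01, 0.04, 0.09, 0.16, 0.25]
v = central_diff(x, dt)    # velocities at interior samples
a = central_diff(v, dt)    # accelerations
print(v, a)
```

Velocity and acceleration time series like these are the raw material for the Lagrangian transport statistics (entrainment thresholds, equilibrium velocities) discussed above.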
Construct and face validity of a virtual reality-based camera navigation curriculum.
Shetty, Shohan; Panait, Lucian; Baranoski, Jacob; Dudrick, Stanley J; Bell, Robert L; Roberts, Kurt E; Duffy, Andrew J
2012-10-01
Camera handling and navigation are essential skills in laparoscopic surgery. Surgeons rely on camera operators, usually the least experienced members of the team, for visualization of the operative field. Essential skills for camera operators include maintaining orientation, an effective horizon, appropriate zoom control, and a clean lens. Virtual reality (VR) simulation may be a useful adjunct to developing camera skills in a novice population. No standardized VR-based camera navigation curriculum is currently available. We developed and implemented a novel curriculum on the LapSim VR simulator platform for our residents and students. We hypothesize that our curriculum will demonstrate construct and face validity in our trainee population, distinguishing levels of laparoscopic experience as part of a realistic training curriculum. Overall, 41 participants with various levels of laparoscopic training completed the curriculum. Participants included medical students, surgical residents (Postgraduate Years 1-5), fellows, and attendings. We stratified subjects into three groups (novice, intermediate, and advanced) based on previous laparoscopic experience. We assessed face validity with a questionnaire. The proficiency-based curriculum consists of three modules: camera navigation, coordination, and target visualization using 0° and 30° laparoscopes. Metrics include time, target misses, drift, path length, and tissue contact. We analyzed data using analysis of variance and Student's t-test. We noted significant differences in repetitions required to complete the curriculum: 41.8 for novices, 21.2 for intermediates, and 11.7 for the advanced group (P < 0.05). In the individual modules, coordination required 13.3 attempts for novices, 4.2 for intermediates, and 1.7 for the advanced group (P < 0.05). Target visualization required 19.3 attempts for novices, 13.2 for intermediates, and 8.2 for the advanced group (P < 0.05). 
Participants believe that training improves camera handling skills (95%), is relevant to surgery (95%), and is a valid training tool (93%). Graphics (98%) and realism (93%) were highly regarded. The VR-based camera navigation curriculum demonstrates construct and face validity for our training population. Camera navigation simulation may be a valuable tool that can be integrated into training protocols for residents and medical students during their surgery rotations. Copyright © 2012 Elsevier Inc. All rights reserved.
Technology analysis for internet of things using big data learning
NASA Astrophysics Data System (ADS)
Senthilkumar, K.; Ellappan, Vijayan; Ajay
2017-11-01
We implemented an efficient smart home automation system based on the Internet of Things (IoT) incorporating different types of sensors. The module helps occupants understand and obtain information about their home security system, and Big Data analysis is applied to the data collected from the various sensors in the module. The sensors detect objects and conditions relevant to maintaining home standards, and a face recognition system with an efficient algorithm is integrated into the module to make it more capable and to provide standardization in the modern era.
A Low Power IoT Sensor Node Architecture for Waste Management Within Smart Cities Context.
Cerchecci, Matteo; Luti, Francesco; Mecocci, Alessandro; Parrino, Stefano; Peruzzi, Giacomo; Pozzebon, Alessandro
2018-04-21
This paper focuses on the realization of an Internet of Things (IoT) architecture to optimize waste management in the context of Smart Cities. In particular, a novel typology of sensor node based on the use of low cost and low power components is described. This node is provided with a single-chip microcontroller, a sensor able to measure the filling level of trash bins using ultrasounds and a data transmission module based on the LoRa LPWAN (Low Power Wide Area Network) technology. Together with the node, a minimal network architecture was designed, based on a LoRa gateway, with the purpose of testing the IoT node performances. Especially, the paper analyzes in detail the node architecture, focusing on the energy saving technologies and policies, with the purpose of extending the batteries lifetime by reducing power consumption, through hardware and software optimization. Tests on sensor and radio module effectiveness are also presented.
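The fill-level measurement described above reduces to converting an ultrasonic round-trip echo time into a distance and then into a percentage of bin depth. A minimal sketch, with hypothetical bin dimensions and echo time (the paper's sensor and calibration details are not reproduced here):

```python
def fill_level_percent(echo_time_s, bin_depth_m, speed_of_sound=343.0):
    """Estimate trash-bin fill level from an ultrasonic round-trip echo time.
    The sensor sits at the top of the bin and ranges to the garbage surface."""
    distance = speed_of_sound * echo_time_s / 2.0      # one-way distance to surface
    distance = min(max(distance, 0.0), bin_depth_m)    # clamp to the physical range
    return 100.0 * (1.0 - distance / bin_depth_m)

# Hypothetical 1.2 m deep bin; echo returns after 3.5 ms -> surface ~0.6 m away
level = fill_level_percent(0.0035, 1.2)
print(round(level))   # 50
```

In the LoRa node, a value like this would be transmitted only periodically (or when it crosses a threshold) to keep the radio duty cycle, and hence power consumption, low.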
Sung, Wen-Tsai; Lin, Jia-Syun
2013-01-01
This work aims to develop a smart LED lighting system remotely controlled by Android apps on handheld devices, e.g., smartphones and tablets. The status of energy use is reflected by readings displayed on the handheld device and is treated as a criterion in the design of the system's lighting modes. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS-232/485 and a human-computer interface on a touch screen. The wireless data communication operates in compliance with the ZigBee standard, and the sensed data are processed with a self-adaptive weighted data fusion algorithm; low variation and high stability of the fused data are experimentally demonstrated in this work. The wireless light dimmer and the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the multimeter reading can be displayed there via the server. The proposed smart LED lighting system can be remotely controlled, and its self-learning mode enabled, from a single handheld device via Wi-Fi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances and is demonstrated as a digital home network designed with energy efficiency in mind.
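The paper's exact self-adaptive weighted fusion algorithm is not given in the abstract; a common form of adaptive weighting, shown here as an assumed illustration, weights each sensor by the inverse of its estimated noise variance so that steadier sensors dominate the fused value:

```python
def fuse(measurements, variances):
    """Inverse-variance weighted fusion: sensors with lower estimated noise
    variance receive proportionally higher weight in the fused estimate."""
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

# Hypothetical power readings (W) from three sensors with different noise levels;
# the variances would be estimated adaptively, e.g. over a sliding window
readings = [60.2, 59.6, 60.9]
variances = [0.1, 0.4, 1.6]

fused = fuse(readings, variances)
print(round(fused, 2))   # 60.12
```

The "self-adaptive" aspect lies in re-estimating the variances online, so the weights track each sensor's current reliability.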
Design and Implementation of a Smart Home System Using Multisensor Data Fusion Technology.
Hsu, Yu-Liang; Chou, Po-Huan; Chang, Hsing-Cheng; Lin, Shyan-Lung; Yang, Shih-Chin; Su, Heng-Yi; Chang, Chih-Chien; Cheng, Yuan-Sheng; Kuo, Yu-Chen
2017-07-15
This paper aims to develop a multisensor data fusion technology-based smart home system by integrating wearable intelligent technology, artificial intelligence, and sensor fusion technology. We have developed the following three systems to create an intelligent smart home environment: (1) a wearable motion sensing device to be placed on residents' wrists and its corresponding 3D gesture recognition algorithm to implement a convenient automated household appliance control system; (2) a wearable motion sensing device mounted on a resident's feet and its indoor positioning algorithm to realize an effective indoor pedestrian navigation system for smart energy management; (3) a multisensor circuit module and an intelligent fire detection and alarm algorithm to realize a home safety and fire detection system. In addition, an intelligent monitoring interface is developed to provide real-time information about the smart home system, such as environmental temperatures, CO concentrations, communicative environmental alarms, household appliance status, human motion signals, and the results of gesture recognition and indoor positioning. Furthermore, an experimental testbed was built to validate the effectiveness and feasibility of the smart home system. The results showed that the 3D gesture recognition algorithm could achieve recognition rates for automated household appliance control of 92.0%, 94.8%, 95.3%, and 87.7% by the 2-fold cross-validation, 5-fold cross-validation, 10-fold cross-validation, and leave-one-subject-out cross-validation strategies. For indoor positioning and smart energy management, the distance accuracy and positioning accuracy were around 0.22% and 3.36% of the total traveled distance in the indoor environment. For home safety and fire detection, the classification rate achieved 98.81% accuracy for determining the conditions of the indoor living environment.
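The k-fold cross-validation strategies used to report the recognition rates above partition the data so that every sample is tested exactly once. A minimal sketch of the index-splitting step (contiguous folds, no shuffling, for clarity):

```python
def k_fold_indices(n_samples, k):
    """Split sample indices into k folds; each fold serves once as the test set."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0) for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    # Each split pairs one fold (test) with the union of the others (train)
    return [([idx for j, f in enumerate(folds) if j != i for idx in f], folds[i])
            for i in range(k)]

splits = k_fold_indices(10, 5)
print(len(splits), splits[0][1])   # 5 [0, 1]
```

Leave-one-subject-out cross-validation is the same idea with folds defined by subject identity rather than by index, which is why it usually yields the most pessimistic (and most realistic) recognition rate.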
Voss in Service Module with apples
2001-03-22
ISS002-E-5710 (22 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, appears to be trying to decide between two colors or two species of apples as he ponders them in the Zvezda Service Module on the International Space Station (ISS). This photo was taken with a digital still camera.
Usachev in Raffaello Multi-Purpose Logistics Module (MPLM)
2001-04-26
ISS002-E-5852 (26 April 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, enjoys the extra space provided by the Multipurpose Logistics Module (MPLM) Raffaello, which was mated to the International Space Station (ISS) during the STS-100 mission. The image was taken with a digital still camera.
Voss with docking probe in Service module
2001-05-30
ISS002-E-6478 (30 May 2001) --- James S. Voss, Expedition Two flight engineer, handles a spacecraft docking probe in the Service Module. The docking probe assists with the docking of the Soyuz and Progress vehicles to the International Space Station. The image was taken with a digital still camera.
Helms at photo quality window in Destiny Laboratory module
2001-03-31
ISS002-E-5489 (31 March 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, views the topography of a point on Earth from the nadir window in the U.S. Laboratory / Destiny module of the International Space Station (ISS). The image was recorded with a digital still camera.
Helms eats apple and carrot stick in Service module
2001-04-21
ISS002-E-5357 (21 April 2001) --- Just hours before the arrival of the STS-100/Endeavour crew, astronaut Susan J. Helms, Expedition Two flight engineer, enjoys a brief snack in the Zvezda Service Module on the International Space Station (ISS). The image was recorded with a digital still camera.
Expedition Two crewmembers pose in Destiny Laboratory module
2001-03-31
ISS002-E-5488 (31 March 2001) --- The Expedition Two crewmembers -- astronaut Susan J. Helms (left), cosmonaut Yury V. Usachev and astronaut James S. Voss -- pose for a photograph in the U.S. Laboratory / Destiny module of the International Space Station (ISS). This image was recorded with a digital still camera.
MMW/THz imaging using upconversion to visible, based on glow discharge detector array and CCD camera
NASA Astrophysics Data System (ADS)
Aharon, Avihai; Rozban, Daniel; Abramovich, Amir; Yitzhaky, Yitzhak; Kopeika, Natan S.
2017-10-01
An inexpensive upconverting MMW/THz imaging method is suggested here, based on a glow discharge detector (GDD) and a silicon photodiode or simple CCD/CMOS camera. The GDD was previously found to be an excellent room-temperature MMW radiation detector when its electrical current is measured; it is very inexpensive and is advantageous for its wide dynamic range, broad spectral range, room-temperature operation, immunity to high-power radiation, and more. The upconversion method demonstrated here is instead based on measuring the visible light emitted by the GDD rather than its electrical current. The experimental setup emulates a system composed of a GDD array, a MMW source, and a basic CCD/CMOS camera: the visible light emitted from the GDD array is directed to the CCD/CMOS camera, and the change in the GDD glow is measured using image processing algorithms. The combination of a CMOS camera and GDD focal plane arrays can yield a faster, more sensitive, and very inexpensive MMW/THz camera, eliminating the complexity of the electronic circuits and the internal electronic noise of the GDD. Furthermore, three-dimensional imaging systems based on scanning prohibit real-time operation; this is solved easily and economically with a GDD array, which enables the acquisition of distance and magnitude information from all GDD pixels in the array simultaneously. The 3D image can be obtained using methods such as frequency-modulated continuous wave (FMCW) direct chirp modulation and time-of-flight (TOF) measurement.
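The image-processing step, measuring the change in each GDD's glow with and without MMW illumination, amounts to region-wise frame differencing. A minimal sketch with tiny hypothetical frames and one GDD region (real frames would of course be full camera images, with calibrated regions per GDD):

```python
def gdd_response(frame_off, frame_on, rois):
    """For each GDD pixel region (r0, r1, c0, c1), return the change in mean
    brightness between a frame without MMW illumination and one with it."""
    deltas = []
    for (r0, r1, c0, c1) in rois:
        off = [frame_off[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        on = [frame_on[r][c] for r in range(r0, r1) for c in range(c0, c1)]
        deltas.append(sum(on) / len(on) - sum(off) / len(off))
    return deltas

# Hypothetical 4x4 camera frames; one GDD occupies the top-left 2x2 region
frame_off = [[10, 10, 10, 10] for _ in range(4)]
frame_on = [[14, 14, 10, 10], [14, 14, 10, 10],
            [10, 10, 10, 10], [10, 10, 10, 10]]

deltas = gdd_response(frame_off, frame_on, [(0, 2, 0, 2), (2, 4, 2, 4)])
print(deltas)   # [4.0, 0.0]
```

Each delta is the upconverted MMW signal for one GDD pixel; assembling the deltas over the whole array yields the MMW image.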
A higher-speed compressive sensing camera through multi-diode design
NASA Astrophysics Data System (ADS)
Herman, Matthew A.; Tidman, James; Hewitt, Donna; Weston, Tyler; McMackin, Lenore
2013-05-01
Obtaining high frame rates is a challenge with compressive sensing (CS) systems that gather measurements in a sequential manner, such as the single-pixel CS camera. One strategy for increasing the frame rate is to divide the FOV into smaller areas that are sampled and reconstructed in parallel. Following this strategy, InView has developed a multi-aperture CS camera using an 8×4 array of photodiodes that essentially act as 32 individual simultaneously operating single-pixel cameras. Images reconstructed from each of the photodiode measurements are stitched together to form the full FOV. To account for crosstalk between the sub-apertures, novel modulation patterns have been developed to allow neighboring sub-apertures to share energy. Regions of overlap not only account for crosstalk energy that would otherwise be reconstructed as noise, but they also allow for tolerance in the alignment of the DMD to the lenslet array. Currently, the multi-aperture camera is built into a computational imaging workstation configuration useful for research and development purposes. In this configuration, modulation patterns are generated in a CPU and sent to the DMD via PCI express, which allows the operator to develop and change the patterns used in the data acquisition step. The sensor data is collected and then streamed to the workstation via an Ethernet or USB connection for the reconstruction step. Depending on the amount of data taken and the amount of overlap between sub-apertures, frame rates of 2-5 frames per second can be achieved. In a stand-alone camera platform, currently in development, pattern generation and reconstruction will be implemented on-board.
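The single-pixel measurement principle behind each sub-aperture can be sketched with a toy model: each DMD pattern is one row of a sensing matrix A, the photodiode readings are y = Ax, and the scene x is recovered from y. The fully determined Hadamard-pattern example below only illustrates the measurement model; the real system uses compressive (undersampled) patterns, a sparse solver, and the overlap handling described above, none of which are specified in this abstract:

```python
import numpy as np

# 16x16 Sylvester-Hadamard matrix, mapped to 0/1 DMD micromirror states
H = np.array([[1.0]])
for _ in range(4):
    H = np.block([[H, H], [H, -H]])
A = (H + 1) / 2                    # each row = one projected on/off pattern

x = np.zeros(16)
x[3], x[7] = 1.0, 0.5              # toy 4x4 scene, flattened to a vector
y = A @ x                          # sequential photodiode measurements
x_hat = np.linalg.solve(A, y)      # fully sampled here -> direct inversion
```

With fewer rows than pixels (the compressive case), `solve` would be replaced by an l1-minimisation or greedy sparse-recovery step.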
X-ray imaging using digital cameras
NASA Astrophysics Data System (ADS)
Winch, Nicola M.; Edgar, Andrew
2012-03-01
The possibility of using the combination of a computed radiography (storage phosphor) cassette and a semiprofessional grade digital camera for medical or dental radiography is investigated. We compare the performance of (i) a Canon 5D Mk II single lens reflex camera with f1.4 lens and full-frame CMOS array sensor and (ii) a cooled CCD-based camera with a 1/3 frame sensor and the same lens system. Both systems are tested with 240 x 180 mm cassettes which are based on either powdered europium-doped barium fluoride bromide or needle structure europium-doped cesium bromide. The modulation transfer function for both systems has been determined and falls to a value of 0.2 at around 2 lp/mm, and is limited by light scattering of the emitted light from the storage phosphor rather than the optics or sensor pixelation. The modulation transfer function for the CsBr:Eu2+ plate is bimodal, with a high frequency wing which is attributed to the light-guiding behaviour of the needle structure. The detective quantum efficiency has been determined using a radioisotope source and is comparatively low at 0.017 for the CMOS camera and 0.006 for the CCD camera, attributed to the poor light harvesting by the lens. The primary advantages of the method are portability, robustness, digital imaging and low cost; the limitations are the low detective quantum efficiency and hence signal-to-noise ratio for medical doses, and restricted range of plate sizes. Representative images taken with medical doses are shown and illustrate the potential use for portable basic radiography.
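The MTF figure quoted above can be sketched numerically: the MTF is the magnitude of the Fourier transform of the line-spread function (LSF), normalised to unity at zero frequency. The Gaussian LSF and 0.05 mm pixel pitch below are illustrative placeholders, not the paper's measured data:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """MTF as |FFT| of the line-spread function, normalised to 1 at DC."""
    m = np.abs(np.fft.rfft(lsf))
    m /= m[0]
    f = np.fft.rfftfreq(len(lsf), d=pixel_pitch_mm)  # line pairs per mm
    return f, m

# toy Gaussian LSF (sigma = 0.2 mm) sampled at 0.05 mm pitch
x = np.arange(-32, 32) * 0.05
lsf = np.exp(-x**2 / (2 * 0.2**2))
f, m = mtf_from_lsf(lsf, 0.05)
```

Reading off the frequency where `m` drops to 0.2 would reproduce the "0.2 at around 2 lp/mm" style of figure reported for the storage-phosphor systems.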
NASA Astrophysics Data System (ADS)
Morozov, A.; Defendi, I.; Engels, R.; Fraga, F. A. F.; Fraga, M. M. F. R.; Gongadze, A.; Guerard, B.; Jurkovic, M.; Kemmerling, G.; Manzin, G.; Margato, L. M. S.; Niko, H.; Pereira, L.; Petrillo, C.; Peyaud, A.; Piscitelli, F.; Raspino, D.; Rhodes, N. J.; Sacchetti, F.; Schooneveld, E. M.; Solovov, V.; Van Esch, P.; Zeitelhack, K.
2013-05-01
The software package ANTS (Anger-camera type Neutron detector: Toolkit for Simulations), developed for simulation of Anger-type gaseous detectors for thermal neutron imaging, was extended to include a module for experimental data processing. Data recorded with a sensor array containing up to 100 photomultiplier tubes (PMT) or silicon photomultipliers (SiPM) in a custom configuration can be loaded, and the positions and energies of the events can be reconstructed using the Center-of-Gravity, Maximum Likelihood, or Least Squares algorithm. A particular strength of the new module is the ability to reconstruct the light response functions and relative gains of the photomultipliers from flood field illumination data using adaptive algorithms. The performance of the module is demonstrated with simulated data generated in ANTS and experimental data recorded with a 19 PMT neutron detector. The package executables are publicly available at http://coimbra.lip.pt/~andrei/
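Of the three reconstruction algorithms named, Center-of-Gravity is the simplest to illustrate: the event position is the signal-weighted mean of the PMT positions (classic Anger logic). A minimal sketch with a toy three-PMT geometry, not ANTS code:

```python
import numpy as np

def center_of_gravity(signals, pmt_xy):
    """Anger-logic estimate: event position = signal-weighted mean of PMT centres."""
    s = np.asarray(signals, dtype=float)
    xy = np.asarray(pmt_xy, dtype=float)
    return (s[:, None] * xy).sum(axis=0) / s.sum()

# three PMTs on a line; symmetric light sharing places the event at the centre
pos = center_of_gravity([1.0, 2.0, 1.0],
                        [(-1.0, 0.0), (0.0, 0.0), (1.0, 0.0)])
```

Maximum Likelihood and Least Squares refine this estimate using the per-PMT light response functions that the new module reconstructs from flood-field data.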
Stereo imaging velocimetry for microgravity applications
NASA Technical Reports Server (NTRS)
Miller, Brian B.; Meyer, Maryjo B.; Bethea, Mark D.
1994-01-01
Stereo imaging velocimetry is the quantitative measurement of three-dimensional flow fields using two sensors recording data from different vantage points. The system described in this paper, under development at NASA Lewis Research Center in Cleveland, Ohio, uses two CCD cameras placed perpendicular to one another, laser disk recorders, an image processing substation, and a 586-based computer to record data at standard NTSC video rates (30 Hertz) and reduce it offline. The flow itself is marked with seed particles, hence the fluid must be transparent. The velocimeter tracks the motion of the particles, and from these we deduce a multipoint (500 or more), quantitative map of the flow. Conceptually, the software portion of the velocimeter can be divided into distinct modules. These modules are: camera calibration, particle finding (image segmentation) and centroid location, particle overlap decomposition, particle tracking, and stereo matching. We discuss our approach to each module, and give our currently achieved speed and accuracy for each where available.
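The centroid-location step of the particle-finding module can be sketched as an intensity-weighted mean over a segmented particle image; this is a generic formulation for illustration, not the NASA Lewis code:

```python
import numpy as np

def intensity_centroid(img):
    """Sub-pixel particle centre: intensity-weighted mean of pixel coordinates."""
    img = np.asarray(img, dtype=float)
    ys, xs = np.indices(img.shape)
    total = img.sum()
    return (ys * img).sum() / total, (xs * img).sum() / total

# toy segmented particle: two equal-intensity pixels -> centroid midway
spot = np.zeros((5, 5))
spot[2, 2] = 4.0
spot[2, 3] = 4.0
cy, cx = intensity_centroid(spot)
```

Sub-pixel centroids like these feed the later tracking and stereo-matching modules, where left/right tracks are paired to triangulate 3D velocities.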
Apollo 12 Mission image - Astronaut Alan L. Bean,lunar module pilot,and two U.S. spacecraft
1969-11-20
AS12-48-7134 (20 Nov. 1969) --- This unusual photograph, taken during the second Apollo 12 extravehicular activity (EVA), shows two U.S. spacecraft on the surface of the moon. The Apollo 12 Lunar Module (LM) is in the background. The unmanned Surveyor 3 spacecraft is in the foreground. The Apollo 12 LM, with astronauts Charles Conrad Jr. and Alan L. Bean aboard, landed about 600 feet from Surveyor 3 in the Ocean of Storms. The television camera and several other pieces were taken from Surveyor 3 and brought back to Earth for scientific examination. Here, Conrad examines the Surveyor's TV camera prior to detaching it. Astronaut Richard F. Gordon Jr. remained with the Apollo 12 Command and Service Modules (CSM) in lunar orbit while Conrad and Bean descended in the LM to explore the moon. Surveyor 3 soft-landed on the moon on April 19, 1967.
Simultaneous multicolor imaging of wide-field epi-fluorescence microscopy with four-bucket detection
Park, Kwan Seob; Kim, Dong Uk; Lee, Jooran; Kim, Geon Hee; Chang, Ki Soo
2016-01-01
We demonstrate simultaneous imaging of multiple fluorophores using wide-field epi-fluorescence microscopy with a monochrome camera. The intensities of the three lasers are modulated by a sinusoidal waveform in order to excite each fluorophore with the same modulation frequency and a different time-delay. Then, the modulated fluorescence emissions are simultaneously detected by a camera operating at four times the excitation frequency. We show that two different fluorescence beads having crosstalk can be clearly separated using digital processing based on the phase information. In addition, multiple organelles within multi-stained single cells are shown with the phase mapping method, demonstrating an improved dynamic range and contrast compared to the conventional fluorescence image. These findings suggest that wide-field epi-fluorescence microscopy with four-bucket detection could be utilized for high-contrast multicolor imaging applications such as drug delivery and fluorescence in situ hybridization. PMID:27375944
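The "four-bucket" demodulation used here follows the classical four-step phase-retrieval formula: from four samples taken a quarter of a modulation period apart, phase = atan2(I4 - I2, I1 - I3). A point-sampled sketch (a real four-bucket detector integrates each bucket over a quarter period, and the fluorophore phase below is invented):

```python
import numpy as np

def four_bucket_phase(i1, i2, i3, i4):
    """Four-step phase retrieval from quarter-period samples."""
    return np.arctan2(i4 - i2, i1 - i3)

phi = 0.7                      # time-delay phase tagging one fluorophore
offset, depth = 2.0, 1.0       # DC offset and modulation depth
buckets = [offset + depth * np.cos(phi + k * np.pi / 2) for k in range(4)]
phi_hat = four_bucket_phase(*buckets)
```

Because each fluorophore is excited with a distinct time delay, its recovered phase separates it from spectrally overlapping dyes, which is the basis of the crosstalk removal reported above.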
Status of the photomultiplier-based FlashCam camera for the Cherenkov Telescope Array
NASA Astrophysics Data System (ADS)
Pühlhofer, G.; Bauer, C.; Eisenkolb, F.; Florin, D.; Föhr, C.; Gadola, A.; Garrecht, F.; Hermann, G.; Jung, I.; Kalekin, O.; Kalkuhl, C.; Kasperek, J.; Kihm, T.; Koziol, J.; Lahmann, R.; Manalaysay, A.; Marszalek, A.; Rajda, P. J.; Reimer, O.; Romaszkan, W.; Rupinski, M.; Schanz, T.; Schwab, T.; Steiner, S.; Straumann, U.; Tenzer, C.; Vollhardt, A.; Weitzel, Q.; Winiarski, K.; Zietara, K.
2014-07-01
The FlashCam project is preparing a camera prototype around a fully digital FADC-based readout system for the medium-sized telescopes (MST) of the Cherenkov Telescope Array (CTA). The FlashCam design is the first fully digital readout system for Cherenkov cameras, based on commercial FADCs and FPGAs as key components for digitization and triggering, and a high-performance camera server as back end. It provides the option to easily implement different types of trigger algorithms as well as digitization and readout scenarios using identical hardware, by simply changing the firmware on the FPGAs. The readout of the front-end modules into the camera server is Ethernet-based, using standard Ethernet switches and a custom, raw Ethernet protocol. In the current implementation of the system, data transfer and back-end processing rates of 3.8 GB/s and 2.4 GB/s have been achieved, respectively. Together with the dead-time-free front-end event buffering on the FPGAs, this permits the cameras to operate at trigger rates of up to several tens of kHz. In the horizontal architecture of FlashCam, the photon detector plane (PDP), consisting of photon detectors, preamplifiers, high voltage, control, and monitoring systems, is a self-contained unit, mechanically detached from the front-end modules. It interfaces to the digital readout system via analogue signal transmission. The horizontal integration of FlashCam is expected not only to be more cost-efficient but also to allow PDPs with different types of photon detectors to be adapted to the FlashCam readout system. By now, a 144-pixel "mini-camera" setup, fully equipped with photomultipliers, PDP electronics, and digitization/trigger electronics, has been realized and extensively tested. Preparations of the full-scale camera mechanics (1764 pixels) and a cooling system are ongoing. The paper describes the status of the project.
ePix: a class of architectures for second generation LCLS cameras
Dragone, A.; Caragiulo, P.; Markovic, B.; ...
2014-03-31
ePix is a novel class of ASIC architectures, based on a common platform, optimized to build modular scalable detectors for LCLS. The platform architecture is composed of a random-access analog matrix of pixels with global shutter, fast parallel column readout, and dedicated sigma-delta analog-to-digital converters per column. It also implements a dedicated control interface and all the required support electronics to perform configuration, calibration, and readout of the matrix. Based on this platform, a class of front-end ASICs and several camera modules, meeting different requirements, can be developed by designing specific pixel architectures. This approach reduces development time and expands the possibility of integrating detector modules with different size, shape, or functionality in the same camera. The ePix platform is currently under development together with the first two integrating pixel architectures: ePix100, dedicated to ultra-low-noise applications, and ePix10k, for high-dynamic-range applications.
A multipurpose camera system for monitoring Kīlauea Volcano, Hawai'i
Patrick, Matthew R.; Orr, Tim R.; Lee, Lopaka; Moniz, Cyril J.
2015-01-01
We describe a low-cost, compact multipurpose camera system designed for field deployment at active volcanoes that can be used either as a webcam (transmitting images back to an observatory in real-time) or as a time-lapse camera system (storing images onto the camera system for periodic retrieval during field visits). The system also has the capability to acquire high-definition video. The camera system uses a Raspberry Pi single-board computer and a 5-megapixel low-light (near-infrared sensitive) camera, as well as a small Global Positioning System (GPS) module to ensure accurate time-stamping of images. Custom Python scripts control the webcam and GPS unit and handle data management. The inexpensive nature of the system allows it to be installed at hazardous sites where it might be lost. Another major advantage of this camera system is that it provides accurate internal timing (independent of network connection) and, because a full Linux operating system and the Python programming language are available on the camera system itself, it has the versatility to be configured for the specific needs of the user. We describe example deployments of the camera at Kīlauea Volcano, Hawai‘i, to monitor ongoing summit lava lake activity.
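The time-lapse mode driven by the custom Python scripts can be sketched as a capture loop with timestamped filenames. The `capture` callback and filename scheme below are hypothetical stand-ins for the actual camera-driver call and naming convention, which the abstract does not specify:

```python
import tempfile
import time
from pathlib import Path

def timelapse(capture, out_dir, n_frames, interval_s=0.0, now=time.time):
    """Save n_frames images named by frame index and UTC timestamp.

    `capture(path)` is supplied by the camera driver; on the real system,
    `now` would be disciplined by the GPS module for accurate time-stamps.
    """
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    paths = []
    for i in range(n_frames):
        stamp = time.strftime("%Y%m%d_%H%M%S", time.gmtime(now()))
        p = out / f"img_{i:04d}_{stamp}.jpg"
        capture(str(p))          # e.g. the Pi camera's still-capture call
        paths.append(p)
        time.sleep(interval_s)   # wait until the next scheduled frame
    return paths

# usage with a stand-in capture function that writes an empty file
shots = timelapse(lambda p: Path(p).write_bytes(b""), tempfile.mkdtemp(), 3)
```

Injecting `capture` and `now` keeps the scheduling logic testable off-hardware, which suits a system that is reconfigured per deployment as described above.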
Microprocessor-controlled wide-range streak camera
NASA Astrophysics Data System (ADS)
Lewis, Amy E.; Hollabaugh, Craig
2006-08-01
Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera's user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OSX) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple module access with a standard browser. The entire user interface can be customized.
NASA Technical Reports Server (NTRS)
Florance, Jennifer P.; Burner, Alpheus W.; Fleming, Gary A.; Martin, Christopher A.
2003-01-01
An overview of the contributions of the NASA Langley Research Center (LaRC) to the DARPA/AFRL/NASA/Northrop Grumman Corporation (NGC) Smart Wing program is presented. The overall objective of the Smart Wing program was to develop "smart" technologies and demonstrate near-flight-scale actuation systems to improve the aerodynamic performance of military aircraft. NASA LaRC's roles were to provide technical guidance, wind-tunnel testing time and support, and Computational Fluid Dynamics (CFD) analyses. The program was divided into two phases, with each phase having two wind-tunnel entries in the Langley Transonic Dynamics Tunnel (TDT). This paper focuses on the fourth and final wind-tunnel test: Phase 2, Test 2. During this test, a model based on the NGC Unmanned Combat Air Vehicle (UCAV) concept was tested at Mach numbers up to 0.8 and dynamic pressures up to 150 psf to determine the aerodynamic performance benefits that could be achieved using hingeless, smoothly-contoured control surfaces actuated with smart materials technologies. The UCAV-based model was a 30% geometric scale, full-span, sting-mounted model with the smart control surfaces on the starboard wing and conventional, hinged control surfaces on the port wing. Two LaRC-developed instrumentation systems were used during the test to externally measure the shapes of the smart control surface and quantify the effects of aerodynamic loading on the deflections: Videogrammetric Model Deformation (VMD) and Projection Moire Interferometry (PMI). VMD is an optical technique that uses single-camera photogrammetric tracking of discrete targets to determine deflections at specific points. PMI provides spatially continuous measurements of model deformation by computationally analyzing images of a grid projected onto the model surface. Both the VMD and PMI measurements served well to validate the use of on-board (internal) rotary potentiometers to measure the smart control surface deflection angles.
Prior to the final entry, NASA LaRC also performed three-dimensional unstructured Navier-Stokes CFD analyses in an attempt to predict the potential aerodynamic impact of the smart control surface on overall model forces and moments. Eight different control surface shapes were selected for study at Mach = 0.6, Reynolds number = 3.25 x 10^6, and +2 deg., 3 deg., 8 deg., and 10 deg. model angles-of-attack. For the baseline, undeflected control surface geometry, the CFD predictions and wind-tunnel results matched well. The agreement was not as good for the more complex aero-loaded control surface shapes, though, because of the inability to accurately predict those shapes. Despite these results, the NASA CFD study served as an important step in studying advanced control effectors.
The Common Data Acquisition Platform in the Helmholtz Association
NASA Astrophysics Data System (ADS)
Kaever, P.; Balzer, M.; Kopmann, A.; Zimmer, M.; Rongen, H.
2017-04-01
Various centres of the German Helmholtz Association (HGF) started in 2012 to develop a modular data acquisition (DAQ) platform, covering the entire range from detector readout to data transfer into parallel computing environments. This platform integrates generic hardware components like the multi-purpose HGF-Advanced Mezzanine Card or a smart scientific camera framework, adding user value with Linux drivers and board support packages. Technically, the scope comprises the DAQ chain from FPGA modules to computing servers, notably frontend-electronics interfaces, microcontrollers, and GPUs with their software, plus high-performance data transmission links. The core idea is a generic and component-based approach, enabling the implementation of specific experiment requirements with low effort. This so-called DTS platform will support standards like MTCA.4 in both hardware and software to ensure compatibility with commercial components. Its capability to deploy on other crate standards or FPGA boards with PCI Express or Ethernet interfaces remains an essential feature. Competences of the participating centres are coordinated in order to provide a solid technological basis for both research topics in the Helmholtz Programme "Matter and Technology": "Detector Technology and Systems" and "Accelerator Research and Development". The DTS platform aims at reducing costs and development time and will ensure access to the latest technologies for the collaboration. Due to its flexible approach, it has the potential to be applied in other scientific programs.
Geovisualization for Smart Video Surveillance
NASA Astrophysics Data System (ADS)
Oves García, R.; Valentín, L.; Serrano, S. A.; Palacios-Alonso, M. A.; Sucar, L. Enrique
2017-09-01
Nowadays, with the emergence of smart cities and the creation of new sensors capable of connecting to the network, it is possible not only to monitor the entire infrastructure of a city, including roads, bridges, rail/subways, airports, communications, water, and power, but also to optimize its resources, plan its preventive maintenance, and monitor security aspects while maximizing services for its citizens. In particular, the security aspect is one of the most important issues due to the need to ensure the safety of people. However, a good security system also depends on how the information is presented. In order to show the amount of information generated by sensing devices in real time in an understandable way, several visualization techniques are proposed, for both local visualization (sensing devices considered individually) and global visualization (sensing devices considered as a whole). Taking into consideration that the information is produced and transmitted from a geographic location, the integration of a Geographic Information System to manage and visualize the behavior of the data becomes very relevant. With the purpose of facilitating the decision-making process in a security system, we have integrated the visualization techniques and the Geographic Information System to produce a smart security system, based on a cloud computing architecture, that shows relevant information about a set of areas monitored with video cameras.
Krychowiak, M; Adnan, A; Alonso, A; Andreeva, T; Baldzuhn, J; Barbui, T; Beurskens, M; Biel, W; Biedermann, C; Blackwell, B D; Bosch, H S; Bozhenkov, S; Brakel, R; Bräuer, T; Brotas de Carvalho, B; Burhenn, R; Buttenschön, B; Cappa, A; Cseh, G; Czarnecka, A; Dinklage, A; Drews, P; Dzikowicka, A; Effenberg, F; Endler, M; Erckmann, V; Estrada, T; Ford, O; Fornal, T; Frerichs, H; Fuchert, G; Geiger, J; Grulke, O; Harris, J H; Hartfuß, H J; Hartmann, D; Hathiramani, D; Hirsch, M; Höfel, U; Jabłoński, S; Jakubowski, M W; Kaczmarczyk, J; Klinger, T; Klose, S; Knauer, J; Kocsis, G; König, R; Kornejew, P; Krämer-Flecken, A; Krawczyk, N; Kremeyer, T; Książek, I; Kubkowska, M; Langenberg, A; Laqua, H P; Laux, M; Lazerson, S; Liang, Y; Liu, S C; Lorenz, A; Marchuk, A O; Marsen, S; Moncada, V; Naujoks, D; Neilson, H; Neubauer, O; Neuner, U; Niemann, H; Oosterbeek, J W; Otte, M; Pablant, N; Pasch, E; Sunn Pedersen, T; Pisano, F; Rahbarnia, K; Ryć, L; Schmitz, O; Schmuck, S; Schneider, W; Schröder, T; Schuhmacher, H; Schweer, B; Standley, B; Stange, T; Stephey, L; Svensson, J; Szabolics, T; Szepesi, T; Thomsen, H; Travere, J-M; Trimino Mora, H; Tsuchiya, H; Weir, G M; Wenzel, U; Werner, A; Wiegel, B; Windisch, T; Wolf, R; Wurden, G A; Zhang, D; Zimbal, A; Zoletnik, S
2016-11-01
Wendelstein 7-X, a superconducting optimized stellarator built in Greifswald, Germany, started its first plasmas with the last closed flux surface (LCFS) defined by 5 uncooled graphite limiters in December 2015. At the end of the 10-week experimental campaign (OP1.1), more than 20 independent diagnostic systems were in operation, allowing detailed studies of many interesting plasma phenomena. For example, fast neutral gas manometers supported by video cameras (including one fast-frame camera with frame rates of tens of kHz) as well as visible cameras with different interference filters, with fields of view covering all ten half-modules of the stellarator, discovered a MARFE-like radiation zone on the inboard side of machine module 4. This structure is presumably triggered by an inadvertent plasma-wall interaction in module 4 resulting in a high impurity influx that terminates some discharges by radiation cooling. The main plasma parameters achieved in OP1.1 exceeded predicted values in discharges of a length reaching 6 s. Although OP1.1 is characterized by short pulses, many of the diagnostics are already designed for quasi-steady-state operation of 30 min discharges heated at 10 MW of ECRH. An overview of diagnostic performance for OP1.1 is given, including some highlights from the physics campaigns.
Close-up view of astronauts footprint in lunar soil
1969-07-20
AS11-40-5878 (20 July 1969) --- A close-up view of an astronaut's bootprint in the lunar soil, photographed with a 70mm lunar surface camera during the Apollo 11 extravehicular activity (EVA) on the moon. While astronauts Neil A. Armstrong, commander, and Edwin E. Aldrin Jr., lunar module pilot, descended in the Lunar Module (LM) "Eagle" to explore the Sea of Tranquility region of the moon, astronaut Michael Collins, command module pilot, remained with the Command and Service Modules (CSM) "Columbia" in lunar orbit.
Energy Efficient IoT Data Collection in Smart Cities Exploiting D2D Communications.
Orsino, Antonino; Araniti, Giuseppe; Militano, Leonardo; Alonso-Zarate, Jesus; Molinaro, Antonella; Iera, Antonio
2016-06-08
Fifth Generation (5G) wireless systems are expected to connect an avalanche of "smart" objects disseminated from the largest "Smart City" to the smallest "Smart Home". In this vision, Long Term Evolution-Advanced (LTE-A) is deemed to play a fundamental role in the Internet of Things (IoT) arena providing a large coherent infrastructure and a wide wireless connectivity to the devices. However, since LTE-A was originally designed to support high data rates and large data size, novel solutions are required to enable an efficient use of radio resources to convey small data packets typically exchanged by IoT applications in "smart" environments. On the other hand, the typically high energy consumption required by cellular communications is a serious obstacle to large scale IoT deployments under cellular connectivity as in the case of Smart City scenarios. Network-assisted Device-to-Device (D2D) communications are considered as a viable solution to reduce the energy consumption for the devices. The particular approach presented in this paper consists in appointing one of the IoT smart devices as a collector of all data from a cluster of objects using D2D links, thus acting as an aggregator toward the eNodeB. By smartly adapting the Modulation and Coding Scheme (MCS) on the communication links, we will show it is possible to maximize the radio resource utilization as a function of the total amount of data to be sent. A further benefit that we will highlight is the possibility to reduce the transmission power when a more robust MCS is adopted. A comprehensive performance evaluation in a wide set of scenarios will testify the achievable gains in terms of energy efficiency and resource utilization in the envisaged D2D-based IoT data collection.
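The MCS-adaptation idea above (use the most robust modulation and coding scheme that still carries the aggregated payload within the granted resource blocks, so transmission power can be reduced) can be sketched as a table lookup. The bits-per-PRB table below is purely illustrative, not the 3GPP transport-block-size table:

```python
def pick_mcs(payload_bits, n_prbs, bits_per_prb_by_mcs):
    """Most robust (lowest-index) MCS whose capacity over n_prbs fits the payload.

    A lower MCS index means a more robust modulation/coding combination,
    which in turn permits a lower transmission power at the aggregator.
    Returns None if even the highest MCS cannot carry the payload.
    """
    for mcs, bits_per_prb in enumerate(bits_per_prb_by_mcs):
        if bits_per_prb * n_prbs >= payload_bits:
            return mcs
    return None

# illustrative capacities (bits carried per PRB) for four MCS levels
table = [120, 240, 480, 960]
mcs = pick_mcs(1000, 5, table)   # 1000-bit aggregated IoT payload, 5 PRBs
```

The aggregator collecting D2D data from a cluster would run this kind of selection per uplink grant, trading spectral efficiency against robustness and transmit power exactly as the paper's evaluation varies.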
DOE Office of Scientific and Technical Information (OSTI.GOV)
McParland, Charles
The Smart Grid envisions a transformed US power distribution grid that enables communicating devices, under human supervision, to moderate loads and increase overall system stability and security. This vision explicitly promotes increased participation from a community that, in the past, has had little involvement in power grid operations - the consumer. The potential size of this new community and its members' extensive experience with the public Internet prompt an analysis of the evolution and current state of the Internet as a predictor for best practices in the architectural design of certain portions of the Smart Grid network. Although still evolving, the vision of the Smart Grid is that of a community of communicating and cooperating energy-related devices that can be directed to route power and modulate loads in pursuit of an integrated, efficient and secure electrical power grid. The remaking of the present power grid into the Smart Grid is considered as fundamentally transformative as previous developments such as modern computing technology and high-bandwidth data communications. However, unlike these earlier developments, which relied on the discovery of critical new technologies (e.g. the transistor or optical fiber transmission lines), the technologies required for the Smart Grid currently exist and, in many cases, are already widely deployed. In contrast to other examples of technical transformations, the path (and success) of the Smart Grid will be determined not by its technology, but by its system architecture. Fortunately, we have a recent example of a transformative force of similar scope that shares a fundamental dependence on our existing communications infrastructure - namely, the Internet. We will explore several ways in which the scale of the Internet and expectations of its users have shaped the present Internet environment.
As the presence of consumers within the Smart Grid increases, some experiences from the early growth of the Internet are expected to be informative and pertinent.
Damage Detection and Verification System (DDVS) for In-Situ Health Monitoring
NASA Technical Reports Server (NTRS)
Williams, Martha K.; Lewis, Mark; Szafran, J.; Shelton, C.; Ludwig, L.; Gibson, T.; Lane, J.; Trautwein, T.
2015-01-01
Project presentation for Game Changing Program Smart Book Release. The Damage Detection and Verification System (DDVS) expands the Flat Surface Damage Detection System (FSDDS) sensory panels' damage detection capabilities and includes an autonomous inspection capability utilizing cameras and dynamic computer vision algorithms to verify system health. The objectives of this formulation task are to establish the concept of operations, formulate the system requirements for a potential ISS flight experiment, and develop a preliminary design of an autonomous inspection capability system that will be demonstrated as a proof-of-concept ground-based damage detection and inspection system.
Smartphone based point-of-care detector of urine albumin
NASA Astrophysics Data System (ADS)
Cmiel, Vratislav; Svoboda, Ondrej; Koscova, Pavlina; Provaznik, Ivo
2016-03-01
Albumin plays an important role in the human body. Its changed level in urine may indicate serious kidney disorders. We present a new point-of-care solution for sensitive detection of urine albumin: a miniature optical adapter for the iPhone with built-in optical filters and a sample slot. The adapter exploits the smartphone flash to generate excitation light and the camera to measure the level of emitted light. Albumin Blue 580 is used as the albumin reagent. The proposed lightweight adapter can be produced at low cost using a 3D printer. Thus, the miniaturized detector is easy to use out of the lab.
Human recognition in a video network
NASA Astrophysics Data System (ADS)
Bhanu, Bir
2009-10-01
Video networking is an emerging interdisciplinary field with significant and exciting scientific and technological challenges. It has great promise in solving many real-world problems and enabling a broad range of applications, including smart homes, video surveillance, environment and traffic monitoring, elderly care, intelligent environments, and entertainment in public and private spaces. This paper provides an overview of the design of a wireless video network as an experimental environment, camera selection, hand-off and control, and anomaly detection. It addresses challenging questions for individual identification using gait and face at a distance and presents new techniques and their comparison for robust identification.
Apollo 16 lunar module 'Orion' photographed from distance during EVA
NASA Technical Reports Server (NTRS)
1972-01-01
The Apollo 16 Lunar Module 'Orion' is photographed from a distance by Astronaut Charles M. Duke Jr., lunar module pilot, aboard the moving Lunar Roving Vehicle. Astronauts Duke and John W. Young, commander, were returning from the third Apollo 16 extravehicular activity (EVA-3). The RCA color television camera mounted on the LRV is in the foreground. A portion of the LRV's high-gain antenna is at top left.
A Smart Power Electronic Multiconverter for the Residential Sector.
Guerrero-Martinez, Miguel Angel; Milanes-Montero, Maria Isabel; Barrero-Gonzalez, Fermin; Miñambres-Marcos, Victor Manuel; Romero-Cadaval, Enrique; Gonzalez-Romera, Eva
2017-05-26
The future of the grid includes distributed generation and smart grid technologies. Demand Side Management (DSM) systems will also be essential to achieve a high level of reliability and robustness in power systems. To do that, expanding the Advanced Metering Infrastructure (AMI) and Energy Management Systems (EMS) is necessary. The trend is towards the creation of energy resource hubs, such as the smart community concept. This paper presents a smart multiconverter system for the residential/housing sector with a Hybrid Energy Storage System (HESS) consisting of a supercapacitor and a battery, and with local photovoltaic (PV) energy source integration. The device works as a distributed energy unit located in each house of the community, receiving active power set-points provided by a smart community EMS. This central EMS is responsible for managing the active energy flows between the electricity grid, renewable energy sources, storage equipment, and loads existing in the community. The proposed multiconverter is responsible for complying with the reference active power set-points with proper power quality; guaranteeing that the local PV modules operate with a Maximum Power Point Tracking (MPPT) algorithm; and extending the lifetime of the battery thanks to a cooperative operation of the HESS. A simulation model has been developed in order to show the detailed operation of the system. Finally, a prototype of the multiconverter platform has been implemented and some experimental tests have been carried out to validate it.
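The abstract does not specify which MPPT algorithm the multiconverter runs on the local PV modules; a common choice, perturb and observe, can be sketched on a toy power curve. The quadratic PV curve and the fixed perturbation step below are invented for illustration:

```python
def p_and_o_step(v, p, prev_v, prev_p, step=0.1):
    """One perturb-and-observe MPPT iteration: keep moving the voltage
    reference in the direction that increased PV power, else reverse."""
    if (p - prev_p) * (v - prev_v) >= 0:
        return v + step
    return v - step

def pv_power(v):
    """Toy PV power curve with its maximum power point at v = 3.0."""
    return -(v - 3.0) ** 2 + 9.0

v_prev, v = 2.0, 2.1
for _ in range(100):
    v_next = p_and_o_step(v, pv_power(v), v_prev, pv_power(v_prev))
    v_prev, v = v, v_next
```

After the transient, the operating voltage oscillates within one step of the maximum power point; the steady-state ripple is the usual cost of a fixed-step perturb-and-observe controller.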
3D Printed "Earable" Smart Devices for Real-Time Detection of Core Body Temperature.
Ota, Hiroki; Chao, Minghan; Gao, Yuji; Wu, Eric; Tai, Li-Chia; Chen, Kevin; Matsuoka, Yasutomo; Iwai, Kosuke; Fahad, Hossain M; Gao, Wei; Nyein, Hnin Yin Yin; Lin, Liwei; Javey, Ali
2017-07-28
Real-time detection of basic physiological parameters such as blood pressure and heart rate is an important target in wearable smart devices for healthcare. Among these, the core body temperature is one of the most important basic medical indicators of fever, insomnia, fatigue, metabolic functionality, and depression. However, traditional wearable temperature sensors are based upon the measurement of skin temperature, which can vary dramatically from the true core body temperature. Here, we demonstrate a three-dimensional (3D) printed wearable "earable" smart device that is designed to be worn on the ear to track core body temperature from the tympanic membrane (i.e., ear drum) based on an infrared sensor. The device is fully integrated with data processing circuits and a wireless module for standalone functionality. Using this smart earable device, we demonstrate that the core body temperature can be accurately monitored regardless of the environment and activity of the user. In addition, a microphone and actuator are also integrated so that the device can also function as a bone conduction hearing aid. Using 3D printing as the fabrication method enables the device to be customized for the wearer for more personalized healthcare. This smart device provides an important advance in realizing personalized health care by enabling real-time monitoring of one of the most important medical parameters, core body temperature, employed in preliminary medical screening tests.
Implementation and performance of shutterless uncooled micro-bolometer cameras
NASA Astrophysics Data System (ADS)
Das, J.; de Gaspari, D.; Cornet, P.; Deroo, P.; Vermeiren, J.; Merken, P.
2015-06-01
A shutterless algorithm is implemented in the Xenics LWIR thermal cameras and modules. Based on a calibration set and a global temperature coefficient, the optimal non-uniformity correction is calculated on board the camera. The limited resources in the camera require a compact algorithm, hence the efficiency of the coding is important. The performance of the shutterless algorithm is studied by comparing the residual non-uniformity (RNU) and signal-to-noise ratio (SNR) between the shutterless and shuttered correction algorithms. From this comparison we conclude that the shutterless correction performs only slightly worse than the standard shuttered algorithm, making it very attractive for thermal infrared applications where small weight and size and continuous operation are important.
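The per-pixel form of such a correction can be sketched as follows. This is an assumed generic two-point non-uniformity correction with a temperature-drift term, not Xenics' proprietary algorithm; all names and values are illustrative:

```python
# Sketch of a shutterless non-uniformity correction: per-pixel offsets from
# a factory calibration set are shifted by a global temperature coefficient
# instead of being re-measured against a closed shutter.

def shutterless_nuc(raw, gain, offset_cal, t_fpa, t_cal, k_global):
    """Correct one pixel.

    raw        -- raw pixel value
    gain       -- per-pixel gain from the calibration set
    offset_cal -- per-pixel offset measured at calibration temperature t_cal
    t_fpa      -- current focal-plane-array temperature
    k_global   -- global temperature coefficient (offset drift per kelvin)
    """
    offset = offset_cal + k_global * (t_fpa - t_cal)  # drift-compensated offset
    return gain * raw - offset

# Example with hypothetical numbers: FPA running 5 K above calibration.
corrected = shutterless_nuc(raw=1000.0, gain=1.02, offset_cal=20.0,
                            t_fpa=35.0, t_cal=30.0, k_global=0.5)
```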
NASA Technical Reports Server (NTRS)
Katzberg, S. J.; Kelly, W. L., IV; Rowland, C. W.; Burcher, E. E.
1973-01-01
The facsimile camera is an optical-mechanical scanning device which has become an attractive candidate as an imaging system for planetary landers and rovers. This paper presents electronic techniques which permit the acquisition and reconstruction of high quality images with this device, even under varying lighting conditions. These techniques include a control for low frequency noise and drift, an automatic gain control, a pulse-duration light modulation scheme, and a relative spectral gain control. Taken together, these techniques allow the reconstruction of radiometrically accurate and properly balanced color images from facsimile camera video data. These techniques have been incorporated into a facsimile camera and reproduction system, and experimental results are presented for each technique and for the complete system.
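The pulse-duration light modulation scheme mentioned above can be illustrated with a toy mapping (illustrative only; the paper's actual circuit parameters and line timing are not reproduced here): the video level is encoded as the on-time of a fixed-amplitude light pulse, which makes reconstruction insensitive to amplitude drift.

```python
# Toy pulse-duration modulation: intensity is carried by pulse width, not
# pulse amplitude. The period and full-scale values are hypothetical.

def level_to_pulse_us(level, full_scale=255, period_us=100.0):
    """Map a video level to a pulse duration within one line period."""
    return period_us * (level / full_scale)

def pulse_to_level(duration_us, full_scale=255, period_us=100.0):
    """Inverse mapping used at reconstruction."""
    return full_scale * (duration_us / period_us)

level = 128
roundtrip = pulse_to_level(level_to_pulse_us(level))  # recovers the level
```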
Absolute colorimetric characterization of a DSLR camera
NASA Astrophysics Data System (ADS)
Guarnera, Giuseppe Claudio; Bianco, Simone; Schettini, Raimondo
2014-03-01
A simple but effective technique for absolute colorimetric camera characterization is proposed. It offers a large dynamic range while requiring just a single, off-the-shelf target and a commonly available controllable light source for the characterization. The characterization task is broken down into two modules, respectively devoted to absolute luminance estimation and to colorimetric characterization matrix estimation. The characterized camera can be effectively used as a tele-colorimeter, giving an absolute estimation of the XYZ data in cd/m². The user is only required to vary the f-number of the camera lens or the exposure time t to better exploit the sensor dynamic range. The estimated absolute tristimulus values closely match the values measured by a professional spectro-radiometer.
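The second module, estimating a characterization matrix, is commonly done by least squares over the target patches. A sketch of that generic approach (the paper's exact optimization may differ; the synthetic matrix and patch count below are assumptions):

```python
# Fit a 3x3 RGB -> XYZ characterization matrix from measured patches.
import numpy as np

def fit_characterization_matrix(rgb, xyz):
    """rgb, xyz -- (N, 3) arrays of camera responses and measured tristimulus
    values for the same patches. Returns M such that xyz ~= rgb @ M.T"""
    M, *_ = np.linalg.lstsq(rgb, xyz, rcond=None)
    return M.T

# Synthetic check: recover a known matrix exactly from noiseless data.
true_M = np.array([[0.4, 0.3, 0.2],
                   [0.2, 0.7, 0.1],
                   [0.0, 0.1, 0.9]])
rgb = np.random.default_rng(0).random((24, 3))  # 24 patches, e.g. a ColorChecker
xyz = rgb @ true_M.T
M = fit_characterization_matrix(rgb, xyz)
```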
Astronaut Ronald Evans photographed during transearth coast EVA
1972-12-17
AS17-152-23391 (17 Dec. 1972) --- Astronaut Ronald E. Evans is photographed performing extravehicular activity during the Apollo 17 spacecraft's trans-Earth coast. During his EVA, Evans, command module pilot, retrieved film cassettes from the lunar sounder, mapping camera and panoramic camera. The cylindrical object at Evans' left side is the mapping camera cassette. The total time for the trans-Earth EVA was one hour, seven minutes, 18 seconds, starting at ground elapsed time of 257:25 (2:28 p.m.) and ending at G.E.T. of 258:42 (3:35 p.m.) on Sunday, Dec. 17, 1972.
Astronaut Ronald Evans photographed during transearth coast EVA
1972-12-17
AS17-152-23393 (17 Dec. 1972) --- Astronaut Ronald E. Evans is photographed performing extravehicular activity during the Apollo 17 spacecraft's trans-Earth coast. During his EVA, command module pilot Evans retrieved film cassettes from the Lunar Sounder, Mapping Camera, and Panoramic Camera. The cylindrical object at Evans' left side is the Mapping Camera cassette. The total time for the trans-Earth EVA was one hour, seven minutes, 18 seconds, starting at ground elapsed time of 257:25 (2:28 p.m.) and ending at ground elapsed time of 258:42 (3:35 p.m.) on Sunday, Dec. 17, 1972.
Modeling of digital information optical encryption system with spatially incoherent illumination
NASA Astrophysics Data System (ADS)
Bondareva, Alyona P.; Cheremkhin, Pavel A.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Rostislav S.; Starikov, Sergey N.
2015-10-01
State-of-the-art micromirror DMD spatial light modulators (SLMs) offer unprecedented frame rates of up to 30,000 frames per second. This, in conjunction with a high-speed digital camera, should allow building a high-speed optical encryption system. Results of modeling a digital information optical encryption system with spatially incoherent illumination are presented. Input information is displayed with the first SLM, and the encryption element with the second SLM. Factors taken into account are: resolution of the SLMs and camera, hologram reconstruction noise, camera noise, and signal sampling. Results of numerical simulation demonstrate high speed (several gigabytes per second), low bit error rate, and high crypto-strength.
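A spatially incoherent system of this kind is linear in intensity, so the core of such a model is a convolution of the displayed data page with the encryption element's point spread function. A minimal sketch under that standard assumption (the paper's noise model and sampling steps are omitted):

```python
# Incoherent optical encryption modeled as circular convolution with a key
# PSF; decryption is inverse filtering with the known key (noise-free model).
import numpy as np

def encrypt(data_page, key_psf):
    """Circular convolution of the input with the encryption PSF via FFT."""
    return np.real(np.fft.ifft2(np.fft.fft2(data_page) * np.fft.fft2(key_psf)))

def decrypt(cipher, key_psf, eps=1e-12):
    """Inverse filtering with the known key; eps guards near-zero bins."""
    K = np.fft.fft2(key_psf)
    return np.real(np.fft.ifft2(np.fft.fft2(cipher) / (K + eps)))

rng = np.random.default_rng(1)
page = (rng.random((32, 32)) > 0.5).astype(float)  # binary data page
key = rng.random((32, 32))                         # random-intensity key
recovered = decrypt(encrypt(page, key), key)       # matches the input page
```

In a real system the camera noise and sampling listed in the abstract limit how well this inversion works, which is what drives the reported bit error rate.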
Chavez-Burbano, Patricia; Rabadan, Jose; Perez-Jimenez, Rafael
2017-01-01
Due to the massive insertion of embedded cameras in a wide variety of devices and the generalized use of LED lamps, Optical Camera Communication (OCC) has been proposed as a practical solution for future Internet of Things (IoT) and smart cities applications. The influence of mobility, weather conditions, solar radiation interference, and external light sources over Visible Light Communication (VLC) schemes has been addressed in previous works. Some authors have studied the spatial intersymbol interference from close emitters within an OCC system; however, it has not been characterized or measured as a function of the different transmitted wavelengths. In this work, this interference has been experimentally characterized, and the Normalized Power Signal to Interference Ratio (NPSIR) has also been proposed for easily determining the interference in other implementations, independently of the selected system devices. A set of experiments in a darkroom, working with RGB multi-LED transmitters and a general-purpose camera, was performed in order to obtain the NPSIR values and to validate the deduced equations for the 2D pixel representation of real distances. These parameters were used in the simulation of a wireless sensor network scenario in a small office, where the Bit Error Rate (BER) of the communication link was calculated. The experiments show that the interference of other close emitters in terms of the distance and the used wavelength can be easily determined with the NPSIR. Finally, the simulation validates the applicability of the deduced equations for scaling the initial results into real scenarios. PMID:28677613
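The use of such a ratio can be illustrated with a toy calculation. This is a hypothetical sketch of a generic signal-to-interference ratio in dB, not the paper's exact NPSIR normalization, and the power values are invented:

```python
# Express interference from a neighbouring emitter relative to the desired
# signal power, in decibels.
import math

def sir_db(p_signal, p_interference):
    """Signal-to-interference ratio in dB from per-pixel mean powers."""
    return 10.0 * math.log10(p_signal / p_interference)

# A close emitter contributing 1% of the desired optical power.
ratio = sir_db(1.0, 0.01)  # 20 dB
```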
Mobile phone based mini-spectrometer for rapid screening of skin cancer
NASA Astrophysics Data System (ADS)
Das, Anshuman; Swedish, Tristan; Wahi, Akshat; Moufarrej, Mira; Noland, Marie; Gurry, Thomas; Aranda-Michel, Edgar; Aksel, Deniz; Wagh, Sneha; Sadashivaiah, Vijay; Zhang, Xu; Raskar, Ramesh
2015-06-01
We demonstrate a highly sensitive mobile-phone-based spectrometer that has the potential to detect cancerous skin lesions in a rapid, non-invasive manner. Earlier reports of low-cost spectrometers utilize the camera of the mobile phone to image the field after it passes through a diffraction grating. These approaches are inherently limited by the closed nature of mobile phone image sensors and built-in optical elements. The system presented uses a novel integrated grating and sensor that is compact, accurate and calibrated. Resolutions of about 10 nm can be achieved. Additionally, UV and visible LED excitation sources are built into the device. Data collection and analysis are simplified using the wireless interfaces and logical control on the smartphone. Furthermore, by utilizing an external sensor, the mobile phone camera can be used in conjunction with spectral measurements. We are exploring ways to use this device to measure the endogenous fluorescence of skin in order to distinguish cancerous from non-cancerous lesions with a mobile-phone-based dermatoscope.
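Any spectrometer of this kind needs a pixel-to-wavelength calibration. A minimal sketch assuming a linear dispersion relation fixed by known reference lines (a real device may use a higher-order polynomial; the pixel positions and wavelengths below are hypothetical):

```python
# Least-squares line wl = a*px + b through reference emission lines.

def fit_linear(px, wl):
    """Fit wavelength as a linear function of pixel index."""
    n = len(px)
    mx = sum(px) / n
    my = sum(wl) / n
    a = sum((x - mx) * (y - my) for x, y in zip(px, wl)) / \
        sum((x - mx) ** 2 for x in px)
    return a, my - a * mx

# Two reference LEDs seen at known pixel positions (hypothetical values).
a, b = fit_linear([120, 620], [450.0, 650.0])
wl_at_370 = a * 370 + b  # wavelength assigned to pixel 370
```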
VizieR Online Data Catalog: HST FGS-1r parallaxes for 8 metal-poor stars (Chaboyer+, 2017)
NASA Astrophysics Data System (ADS)
Chaboyer, B.; McArthur, B. E.; O'Malley, E.; Benedict, G. F.; Feiden, G. A.; Harrison, T. E.; McWilliam, A.; Nelan, E. P.; Patterson, R. J.; Sarajedini, A.
2017-08-01
Each program star was observed with the HST Advanced Camera for Surveys Wide Field Camera (ACS/WFC) in the F606W and F814W filters. The CTE-corrected ACS/WFC images for the program stars were retrieved from MAST. These instrumental magnitudes were corrected for exposure time, matched to form colors, and calibrated to the VEGAMag and ground-based VI systems using the Sirianni+ (2005PASP..117.1049S) photometric transformations. Ground-based photometry for all of our program stars was obtained using the New Mexico State University (NMSU) 1m telescope, the MDM 1.3m telescope, and the SMARTS 0.9m telescope. See appendix A1 for further details. We used HST FGS-1r, a two-axis interferometer, to make the astrometric observations. Eighty-nine orbits of HST astrometric observations were made between 2008 December and 2013 June. Every orbit contained several observations of the target and surrounding reference stars. (4 data files).
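The exposure-time correction and zero-point calibration steps follow standard photometric relations, sketched below. These are the generic formulas only; the actual Sirianni et al. transformations also include colour terms, and the zero point used here is a made-up number:

```python
# Instrumental magnitude from counts, then a filter zero point.
import math

def instrumental_mag(counts, exptime):
    """Exposure-time-corrected instrumental magnitude: -2.5 log10(counts/s)."""
    return -2.5 * math.log10(counts / exptime)

def calibrated_mag(m_inst, zeropoint):
    """Apply a filter zero point (e.g. VEGAMag) to an instrumental magnitude."""
    return m_inst + zeropoint

m = instrumental_mag(100000.0, 100.0)  # 1000 counts/s
m_cal = calibrated_mag(m, 26.0)        # hypothetical zero point
```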
Image quality analysis of a color LCD as well as a monochrome LCD using a Foveon color CMOS camera
NASA Astrophysics Data System (ADS)
Dallas, William J.; Roehrig, Hans; Krupinski, Elizabeth A.
2007-09-01
We have combined a CMOS color camera with special software to compose a multi-functional image-quality analysis instrument. It functions as a colorimeter as well as measuring modulation transfer functions (MTF) and noise power spectra (NPS). It is presently being expanded to examine fixed-pattern noise and temporal noise. The CMOS camera has 9 μm square pixels and a pixel matrix of 2268 x 1512 x 3. The camera uses a sensor that has co-located pixels for all three primary colors. We have imaged sections of both a color and a monochrome LCD monitor onto the camera sensor with LCD-pixel-size to camera-pixel-size ratios of both 12:1 and 17.6:1. When used as an imaging colorimeter, each camera pixel is calibrated to provide CIE color coordinates and tristimulus values. This capability permits the camera to simultaneously determine chromaticity in different locations on the LCD display. After the color calibration with a CS-200 colorimeter, the color coordinates of the display's primaries determined from the camera's luminance response are very close to those found from the CS-200. Only the color coordinates of the display's white point were in error. For calculating the MTF, a vertical or horizontal line is displayed on the monitor. The captured image is color-matrix preprocessed, Fourier transformed, then post-processed. For the NPS, a uniform image is displayed on the monitor. Again, the image is pre-processed, transformed and processed. Our measurements show that the horizontal MTFs of both displays have a larger negative slope than the vertical MTFs, indicating that the horizontal MTFs are poorer than the vertical MTFs. However, the modulations at the Nyquist frequency seem lower for the color LCD than for the monochrome LCD. The spatial noise of the color display in both directions is larger than that of the monochrome display. Attempts were also made to analyze the total noise in terms of spatial and temporal noise by subtracting images taken at exactly the same exposure. Temporal noise seems to be significantly lower than spatial noise.
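The MTF computation path described above can be sketched generically: the displayed line gives a line spread function (LSF) whose Fourier magnitude, normalised at zero frequency, is the MTF along that axis. The Gaussian LSFs below are synthetic stand-ins, not measured display data:

```python
# MTF from a 1-D line spread function via FFT magnitude.
import numpy as np

def mtf_from_lsf(lsf):
    """Normalised MTF from a 1-D line spread function."""
    spectrum = np.abs(np.fft.rfft(lsf))
    return spectrum / spectrum[0]

# Synthetic Gaussian LSFs: a broader spread gives a faster MTF roll-off,
# i.e. a poorer display, which is how the two axes are compared.
x = np.arange(-32, 32)
narrow = mtf_from_lsf(np.exp(-x**2 / (2 * 1.5**2)))
broad = mtf_from_lsf(np.exp(-x**2 / (2 * 4.0**2)))
```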
NASA Astrophysics Data System (ADS)
Zuhrie, M. S.; Basuki, I.; Asto, B. I. G. P.; Anifah, L.
2018-04-01
The development of robotics in Indonesia has been very encouraging; the barometer is the success of the Indonesian Robot Contest. The focus of this research is teaching-module manufacturing, mechanical design planning, control systems based on microprocessor technology, and robot maneuverability. Contextual Teaching and Learning (CTL) is a learning strategy in which the teacher brings the real world into the classroom and encourages students to make connections between the knowledge they possess and its application in everyday life. The development model used in this research is the 4-D model, which consists of four stages: Define, Design, Develop, and Disseminate. The research applied a development research design with the aim of producing a learning tool in the form of smart educational robot modules and kits based on Contextual Teaching and Learning at the Department of Electrical Engineering, to improve the skills of electrical engineering students. Socialization questionnaires showed that the competencies of electrical engineering students are currently limited to conventional machines. The average validator assessment is 3.34, which falls in the good category. The modules developed give hope of producing an intelligent robot tool for teaching in the future.
Voss unpacks stowage bags in Destiny module
2001-05-03
ISS002-E-5246 (03 May 2001) --- Astronaut James S. Voss (left), Expedition Two flight engineer, unpacks a stowage bag while cosmonaut Yury V. Usachev, Expedition Two mission commander, takes notes in the U.S. Laboratory / Destiny module of the International Space Station (ISS). This image was recorded with a digital still camera.
STS-40 Payload Specialist Hughes-Fulford "flies" through SLS-1 module
1991-06-14
STS040-212-006 (5-14 June 1991) --- Payload specialist Millie Hughes-Fulford floats through the Spacelab Life Sciences (SLS-1) module aboard the Earth-orbiting Columbia. Astronaut James P. Bagian, mission specialist, is at the blood draw station in the background. The scene was photographed with a 35mm camera.
Tyurin and Voss perform maintenance on the TVIS treadmill in the Service Module
2001-08-19
ISS003-E-5200 (19 August 2001) --- Cosmonaut Mikhail Tyurin (left), Expedition Three flight engineer representing Rosaviakosmos, and astronaut James S. Voss, Expedition Two flight engineer, perform maintenance in the Zvezda Service Module on the International Space Station (ISS). This image was taken with a digital still camera.
Cabana, Newman and Ross in the Node 1/Unity module
1998-12-10
S88-E-5124 (12-11-98) --- From the left, astronauts Robert D. Cabana, Jerry L. Ross and James H. Newman are pictured during work to ready the Unity connecting module for its ISS role. The photo was taken with an electronic still camera (ESC) at 00:23:27 GMT, Dec. 11.
Voss with soldering tool in Service Module
2001-03-28
ISS002-E-5068 (28 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, prepares to use a soldering tool for a maintenance task in the Zvezda Service Module onboard the International Space Station (ISS). Astronaut Susan J. Helms, flight engineer, is in the background. The image was recorded with a digital still camera.
Expedition Two crew eat a meal in the Service Module
2001-04-12
ISS002-E-5339 (12 April 2001) --- The Expedition Two crewmembers -- astronaut Susan J. Helms (left), cosmonaut Yury V. Usachev and astronaut James S. Voss -- share a meal at the table in the Zvezda / Service Module of the International Space Station (ISS). This image was recorded with a digital still camera.
Helms with laptop in Destiny laboratory module
2001-03-30
ISS002-E-5478 (30 March 2001) --- Astronaut Susan J. Helms, Expedition Two flight engineer, works at a laptop computer in the U.S. Laboratory / Destiny module of the International Space Station (ISS). The Space Station Remote Manipulator System (SSRMS) control panel is visible to Helms' right. This image was recorded with a digital still camera.
Teleoperated control system for underground room and pillar mining
Mayercheck, William D.; Kwitowski, August J.; Brautigam, Albert L.; Mueller, Brian K.
1992-01-01
A teleoperated mining system is provided for remotely controlling the various machines involved in thin-seam mining. A thin-seam continuous miner located at a mining face includes a camera mounted thereon and a slave computer for controlling the miner and the camera. A plurality of sensors relay information about the miner and the face to the slave computer. A slave-computer-controlled ventilation sub-system removes combustible material from the mining face. A haulage sub-system, also controlled by the slave computer, removes material mined by the continuous miner from the mining face to a collection site. A base station, which controls the supply of power and water to the continuous miner, haulage and ventilation sub-systems, includes a cable/hose handling module for winding or unwinding cables and hoses connected to the miner, an operator control module, and a hydraulic power and air compressor module for supplying air to the miner. An operator-controlled host computer housed in the operator control module is connected to the slave computer via a two-wire communications line.
Feng, Wei; Zhang, Fumin; Qu, Xinghua; Zheng, Shiwei
2016-01-01
High-speed photography is an important tool for studying rapid physical phenomena. However, low-frame-rate CCD (charge-coupled device) or CMOS (complementary metal-oxide-semiconductor) cameras cannot effectively capture such phenomena with both high speed and high resolution. In this paper, we take into account the hardware restrictions of existing image sensors, design the sampling functions, and implement a hardware prototype with a digital micromirror device (DMD) camera in which spatial and temporal information can be flexibly modulated. Combined with the optical model of the DMD camera, we theoretically analyze the per-pixel coded exposure and propose a three-element median quicksort method to increase the temporal resolution of the imaging system. Theoretically, this approach can rapidly increase the temporal resolution several, or even hundreds, of times without increasing the bandwidth requirements of the camera. We demonstrate the effectiveness of our method via extensive examples and achieve a 100 fps (frames per second) gain in temporal resolution by using a 25 fps camera. PMID:26959023
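The "three-element median" primitive can be sketched in isolation (the paper applies it inside the per-pixel coded-exposure reconstruction, which is not reproduced here; this is only an illustration of the branch-based median step):

```python
# Median of three values using two comparisons and one max, without
# sorting the full triple.

def median3(a, b, c):
    if a > b:
        a, b = b, a   # ensure a <= b
    if b > c:
        b = c         # b becomes min(b, c)
    return max(a, b)  # the survivor in the middle

m = median3(7, 2, 5)  # 5
```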
Advanced illumination control algorithm for medical endoscopy applications
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.
2015-05-01
CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules to the world market for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination power. The LED illumination power has to be dynamically adjusted while navigating the endoscope over illumination conditions changing by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic surgical robotics or micro-invasive surgery.
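The saturation-driven control idea can be sketched with an assumed proportional rule (the VHDL core's exact transfer function is not given in the abstract; the threshold, target fraction, and gain below are invented):

```python
# One control step: LED drive is stepped down when too many ROI pixels
# saturate, and up when the ROI is dark.

def adjust_led(level, roi, sat_threshold=250, target_frac=0.03, gain=0.5):
    """Return the new LED drive level (0..255) for one frame.

    roi -- iterable of 8-bit pixel values from the selected region.
    """
    pixels = list(roi)
    sat_frac = sum(p >= sat_threshold for p in pixels) / len(pixels)
    error = target_frac - sat_frac           # positive -> scene too dark
    new_level = level + gain * 255 * error   # proportional correction
    return max(0, min(255, new_level))

# A heavily saturated ROI (half the pixels clipped) pulls the drive down.
new = adjust_led(200, [255] * 50 + [100] * 50)
```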
Compact and smart laser diode systems for cancer treatment
NASA Astrophysics Data System (ADS)
Svirin, Viatcheslav N.; Sokolov, Victor V.; Solovieva, Tatiana I.
2003-04-01
Defeating cancer is one of the most important tasks facing mankind in the third millennium. The new technology of treatment recognizes and kills cancer cells with laser light, not by surgical operation but by soft, painless therapy. Although this technology, so-called photodynamic therapy (PDT), has gained acceptance in America, Europe and Asia since the beginning of the 1980s, it is still considered in medical circles to be a new method with little-known approaches to cancer treatment. Recently the next step was taken, and a unique method combining PDT with laser-induced thermotherapy (LITT) was developed. The compact and smart diode laser apparatus "Modul-GF" was designed for its realization. This report discusses the concept of the method, experimental materials from clinical trials, and ways of optimizing the technical design and software of the "Modul-GF" apparatus, including auto-tuning of the laser power depending on tissue temperature measured with thermosensors. Special instruments such as fiber cables and dedicated sensors are described that permit application of "Modul-GF" to the treatment of tumors of different localizations, both superficial and deeply located, using endoscopic methods. Examples of oncological and non-oncological pathologies treated by the developed method and apparatus in urology, gynecology, gastroenterology, dermatology, cosmetology, bronchology and pulmonology are presented. The results of clinical approval show that the developed combination of PDT and LITT realized with "Modul-GF" substantially increases treatment effectiveness.
Programmable 10 MHz optical fiducial system for hydrodiagnostic cameras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Huen, T.
1987-07-01
A solid-state light control system was designed and fabricated for use with hydrodiagnostic streak cameras of the electro-optic type. With its use, the film containing the streak images will have on it two time scales simultaneously exposed with the signal. This allows timing and cross-timing; the latter is achieved with exposure-modulation marking onto the time tick marks. The purpose of using two time scales is discussed. The design is based on a microcomputer, resulting in a compact and easy-to-use instrument. The light source is a small red light-emitting diode. Time marking can be programmed in steps of 0.1 microseconds, with a range of 255 steps. The time accuracy is based on a precision 100 MHz quartz crystal, giving a divided-down 10 MHz system frequency. The light is guided by two small 100-micron-diameter optical fibers, which facilitates light coupling onto the input slit of an electro-optic streak camera. Three distinct groups of exposure modulation of the time tick marks can be independently set anywhere onto the streak duration. This system has been successfully used in Fabry-Perot laser velocimeters for over four years in our laboratory. The microcomputer control section is also being used to provide optical fiducials to mechanical rotor cameras.
NASA Astrophysics Data System (ADS)
Pospisil, J.; Jakubik, P.; Machala, L.
2005-11-01
This article reports the suggestion, realization and verification of a newly developed means of measuring the noiseless and locally shift-invariant modulation transfer function (MTF) of a digital video camera in the usual incoherent visible region of optical intensity, especially of its combined imaging, detection, sampling and digitizing steps, which are influenced by the additive and spatially discrete photodetector, aliasing and quantization noises. The method relates to the still-camera automatic working regime and a static two-dimensional spatially continuous light-reflection random target of white-noise property. The theoretical basis for this random-target method is also presented, exploiting the proposed simulation model of the linear optical intensity response and the possibility of expressing the resultant MTF as a normalized and smoothed ratio of the ascertainable output and input power spectral densities. The random-target and resultant image data were obtained and processed by means of a PC with computation programs developed on the basis of MATLAB 6.5. The presented examples and other obtained results of the performed measurements demonstrate the sufficient repeatability and acceptability of the described method for comparative evaluations of the performance of digital video cameras under various conditions.
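The random-target relation can be sketched in its generic form: for a white-noise target, the MTF is a normalised square-root ratio of output to input power spectral densities. The PSD arrays below are synthetic stand-ins for measured data:

```python
# MTF estimate from output/input power spectral densities of a
# white-noise random target.
import numpy as np

def mtf_random_target(psd_out, psd_in):
    """MTF = sqrt(PSD_out / PSD_in), normalised to unity at zero frequency."""
    mtf = np.sqrt(psd_out / psd_in)
    return mtf / mtf[0]

# White-noise input (flat PSD) through a system that halves the power at
# the highest frequency gives an MTF of 1/sqrt(2) there.
psd_in = np.ones(8)
psd_out = np.array([1.0, 1.0, 0.9, 0.8, 0.7, 0.6, 0.55, 0.5])
mtf = mtf_random_target(psd_out, psd_in)
```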
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xiao, Xiudi; Zhang, Hua; Chai, Guanqi
2014-03-01
Graphical abstract: Combining codeposition and short-time post annealing, VO{sub 2} (M) with high quality and excellent phase transition performance is obtained. After mixing the VO{sub 2} powder with acrylic resin, the composite films deposited on glass show superior visible transmission and solar modulation, which makes them an excellent candidate for low-cost smart windows in the energy saving field. - Highlights: • The VO{sub 2} powder obtained by the short-time thermolysis method has high purity and crystallinity with superior phase transition performance. • The maximum decreasing efficiency of the phase transition temperature is about −30 K/at% with w = 0.4 at%. • After mixing VO{sub 2} powder with acrylic resin, the maximal visible transmission of the composite films is 48% and the transmission modulation at 2000 nm is 37.3%, with a phase transition temperature of 66.2 °C. • Though the phase transition performance is weakened by tungsten doping, the film prepared with 1.3 at% tungsten-doped VO{sub 2} still shows a superior transmission modulation of about 26.4%, which means that it is a potential candidate for smart windows. - Abstract: VO{sub 2} powder with superior phase transition performance was prepared by a convenient thermolysis method. The results illustrate that the VO{sub 2} powder shows high purity and crystallinity. VO{sub 2} particles transform from clusters to quasi-spheres with increasing annealing temperature. The DSC analysis proves that VO{sub 2} shows a superior phase transition around 68 °C. The phase transition temperature can be reduced to 33.5 °C by 1.8 at% tungsten doping. The maximum decreasing efficiency of the phase transition temperature is about −30 K/at% with w = 0.4 at%. After mixing VO{sub 2} powder with acrylic resin, the maximal visible transmission of the composite thin films on glass is 48% and the transmission modulation at 2000 nm is 37.3%, with a phase transition temperature of 66.2 °C. Though the phase transition performance is weakened by tungsten doping, the film prepared with 1.3 at% tungsten-doped VO{sub 2} still shows a superior transmission modulation of about 26.4% at 2000 nm, which means that it is a potential candidate for smart windows.
Microprocessor-controlled, wide-range streak camera
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amy E. Lewis, Craig Hollabaugh
Bechtel Nevada/NSTec recently announced deployment of their fifth-generation streak camera. This camera incorporates many advanced features beyond those currently available for streak cameras. The arc-resistant driver includes a trigger lockout mechanism, actively monitors input trigger levels, and incorporates a high-voltage fault interrupter for user safety and tube protection. The camera is completely modular and may deflect over a variable full-sweep time of 15 nanoseconds to 500 microseconds. The camera design is compatible with both large- and small-format commercial tubes from several vendors. The embedded microprocessor offers Ethernet connectivity and XML [extensible markup language]-based configuration management with non-volatile parameter storage using flash-based storage media. The camera’s user interface is platform-independent (Microsoft Windows, Unix, Linux, Macintosh OS X) and is accessible using an AJAX [asynchronous JavaScript and XML]-equipped modern browser, such as Internet Explorer 6, Firefox, or Safari. User interface operation requires no installation of client software or browser plug-in technology. Automation software can also access the camera configuration and control using HTTP [hypertext transfer protocol]. The software architecture supports multiple simultaneous clients, multiple cameras, and multiple-module access with a standard browser. The entire user interface can be customized.
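The HTTP-accessible, XML-based configuration described in the record can be illustrated with a short sketch. The endpoint URL and element names below are hypothetical (the abstract does not publish the camera's actual schema); only the access pattern, plain HTTP plus XML parsing, follows the description.

```python
import xml.etree.ElementTree as ET
from urllib.request import urlopen

def parse_sweep_config(xml_text):
    """Extract the full-sweep time (seconds) from an XML configuration blob.
    The element names here are illustrative, not the camera's real schema."""
    root = ET.fromstring(xml_text)
    node = root.find("sweep/full_sweep_time")
    return float(node.text)

def fetch_sweep_config(host):
    """Automation clients can read camera state over plain HTTP
    (hypothetical endpoint path)."""
    with urlopen(f"http://{host}/config.xml") as resp:
        return parse_sweep_config(resp.read().decode())

# Local example: the camera's minimum full-sweep time of 15 ns.
sample = "<camera><sweep><full_sweep_time>15e-9</full_sweep_time></sweep></camera>"
sweep = parse_sweep_config(sample)
```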
Camera on Vessel: A Camera-Based System to Measure Change in Water Volume in a Drinking Glass.
Ayoola, Idowu; Chen, Wei; Feijs, Loe
2015-09-18
A major problem related to chronic health is patients' "compliance" with new lifestyle changes, medical prescriptions, recommendations, or restrictions. Heart-failure and hemodialysis patients are usually placed on fluid restrictions due to their hemodynamic status. A holistic approach to managing fluid imbalance will incorporate the monitoring of salt-water intake, body-fluid retention, and fluid excretion in order to provide effective intervention at an early stage. Such an approach creates a need to develop a smart device that can monitor the drinking activities of the patient. This paper employs an empirical approach to infer the real water level in a conically shaped glass and the volume difference due to changes in water level. The method uses a low-resolution miniaturized camera to obtain images using an Arduino microcontroller. The images are processed in MATLAB. Conventional segmentation techniques (such as a Sobel filter to obtain a binary image) are applied to extract the level gradient, and an ellipsoidal fitting helps to estimate the size of the cup. The fitting (using a least-squares criterion) between derived measurements in pixels and the real measurements shows a low covariance between the estimated measurement and the mean. Comparing the estimated results to ground truth produced a variation of 3% from the mean.
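The geometric step implied by the abstract, turning a measured level change in a conical glass into a volume difference, can be sketched with the standard conical-frustum formula. The function names and example dimensions below are illustrative assumptions, not values taken from the paper.

```python
import math

def frustum_volume(r_bottom, r_top, height):
    """Volume of a conical frustum (truncated cone)."""
    return math.pi * height * (r_bottom**2 + r_bottom * r_top + r_top**2) / 3.0

def radius_at(level, r_bottom, r_top, glass_height):
    """Inner radius at a given water level, assuming a straight glass wall
    (radius varies linearly from bottom to rim)."""
    return r_bottom + (r_top - r_bottom) * level / glass_height

def volume_change(level_before, level_after, r_bottom, r_top, glass_height):
    """Change in water volume between two measured levels in a conical glass."""
    r1 = radius_at(level_before, r_bottom, r_top, glass_height)
    r2 = radius_at(level_after, r_bottom, r_top, glass_height)
    v1 = frustum_volume(r_bottom, r1, level_before)
    v2 = frustum_volume(r_bottom, r2, level_after)
    return v2 - v1

# Example: a glass 10 cm tall, 3 cm bottom radius, 4 cm rim radius;
# the level drops from 8 cm to 6 cm after a sip (negative = water removed).
delta = volume_change(8.0, 6.0, 3.0, 4.0, 10.0)  # cm^3
```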
Altered Perspectives: Immersive Environments
NASA Astrophysics Data System (ADS)
Shipman, J. S.; Webley, P. W.
2016-12-01
Immersive environments provide an exciting experiential technology to visualize the natural world. Given the increasing accessibility of 360° cameras and virtual reality headsets, we are now able to visualize artistic principles and scientific concepts in a fully immersive environment. The technology has become popular for photographers as well as designers, industry, educational groups, and museums. Here we show a sci-art perspective on the use of optics and light in the capture and manipulation of 360° images and video of geologic phenomena and cultural heritage sites in Alaska, England, and France. Additionally, we will generate intentionally altered perspectives to lend a surrealistic quality to the landscapes. Locations include the Catacombs of Paris, the Palace of Versailles, and the Northern Lights over Fairbanks, Alaska. Some 360° view cameras now use small portable dual-lens technology extending beyond the 180° fisheye lens previously used, providing better coverage and image quality. Virtual reality headsets range in level of sophistication and cost, with the most affordable versions using smartphones and Google Cardboard viewers. The equipment used in this presentation includes a Ricoh Theta S spherical imaging camera. Here we will demonstrate the use of 360° imaging, with attendees able to become part of the immersive environment and experience our locations as if they were visiting in person.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dudleson, B.; Arnold, M.; McCann, D.
Rapid detection of unexpected drilling events requires continuous monitoring of drilling parameters. A major R&D program by a drilling contractor has led to the introduction of a computerized monitoring system on its offshore rigs. The system includes advanced color graphics displays and new smart alarms to help both contractor and operator personnel detect and observe drilling events before they would normally be apparent with conventional rig instrumentation. This article describes a module of this monitoring system, which uses expert system technology to detect the earliest stages of drillstring washouts. Field results demonstrate the effectiveness of the smart alarm incorporated in the system. Early detection allows the driller to react before a twist-off results in expensive fishing operations.
A smart magnetic resonance contrast agent for selective copper sensing.
Que, Emily L; Chang, Christopher J
2006-12-20
We describe the synthesis and properties of Copper-Gad-1 (CG1), a new type of smart magnetic resonance (MR) sensor for selective detection of copper. CG1 is composed of a gadolinium contrast agent core tethered to a copper-selective recognition motif. Cu2+-induced modulation of inner-sphere water access to the Gd3+ center provides a sensing mechanism for reporting Cu2+ levels by reading out changes in longitudinal proton relaxivity values. CG1 features good selectivity for Cu2+ over abundant biological cations, shows a 41% increase in relaxivity upon Cu2+ binding, and is capable of detecting micromolar changes in Cu2+ concentrations in aqueous media.
Smart Metamaterial Based on the Simplex Tensegrity Pattern.
Al Sabouni-Zawadzka, Anna; Gilewski, Wojciech
2018-04-26
In the present paper, a novel cellular metamaterial based on a tensegrity pattern is presented. The material is constructed from supercells, each of which consists of eight 4-strut simplex modules. The proposed metamaterial exhibits some unusual properties, which are typical for smart structures. It is possible to control its mechanical characteristics by adjusting the level of self-stress or by changing the properties of structural members. A continuum model is used to identify the qualitative properties of the considered metamaterial, and to estimate how the applied self-stress and the characteristics of cables and struts affect the whole structure. The performed analyses proved that the proposed structure can be regarded as a smart metamaterial with orthotropic properties. One of its most important features is its unique Poisson's ratio values, which can be either positive or negative, depending on the applied control parameters. Moreover, all of the mechanical characteristics of the proposed metamaterial are amenable to structural control.
BIPV-powered smart windows utilizing photovoltaic and electrochromic devices.
Ma, Rong-Hua; Chen, Yu-Chia
2012-01-01
A BIPV-powered smart window comprising a building-integrated photovoltaic (BIPV) panel and an all-solid-state electrochromic (EC) stack is proposed. In the proposed device, the output voltage of the BIPV panel varies in accordance with the intensity of the incident light and is modulated in such a way as to generate the EC stack voltage required to maintain the indoor illuminance within a specified range. Two different EC stacks are fabricated and characterized, namely one stack comprising ITO/WO(3)/Ta(2)O(5)/ITO and one stack comprising ITO/WO(3)/lithium-polymer electrolyte/ITO. It is shown that of the two stacks, the ITO/WO(3)/lithium-polymer electrolyte/ITO stack has a larger absorptance (i.e., approximately 99% at a driving voltage of 3.5 V). The experimental results show that the smart window incorporating an ITO/WO(3)/lithium-polymer electrolyte/ITO stack with an electrolyte thickness of 1.0 μm provides an indoor illuminance range of 750-1,500 Lux under typical summertime conditions in Taiwan.
Li, Yamei; Ji, Shidong; Gao, Yanfeng; Luo, Hongjie; Kanehira, Minoru
2013-01-01
Vanadium dioxide (VO2) is a Mott phase transition compound that can be applied as a thermochromic smart material for energy saving and comfort, and titanium dioxide (TiO2) is a well-known photocatalyst for self-cleaning coatings. In this paper, we report a VO2@TiO2 core-shell structure, in which the VO2 nanorod core exhibits a remarkable modulation ability for solar infrared light, and the TiO2 anatase shell exhibits significant photocatalytic degradation of organic dye. In addition, the TiO2 overcoating not only increased the luminous transmittance of VO2 based on an antireflection effect, but also modified the intrinsic colour of VO2 films from yellow to light blue. The TiO2 also enhanced the chemical stability of VO2 against oxidation. This is the first report of such a single nanoparticle structure with both thermochromic and photocatalytic properties that offer significant potential for creating a multifunctional smart coating. PMID:23546301
Design of a mobile brain computer interface-based smart multimedia controller.
Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh
2015-03-06
Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback based on a user's EEG features may positively improve the user's attention state.
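The record does not specify which EEG feature the controller extracts; a common attention proxy in such systems is the beta/alpha band-power ratio, sketched below with NumPy. The sampling rate, band edges, and synthetic signals are assumptions for illustration only.

```python
import numpy as np

def band_power(signal, fs, f_lo, f_hi):
    """Average spectral power of `signal` in the band [f_lo, f_hi) Hz."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= f_lo) & (freqs < f_hi)
    return psd[mask].mean()

def attention_index(eeg, fs=256):
    """Beta/alpha band-power ratio: higher values are commonly read as a
    more attentive state (a simplification of real EEG pipelines)."""
    alpha = band_power(eeg, fs, 8.0, 13.0)
    beta = band_power(eeg, fs, 13.0, 30.0)
    return beta / alpha

# Synthetic check: a 10 Hz (alpha-dominant) trace vs. a 20 Hz (beta-dominant) one.
t = np.arange(0, 2.0, 1.0 / 256)
relaxed = np.sin(2 * np.pi * 10 * t)
focused = np.sin(2 * np.pi * 20 * t)
```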
Presence capture cameras - a new challenge to the image quality
NASA Astrophysics Data System (ADS)
Peltoketo, Veli-Tapani
2016-04-01
Commercial presence capture cameras are coming to market, and a new era of visual entertainment is starting to take shape. Since true presence capture is still a very new technology, the actual technical solutions have only just passed the prototyping phase and vary considerably. This work concentrates on the quality challenges of presence capture cameras. A camera system that can record 3D audio-visual reality as it is must have several camera modules, several microphones, and, especially, technology that can synchronize the output of several sources into a seamless and smooth virtual reality experience. Several traditional quality features are still valid in presence capture cameras. Features like color fidelity, noise removal, resolution, and dynamic range form the basis of virtual reality stream quality. However, the co-operation of several cameras adds a new dimension to these quality factors, and new quality features can also be validated. For example, how should the camera streams be stitched together into a 3D experience without noticeable errors, and how can the stitching be validated? The work describes the quality factors that remain valid in presence capture cameras and establishes their importance. Moreover, new challenges of presence capture cameras are investigated from an image and video quality point of view. The work also considers how well current measurement methods can be applied to presence capture cameras.
Multi-User Low Intrusive Occupancy Detection
Widyawan, Widyawan; Lazovik, Alexander
2018-01-01
Smart spaces are those that are aware of their state and can act accordingly. Among the central elements of such a state is the presence of humans and their number. For a smart office building, such information can be used for saving energy and safety purposes. While acquiring presence information is crucial, using sensing techniques that are highly intrusive, such as cameras, is often not acceptable for the building occupants. In this paper, we illustrate a proposal for occupancy detection which is low intrusive; it is based on equipment typically available in modern offices such as room-level power-metering and an app running on workers’ mobile phones. For power metering, we collect the aggregated power consumption and disaggregate the load of each device. For the mobile phone, we use the Received Signal Strength (RSS) of BLE (Bluetooth Low Energy) nodes deployed around workspaces to localize the phone in a room. We test the system in our offices. The experiments show that sensor fusion of the two sensing modalities gives 87–90% accuracy, demonstrating the effectiveness of the proposed approach. PMID:29509693
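The BLE room-localization step described above can be sketched with the standard log-distance path-loss model. The calibration constants and node names below are hypothetical, and the paper's actual fusion with power-metering data is not reproduced here.

```python
import math

def rss_to_distance(rss_dbm, rss_at_1m=-59.0, path_loss_exp=2.0):
    """Log-distance path-loss model: estimate distance (m) from a BLE RSS
    reading. rss_at_1m and path_loss_exp are environment-dependent
    calibration values, assumed here for illustration."""
    return 10 ** ((rss_at_1m - rss_dbm) / (10.0 * path_loss_exp))

def nearest_node(readings):
    """Pick the BLE node with the strongest signal as the room estimate.
    `readings` maps node id -> RSS in dBm."""
    return max(readings, key=readings.get)

# Hypothetical scan: the phone is closest to the node at desk A.
room = nearest_node({"desk-A": -48, "desk-B": -71, "meeting-room": -83})
```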
Fusion of footsteps and face biometrics on an unsupervised and uncontrolled environment
NASA Astrophysics Data System (ADS)
Vera-Rodriguez, Ruben; Tome, Pedro; Fierrez, Julian; Ortega-Garcia, Javier
2012-06-01
This paper reports for the first time experiments on the fusion of footsteps and face in an unsupervised and uncontrolled environment for person authentication. Footstep recognition is a relatively new biometric based on signals extracted from people walking over floor sensors. The idea of fusing footsteps and face starts from the premise that in an area where footstep sensors are installed, it is very simple to place a camera also to capture the face of the person walking over the sensors. This setup may find application in scenarios like ambient assisted living, smart homes, eldercare, or security access. The paper reports a comparative assessment of both biometrics using the same database and experimental protocols. In the experimental work we consider two different applications: smart homes (a small group of users with a large set of training data) and security access (a larger group of users with a small set of training data), obtaining results of 0.9% and 5.8% EER, respectively, for the fusion of both modalities. This is a significant performance improvement compared with the results obtained by the individual systems.
Impact of a Saharan dust intrusion over southern Spain on DNI estimation with sky cameras
NASA Astrophysics Data System (ADS)
Alonso-Montesinos, J.; Barbero, J.; Polo, J.; López, G.; Ballestrín, J.; Batlles, F. J.
2017-12-01
To operate Central Tower Solar Power (CTSP) plants properly, solar collector systems must be able to work under varied weather conditions. Therefore, knowing the state of the atmosphere, and more specifically the level of incident radiation, is essential operational information to adapt the electricity production system to atmospheric conditions. In this work, we analyze the impact of a strong Saharan dust intrusion on the Direct normal irradiance (DNI) registered at two sites 35 km apart in southeastern Spain: the University of Almería (UAL) and the Plataforma Solar de Almería (PSA). DNI can be inputted into the European Solar Radiation Atlas (ESRA) clear sky procedure to derive Linke turbidity values, which proved to be extremely high at the UAL. By using the Simple Model of the Atmospheric Radiative Transfer of Sunshine (SMARTS) at the PSA site, AERONET data from PSA and assuming dust dominated aerosol, DNI estimations agreed strongly with the measured DNI values. At the UAL site, a SMARTS simulation of the DNI values also seemed to be compatible with dust dominated aerosol.
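The ESRA clear-sky beam model mentioned above relates DNI to the Linke turbidity factor roughly as DNI = I0·exp(−0.8662·TL·m·δR(m)), where m is the air mass and δR(m) the Rayleigh optical thickness (Kasten's polynomial fit). A minimal sketch under those assumptions, ignoring the seasonal sun-earth distance correction:

```python
import math

SOLAR_CONSTANT = 1367.0  # W/m^2

def rayleigh_optical_depth(m):
    """Kasten's fit for the Rayleigh optical thickness at air mass m (m <= 20)."""
    return 1.0 / (6.6296 + 1.7513 * m - 0.1202 * m**2
                  + 0.0065 * m**3 - 0.00013 * m**4)

def clear_sky_dni(air_mass, linke_turbidity):
    """ESRA-style clear-sky direct normal irradiance (W/m^2)."""
    return SOLAR_CONSTANT * math.exp(
        -0.8662 * linke_turbidity * air_mass * rayleigh_optical_depth(air_mass))

# Higher Linke turbidity (e.g. during a Saharan dust intrusion) lowers DNI.
clean = clear_sky_dni(air_mass=1.5, linke_turbidity=3.0)
dusty = clear_sky_dni(air_mass=1.5, linke_turbidity=7.0)
```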
Micro-optical system based 3D imaging for full HD depth image capturing
NASA Astrophysics Data System (ADS)
Park, Yong-Hwa; Cho, Yong-Chul; You, Jang-Woo; Park, Chang-Young; Yoon, Heesun; Lee, Sang-Hun; Kwon, Jong-Oh; Lee, Seung-Wan
2012-03-01
A 20 MHz-switching high-speed image shutter device for 3D image capturing and its application to a system prototype are presented. For 3D image capturing, the system utilizes the Time-of-Flight (TOF) principle by means of a 20 MHz high-speed micro-optical image modulator, a so-called 'optical shutter'. The high-speed image modulation is obtained using the electro-optic operation of a multi-layer stacked structure having diffractive mirrors and an optical resonance cavity which maximizes the magnitude of optical modulation. The optical shutter device is specially designed and fabricated to realize low resistance-capacitance cell structures with a small RC time constant. The optical shutter is positioned in front of a standard high-resolution CMOS image sensor and modulates the IR image reflected from the object to capture a depth image. The suggested novel optical shutter device enables the capture of a full HD depth image with depth accuracy on the mm scale, the largest depth image resolution among the state of the art, which has been limited to VGA. The 3D camera prototype realizes a color/depth concurrent sensing optical architecture to capture 14 Mp color and full HD depth images simultaneously. The resulting high-definition color/depth images and their capturing device have a crucial impact on the 3D business ecosystem in the IT industry, especially as a 3D image sensing means in the fields of 3D cameras, gesture recognition, user interfaces, and 3D displays. This paper presents the MEMS-based optical shutter design, fabrication, characterization, 3D camera system prototype, and image test results.
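The continuous-wave TOF principle behind the 20 MHz optical shutter reduces to a simple phase-to-depth relation, depth = c·Δφ/(4π·f_mod), with an unambiguous range of c/(2·f_mod). A minimal sketch (the constants are standard physics, not system specifics from the paper):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def tof_depth(phase_shift_rad, mod_freq_hz=20e6):
    """Continuous-wave time-of-flight: depth from the phase shift between
    the emitted and received modulated IR signal."""
    return C * phase_shift_rad / (4.0 * math.pi * mod_freq_hz)

def unambiguous_range(mod_freq_hz=20e6):
    """Maximum depth before the phase wraps (phase shift of 2*pi)."""
    return C / (2.0 * mod_freq_hz)

# With 20 MHz modulation the depth is unambiguous out to about 7.5 m;
# a phase shift of pi corresponds to half that range.
max_depth = unambiguous_range()
half_way = tof_depth(math.pi)
```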
2014-05-07
View of the High Definition Earth Viewing (HDEV) flight assembly installed on the exterior of the Columbus European Laboratory module. The image was released by an astronaut on Twitter. The High Definition Earth Viewing (HDEV) experiment places four commercially available HD cameras on the exterior of the space station and uses them to stream live video of Earth for viewing online. The cameras are enclosed in a temperature-specific housing and are exposed to the harsh radiation of space. Analysis of the effect of space on the video quality, over the time HDEV is operational, may help engineers decide which cameras are the best types to use on future missions. High school students helped design some of the cameras' components through the High Schools United with NASA to Create Hardware (HUNCH) program, and student teams operate the experiment.
Zero gravity tissue-culture laboratory
NASA Technical Reports Server (NTRS)
Cook, J. E.; Montgomery, P. O., Jr.; Paul, J. S.
1972-01-01
Hardware was developed for performing experiments to detect the effects that zero gravity may have on living human cells. The hardware is composed of a timelapse camera that photographs the activity of cell specimens and an experiment module in which a variety of living-cell experiments can be performed using interchangeable modules. The experiment is scheduled for the first manned Skylab mission.
View in the Node 1/Unity module after docking
1998-12-10
S88-E-5113 (12-10-98) --- Sergei Krikalev, mission specialist representing the Russian Space Agency (RSA), totes a notebook onboard the Unity connecting module while he and two crewmates perform various tasks to ready it for its ISS role. The photo was taken with an electronic still camera (ESC) at 20:27:03 GMT, Dec. 10.
International Space Station (ISS)
2000-09-01
This image of the International Space Station (ISS) was taken during the STS-106 mission. The ISS component nearest the camera is the U.S.-built Node 1, or Unity module, which is connected to the Russian-built Functional Cargo Block (FGB), or Zarya. The FGB is linked with the Service Module, or Zvezda. On the far end is the Russian Progress supply ship.
Helms and Usachev in Destiny Laboratory module
2001-04-05
ISS002-E-5497 (05 April 2001) --- Astronaut Susan J. Helms (left), Expedition Two flight engineer, pauses from her work to pose for a photograph while Expedition Two mission commander, cosmonaut Yury V. Usachev, speaks into a microphone aboard the U.S. Laboratory / Destiny module of the International Space Station (ISS). This image was recorded with a digital still camera.
Horowitz is hugged by Usachev in the ISS Service Module/Zvezda
2001-08-12
STS-105-E-5121 (12 August 2001) --- Yury V. Usachev of Rosaviakosmos, Expedition Two mission commander, and Scott J. Horowitz, STS-105 commander, embrace in the Zvezda Service Module with open arms during the initial ingress into the International Space Station (ISS) for the STS-105 mission. This image was taken with a digital still camera.
2001-03-31
ISS002-E-5084 (31 March 2001) --- Astronaut James S. Voss, Expedition Two flight engineer, floats in the Zvezda Service Module onboard the International Space Station (ISS). Voss, along with astronaut Susan J. Helms and cosmonaut Yury V. Usachev of Rosaviakosmos, recently replaced the initial three-member crew onboard the orbital outpost. The image was taken with a digital still camera.
Close-up view of astronauts foot and footprint in lunar soil
1969-07-20
AS11-40-5880 (20 July 1969) --- A close-up view of an astronaut's boot and bootprint in the lunar soil, photographed with a 70mm lunar surface camera during the Apollo 11 lunar surface extravehicular activity (EVA). While astronauts Neil A. Armstrong, commander, and Edwin A. Aldrin Jr., lunar module pilot, descended in the Lunar Module (LM) "Eagle" to explore the Sea of Tranquility region of the moon, astronaut Michael Collins, command module pilot, remained with the Command and Service Modules (CSM) "Columbia" in lunar orbit.
The Advanced Gamma-ray Imaging System (AGIS) - Camera Electronics Development
NASA Astrophysics Data System (ADS)
Tajima, Hiroyasu; Bechtol, K.; Buehler, R.; Buckley, J.; Byrum, K.; Drake, G.; Falcone, A.; Funk, S.; Hanna, D.; Horan, D.; Humensky, B.; Karlsson, N.; Kieda, D.; Konopelko, A.; Krawczynski, H.; Krennrich, F.; Mukherjee, R.; Ong, R.; Otte, N.; Quinn, J.; Schroedter, M.; Swordy, S.; Wagner, R.; Wakely, S.; Weinstein, A.; Williams, D.; Camera Working Group; AGIS Collaboration
2010-03-01
AGIS, a next-generation imaging atmospheric Cherenkov telescope (IACT) array, aims to achieve a sensitivity level of about one milliCrab for gamma-ray observations in the energy band of 50 GeV to 100 TeV. Achieving this level of performance will require on the order of 50 telescopes with perhaps as many as 1M total electronics channels. The larger scale of AGIS requires a very different approach from the currently operating IACTs, with lower-cost and lower-power electronics incorporated into camera modules designed for high reliability and easy maintenance. Here we present the concept and development status of the AGIS camera electronics.
Tailorable and Wearable Textile Devices for Solar Energy Harvesting and Simultaneous Storage.
Chai, Zhisheng; Zhang, Nannan; Sun, Peng; Huang, Yi; Zhao, Chuanxi; Fan, Hong Jin; Fan, Xing; Mai, Wenjie
2016-10-05
The pursuit of a harmonious combination of technology and fashion intrinsically points to the development of smart garments. Herein, we present an all-solid tailorable energy textile possessing the integrated function of simultaneous solar energy harvesting and storage, which we call a tailorable textile device. Our technique makes it possible to tailor the multifunctional textile into any designed shape without impairing its performance, and to produce stylish smart energy garments for wearable self-powering systems with an enhanced user experience and more room for fashion design. The "threads" (fiber electrodes), featuring tailorability and knittability, can be fabricated at large scale and then woven into energy textiles. The fiber supercapacitor, with the merits of tailorability, ultrafast charging capability, and ultrahigh bending resistance, is used as the energy storage module, while an all-solid dye-sensitized solar cell textile is used as the solar energy harvesting module. Our textile sample can be fully charged to 1.2 V in 17 s by self-harvesting solar energy and fully discharged in 78 s at a discharge current density of 0.1 mA.