Single photon emission tomography in neurological studies: Instrumentation and clinical applications
NASA Astrophysics Data System (ADS)
Nikkinen, Paivi Helena
One triple head and two single head gamma camera systems were used for single photon emission tomography (SPET) imaging of both patients and brain phantoms. Studies with an anatomical brain phantom were performed to evaluate reconstruction and correction methods in brain perfusion SPET studies. The use of the triple head gamma camera system resulted in a significant increase in image contrast and resolution, mainly due to better imaging geometry and the use of a high resolution collimator. The conventional Chang attenuation correction was found suitable for the brain perfusion studies. In the brain perfusion studies, region of interest (ROI)-based semiquantitation methods were used. An ROI map based on anatomical areas was used in 70 elderly persons (age range 55-85 years) without neurological diseases and in patients suffering from encephalitis or having had a cardiac arrest. Semiquantitative reference values are presented. For the 14 patients with encephalitis, the right-to-left side differences were calculated. Defect volume indexes were calculated for 64 patients with brain infarcts. For the 30 cardiac arrest patients, the defect percentages and the anteroposterior ratios were used for semiquantitation. It is concluded that different semiquantitation methods are needed for the various patient groups. Age-related reference values will improve the interpretation of SPET data. For validation of the basal ganglia receptor studies, measurements were performed using a cylindrical and an anatomical striatal phantom. In these measurements, conventional and transmission imaging based non-uniform attenuation corrections were compared. A calibration curve was calculated for the determination of the specific receptor uptake ratio. In the phantom studies using the triple head camera, the uptake ratio obtained from simultaneous transmission-emission protocol (STEP) acquisition and iterative reconstruction was closest to the true activity ratio.
Conventional acquisition and uniform Chang attenuation correction gave 40% lower values. The effect of dual window scatter correction was also measured. In conventional reconstruction, dual window scatter correction increased the uptake ratios when using a single head camera, but when using the triple head camera this correction did not have a significant effect on the ratios. Semiquantitative values for striatal 123I-labelled β-carbomethoxy-3β-(4-iodophenyl)tropane (123I-βCIT) dopamine transporter uptake in 20 adults (mean age 52 +/- 15 years) are presented. The mean basal ganglia to cerebellum ratio was 6.5 +/- 0.9 and the mean caudate to putamen ratio was 1.2. The registration of brain SPET and magnetic resonance (MR) studies provides the necessary anatomical information for determination of the ROIs. A procedure for registration and simultaneous display of brain SPET and MR images based on six external skin markers is presented. The usefulness of this method was demonstrated in selected patients. The registration accuracy was determined for single and triple head gamma camera systems using brain phantom and simulation studies. The registration residual for three internal test markers was calculated using 4 to 13 external markers in the registration. For 6 external markers, as used in the registration in the patient studies, the mean RMS residuals of the test markers for the single head camera and the triple head camera were 3.5 mm and 3.2 mm, respectively. According to the simulation studies, the largest inaccuracy is due mainly to the spatial resolution of SPET. The use of six markers, as in the patient studies, is adequate for accurate registration.
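The marker-based SPET-MR registration and RMS residuals reported above can be sketched as a least-squares rigid point-set fit. This is a minimal illustration using the standard Kabsch/Procrustes solution, not the thesis's actual implementation; all function names are hypothetical:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid (rotation + translation) fit mapping src -> dst,
    via the Kabsch/Procrustes SVD solution.
    src, dst: (N, 3) arrays of paired marker coordinates."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))        # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

def rms_residual(R, t, src, dst):
    """Root-mean-square distance between transformed src points and dst."""
    diff = (src @ R.T + t) - dst
    return np.sqrt((diff ** 2).sum(axis=1).mean())
```

In the thesis's setting, `src`/`dst` would hold the external skin markers used for the fit, and the residual would be evaluated on the three internal test markers instead.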
Donnemiller, E; Brenneis, C; Wissel, J; Scherfler, C; Poewe, W; Riccabona, G; Wenning, G K
2000-09-01
Structural imaging suggests that traumatic brain injury (TBI) may be associated with disruption of neuronal networks, including the nigrostriatal dopaminergic pathway. However, to date deficits in pre- and/or postsynaptic dopaminergic neurotransmission have not been demonstrated in TBI using functional imaging. We therefore assessed dopaminergic function in ten TBI patients using [123I]2-beta-carbomethoxy-3-beta-(4-iodophenyl)tropane (beta-CIT) and [123I]iodobenzamide (IBZM) single-photon emission tomography (SPET). Average Glasgow Coma Scale score (+/-SD) at the time of head trauma was 5.8+/-4.2. SPET was performed on average 141 days (SD +/-92) after TBI. The SPET images were compared with structural images using cranial computerised tomography (CCT) and magnetic resonance imaging (MRI). SPET was performed with an ADAC Vertex dual-head camera. The activity ratios of striatal to cerebellar uptake were used as a semiquantitative parameter of striatal dopamine transporter (DAT) and D2 receptor (D2R) binding. Compared with age-matched controls, patients with TBI had significantly lower striatal/cerebellar beta-CIT and IBZM binding ratios (P< or =0.01). Overall, the DAT deficit was more marked than the D2R loss. CCT and MRI studies revealed varying cortical and subcortical lesions, with the frontal lobe being most frequently affected whereas the striatum appeared structurally normal in all but one patient. Our findings suggest that nigrostriatal dysfunction may be detected using SPET following TBI despite relative structural preservation of the striatum. Further investigations of possible clinical correlates and efficacy of dopaminergic therapy in patients with TBI seem justified.
Abu-Judeh, H H; Levine, S; Kumar, M; el-Zeftawy, H; Naddaf, S; Lou, J Q; Abdel-Dayem, H M
1998-11-01
Chronic fatigue syndrome is a clinically defined condition of uncertain aetiology. We compared 99Tcm-HMPAO single photon emission tomography (SPET) brain perfusion with dual-head 18F-FDG brain metabolism in patients with chronic fatigue syndrome. Eighteen patients (14 females, 4 males), who fulfilled the diagnostic criteria of the Centers for Disease Control for chronic fatigue syndrome, were investigated. Thirteen patients had abnormal SPET brain perfusion scans and five had normal scans. Fifteen patients had normal glucose brain metabolism scans and three had abnormal scans. We conclude that, in chronic fatigue syndrome patients, there is discordance between SPET brain perfusion and 18F-FDG brain uptake. It is possible to have brain perfusion abnormalities without corresponding changes in glucose uptake.
Vemmer, T; Steinbüchel, C; Bertram, J; Eschner, W; Kögler, A; Luig, H
1997-03-01
The purpose of this study was to determine whether data acquisition in list mode and iterative tomographic reconstruction would render feasible cardiac phase-synchronized thallium-201 single-photon emission tomography (SPET) of the myocardium under routine conditions, without modifications in tracer dose, acquisition time, or number of steps of the gamma camera. Seventy non-selected patients underwent 201Tl SPET imaging according to a routine protocol (74 MBq/2 mCi 201Tl, 180 degrees rotation of the gamma camera, 32 steps, 30 min). Gamma camera data, ECG, and a time signal were recorded in list mode. The cardiac cycle was divided into eight phases, the end-diastolic phase encompassing the QRS complex and the end-systolic phase the T wave. Both phase- and non-phase-synchronized tomograms based on the same list mode data were reconstructed iteratively. Phase-synchronized and non-synchronized images were compared. Patients were divided into two groups depending on whether or not coronary artery disease had been definitely diagnosed prior to SPET imaging. The numbers of patients in both groups demonstrating defects visible on the phase-synchronized but not on the non-synchronized images were compared. It was found that both postexercise and redistribution phase tomograms were suited for interpretation. The changes from end-diastolic to end-systolic images allowed a comparative assessment of regional wall motility and tracer uptake. End-diastolic tomograms provided the best definition of defects. Additional defects not apparent on non-synchronized images were visible in 40 patients, six of whom did not show any defect on the non-synchronized images. Of 42 patients in whom coronary artery disease had been definitely diagnosed, 19 had additional defects not visible on the non-synchronized images, in comparison to 21 of 28 in whom coronary artery disease was suspected (P < 0.02; chi-square test).
It is concluded that cardiac phase-synchronized 201Tl SPET of the myocardium was made feasible by list mode data acquisition and iterative reconstruction. The additional findings on the phase-synchronized tomograms, not visible on the non-synchronized ones, represented genuine defects. Cardiac phase-synchronized 201Tl SPET is advantageous in allowing simultaneous assessment of regional wall motion and tracer uptake, and in visualizing smaller defects.
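The phase binning described above, i.e. sorting list-mode events into cardiac phases using the recorded ECG, can be illustrated with a small sketch. For simplicity this divides each R-R interval into equal-width phases; the function name and data layout are assumptions, not the authors' code:

```python
import numpy as np

def bin_listmode_events(event_times, r_peaks, n_phases=8):
    """Assign each list-mode event to a cardiac phase (0..n_phases-1)
    by its fractional position within the enclosing R-R interval.
    Events outside the recorded R-R intervals are flagged as -1."""
    event_times = np.asarray(event_times, dtype=float)
    r_peaks = np.asarray(r_peaks, dtype=float)
    phases = np.full(event_times.shape, -1, dtype=int)
    # index of the R peak preceding each event
    idx = np.searchsorted(r_peaks, event_times, side="right") - 1
    valid = (idx >= 0) & (idx < len(r_peaks) - 1)
    rr_start = r_peaks[idx[valid]]
    rr_len = r_peaks[idx[valid] + 1] - rr_start
    frac = (event_times[valid] - rr_start) / rr_len
    phases[valid] = np.minimum((frac * n_phases).astype(int), n_phases - 1)
    return phases
```

Reconstructing one tomogram per phase bin from the same list-mode stream is what gives the end-diastolic and end-systolic images without extra acquisition time.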
Suga, Kazuyoshi; Kawakami, Yasuhiko; Zaki, Mohammed; Yamashita, Tomio; Seto, Aska; Matsumoto, Tsuneo; Matsunaga, Naofumi
2004-02-01
In this study, respiratory-gated ventilation and perfusion single-photon emission tomography (SPET) were used to define regional functional impairment and to obtain reliable co-registration with computed tomography (CT) images in various lung diseases. Using a triple-headed SPET unit and a physiological synchroniser, gated perfusion SPET was performed in a total of 78 patients with different pulmonary diseases, including metastatic nodules (n = 15); in 34 of these patients, it was performed in combination with gated technetium-99m Technegas SPET. Projection data were acquired using 60 stops over 120 degrees for each detector. Gated end-inspiration and ungated images were reconstructed from 1/8 data centered at peak inspiration for each regular respiratory cycle and full respiratory cycle data, respectively. Gated images were registered with tidal inspiration CT images using automated three-dimensional (3D) registration software. Registration mismatch was assessed by measuring 3D distance of the centroid of the nine selected round perfusion-defective nodules. Gated SPET images were completed within 29 min, and increased the number of visible ventilation and perfusion defects by 9.7% and 17.2%, respectively, as compared with ungated images; furthermore, lesion-to-normal lung contrast was significantly higher on gated SPET images. In the nine round perfusion-defective nodules, gated images yielded a significantly better SPET-CT match compared with ungated images (4.9 +/- 3.1 mm vs 19.0 +/- 9.1 mm, P<0.001). The co-registered SPET-CT images allowed accurate perception of the location and extent of each ventilation/perfusion defect on the underlying CT anatomy, and characterised the pathophysiology of the various diseases. By reducing respiratory motion effects and enhancing perfusion/ventilation defect clarity, gated SPET can provide reliable co-registered images with CT images to accurately characterise regional functional impairment in various lung diseases.
Henze, Marcus; Mohammed, Ashour; Mier, Walter; Rudat, Volker; Dietz, Andreas; Nollert, Jörg; Eisenhut, Michael; Haberkorn, Uwe
2002-03-01
While fluorine-18 2-fluoro-2-deoxy-D-glucose (FDG) positron emission tomography (PET) is helpful in the pretherapeutic evaluation of head and neck cancer, it is only available in selected centres. Therefore, single-photon emission tomography (SPET) tracers would be desirable if they were to demonstrate tumour uptake reliably. This multitracer study was performed to evaluate the pretherapeutic uptake of the SPET tracers iodine-123 alpha-methyl-L-tyrosine (IMT) and technetium-99m hexakis-2-methoxyisobutylisonitrile (99mTc-MIBI) in primary carcinomas of the hypopharynx and larynx and to compare the results with those of FDG PET. We examined 22 fasted patients (20 male, 2 female, mean age 60.5+/-10.2 years) with histologically confirmed carcinoma of the hypopharynx (n=9) or larynx (n=13), within 1 week before therapy. In 20 patients a cervical PET scan was acquired after intravenous injection of 232+/-43 MBq 18F-FDG. Data analysis was semiquantitative, being based on standardised uptake values (SUVs) obtained at 60-90 min after injection. After injection of 570+/-44 MBq 99mTc-MIBI, cervical SPET scans (high-resolution collimator, 64x64 matrix, 64 steps, 40 s each) were obtained in 19 patients, 15 and 60 min after tracer injection. Finally, 15 min after injection of 327+/-93 MBq 123I-IMT (medium-energy collimator, 64x64 matrix, 64 steps, 40 s each) SPET scans were acquired in 15 patients. All images were analysed visually and by calculating the tumour to nuchal muscle ratio. Eighteen of 20 (90%) carcinomas showed an increased glucose metabolism, with a mean SUV of 8.7 and a mean carcinoma to muscle ratio of 7.3. The IMT uptake was increased in 13 of 15 (87%) patients, who had a mean carcinoma to muscle ratio of 2.9. Only 13 of 19 (68%) carcinomas revealed pathological MIBI uptake, with a mean tumour to muscle ratio of 2.2 and no significant difference between early and late MIBI SPET images (P=0.23). 
In conclusion, in the diagnosis of primary carcinomas of the hypopharynx and larynx, IMT SPET achieved a detection rate comparable to that of FDG PET. IMT SPET was clearly superior to MIBI SPET in this population. A further evaluation of the specificity of IMT in a larger number of patients appears justified.
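The standardised uptake values used in the FDG analysis above follow a simple formula: tissue activity concentration divided by injected dose per unit body weight. A minimal sketch, assuming decay correction of the dose to scan time has already been applied (the function name is illustrative):

```python
def suv(activity_conc_bq_ml, injected_dose_bq, body_weight_g):
    """Standardised uptake value: tissue activity concentration (Bq/ml)
    divided by injected dose per gram of body weight, taking tissue
    density as ~1 g/ml. SUV ~1 means uniform whole-body distribution."""
    return activity_conc_bq_ml / (injected_dose_bq / body_weight_g)
```

For example, a lesion concentration of 20 kBq/ml in a 70 kg patient injected with 200 MBq yields an SUV of 7, in the range of the mean SUV of 8.7 reported above.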
A prototype PET/SPECT/X-rays scanner dedicated for whole body small animal studies.
Rouchota, Maritina; Georgiou, Maria; Fysikopoulos, Eleftherios; Fragogeorgi, Eirini; Mikropoulos, Konstantinos; Papadimitroulas, Panagiotis; Kagadis, George; Loudos, George
2017-01-01
To present a prototype tri-modal imaging system, consisting of a single photon emission tomography (SPET), a positron emission tomography (PET) and a computed tomography (CT) subsystem, evaluated in planar mode. The subsystems are mounted on a rotating gantry, so as to allow tomographic imaging in the future. The system, designed and constructed by our group, allows whole body mouse imaging of competent performance and is currently, to the best of our knowledge, unequaled at a national and regional level. The SPET camera is based on two Position Sensitive Photomultiplier Tubes (PSPMTs), coupled to a pixelated sodium iodide activated with thallium (NaI(Tl)) scintillator, with an active area of 5×10 cm². The dual head PET camera is also based on two pairs of PSPMTs, coupled to pixelated bismuth germanium oxide (BGO) scintillators, with an active area of 5×10 cm². The X-ray system consists of a microfocus X-ray tube and a complementary metal-oxide-semiconductor (CMOS) detector, with an active area of 12×12 cm². The scintigraphic mode has a spatial resolution of 1.88 mm full width at half maximum (FWHM) and a sensitivity of 107.5 cpm/0.037 MBq at the collimator surface. The coincidence PET mode has an average spatial resolution of 3.5 mm (FWHM) and a peak sensitivity of 29.9 cpm/0.037 MBq. The X-ray spatial resolution is 3.5 lp/mm and the contrast discrimination function value is lower than 2%. A compact tri-modal system was successfully built and evaluated for planar mode operation. The system has an efficient performance, allowing accurate and informative anatomical and functional imaging, as well as semi-quantitative results. Compared to other available systems, it provides a moderate but comparable performance, at a fraction of the cost and complexity.
It is fully open, scalable and its main purpose is to support groups on a national and regional level and provide an open technological platform to study different detector components and acquisition strategies.
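The FWHM spatial-resolution figures quoted above are typically read off a measured point- or line-spread profile. A minimal sketch of that measurement, interpolating linearly at the two half-maximum crossings (illustrative only, not the authors' procedure):

```python
import numpy as np

def fwhm(positions, counts):
    """Full width at half maximum of a peaked 1-D profile, with linear
    interpolation at the two half-maximum crossings."""
    positions = np.asarray(positions, float)
    counts = np.asarray(counts, float)
    half = counts.max() / 2.0
    above = np.where(counts >= half)[0]
    i, j = above[0], above[-1]
    # rising edge: counts increase from index i-1 to i through half-max
    left = (positions[i] if i == 0 else
            np.interp(half, counts[i - 1:i + 1], positions[i - 1:i + 1]))
    # falling edge: counts decrease from index j to j+1 through half-max
    right = (positions[j] if j == len(counts) - 1 else
             np.interp(half, counts[[j + 1, j]], positions[[j + 1, j]]))
    return right - left
```

Applied to a profile through a point- or line-source image, this yields numbers directly comparable to the 1.88 mm and 3.5 mm figures above.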
Vattimo, A; Burroni, L; Bertelli, P; Volterrani, D; Vella, A
1996-01-01
We performed 99Tcm-ethyl cysteinate dimer (ECD) interictal single photon emission tomography (SPET) in 26 children with severe therapy-resistant epilepsy. All the children underwent a detailed clinical examination, an electroencephalogram (EEG) investigation and brain magnetic resonance imaging (MRI). In 21 of the 26 children, SPET demonstrated brain blood flow abnormalities, in 13 cases in the same territories that showed EEG alterations. MRI showed structural lesions in 6 of the 26 children, while SPET imaging confirmed these abnormalities in only 5 children. The lesion not detected on SPET was shown to be 3 mm thick on MRI. Five symptomatic patients had normal SPET. In one of these patients, the EEG findings were normal and MRI revealed a small calcific nodule (4 mm thick); in the others, the EEG showed non-focal but diffuse abnormalities. These data confirm that brain SPET is sensitive in detecting and localizing hypoperfused areas that could be associated with epileptic foci in this group of patients, even when the MRI image is normal.
The role of 99Tcm-HMPAO brain SPET in paediatric traumatic brain injury.
Goshen, E; Zwas, S T; Shahar, E; Tadmor, R
1996-05-01
Twenty-eight paediatric patients suffering from chronic sequelae of traumatic brain injury (TBI) were examined by EEG, radionuclide imaging with 99Tcm-hexamethylpropyleneamine oxime (99Tcm-HMPAO), computed tomography (CT) and, when available, magnetic resonance imaging (MRI), the results of which were evaluated retrospectively. Our findings indicate that neuro-SPET (single photon emission tomography) with 99Tcm-HMPAO is more sensitive than morphological or electrophysiological tests in detecting functional lesions. In our group, 15 of 32 CT scans were normal, compared with 3 of 35 SPET studies. SPET identified approximately 2.5 times more lesions than CT (86 vs 34). SPET was found to be particularly sensitive in detecting organic abnormalities in the basal ganglia and cerebellar regions, with a 3.6:1 detection rate in the basal ganglia and a 5:1 detection rate in the cerebellum compared with CT. In conclusion, neuro-SPET appears to be very useful when evaluating paediatric post-TBI patients in whom other modalities are not successful.
Abu-Judeh, H H; Parker, R; Singh, M; el-Zeftawy, H; Atay, S; Kumar, M; Naddaf, S; Aleksic, S; Abdel-Dayem, H M
1999-06-01
We present SPET brain perfusion findings in 32 patients who suffered mild traumatic brain injury without loss of consciousness and had normal computed tomography findings. None of the patients had previous traumatic brain injury, CVA, HIV, psychiatric disorders or a history of alcohol or drug abuse. Their ages ranged from 11 to 61 years (mean = 42). The study was performed in 20 patients (62%) within 3 months of the date of injury and in 12 patients (38%) more than 3 months post-injury. Nineteen patients (60%) were involved in a motor vehicle accident, 10 patients (31%) sustained a fall and three patients (9%) received a blow to the head. The most common complaints were headaches in 26 patients (81%), memory deficits in 15 (47%), dizziness in 13 (41%) and sleep disorders in eight (25%). The studies were acquired approximately 2 h after an intravenous injection of 740 MBq (20.0 mCi) of 99Tcm-HMPAO. All images were acquired on a triple-headed gamma camera. The data were displayed on a 10-grade colour scale, with 2-pixel thickness (7.4 mm), and were reviewed blind to the patient's history of symptoms. The cerebellum was used as the reference site (100% maximum value). Any decrease in cerebral perfusion to less than 70% in the cortex or basal ganglia, or to less than 50% in the medial temporal lobe, compared to the cerebellar reference was considered abnormal. The results show that 13 patients (41%) had normal studies and 19 (59%) had abnormal studies (13 studies performed within 3 months of the date of injury and six studies performed more than 3 months post-injury). Analysis of the abnormal studies revealed that 17 showed 48 focal lesions and two showed diffuse supratentorial hypoperfusion (one from each of the early and delayed imaging groups). The 12 abnormal studies performed early had 37 focal lesions, an average of 3.1 lesions per patient, whereas this fell to an average of 2.2 lesions per patient in the five studies (total 11 lesions) performed more than 3 months post-injury.
In the 17 abnormal studies with focal lesions, the following regions were involved in descending frequency: frontal lobes 58%, basal ganglia and thalami 47%, temporal lobes 26% and parietal lobes 16%. We conclude that: (1) SPET brain perfusion imaging is valuable and sensitive for the evaluation of cerebral perfusion changes following mild traumatic brain injury; (2) these changes can occur without loss of consciousness; (3) SPET brain perfusion imaging is more sensitive than computed tomography in detecting brain lesions; and (4) the changes may explain a neurological component of the patient's symptoms in the absence of morphological abnormalities using other imaging modalities.
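The cerebellar-referenced thresholds described above (less than 70% for cortex/basal ganglia, less than 50% for the medial temporal lobe) amount to a simple normalisation and comparison. A hypothetical sketch, not the authors' software:

```python
def flag_hypoperfusion(roi_counts, cerebellum_max, region):
    """Normalise an ROI to the cerebellar maximum (taken as 100%) and
    flag it as abnormal using the abstract's thresholds: <70% for cortex
    and basal ganglia, <50% for the medial temporal lobe.
    Returns (percent of cerebellar reference, abnormal flag)."""
    pct = 100.0 * roi_counts / cerebellum_max
    threshold = 50.0 if region == "medial_temporal" else 70.0
    return pct, pct < threshold
```

The lower threshold for the medial temporal lobe reflects its lower normal uptake relative to the cerebellum.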
SPET/CT image co-registration in the abdomen with a simple and cost-effective tool.
Förster, Gregor J; Laumann, Christina; Nickel, Otmar; Kann, Peter; Rieker, Olaf; Bartenstein, Peter
2003-01-01
Fusion of morphology and function has been shown to improve diagnostic accuracy in many clinical circumstances. Taking this into account, a number of instruments combining computed tomography (CT) with positron emission tomography (PET) or single-photon emission tomography (SPET) are appearing on the market. The aim of this study was to evaluate a simple and cost-effective approach to generate fusion images of similar quality. For the evaluation of the proposed approach, patients with neuroendocrine abdominal tumours with liver metastases were chosen, since exact superimposition in the abdomen is more difficult than in other regions. Five hours following the injection of 110 MBq (111)In-DTPA-octreotide, patients were fixed in a vacuum cushion (MED-TEC, Vac-Loc) and investigated with helical CT in a mid-inspiration position (n=14). Directly following the CT, a SPET study (SPET1) of the abdominal region was performed without changing the position of the patient. A second SPET study (SPET2), 24 h p.i., was acquired after repositioning the patient in his or her individually moulded vacuum cushion. A total of nine markers suitable for imaging with CT and SPET were fixed on the cushion. Datasets were fused by means of internal landmarks (e.g. metastases or margins of abdominal organs) or by the external markers. Image fusion using external markers was fast and easy to handle compared with the use of internal landmarks. Using this technique, all lesions detectable by SPET (n=28) appeared exactly superimposed on the respective CT morphology by visual inspection. Image fusion of CT/SPET1 and CT/SPET2 showed a mean deviation of the external markers that in the former case was smaller than the voxel size of 4.67 mm: 4.17+/-0.61 mm (CT/SPET1; mean+/-SD) and 5.52+/-1.56 mm (CT/SPET2), respectively. Using internal landmarks, the mean deviation of the chosen landmarks was 6.47+/-1.37 and 7.78+/-1.21 mm.
Vector subtraction of corresponding anatomical points of the CT and the re-sampled SPET volume datasets resulted in a similar accuracy. Vector subtraction of the metastases showed a significantly less accurate superimposition when internal landmarks were used (P<0.001). The vacuum cushion did not affect the image quality of CT and SPET. The proposed technique is a simple and cost-effective way to generate abdominal datasets suitable for image fusion. External markers positioned on the cushion allow for a rapid and robust overlay even if no readily identifiable internal landmarks are present. This technique is, in principle, also suitable for CT/PET fusion as well as for fusions of MRI data with PET or SPET.
Acton, Paul D; Choi, Seok-Rye; Plössl, Karl; Kung, Hank F
2002-05-01
Functional imaging of small animals, such as mice and rats, using ultra-high resolution positron emission tomography (PET) and single-photon emission tomography (SPET), is becoming a valuable tool for studying animal models of human disease. While several studies have shown the utility of PET imaging in small animals, few have used SPET in real research applications. In this study we aimed to demonstrate the feasibility of using ultra-high resolution SPET in quantitative studies of dopamine transporters (DAT) in the mouse brain. Four healthy ICR male mice were injected with (mean+/-SD) 704+/-154 MBq [(99m)Tc]TRODAT-1, and scanned using an ultra-high resolution SPET system equipped with pinhole collimators (spatial resolution 0.83 mm at 3 cm radius of rotation). Each mouse had two studies, to provide an indication of test-retest reliability. Reference tissue kinetic modeling analysis of the time-activity data in the striatum and cerebellum was used to quantitate the availability of DAT. A simple equilibrium ratio of striatum to cerebellum provided another measure of DAT binding. The SPET imaging results were compared against ex vivo biodistribution data from the striatum and cerebellum. The mean distribution volume ratio (DVR) from the reference tissue kinetic model was 2.17+/-0.34, with a test-retest reliability of 2.63%+/-1.67%. The ratio technique gave similar results (DVR=2.03+/-0.38, test-retest reliability=6.64%+/-3.86%), and the ex vivo analysis gave DVR=2.32+/-0.20. Correlations between the kinetic model and the ratio technique (R²=0.86, P<0.001) and the ex vivo data (R²=0.92, P=0.04) were both excellent. This study demonstrated clearly that ultra-high resolution SPET of small animals is capable of accurate, repeatable, and quantitative measures of DAT binding, and should open up the possibility of further studies of cerebral binding sites in mice using pinhole SPET.
Busatto, G F; Costa, D C; Ell, P J; Pilowsky, L S; David, A S; Kerwin, R W
1994-05-01
Regional cerebral blood flow (rCBF) was investigated in a group of medicated DSM-III-R schizophrenic patients and age, sex and handedness matched normal volunteers using a split-dose 99mTc-HMPAO Single Photon Emission Tomography (SPET) protocol. Measures were taken during the performance of a verbal memory task aimed at activating the left medial temporal lobe, a region repeatedly suggested to be structurally abnormal in schizophrenia. In normal subjects, the performance of the task was associated with significant rCBF increases in the left medial temporal, left inferior frontal and anterior cingulate cortices, and right cerebellum. Despite their significantly poorer performance on the memory task, the degree of medial temporal activation measured in the schizophrenic patients was not significantly different from that found in the control group. This finding suggests that memory deficits in schizophrenia do not necessarily imply failure to activate the left medial temporal lobe as assessed by 99mTc-HMPAO SPET.
2006-10-01
This study investigated a novel approach combining the output of a dual-head SPECT camera and a low-dose, single-slice CT scanner (GE Hawkeye®) in patients with breast cancer. This device is widely available in the cardiology community and has the potential to ...
Montz, R; Perez-Castejón, M J; Jurado, J A; Martín-Comín, J; Esplugues, E; Salgado, L; Ventosa, A; Cantinho, G; Sá, E P; Fonseca, A T; Vieira, M R
1996-06-01
Technetium-99m tetrofosmin (Myoview) has unique properties for myocardial perfusion imaging very early after injection of the tracer. We used a very short same-day rest/stress protocol, to be performed within 2 h, and evaluated its diagnostic accuracy. The study included 144 patients from seven Spanish and four Portuguese centres with a diagnosis of uncomplicated coronary artery disease (CAD); 78 patients (54%) had no history of prior myocardial infarction. Patients were injected with approximately 300 MBq 99mTc-tetrofosmin at rest and approximately 900 MBq about 1 h later at peak exercise. Single-photon emission tomographic (SPET) acquisitions were initiated 5-30 min post injection. The results were compared with those of coronary angiography (CA). The data of 142 patients were completely evaluable (two with non-evaluable images were excluded). The quality of rest images was excellent or good in 86%, regionally problematic in 7%, poor but well interpretable in 5% and non-evaluable in 2%. The overall sensitivity for the detection of CAD was 93%, the specificity 38% and the accuracy 85%. The localization of defects by SPET in relation to the perfusion territories of stenosed vessels (≥50%) was achieved with a sensitivity of 64% for the left anterior descending artery, 49% for the left circumflex artery and 86% for the right coronary artery, and an accuracy of 71%, 72% and 73%, respectively. Concordance of SPET and CA was 62% for single-vessel disease and 68% for multivessel disease. In conclusion, this Spanish-Portuguese multicentre clinical trial confirmed, in a considerable number of patients who underwent coronary angiography, the feasibility of 99mTc-tetrofosmin (Myoview) rest/stress myocardial SPET using a very short protocol (2 h).
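The sensitivity, specificity and accuracy figures above follow from the standard 2x2 comparison against the angiographic reference; a minimal sketch (the counts in the usage example are illustrative, not the trial's data):

```python
def diagnostic_performance(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy of a test (here SPET)
    against a reference standard (here coronary angiography), from
    true/false positive and true/false negative counts."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy
```

The low specificity reported (38%) simply means few true negatives relative to false positives, a known limitation when most referred patients have disease.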
Burroni, L; Aucone, A M; Volterrani, D; Hayek, Y; Bertelli, P; Vella, A; Zappella, M; Vattimo, A
1997-06-01
Rett syndrome is a progressive neurological paediatric disorder, associated with severe mental deficiency, which affects only girls. The aim of this study was to determine whether brain blood flow abnormalities detected with 99Tcm-ethyl cysteinate dimer (99Tcm-ECD) single photon emission tomography (SPET) can explain the clinical manifestation and progression of the disease. Qualitative and quantitative global and regional brain blood flow was evaluated in 12 girls with Rett syndrome and compared with an age-matched reference group of children. In comparison with the reference group, SPET revealed a considerable global reduction in cerebral perfusion in the girls with Rett syndrome. A statistically significant difference was noted, which was more evident when comparing the control group with girls with stage IV Rett syndrome than with girls with stage III Rett syndrome. The reduction in cerebral perfusion reflects functional disturbance in the brain of children with Rett syndrome. These data confirm that 99Tcm-ECD brain SPET is sensitive in detecting hypoperfused areas in girls with Rett syndrome that may be associated with brain atrophy, even when magnetic resonance imaging appears normal.
Moralidis, Efstratios; Spyridonidis, Tryfon; Arsos, Georgios; Skeberis, Vassilios; Anagnostopoulos, Constantinos; Gavrielidis, Stavros
2010-01-01
This study aimed to determine systolic dysfunction and estimate resting left ventricular ejection fraction (LVEF) from information collected during routine evaluation of patients with suspected or known coronary heart disease. This approach was then compared to gated single photon emission tomography (SPET). Patients having undergone stress (201)Tl myocardial perfusion imaging followed by equilibrium radionuclide angiography (ERNA) were separated into derivation (n=954) and validation (n=309) groups. Logistic regression analysis was used to develop scoring systems, containing clinical, electrocardiographic (ECG) and scintigraphic data, for the discrimination of an ERNA-LVEF<0.50. Linear regression analysis provided equations predicting ERNA-LVEF from those scores. In 373 patients LVEF was also assessed with (201)Tl gated SPET. Our results showed that an ECG-scintigraphic scoring system was the best simple predictor of an ERNA-LVEF<0.50 in comparison to other models including ECG, clinical and scintigraphic variables, in both the derivation and validation subpopulations. A simple linear equation was also derived for the assessment of resting LVEF from the ECG-scintigraphic model. Equilibrium radionuclide angiography LVEF had a good correlation with the ECG-scintigraphic model LVEF (r=0.716, P<0.001), (201)Tl gated SPET LVEF (r=0.711, P<0.001) and the average LVEF from those assessments (r=0.796, P<0.001). The Bland-Altman statistic (mean+/-2SD) provided values of 0.001+/-0.176, 0.071+/-0.196 and 0.040+/-0.152, respectively. The average LVEF was a better discriminator of systolic dysfunction than gated SPET-LVEF in receiver operating characteristic (ROC) analysis, and identified more patients (89%) with a ≤10% difference from ERNA-LVEF than gated SPET (65%, P<0.001). In conclusion, resting left ventricular systolic dysfunction can be determined effectively from simple resting ECG and stress myocardial perfusion imaging variables.
This model provides reliable LVEF estimations, comparable to those from (201)Tl gated SPET, and can enhance the clinical performance of the latter.
A single camera photogrammetry system for multi-angle fast localization of EEG electrodes.
Qian, Shuo; Sheng, Yang
2011-11-01
Photogrammetry has become an effective method for determining electroencephalography (EEG) electrode positions in three dimensions (3D). Capturing multi-angle images of the electrodes on the head is a fundamental objective in the design of a photogrammetry system for EEG localization. Methods in previous studies are all based on either a rotating camera or multiple cameras, which are time-consuming or not cost-effective. This study presents a novel photogrammetry system that can acquire multi-angle head images simultaneously from a single camera position. By aligning two planar mirrors at an angle of 51.4°, seven views of the head with 25 electrodes are captured simultaneously by a digital camera placed in front of them. A complete set of algorithms for electrode recognition, matching, and 3D reconstruction is developed. The elapsed time of the whole localization procedure is about 3 min, and camera calibration computation takes about 1 min after the measurement of calibration points. The positioning accuracy, with a maximum error of 1.19 mm, is acceptable. Experimental results demonstrate that the proposed system provides a fast and cost-effective method for EEG positioning.
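As a sanity check on the geometry, the mirror angle of 51.4° is very close to 360°/7, which is exactly what yields seven simultaneous views (the direct view plus six mirror images), and each mirror image corresponds to a virtual camera obtained by reflecting the real one across a mirror plane. A minimal sketch (the plane parameters below are hypothetical, not from the paper):

```python
import numpy as np

# Two planar mirrors meeting at angle theta produce 360/theta - 1 mirror
# images; together with the direct view that gives 360/theta views in total.
theta = 51.4  # degrees, as in the paper
n_views = round(360.0 / theta)  # 6 mirror images + 1 direct view = 7

def reflect_across_plane(p, n, d):
    """Reflect point p across the plane {x : n.x = d}, n a unit normal."""
    n = np.asarray(n, dtype=float)
    n = n / np.linalg.norm(n)
    return p - 2.0 * (np.dot(p, n) - d) * n

# Hypothetical example: a camera at the origin, mirrored in the plane x = 1,
# yields a virtual camera at (2, 0, 0).
virtual_cam = reflect_across_plane(np.zeros(3), [1.0, 0.0, 0.0], 1.0)
```

Triangulating an electrode from the direct view and such virtual views is then ordinary multi-view reconstruction, which is why a single exposure suffices.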
Adjustment of multi-CCD-chip-color-camera heads
NASA Astrophysics Data System (ADS)
Guyenot, Volker; Tittelbach, Guenther; Palme, Martin
1999-09-01
The principle of beam-splitter multi-chip cameras consists in splitting an image into multiple images of different spectral ranges and distributing these onto separate black-and-white CCD sensors. The resulting electrical signals from the chips are recombined to produce a high-quality color picture on the monitor. Because this principle guarantees higher resolution and sensitivity than conventional single-chip camera heads, the greater effort is acceptable. Furthermore, multi-chip cameras obtain the complete spectral information for each individual object point, while single-chip systems must rely on interpolation. In a joint project, Fraunhofer IOF and STRACON GmbH (and, in the future, COBRA electronic GmbH) are developing methods for designing the optics and dichroic mirror system of such prism color beam-splitter devices. Techniques and equipment for the alignment and assembly of color beam-splitter multi-CCD devices, based on gluing with UV-curable adhesives, have also been developed.
Le Scao, Y; Robier, A; Baulieu, J L; Beutter, P; Pourcelot, L
1992-01-01
Brain activation procedures associated with single photon emission tomography (SPET) have recently been developed in healthy controls and diseased patients in order to help in their diagnosis and treatment. We investigated the effects of a promontory test (PT) on the cerebral distribution of technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO) in 7 profoundly deaf patients, 6 PT+ and one PT-. The count variation in the temporal lobe was calculated on 6 coronal slices using the ratio (Rstimulation-Rdeprivation)/Rdeprivation where R = counts in the temporal lobe/whole-brain count. A count increase in the temporal lobe was observed in all patients and was higher in all patients with PT+ than in the patient with PT-. The problems of head positioning and resolution of the system were taken into account, and we considered that the maximal count increment was related to the auditory cortex response to the stimulus. Further clinical investigations with high-resolution systems have to be performed in order to validate this presurgery test in cochlear implant assessment.
Calibration Procedures on Oblique Camera Setups
NASA Astrophysics Data System (ADS)
Kemper, G.; Melykuti, B.; Yu, C.
2016-06-01
Besides the creation of virtual animated 3D city models and analyses for homeland security and city planning, the accurate determination of geometric features from oblique imagery is an important task today. Due to the huge number of single images, the reduction of control points forces the use of direct referencing devices. This requires precise camera calibration and additional adjustment procedures. This paper aims to show the workflow of the various calibration steps and presents examples from the calibration flight together with the final 3D city model. In contrast to most other software, the oblique cameras are not used as co-registered sensors in relation to the nadir one; all camera images enter the AT process as single pre-oriented data. This enables a better post-calibration in order to detect variations in the single-camera calibration and other mechanical effects. The sensor shown (Oblique Imager) is based on 5 Phase One cameras, where the nadir one has 80 MPix equipped with a 50 mm lens while the oblique ones capture images with 50 MPix using 80 mm lenses. The cameras are mounted robustly inside a housing to protect them against physical and thermal deformations. The sensor head also hosts an IMU which is connected to a POS AV GNSS receiver. The sensor is stabilized by a gyro mount, which creates floating antenna-IMU lever arms. These had to be registered together with the raw GNSS-IMU data. The camera calibration procedure was performed based on a special calibration flight with 351 shots of all 5 cameras and registered GPS/IMU data. This specific mission was designed at two different altitudes with additional cross lines at each flying height. The five images from each exposure position have no overlaps, but in the block there are many overlaps, resulting in up to 200 measurements per point. On each photo there were on average 110 well-distributed measured points, which is a satisfying number for the camera calibration. 
In a first step, an initial orientation correction and radial correction were calculated with the help of the nadir camera and the GPS/IMU data. With this approach, the whole project was calculated and calibrated in one step. During the iteration process, the radial and tangential parameters were switched on individually for the camera heads; after that, the camera constants and principal point positions were checked and finally calibrated. Besides that, the boresight calibration can be performed either on the basis of the nadir camera and its offsets, or independently for each camera without correlation to the others. This must be performed in a complete mission anyway to obtain stability between the single camera heads. Determining the lever arms of the nodal points to the IMU centre needs more caution than for a single camera, especially due to the strong tilt angle. With all these previous steps prepared, the result is a highly accurate sensor that enables fully automated data extraction with rapid updates of existing data. Frequent monitoring of urban dynamics is then possible in a fully 3D environment.
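The radial and tangential terms switched on during the iteration are conventionally the Brown-Conrady distortion coefficients; a minimal sketch of applying such a model to a normalized image coordinate (the coefficient names and values are generic assumptions, not parameters from this calibration):

```python
def brown_distort(x, y, k1=0.0, k2=0.0, p1=0.0, p2=0.0):
    """Apply Brown-Conrady radial (k1, k2) and tangential (p1, p2)
    distortion to a normalized image coordinate (x, y)."""
    r2 = x * x + y * y
    radial = 1.0 + k1 * r2 + k2 * r2 * r2
    xd = x * radial + 2.0 * p1 * x * y + p2 * (r2 + 2.0 * x * x)
    yd = y * radial + p1 * (r2 + 2.0 * y * y) + 2.0 * p2 * x * y
    return xd, yd

# With all coefficients zero the mapping is the identity.
print(brown_distort(0.1, 0.2))  # (0.1, 0.2)
```

In a self-calibrating bundle adjustment these coefficients, together with the camera constant and principal point, are estimated per camera head, which is why enabling them individually per head is useful for isolating mechanical effects.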
Latha, Muniappan; Pari, Leelavinothan; Sitasawad, Sandhya; Bhonde, Ramesh
2004-01-01
Oxidative stress is implicated in the pathogenesis of diabetic complications. The experiments were performed on normal and experimental male Wistar rats treated with Scoparia dulcis plant extract (SPEt). The effect of SPEt was tested on streptozotocin (STZ)-treated rat insulinoma cells (RINm5F cells) and isolated islets in vitro. Administration of an aqueous extract of Scoparia dulcis by intragastric intubation (po) at a dose of 200 mg/kg body weight significantly decreased blood glucose and the lipid peroxidative marker thiobarbituric acid reactive substances (TBARS), with a significant increase in plasma insulin and in the activities of pancreatic superoxide dismutase (SOD), catalase (CAT), and reduced glutathione (GSH) in streptozotocin diabetic rats at the end of 15 days of treatment. SPEt at a dose of 10 µg/mL evoked a 6-fold stimulation of insulin secretion from isolated islets, indicating its insulin secretagogue activity. The extract markedly reduced STZ-induced lipid peroxidation in RINm5F cells. Further, SPEt protected against STZ-mediated cytotoxicity and nitric oxide (NO) production in RINm5F cells. Treatment of RINm5F cells with 5 mM STZ and 10 µg of SPEt completely abrogated the apoptosis induced by STZ, suggesting the involvement of oxidative stress. Flow cytometric assessment of intracellular peroxide levels using the fluorescent probe 2',7'-dichlorofluorescein diacetate (DCF-DA) confirmed that STZ (46%) induced intracellular oxidative stress in RINm5F cells, which was suppressed by SPEt (21%). In addition, SPEt reduced (33%) the STZ-induced apoptosis (72%) in RINm5F cells, indicating the mode of protection of SPEt on RINm5F cells, islets, and pancreatic beta-cell mass (histopathological observations). The present study thus confirms the antihyperglycemic effect of SPEt and also demonstrates the consistently strong antioxidant properties of Scoparia dulcis as used in traditional medicine. (c) 2004 Wiley Periodicals, Inc.
Weckesser, M; Griessmeier, M; Schmidt, D; Sonnenberg, F; Ziemons, K; Kemna, L; Holschbach, M; Langen, K; Müller-Gärtner, H
1998-02-01
Single-photon emission tomography (SPET) with the amino acid analogue L-3-[123I]iodo-alpha-methyl tyrosine (IMT) is helpful in the diagnosis and monitoring of cerebral gliomas. Radiolabelled amino acids seem to reflect tumour infiltration more specifically than conventional methods such as magnetic resonance imaging and computed tomography. Automatic tumour delineation based on maximal tumour uptake may cause an overestimation of mean tumour uptake and an underestimation of tumour extension in tumours with circumscribed peaks. The aim of this study was to develop a program for tumour delineation and calculation of mean tumour uptake which takes into account the mean background activity and is thus optimised for the problem of tumour definition in IMT SPET. Using the frequency distribution of pixel intensities in the tomograms, a program was developed which automatically detects a reference brain region and draws an isocontour region around the tumour taking into account mean brain radioactivity. Tumour area and tumour/brain ratios were calculated. A three-compartment phantom was simulated to test the program. The program was applied to IMT SPET studies of 20 patients with cerebral gliomas and compared to the results of manual analysis by three different investigators. Activity ratios and chamber extension of the phantom were correctly calculated by the automatic analysis. A method based on image maxima alone failed to determine chamber extension correctly. Manual region of interest analysis in patient studies resulted in a mean inter-observer standard deviation of 8.7% +/- 6.1% (range 2.7%-25.0%). The mean value of the results of the manual analysis showed a significant correlation with the results of the automatic analysis (r = 0.91, P<0.0001 for the uptake ratio; r = 0.87, P<0.0001 for the tumour area). We conclude that the algorithm proposed simplifies the calculation of uptake ratios and may be used for observer-independent evaluation of IMT SPET studies. 
Three-dimensional tumour recognition and transfer to co-registered morphological images based on this program may be useful for the planning of surgical and radiation treatment.
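The background-aware isocontour delineation described above might be sketched as follows; the thresholding rule (a fixed multiple of mean brain background) and all values are assumptions for illustration, not necessarily the paper's exact criterion:

```python
import numpy as np

def isocontour_roi(image, brain_mask, fraction=1.5):
    """Tumour ROI as all pixels above a threshold tied to mean brain
    background activity; `fraction` is an assumed illustrative parameter."""
    background = image[brain_mask].mean()
    roi = image > fraction * background
    uptake_ratio = image[roi].mean() / background
    return roi, uptake_ratio, int(roi.sum())  # area in pixels

# Hypothetical 16x16 slice: uniform background of 10 with a 3x3 "tumour" of 30.
img = np.full((16, 16), 10.0)
img[4:7, 4:7] = 30.0
roi, ratio, area_px = isocontour_roi(img, np.ones_like(img, dtype=bool))
print(area_px)  # 9
```

Anchoring the threshold to mean background rather than to the image maximum is the point made in the abstract: a single hot peak inflates a maximum-based threshold and shrinks the delineated extent, while the background mean is stable.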
Clausner, Tommy; Dalal, Sarang S; Crespo-García, Maité
2017-01-01
The performance of EEG source reconstruction has benefited from the increasing use of advanced head modeling techniques that take advantage of MRI together with the precise positions of the recording electrodes. The prevailing technique for registering EEG electrode coordinates involves electromagnetic digitization. However, the procedure adds several minutes to experiment preparation and typical digitizers may not be accurate enough for optimal source reconstruction performance (Dalal et al., 2014). Here, we present a rapid, accurate, and cost-effective alternative method to register EEG electrode positions, using a single digital SLR camera, photogrammetry software, and computer vision techniques implemented in our open-source toolbox, janus3D . Our approach uses photogrammetry to construct 3D models from multiple photographs of the participant's head wearing the EEG electrode cap. Electrodes are detected automatically or semi-automatically using a template. The rigid facial features from these photo-based models are then surface-matched to MRI-based head reconstructions to facilitate coregistration to MRI space. This method yields a final electrode coregistration error of 0.8 mm, while a standard technique using an electromagnetic digitizer yielded an error of 6.1 mm. The technique furthermore reduces preparation time, and could be extended to a multi-camera array, which would make the procedure virtually instantaneous. In addition to EEG, the technique could likewise capture the position of the fiducial markers used in magnetoencephalography systems to register head position.
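The surface-matching step that coregisters the photo-based head model to the MRI reconstruction is, at its core, a rigid point-set alignment; a minimal Kabsch-style sketch over matched points (janus3D's actual matching over facial surfaces is more involved):

```python
import numpy as np

def rigid_align(source, target):
    """Least-squares rigid alignment (Kabsch) of matched 3D point sets,
    returning R, t such that R @ s + t approximates the target points."""
    sc, tc = source.mean(axis=0), target.mean(axis=0)
    H = (source - sc).T @ (target - tc)          # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against reflections so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = tc - R @ sc
    return R, t

# Hypothetical check: a 90-degree rotation about z plus a shift is recovered.
rng = np.random.default_rng(0)
pts = rng.normal(size=(20, 3))
Rz = np.array([[0.0, -1.0, 0.0], [1.0, 0.0, 0.0], [0.0, 0.0, 1.0]])
R, t = rigid_align(pts, pts @ Rz.T + np.array([1.0, 2.0, 3.0]))
```

In practice the correspondences are unknown, so such an alignment is iterated inside an ICP-style loop after an initial match on rigid facial features.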
EEG, PET, SPET and MRI in intractable childhood epilepsies: possible surgical correlations.
Fois, A; Farnetani, M A; Balestri, P; Buoni, S; Di Cosmo, G; Vattimo, A; Guazzelli, M; Guzzardi, R; Salvadori, P A
1995-12-01
Magnetic resonance imaging (MRI), single photon emission tomography (SPET), and positron emission tomography (PET) using [18F]fluorodeoxyglucose were used in combination with scalp and scalp-video EEGs in a group of 30 pediatric patients with drug-resistant epilepsy (DRE) in order to identify patients who could benefit from a neurosurgical approach. Seizures were classified according to the consensus criteria of the International League Against Epilepsy. In three patients infantile spasms (IS) were diagnosed; 13 subjects were affected by different types of generalized seizures, associated with complex partial seizures (CPS) in three. In the other 14 patients partial seizures, either simple (SPS) or complex, were present. A localized abnormality was demonstrated in one patient with IS and in three patients with generalized seizures. Of the group of 14 subjects with CPS, MRI and CT were normal in 7, but SPET or PET indicated focal hypoperfusion or hypometabolism concordant with the localization of the EEG abnormalities. In 5 of the other 7 patients anatomical and functional imaging and EEG findings were concordant for a localized abnormality. It can be concluded that functional imaging combined with scalp EEGs appears to be superior to the use of CT and MRI alone for selecting children with epilepsy in whom a surgical approach can be considered, in particular when therapy-resistant CPS are present.
Latha, Muniappan; Pari, Leelavinothan; Sitasawad, Sandhya; Bhonde, Ramesh
2004-09-03
Scoparia dulcis (Sweet Broomweed) has been documented as a traditional treatment for diabetes. The administration of an aqueous extract of Scoparia dulcis at a dose of 200 mg/kg body weight significantly decreased blood glucose, with a significant increase in plasma insulin level, in streptozotocin diabetic rats at the end of 15 days of treatment. The insulin secretagogue action of Scoparia dulcis plant extract (SPEt) was further investigated using isolated pancreatic islets from mice. SPEt at a dose of 10 µg/mL evoked a 6.0-fold stimulation of insulin secretion from isolated islets, indicating its insulin secretagogue activity. In addition, the effect of SPEt on streptozotocin-induced cell death and nitric oxide (NO) production, measured as nitrite, was also examined. SPEt protected against streptozotocin-mediated cytotoxicity (88%) and NO production in a rat insulinoma cell line (RINm5F). The above results suggest that the glucose-lowering effect of SPEt is associated with potentiation of insulin release from pancreatic islets. Our results reveal the possible therapeutic value of Scoparia dulcis for the better control, management and prevention of diabetes mellitus progression.
Tranfaglia, Cristina; Palumbo, Barbara; Siepi, Donatella; Sinzinger, Helmut; Parnetti, Lucilla
2009-01-01
Different perfusion defects reflect the neurological damage characteristic of different kinds of dementia. Our aim was to investigate the role of brain single photon emission tomography (SPET) with semiquantitative analysis of Brodmann areas in dementia, using technetium-99m hexamethylpropylene amine oxime ((99m)Tc-HMPAO) brain SPET in patients with Alzheimer's disease (AD), frontotemporal dementia (FTD) and mild cognitive impairment (MCI). We studied 75 patients: 25 with AD (NINCDS-ADRDA criteria), 25 with FTD (Lund and Manchester criteria), and 25 with MCI (EADC criteria). After i.v. injection of 740 MBq of (99m)Tc-HMPAO, each patient underwent brain SPET. A software application was used that maps the SPET brain image to a stereotaxic atlas (Talairach), providing an affine co-registration by blocks of data defined in Talairach space. A normal database was built by calculating, voxel by voxel, the mean and standard deviation of the measured values. Functional SPET data of 3D regions of interest (ROI) from predefined Brodmann area templates were compared with those of a database of healthy subjects of the same age and gender. Mean values obtained in the Brodmann area ROIs in the different groups of patients were evaluated. Our results showed that different Brodmann areas were significantly impaired in the different categories of dementia subjects. Both areas 37 (temporal gyrus) and 39 (angular gyrus) of AD patients (mean+/-SD: 37L= -1.6+/-1.0; 37R= -1.5+/-1.1; 39L= -2.3+/-1.3; 39R= -1.9+/-1.2) showed significant hypoperfusion (P<0.05) versus MCI (37L= -0.9+/-0.7; 37R= -1.1+/-0.9; 39L= -1.4+/-1.1; 39R= -1.6+/-1.6) and FTD (37L= -1.1+/-0.8; 37R= -1.0+/-0.9; 39L= -1.4+/-1.0; 39R= -1.2+/-1.2) subjects. AD patients also showed significantly (P<0.01) decreased perfusion in area 40 (supramarginal gyrus) (40L= -2.6+/-1.0; 40R= -2.3+/-1.1) with respect to MCI patients (40L= -1.8+/-0.9; 40R= -1.7+/-1.2). 
Finally, FTD patients showed significant hypoperfusion (P<0.05) in both areas 47 (frontal association cortex) (47L= -1.8+/-0.8; 47R= -1.1+/-0.8) in comparison with MCI subjects (47L= -1.2+/-0.9; 47R= -0.9+/-0.9). In conclusion, our results suggest that semiquantitative analysis of single Brodmann areas identifies frontal hypoperfusion in FTD patients as well as parietal and temporal impairment in AD patients. In MCI patients, no hypoperfusion pattern was identified.
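The semiquantitative Brodmann-area values reported above (e.g. 37L = -1.6+/-1.0) are z-scores of patient ROI values against the voxel-wise normal database; a minimal sketch with hypothetical perfusion values:

```python
import numpy as np

def brodmann_z_scores(patient_means, normal_mean, normal_sd):
    """ROI perfusion expressed as z-scores against a normal database
    (values here are hypothetical, not the study's data)."""
    return (patient_means - normal_mean) / normal_sd

# A ROI value of 78 against a normal mean of 90 with SD 7.5 scores -1.6.
z = brodmann_z_scores(np.array([78.0]), np.array([90.0]), np.array([7.5]))
print(z)  # [-1.6]
```

Expressing each area as a z-score is what allows the group means and SDs of different Brodmann areas, and of different patient groups, to be compared on a common scale.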
A data acquisition system for coincidence imaging using a conventional dual head gamma camera
NASA Astrophysics Data System (ADS)
Lewellen, T. K.; Miyaoka, R. S.; Jansen, F.; Kaplan, M. S.
1997-06-01
A low cost data acquisition system (DAS) was developed to acquire coincidence data from an unmodified General Electric Maxxus dual head scintillation camera. A high impedance pick-off circuit provides position and energy signals to the DAS without interfering with normal camera operation. The signals are pulse-clipped to reduce pileup effects. Coincidence is determined with fast timing signals derived from constant fraction discriminators. A charge-integrating FERA 16 channel ADC feeds position and energy data to two CAMAC FERA memories operated as ping-pong buffers. A Macintosh PowerPC running Labview controls the system and reads the CAMAC memories. A CAMAC 12-channel scaler records singles and coincidence rate data. The system dead-time is approximately 10% at a coincidence rate of 4.0 kHz.
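Coincidence determination from the two heads' timing signals amounts to pairing events whose timestamps fall within a coincidence window; a simple software sketch of that logic (the actual system does this in hardware with constant fraction discriminators, so this is only an illustration):

```python
def find_coincidences(t1, t2, window):
    """Pair events from two detector heads whose sorted timestamps differ
    by at most `window` (same time units), using a two-pointer merge."""
    pairs, i, j = [], 0, 0
    while i < len(t1) and j < len(t2):
        dt = t1[i] - t2[j]
        if abs(dt) <= window:
            pairs.append((i, j))
            i += 1
            j += 1
        elif dt < 0:
            i += 1  # head-1 event too early to match; advance it
        else:
            j += 1  # head-2 event too early to match; advance it
    return pairs

# Hypothetical timestamps (microseconds) with a 0.01 us window.
events_a = [0.0, 10.0, 20.5, 31.0]
events_b = [0.002, 15.0, 20.504, 40.0]
print(find_coincidences(events_a, events_b, 0.01))  # [(0, 0), (2, 2)]
```

Unpaired events are the singles the scaler counts; the ratio of coincidences to singles is what drives the dead-time behaviour quoted for the DAS.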
Sciammarella, M G; Fragasso, G; Gerundini, P; Maffioli, L; Cappelletti, A; Margonato, A; Savi, A; Chierchia, S
1992-12-01
The ability of 99Tcm-methoxyisobutylisonitrile (MIBI) single photon emission tomography (SPET) to detect myocardial ischaemia and necrosis was assessed in 56 patients (45 male, 11 female, aged 55 +/- 5 years), with clinically recognized ischaemic heart disease (IHD). All underwent coronary angiography (CA) and left ventriculography (LV). SPET images were obtained at rest and at peak exercise (Modified Bruce) 90 min after injection of 99Tcm-MIBI (650-850 MBq). Data were acquired in 30 min over 180 degrees (from 45 degrees RAO to 45 degrees LPO) with no correction for attenuation, using a 64 x 64 matrix. The presence of persistent (P) or reversible (R) perfusion defects (PD) was then correlated to the resting and exercise ECG and to the results of CA and LV. Of the 56 patients, 34 had reversible underperfusion (RPD), 46 persistent underperfusion (PPD) and 31 had both. The occurrence of RPD correlated well with the occurrence of exercise-induced ST segment depression and/or angina (27 patients of 34 patients, 79%) and with the presence of significant coronary artery disease (CAD) (33 of 44, 73%). In 45 of 46 patients (98%) PPD corresponded to akinetic or severely hypokinetic segments (LV) usually explored by ECG leads exhibiting diagnostic Q waves (42 of 46 patients, 91%). The scan was normal both at rest and after stress in four of 11 patients with no CAD, and in two of 45 patients with CAD. Finally, an abnormal resting scan was seen in seven of 11 patients with normal coronary arteries, of whom six had regional wall motion abnormalities.(ABSTRACT TRUNCATED AT 250 WORDS)
NASA Astrophysics Data System (ADS)
Kittle, David S.; Patil, Chirag G.; Mamelak, Adam; Hansen, Stacey; Perry, Jeff; Ishak, Laura; Black, Keith L.; Butte, Pramod V.
2016-03-01
Current surgical microscopes are limited in sensitivity for NIR fluorescence. Recent developments in tumor markers attached with NIR dyes require newer, more sensitive imaging systems with high resolution to guide surgical resection. We report on a small, single camera solution enabling advanced image processing opportunities previously unavailable for ultra-high sensitivity imaging of these agents. The system captures both visible reflectance and NIR fluorescence at 300 fps while displaying full HD resolution video at 60 fps. The camera head has been designed to easily mount onto the Zeiss Pentero microscope head for seamless integration into surgical procedures.
Reprographics Career Ladder AFSC 703X0.
1981-07-01
[Fragmentary OCR of equipment-usage tables; recoverable items: lineup and register tables, binding machines, fluorescent lamps, wet process platemakers, electric staplers, manual paper cutters, electrostatic copiers/platemakers, single head drills, padding racks, platemaking cameras, station collators, saddle stitchers.]
The Surgeon's View: Comparison of Two Digital Video Recording Systems in Veterinary Surgery.
Giusto, Gessica; Caramello, Vittorio; Comino, Francesco; Gandini, Marco
2015-01-01
Video recording and photography during surgical procedures are useful in veterinary medicine for several reasons, including legal, educational, and archival purposes. Many systems are available, such as hand cameras, light-mounted cameras, and head cameras. We chose a reasonably priced head camera that is among the smallest video cameras available. To best describe its possible uses and advantages, we recorded video and images of eight different surgical cases and procedures, both in hospital and field settings. All procedures were recorded both with a head-mounted camera and a commercial hand-held photo camera. Then sixteen volunteers (eight senior clinicians and eight final-year students) completed an evaluation questionnaire. Both cameras produced high-quality photographs and videos, but observers rated the head camera significantly better regarding point of view and their understanding of the surgical operation. The head camera was considered significantly more useful in teaching surgical procedures. Interestingly, senior clinicians tended to assign generally lower scores compared to students. The head camera we tested is an effective, easy-to-use tool for recording surgeries and various veterinary procedures in all situations, with no need for assistance from a dedicated operator. It can be a valuable aid for veterinarians working in all fields of the profession and a useful tool for veterinary surgical education.
Vacuum compatible miniature CCD camera head
Conder, Alan D.
2000-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
ERIC Educational Resources Information Center
Smith, Linda B.; Yu, Chen; Yoshida, Hanako; Fausey, Caitlin M.
2015-01-01
Head-mounted video cameras (with and without an eye camera to track gaze direction) are being increasingly used to study infants' and young children's visual environments and provide new and often unexpected insights about the visual world from a child's point of view. The challenge in using head cameras is principally conceptual and concerns the…
Printed circuit board for a CCD camera head
Conder, Alan D.
2002-01-01
A charge-coupled device (CCD) camera head which can replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating x-rays, such as within a target chamber where laser-produced plasmas are studied. The camera head is small, versatile, and capable of operating both in and out of a vacuum environment. The CCD camera head uses PC boards with an internal heat sink connected to the chassis for heat dissipation, which allows for close (0.04", for example) stacking of the PC boards. Integration of this CCD camera head into existing instrumentation provides a substantial enhancement of diagnostic capabilities for studying high energy density plasmas, for a variety of military, industrial, and medical imaging applications.
Pari, Leelavinothan; Latha, Muniappan; Rao, Chippada Appa
2004-01-01
We investigated the insulin-receptor-binding effect of Scoparia dulcis plant extract in streptozotocin (STZ)-induced diabetic male Wistar rats, using circulating erythrocytes (ER) as a model system. An aqueous extract of S. dulcis plant (SPEt) (200 mg/kg body weight) was administered orally. We measured blood levels of glucose and plasma insulin and the binding of insulin to cell-membrane ER receptors. Glibenclamide was used as the standard reference drug. The mean specific binding of insulin to ER was significantly lower in diabetic control rats (DC) (55.0 +/- 2.8%) than in SPEt-treated (70.0 +/- 3.5%) and glibenclamide-treated (65.0 +/- 3.3%) diabetic rats, resulting in a significant decrease in plasma insulin. Scatchard plot analysis demonstrated that the decrease in insulin binding was accounted for by a lower number of insulin receptor sites per cell in DC rats when compared with SPEt- and glibenclamide-treated rats. High-affinity (Kd1), low-affinity (Kd2), and kinetic analysis revealed an increase in the average receptor affinity in ER from SPEt- and glibenclamide-treated diabetic rats, having 2.5 +/- 0.15 x 10(-10) M(-1) (Kd1); 17.0 +/- 1.0 x 10(-8) M(-1) (Kd2), and 2.0 +/- 0.1 x 10(-10) M(-1) (Kd1); 12.3 +/- 0.9 x 10(-8) M(-1) (Kd2) compared with 1.0 +/- 0.08 x 10(-10) M(-1) (Kd1); 2.7 +/- 0.25 x 10(-8) M(-1) (Kd2) in DC rats. The results suggest an acute alteration in the number of insulin receptors on ER membranes in STZ-induced diabetic rats. Treatment with SPEt and glibenclamide significantly improved specific insulin binding, with receptor number and affinity binding (p < 0.001) reaching almost normal non-diabetic levels. The data presented here show that SPEt and glibenclamide increase total ER membrane insulin binding sites with a concomitant significant increase in plasma insulin.
Light-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system has utility for use in various fields, such as telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a single video camera and a directional light source such as a laser mounted on a camera platform, and a remotely positioned operator. In one embodiment, the position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. The laser is offset vertically and horizontally from the camera, and the laser/camera platform is directed by the user to point the laser and the camera toward a target device. The image produced by the video camera is processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. A reference point is defined at a point in the video frame, which may be located outside of the image area of the camera. The disparity between the digital image of the laser spot and the reference point is calculated for use in a ranging analysis to determine range to the target.
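The image processing described, differencing a pre-illumination frame against the laser-on frame and converting the spot's disparity from a reference point into range, can be sketched as follows (the pinhole-triangulation form and all values are illustrative, not taken from the patent):

```python
import numpy as np

def laser_spot_centroid(frame_before, frame_with_laser, threshold=50):
    """Isolate the laser spot by differencing frames taken with the laser
    off and on, then return the centroid of the remaining bright pixels."""
    diff = frame_with_laser.astype(float) - frame_before.astype(float)
    ys, xs = np.nonzero(diff > threshold)
    return xs.mean(), ys.mean()

def range_from_disparity(disparity_px, baseline_m, focal_px):
    """Triangulated range for a laser offset from the camera by baseline_m,
    under a simple pinhole model: range = baseline * focal / disparity."""
    return baseline_m * focal_px / disparity_px

before = np.zeros((8, 8), dtype=np.uint8)
after = before.copy()
after[3, 5] = 255  # synthetic laser spot at column 5, row 3
cx, cy = laser_spot_centroid(before, after)
print(cx, cy)  # 5.0 3.0
```

The frame differencing is what removes "all background images except for the spot"; any static scene content cancels, leaving only pixels that changed when the laser fired.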
Comparison of Cyberware PX and PS 3D human head scanners
NASA Astrophysics Data System (ADS)
Carson, Jeremy; Corner, Brian D.; Crockett, Eric; Li, Peng; Paquette, Steven
2008-02-01
A common limitation of laser-line three-dimensional (3D) scanners is the inability to scan objects with surfaces that are either parallel to the laser line or that self-occlude. Filling in missing areas adds unwanted inaccuracy to the 3D model. Capturing the human head with a Cyberware PS head scanner is an example of obtaining a model where the incomplete areas are difficult to fill accurately. The PS scanner uses a single vertical laser line to illuminate the head and is unable to capture data at the top of the head, where the line of sight is tangent to the surface, and under the chin, an area occluded by the chin when the subject looks straight forward. The Cyberware PX scanner was developed to obtain this missing 3D head data. The PX scanner uses two cameras offset at different angles to provide a more detailed head scan that captures surfaces missed by the PS scanner. The PX scanner cameras also use new technology to obtain color maps of higher resolution than those of the PS scanner. The two scanners were compared in terms of the amount of surface captured (surface area and volume) and the quality of head measurements relative to direct measurements obtained through standard anthropometry methods. Relative to the PS scanner, the PX head scans were more complete and provided the full set of head measurements, but actual measurement values, when available from both scanners, were about the same.
Wide-Field-of-View, High-Resolution, Stereoscopic Imager
NASA Technical Reports Server (NTRS)
Prechtl, Eric F.; Sedwick, Raymond J.
2010-01-01
A device combines video feeds from multiple cameras to provide wide-field-of-view, high-resolution, stereoscopic video to the user. The prototype under development consists of two camera assemblies, one for each eye. One of these assemblies incorporates a mounting structure with multiple cameras attached at offset angles. The video signals from the cameras are fed to a central processing platform, where each frame is color processed and mapped into a single contiguous wide-field-of-view image. Because the resolution of most display devices is typically smaller than the processed map, a cropped portion of the video feed is output to the display device. The positioning of the cropped window will likely be controlled through a head tracking device, allowing the user to turn his or her head side to side or up and down to view different portions of the captured image. There are multiple options for the display of the stereoscopic image. The use of head-mounted displays is one likely implementation; 3D projection technologies are another option under consideration. The technology can be adapted in a multitude of ways. The computing platform is scalable, such that the number, resolution, and sensitivity of the cameras can be leveraged to improve image resolution and field of view. Miniaturization efforts can be pursued to shrink the package for better mobility. Power-saving studies can be performed to enable unattended remote sensing packages. Image compression and transmission technologies can be incorporated to enable an improved telepresence experience.
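The head-tracked cropping step described in this abstract can be sketched as a simple mapping from pan/tilt angles to a window in the stitched panorama. This is an illustrative sketch, not the authors' implementation; the field-of-view parameters and clamping policy are assumptions.

```python
def crop_window(pan_deg, tilt_deg, pano_w, pano_h, view_w, view_h,
                hfov_deg=360.0, vfov_deg=90.0):
    """Map head pan/tilt angles to a crop rectangle inside a stitched
    panorama. Returns (left, top) of the display window, clamped so
    the crop stays inside the panorama."""
    # Horizontal position wraps around the full panorama width.
    cx = (pan_deg % hfov_deg) / hfov_deg * pano_w
    # Vertical position: tilt 0 maps to the panorama's vertical center.
    cy = (0.5 + tilt_deg / vfov_deg) * pano_h
    left = int(round(cx - view_w / 2))
    top = int(round(cy - view_h / 2))
    left = max(0, min(left, pano_w - view_w))
    top = max(0, min(top, pano_h - view_h))
    return left, top
```

Each video frame then simply slices `panorama[top:top+view_h, left:left+view_w]` for the display device.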
Virtual performer: single camera 3D measuring system for interaction in virtual space
NASA Astrophysics Data System (ADS)
Sakamoto, Kunio; Taneji, Shoto
2006-10-01
The authors developed interaction media systems in 3D virtual space. In these systems, a musician virtually plays an instrument such as a theremin in the virtual space, or a performer stages a show using a virtual character such as a puppet. This interactive virtual media system consists of image capture, measurement of the performer's position, detection and recognition of motions, and synthesis of the video image on a personal computer. In this paper, we propose some applications of interaction media systems: a virtual musical instrument and a superimposed CG character. Moreover, this paper describes the method of measuring the positions of the performer, his/her head, and both eyes using a single camera.
Intraocular camera for retinal prostheses: Refractive and diffractive lens systems
NASA Astrophysics Data System (ADS)
Hauer, Michelle Christine
The focus of this thesis is on the design and analysis of refractive, diffractive, and hybrid refractive/diffractive lens systems for a miniaturized camera that can be surgically implanted in the crystalline lens sac and is designed to work in conjunction with current and future generation retinal prostheses. The development of such an intraocular camera (IOC) would eliminate the need for an external head-mounted or eyeglass-mounted camera. Placing the camera inside the eye would allow subjects to use their natural eye movements for foveation (attention) instead of more cumbersome head tracking, would notably aid in personal navigation and mobility, and would also be significantly more psychologically appealing from the standpoint of personal appearances. The capability for accommodation with no moving parts or feedback control is incorporated by employing camera designs that exhibit nearly infinite depth of field. Such an ultracompact optical imaging system requires a unique combination of refractive and diffractive optical elements and relaxed system constraints derived from human psychophysics. This configuration necessitates an extremely compact, short focal-length lens system with an f-number close to unity. Initially, these constraints appear highly aggressive from an optical design perspective. However, after careful analysis of the unique imaging requirements of a camera intended to work in conjunction with the relatively low pixellation levels of a retinal microstimulator array, it becomes clear that such a design is not only feasible, but could possibly be implemented with a single lens system.
Single camera photogrammetry system for EEG electrode identification and localization.
Baysal, Uğur; Sengül, Gökhan
2010-04-01
In this study, photogrammetric coordinate measurement and color-based identification of EEG electrode positions on the human head are implemented simultaneously. A rotating 2 MP digital camera positioned about 20 cm above the subject's head is used, and images are acquired at predefined stop points separated azimuthally by equal angular displacements. To realize full automation, the electrodes are labeled with colored circular markers and an electrode recognition algorithm has been developed. The proposed method has been tested using a plastic head phantom carrying 25 electrode markers. Electrode locations were determined by three different methods: (i) the proposed photogrammetric method, (ii) a conventional 3D radiofrequency (RF) digitizer, and (iii) a coordinate measurement machine with about 6.5 μm accuracy. It is found that the proposed system automatically identifies electrodes and localizes them with a maximum error of 0.77 mm. It is suggested that this method may be used in EEG source localization applications in the human brain.
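The color-based recognition step can be illustrated with a nearest-color classifier over detected marker colors. This is a hedged sketch only; the palette, the electrode labels, and the Euclidean RGB metric are assumptions, not the authors' algorithm.

```python
import numpy as np

# Hypothetical reference palette: electrode label -> mean RGB of its
# circular marker. Labels and colors are illustrative only.
PALETTE = {
    "Fp1": (220, 40, 40),   # red marker
    "Fp2": (40, 200, 40),   # green marker
    "Cz":  (40, 60, 220),   # blue marker
}

def identify_electrode(marker_rgb, palette=PALETTE):
    """Assign a detected marker to the electrode whose reference color
    is nearest in RGB space (Euclidean distance)."""
    rgb = np.asarray(marker_rgb, dtype=float)
    return min(palette,
               key=lambda k: np.linalg.norm(rgb - np.asarray(palette[k],
                                                             dtype=float)))
```

Once every marker in every camera stop-point image carries a consistent label, corresponding rays can be intersected photogrammetrically to yield 3D electrode coordinates.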
Keyboard before Head Tracking Depresses User Success in Remote Camera Control
NASA Astrophysics Data System (ADS)
Zhu, Dingyun; Gedeon, Tom; Taylor, Ken
In remote mining, operators of complex machinery have more tasks or devices to control than they have hands. For example, operating a rock breaker requires two-handed joystick control to position and fire the jackhammer, leaving camera control either to automation or to the operator switching between controls. We modelled such a teleoperated setting by performing experiments using a simple physical game analogue: a half-size table soccer game with two handles. The complex camera angles of the mining application were modelled by obscuring the direct view of the play area and using a pan-tilt-zoom (PTZ) camera. The camera was controlled either by a keyboard or by head tracking, using two different sets of head gestures called “head motion” and “head flicking” for turning camera motion on/off. Our results show that head motion control provided performance comparable to using a keyboard, while head flicking was significantly worse. In addition, the sequence in which the three control methods were used is highly significant. It appears that use of the keyboard first depresses successful use of the head tracking methods, with significantly better results when one of the head tracking methods was used first. Analysis of the qualitative survey data supports that the worst-performing method was disliked by participants. Surprisingly, using that worst method first significantly enhanced performance with the other two control methods.
Potency of a novel saw palmetto ethanol extract, SPET-085, for inhibition of 5alpha-reductase II.
Pais, Pilar
2010-08-01
The nicotinamide adenine dinucleotide phosphate (NADPH)-dependent membrane protein 5alpha-reductase irreversibly catalyses the conversion of testosterone to the most potent androgen, 5alpha-dihydrotestosterone (DHT). In humans, two 5alpha-reductase isoenzymes are expressed: type I and type II. Type II is found primarily in prostate tissue. Saw palmetto extract (SPE) has been widely used for the treatment of lower urinary tract symptoms secondary to benign prostatic hyperplasia (BPH). The mechanisms of the pharmacological effects of SPE include the inhibition of 5alpha-reductase, among other actions. Clinical studies of SPE have been equivocal, with some showing significant results and others not. These inconsistent results may be due, in part, to varying bioactivities of the SPEs used in the studies. The aim of the present study was to determine the in vitro potency of a novel saw palmetto ethanol extract (SPET-085) as an inhibitor of the 5alpha-reductase isoenzyme type II in a cell-free test system. On the basis of the enzymatic conversion of the substrate androstenedione to the 5alpha-reduced product 5alpha-androstanedione, the inhibitory potency was measured and compared with that of finasteride, an approved 5alpha-reductase inhibitor. SPET-085 concentration-dependently inhibited 5alpha-reductase type II in vitro (IC50 = 2.88 ± 0.45 µg/mL). Finasteride, tested as a positive control, led to 61% inhibition of 5alpha-reductase type II. SPET-085 effectively inhibits the enzyme that has been linked to BPH, and the amount of extract required for activity is very low compared with data reported for other extracts. It can be concluded from data in the literature that SPET-085 is as effective as a hexane extract of saw palmetto that exhibited the highest levels of bioactivity, and is more effective than other SPEs tested.
This study confirmed that SPET-085 has prostate health-promoting bioactivity that also corresponds favorably to that reported for the established prescription drug standard of therapy, finasteride.
Real-time Awake Animal Motion Tracking System for SPECT Imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Goddard Jr, James Samuel; Baba, Justin S; Lee, Seung Joon
Enhancements have been made in the development of a real-time optical pose measurement and tracking system that provides 3D position and orientation data for a single photon emission computed tomography (SPECT) imaging system for awake, unanesthetized, unrestrained small animals. Three optical cameras with infrared (IR) illumination view the head movements of an animal enclosed in a transparent burrow. Markers placed on the head provide landmark points for image segmentation. Strobed IR LEDs are synchronized to the cameras and illuminate the markers to prevent motion blur in each set of images. Using the three cameras, the system automatically segments the markers, detects missing data, rejects false reflections, performs trinocular marker correspondence, and calculates the 3D pose of the animal's head. Improvements have been made in methods for segmentation, tracking, and 3D calculation to give higher speed and more accurate measurements during a scan. The optical hardware has been installed within a Siemens MicroCAT II small animal scanner at Johns Hopkins without requiring functional changes to the scanner operation. The system has undergone testing using both phantoms and live mice and has been characterized in terms of speed, accuracy, robustness, and reliability. Experimental data showing these motion tracking results are given.
NASA Astrophysics Data System (ADS)
Minamoto, Masahiko; Matsunaga, Katsuya
1999-05-01
Operator performance while using a remote-controlled backhoe shovel is described for three different stereoscopic viewing conditions: direct view, fixed stereoscopic cameras connected to a helmet-mounted display (HMD), and a rotating stereo camera slaved to the head orientation of a freely moving stereo HMD. Results showed that the head-slaved system provided the best performance.
Comparison of three different techniques for camera and motion control of a teleoperated robot.
Doisy, Guillaume; Ronen, Adi; Edan, Yael
2017-01-01
This research aims to evaluate new methods for robot motion control and camera orientation control through the operator's head orientation in robot teleoperation tasks. Specifically, the use of head tracking in a non-invasive way, without immersive virtual reality devices, was combined and compared with classical control modes for robot movements and camera control. Three control conditions were tested: 1) classical joystick control of both the movements of the robot and the robot camera; 2) robot movements controlled by a joystick and the robot camera controlled by the user's head orientation; and 3) robot movements controlled by hand gestures and the robot camera controlled by the user's head orientation. Performance and workload metrics, and their evolution as the participants gained experience with the system, were evaluated in a series of experiments: for each participant, the metrics were recorded during four successive similar trials. Results show that the concept of robot camera control by user head orientation has the potential to improve the intuitiveness of robot teleoperation interfaces, specifically for novice users. However, more development is needed to reach a margin of progression comparable to a classical joystick interface. Copyright © 2016 Elsevier Ltd. All rights reserved.
Antidiabetic effect of Scoparia dulcis: effect on lipid peroxidation in streptozotocin diabetes.
Pari, L; Latha, M
2005-03-01
Oxidative damage has been suggested to be a contributory factor in the development and complications of diabetes. The antioxidant effect of an aqueous extract of Scoparia dulcis, an indigenous plant used in Ayurvedic medicine in India, was studied in rats with streptozotocin-induced diabetes. Oral administration of Scoparia dulcis plant extract (SPEt) (200 mg/kg body weight) for 3 weeks resulted in a significant reduction in blood glucose and an increase in plasma insulin. The aqueous extract also decreased free radical formation in the tissues studied (liver and kidney). The decrease in thiobarbituric acid reactive substances (TBARS) and hydroperoxides (HPX) and the increase in the activities of superoxide dismutase (SOD), catalase (CAT), glutathione peroxidase (GPx), reduced glutathione (GSH) and glutathione-S-transferase (GST) clearly show the antioxidant properties of SPEt in addition to its antidiabetic effect. The effect of SPEt at 200 mg/kg body weight was better than that of glibenclamide, a reference drug.
Geometric Calibration and Validation of Ultracam Aerial Sensors
NASA Astrophysics Data System (ADS)
Gruber, Michael; Schachinger, Bernhard; Muick, Marc; Neuner, Christian; Tschemmernegg, Helfried
2016-03-01
We present details of the calibration and validation procedure for UltraCam aerial camera systems. Results from laboratory calibration and from validation flights are presented for both the large-format nadir cameras and the oblique cameras. Thus in this contribution we show results from the UltraCam Eagle and the UltraCam Falcon, both nadir mapping cameras, and the UltraCam Osprey, our oblique camera system. This sensor offers a mapping-grade nadir component together with four oblique camera heads. The geometric processing after the flight mission is covered by the UltraMap software product, so we present details of the workflow as well. The first part is the initial post-processing, which combines image information with camera parameters derived from the laboratory calibration. The second part, the traditional automated aerial triangulation (AAT), is the step from single images to blocks and enables an additional optimization process. We also present some special features of our software, which are designed to better support the operator in analyzing large blocks of aerial images and judging the quality of the photogrammetric set-up.
Head-coupled remote stereoscopic camera system for telepresence applications
NASA Astrophysics Data System (ADS)
Bolas, Mark T.; Fisher, Scott S.
1990-09-01
The Virtual Environment Workstation Project (VIEW) at NASA's Ames Research Center has developed a remotely controlled stereoscopic camera system that can be used for telepresence research and as a tool to develop and evaluate configurations for head-coupled visual systems associated with space station telerobots and remote manipulation robotic arms. The prototype camera system consists of two lightweight CCD video cameras mounted on a computer controlled platform that provides real-time pan, tilt, and roll control of the camera system in coordination with head position transmitted from the user. This paper provides an overall system description focused on the design and implementation of the camera and platform hardware configuration and the development of control software. Results of preliminary performance evaluations are reported with emphasis on engineering and mechanical design issues and discussion of related psychophysiological effects and objectives.
Studying Upper-Limb Amputee Prosthesis Use to Inform Device Design
2015-10-01
the study. This equipment has included a modified GoPro head-mounted camera and a Vicon 13-camera optical motion capture system, which was not part...also completed for relevant members of the study team. 4. The head-mounted camera setup has been established (a modified GoPro Hero 3 with external
360 deg Camera Head for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Townsend, Julie A.; Kulczycki, Eric A.; Willson, Reginald G.; Huntsberger, Terrance L.; Garrett, Michael S.; Trebi-Ollennu, Ashitey; Bergh, Charles F.
2012-01-01
The 360 camera head consists of a set of six color cameras arranged in a circular pattern such that their overlapping fields of view give a full 360 view of the immediate surroundings. The cameras are enclosed in a watertight container along with support electronics and a power distribution system. Each camera views the world through a watertight porthole. To prevent overheating or condensation in extreme weather conditions, the watertight container is also equipped with an electrical cooling unit and a pair of internal fans for circulation.
Wang, Yan; Ma, Guangkai; An, Le; Shi, Feng; Zhang, Pei; Lalush, David S.; Wu, Xi; Pu, Yifei; Zhou, Jiliu; Shen, Dinggang
2017-01-01
Objective To obtain high-quality positron emission tomography (PET) image with low-dose tracer injection, this study attempts to predict the standard-dose PET (S-PET) image from both its low-dose PET (L-PET) counterpart and corresponding magnetic resonance imaging (MRI). Methods It was achieved by patch-based sparse representation (SR), using the training samples with a complete set of MRI, L-PET and S-PET modalities for dictionary construction. However, the number of training samples with complete modalities is often limited. In practice, many samples generally have incomplete modalities (i.e., with one or two missing modalities) that thus cannot be used in the prediction process. In light of this, we develop a semi-supervised tripled dictionary learning (SSTDL) method for S-PET image prediction, which can utilize not only the samples with complete modalities (called complete samples) but also the samples with incomplete modalities (called incomplete samples), to take advantage of the large number of available training samples and thus further improve the prediction performance. Results Validation was done on a real human brain dataset consisting of 18 subjects, and the results show that our method is superior to the SR and other baseline methods. Conclusion This work proposed a new S-PET prediction method, which can significantly improve the PET image quality with low-dose injection. Significance The proposed method is favorable in clinical application since it can decrease the potential radiation risk for patients. PMID:27187939
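The coupled-dictionary idea behind this prediction method can be illustrated compactly. The sketch below is a simplification under stated assumptions: ridge-regularized least squares stands in for the paper's true sparse (and semi-supervised tripled) coding step, and all names and dimensions are hypothetical.

```python
import numpy as np

def predict_spet_patch(x, D_in, D_spet, lam=0.1):
    """Predict a standard-dose PET (S-PET) patch from an input feature
    vector x (e.g., a concatenated L-PET + MRI patch).

    D_in and D_spet are coupled dictionaries whose columns are
    corresponding training patches, so one coding vector describes a
    patch in both modalities. For clarity this sketch solves a
    ridge-regularized least-squares code instead of the L1-sparse
    code used in the paper.
    """
    k = D_in.shape[1]
    # alpha = argmin_a ||x - D_in a||^2 + lam ||a||^2
    alpha = np.linalg.solve(D_in.T @ D_in + lam * np.eye(k), D_in.T @ x)
    # The same code reconstructs the patch in the S-PET dictionary.
    return D_spet @ alpha
```

The semi-supervised element of the paper (exploiting incomplete samples in dictionary learning) is outside the scope of this sketch.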
Structure and Activities of Nuclear Medicine in Kuwait.
Elgazzar, Abdelhamid H; Owunwanne, Azuwuike; Alenezi, Saud
2016-07-01
The practice of nuclear medicine in Kuwait began in 1965 as a clinic for treating thyroid diseases. The practice developed gradually until 1981, when the Faculty of Medicine established the Division of Nuclear Medicine in the Department of Radiology; the division later became a separate department responsible for establishing and managing the practice in all hospitals of Kuwait. In 1987, a nuclear medicine residency program was begun; it is administered by the Kuwait Institute for Medical Specializations, originally as a 4-year and currently as a 5-year program. Currently there are 11 departments in Ministry of Health hospitals staffed by 49 qualified attending physicians, mostly diplomates of the Kuwait Institute for Medical Specializations nuclear medicine residency program, 4 academic physicians, 2 radiopharmacists, 2 physicists, and 130 technologists. These departments are equipped with 33 dual-head gamma cameras, 10 SPET/CT units, 5 PET/CT units, 2 cyclotrons, 1 breast-specific gamma imaging unit, 1 positron-emission mammography unit, 10 thyroid uptake units, 8 technegas machines, 7 PET infusion systems, and 8 treadmills. Activities of nuclear medicine in Kuwait include education and training, clinical service, and research. Education includes a nuclear medicine technology program in the Faculty of Allied Health Sciences, the 5-year residency program, medical school teaching distributed among different modules of the integrated curriculum with 14 didactic lectures and other teaching sessions, and a nuclear medicine MSc program that runs concurrently with the first part of the residency program. The nuclear medicine team in Kuwait has been active in research and has published more than 300 papers, 11 review articles, 12 book chapters, and 17 books, in addition to 36 grants and 2 patents. A PhD program approved by the Kuwait University Council is scheduled to begin in 2016. Copyright © 2016 Elsevier Inc. All rights reserved.
Polito, Ennio; Burroni, Luca; Pichierri, Patrizia; Loffredo, Antonio; Vattimo, Angelo G
2005-12-01
To evaluate technetium Tc 99m (99mTc) red blood cell scintigraphy as a diagnostic tool for orbital cavernous hemangioma and to differentiate between orbital masses on the basis of their vascularization. We performed 99mTc red blood cell scintigraphy on 23 patients (8 female and 15 male; mean age, 47 years) affected by an orbital mass previously revealed by computed tomography (CT) and magnetic resonance imaging (MRI) and suggestive of cavernous hemangioma. In our diagnosis, we considered the increased delayed orbital uptake with the typical scintigraphic pattern known as perfusion-blood pool mismatch. The patients underwent biopsy or surgical treatment with transconjunctival cryosurgical extraction when possible. Single-photon emission tomography (SPET) showed intense focal uptake in the orbit corresponding to radiologic findings in 11 patients who underwent surgical treatment and pathologic evaluation (9 cavernous hemangiomas, 1 hemangiopericytoma, and 1 lymphangioma). Clinical or histologic examination of the remaining 12 patients revealed the presence of 5 lymphoid pseudotumors, 2 lymphomas, 2 pleomorphic adenomas of the lacrimal gland, 1 astrocytoma, 1 ophthalmic vein thrombosis, and 1 orbital varix. The confirmation of the preoperative diagnosis by 99mTc red blood cell scintigraphy shows that this technique is a reliable tool for differentiating cavernous hemangiomas from other orbital masses (sensitivity, 100%; specificity, 86%) when ultrasound, CT, and MRI are not diagnostic. However, 99mTc red blood cell scintigraphy results were positive in 1 patient with hemangiopericytoma and 1 patient with lymphangioma, which showed increased uptake in the lesion on SPET images because of the vascular nature of these tumors. Therefore, in these cases, the SPET images have to be integrated with data from clinical preoperative evaluation and CT or MRI studies.
On the basis of our study, a complete diagnostic picture, CT scans or MRI studies, and scintigraphic patterns can establish the preoperative diagnosis of vascular orbital tumors such as cavernous hemangioma, adult-type lymphangioma, and hemangiopericytoma.
NASA Astrophysics Data System (ADS)
Kersten, T. P.; Stallmann, D.; Tschirschwitz, F.
2016-06-01
For mapping building interiors, various 2D and 3D indoor surveying systems are available today. These systems essentially differ from each other in price and accuracy, as well as in the effort required for fieldwork and post-processing. The Laboratory for Photogrammetry & Laser Scanning of HafenCity University (HCU) Hamburg has developed, as part of an industrial project, a low-cost indoor mapping system that enables systematic inventory mapping of interior facilities with low staffing requirements and reduced, measurable expenditure of time and effort. The modelling and evaluation of the recorded data take place later in the office. The indoor mapping system of HCU Hamburg consists of the following components: laser range finder, panorama head (pan-tilt unit), and single-board computer (Raspberry Pi) with digital camera and battery power supply. The camera is pre-calibrated in a photogrammetric test field under laboratory conditions; remaining systematic image errors are corrected during generation of the panorama image. For cost reasons the camera and laser range finder are not coaxially arranged on the panorama head. Therefore, the eccentricity and alignment of the laser range finder with respect to the camera must be determined in a system calibration. To verify the system accuracy and the system calibration, the laser points were compared with measurements from total stations. The differences from the reference were 4-5 mm for individual coordinates.
A 3D camera for improved facial recognition
NASA Astrophysics Data System (ADS)
Lewin, Andrew; Orchard, David A.; Scott, Andrew M.; Walton, Nicholas A.; Austin, Jim
2004-12-01
We describe a camera capable of recording 3D images of objects. It does this by projecting thousands of spots onto an object and then measuring the range to each spot by determining the parallax from a single frame. A second frame can be captured to record a conventional image, which can then be projected onto the surface mesh to form a rendered skin. The camera is able to locate the images of the spots to a precision of better than one tenth of a pixel, and from this it can determine range to an accuracy of less than 1 mm at 1 meter. The data can be recorded as a set of two images and reconstructed by forming a 'wire mesh' of range points and morphing the 2D image over this structure. The camera can be used to record images of faces and reconstruct the shape of the face, which allows the face to be viewed from various angles. This allows images to be more critically inspected for the purpose of identifying individuals. Multiple images can be stitched together to create full panoramic images of head-sized objects that can be viewed from any direction. The system is being tested with a graph matching system capable of fast and accurate shape comparisons for facial recognition. It can also be used with "models" of heads and faces to provide a means of obtaining biometric data.
High-performance dual-speed CCD camera system for scientific imaging
NASA Astrophysics Data System (ADS)
Simpson, Raymond W.
1996-03-01
Traditionally, scientific camera systems were partitioned into a 'camera head' containing the CCD and its support circuitry and a camera controller, which provided analog-to-digital conversion, timing, control, computer interfacing, and power. A new, unitized high-performance scientific CCD camera with dual-speed readout at 1 × 10^6 or 5 × 10^6 pixels per second, 12-bit digital gray scale, high-performance thermoelectric cooling, and built-in composite video output is described. This camera provides all digital, analog, and cooling functions in a single compact unit. The new system incorporates the A/D converter, timing, control, and computer interfacing in the camera, with the power supply remaining a separate remote unit. A 100 Mbyte/second serial link transfers data over copper or fiber media to a variety of host computers, including Sun, SGI, SCSI, PCI, EISA, and Apple Macintosh platforms. Having all the digital and analog functions in the camera made it possible to modify this system for the Woods Hole Oceanographic Institution for use on a remotely controlled submersible vehicle. The oceanographic version achieves 16-bit dynamic range at 1.5 × 10^5 pixels/second, can be operated at depths of 3 kilometers, and transfers data to the surface via a real-time fiber optic link.
Constrained space camera assembly
Heckendorn, Frank M.; Anderson, Erin K.; Robinson, Casandra W.; Haynes, Harriet B.
1999-01-01
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras.
A single camera roentgen stereophotogrammetry method for static displacement analysis.
Gussekloo, S W; Janssen, B A; George Vosselman, M; Bout, R G
2000-06-01
A new method to quantify motion or deformation of bony structures has been developed, since quantification is often difficult due to overlying tissue and the currently used roentgen stereophotogrammetry method requires significant investment. In our method, a single stationary roentgen source is used, as opposed to the usual two, which, in combination with a fixed radiogram cassette holder, forms a camera with constant interior orientation. By rotating the experimental object, it is possible to achieve a sufficient angle between the various viewing directions, enabling photogrammetric calculations. The photogrammetric procedure was performed on digitised radiograms and involved template matching to increase accuracy. Coordinates of spherical markers in the head of a bird (Rhea americana) were calculated with an accuracy of 0.12 mm. When these coordinates were used in a deformation analysis, relocations of about 0.5 mm could be accurately determined.
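The photogrammetric calculation behind this setup reduces, per marker, to two-view triangulation: rotating the object in front of a fixed roentgen source is geometrically equivalent to viewing it from two camera poses. A minimal linear (DLT) triangulation sketch, with assumed projection matrices rather than the authors' calibration, might look like:

```python
import numpy as np

def triangulate(P1, P2, uv1, uv2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2 are 3x4 projection matrices; rotating the object by a known
    angle while the source stays fixed is modelled as two camera poses.
    uv1, uv2 are the measured image points of the same marker.
    Returns the 3D point in non-homogeneous coordinates.
    """
    u1, v1 = uv1
    u2, v2 = uv2
    # Each view contributes two homogeneous linear constraints on X.
    A = np.vstack([
        u1 * P1[2] - P1[0],
        v1 * P1[2] - P1[1],
        u2 * P2[2] - P2[0],
        v2 * P2[2] - P2[1],
    ])
    # Least-squares solution of A X = 0: smallest right singular vector.
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]
```

With noise-free projections the marker position is recovered exactly; with template-matched image measurements the residual of the least-squares solution indicates measurement quality.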
Retinal fundus imaging with a plenoptic sensor
NASA Astrophysics Data System (ADS)
Thurin, Brice; Bloch, Edward; Nousias, Sotiris; Ourselin, Sebastien; Keane, Pearse; Bergeles, Christos
2018-02-01
Vitreoretinal surgery is moving towards 3D visualization of the surgical field. This requires an acquisition system capable of recording such 3D information. We propose a proof-of-concept imaging system based on a light-field camera, in which an array of micro-lenses is placed in front of a conventional sensor. With a single snapshot, a stack of images focused at different depths is produced on the fly, which provides enhanced depth perception for the surgeon. Difficulty in depth localization of features and frequent focus changes during surgery make current vitreoretinal heads-up surgical imaging systems cumbersome to use. To improve depth perception and eliminate the need to manually refocus on the instruments during surgery, we designed and implemented a proof-of-concept ophthalmoscope equipped with a commercial light-field camera. The sensor of our camera is composed of an array of micro-lenses that projects an array of overlapping micro-images. We show that with a single light-field snapshot we can digitally refocus between the retina and a tool located in front of the retina, or display an extended depth-of-field image in which everything is in focus. The design and system performance of the plenoptic fundus camera are detailed. We conclude by showing in vivo data recorded with our device.
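Digital refocusing of a light-field snapshot is commonly implemented as shift-and-add over the sub-aperture views extracted from the micro-lens array. The sketch below is a schematic illustration of that principle, not the commercial camera's pipeline; the integer shifts and parameter names are assumptions.

```python
import numpy as np

def refocus(subaperture_stack, offsets, alpha):
    """Shift-and-add refocusing of a light field.

    subaperture_stack: array (N, H, W) of sub-aperture views extracted
    from the micro-lens array. offsets: list of N (dy, dx) positions of
    each view relative to the central one. alpha scales the applied
    shift and thereby selects the refocus depth: views are shifted by
    alpha * offset and averaged, so features at the matching disparity
    align (sharp) while others smear (defocus blur).
    """
    n, h, w = subaperture_stack.shape
    out = np.zeros((h, w), dtype=float)
    for img, (dy, dx) in zip(subaperture_stack, offsets):
        sy = int(round(alpha * dy))
        sx = int(round(alpha * dx))
        out += np.roll(np.roll(img, sy, axis=0), sx, axis=1)
    return out / n
```

Sweeping `alpha` produces the focal stack mentioned in the abstract; taking a per-pixel sharpness maximum across the stack yields an extended depth-of-field image.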
Design and implementation of a remote UAV-based mobile health monitoring system
NASA Astrophysics Data System (ADS)
Li, Songwei; Wan, Yan; Fu, Shengli; Liu, Mushuang; Wu, H. Felix
2017-04-01
Unmanned aerial vehicles (UAVs) play increasing roles in structural health monitoring. With growing mobility in modern Internet-of-Things (IoT) applications, the health monitoring of mobile structures is becoming an emerging application. In this paper, we develop a UAV-carried vision-based monitoring system that allows a UAV to continuously track and monitor a mobile infrastructure and transmit the monitoring information back in real time from a remote location. The monitoring system uses a simple UAV-mounted camera and requires only a single feature located on the mobile infrastructure for target detection and tracking. This computation-effective single-feature tracking solution improves on existing vision-based leader-follower tracking systems, which either perform poorly when restricted to a single feature or achieve better performance only at the cost of using multiple features. In addition, a UAV-carried aerial networking infrastructure using directional antennas is used to enable robust real-time transmission of monitoring video streams over a long distance. Automatic heading control self-aligns the headings of the directional antennas to maintain robust communication while the platforms are in motion. Compared to existing omnidirectional communication systems, the directional communication solution significantly increases the operation range of remote monitoring systems. In this paper, we develop an integrated modeling framework for the camera and mobile platforms, design the tracking algorithm, develop a testbed of UAVs and mobile platforms, and evaluate system performance through both simulation studies and field tests.
Improved head-controlled TV system produces high-quality remote image
NASA Technical Reports Server (NTRS)
Goertz, R.; Lindberg, J.; Mingesz, D.; Potts, C.
1967-01-01
A manipulator operator uses an improved-resolution TV camera/monitor positioning system to view the remote handling and processing of reactive, flammable, explosive, or contaminated materials. The pan and tilt motions of the camera and monitor are slaved to follow the corresponding motions of the operator's head.
Regional cerebral blood flow in childhood autism: a SPET study with SPM evaluation.
Burroni, Luca; Orsi, Alessandra; Monti, Lucia; Hayek, Youssef; Rocchi, Raffaele; Vattimo, Angelo G
2008-02-01
To establish a link between rCBF assessed with Tc-ECD SPET and the clinical manifestations of the disease. We performed the study on 11 patients (five girls and six boys; mean age 11.2 years) displaying autistic behaviour and compared their data with those of an age-matched reference group of eight normal children. A quantitative analysis of rCBF was performed by calculating a perfusion index (PI) and an asymmetry index (AI) in each lobe. Images were analysed with statistical parametric mapping software, following spatial normalization of the SPET images to a standard brain. A statistically significant (P=0.003) global reduction of CBF was found in the group of autistic children (PI=1.07+/-0.07) compared with the reference group (PI=1.25+/-0.12). Moreover, a significant difference was also observed in the right-to-left asymmetry of hemispheric perfusion between the control group and the autistic patients (P=0.0085), with a greater right-side prevalence in autistic (2.90+/-1.68) than in normal children (1.12+/-0.49). Our data show a significant decrease of global cerebral perfusion in autistic children in comparison with their normal counterparts and the existence of left-hemispheric dysfunction, especially in the temporo-parietal areas devoted to language and the comprehension of music and sounds. We suggest that these abnormal areas are related to the cognitive impairment observed in autistic children, such as language deficits, impairment of cognitive development and object representation, and abnormal perception of and responses to sensory stimuli. Tc-ECD SPET appears sensitive in revealing brain blood flow alterations and left-to-right asymmetries when neuroradiological patterns are normal.
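The abstract does not give the exact formulas behind its perfusion index and asymmetry index. The sketch below shows one common definition of a lobe-wise asymmetry index used in perfusion SPET work, included purely for illustration; it should not be read as this study's formula.

```python
import numpy as np

def asymmetry_index(left, right):
    """Percentage asymmetry between paired left/right regional counts.

    NOTE: a commonly used definition (absolute difference normalised by
    the mean of both sides), assumed here for illustration only.
    """
    left, right = np.asarray(left, float), np.asarray(right, float)
    return 100.0 * np.abs(right - left) / ((right + left) / 2.0)
```

For example, mean ROI counts of 100 on the left and 110 on the right give an asymmetry of about 9.5%.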
Automatic Calibration Method for Driver’s Head Orientation in Natural Driving Environment
Fu, Xianping; Guan, Xiao; Peli, Eli; Liu, Hongbo; Luo, Gang
2013-01-01
Gaze tracking is crucial for studying driver attention, detecting fatigue, and improving driver assistance systems, but it is difficult in natural driving environments because of nonuniform and highly variable illumination and large head movements. Traditional calibrations that require subjects to follow calibration targets are very cumbersome to implement in daily driving situations. A new automatic calibration method is presented in this paper, based on a single camera for determining head orientation, which uses the side mirrors, the rear-view mirror, the instrument panel, and different zones of the windshield as calibration points. Supported by a self-learning algorithm, the system tracks the head and categorizes the head pose into 12 gaze zones based on facial features. A particle filter is used to estimate the head pose and obtain an accurate gaze zone by updating the calibration parameters. Experimental results show that, after several hours of driving, the automatic calibration method can achieve the same accuracy as a manual calibration method without the driver's cooperation. The mean error of estimated eye gaze was less than 5° in day and night driving. PMID:24639620
Constrained space camera assembly
Heckendorn, F.M.; Anderson, E.K.; Robinson, C.W.; Haynes, H.B.
1999-05-11
A constrained space camera assembly which is intended to be lowered through a hole into a tank, a borehole or another cavity is disclosed. The assembly includes a generally cylindrical chamber comprising a head and a body and a wiring-carrying conduit extending from the chamber. Means are included in the chamber for rotating the body about the head without breaking an airtight seal formed therebetween. The assembly may be pressurized and accompanied with a pressure sensing means for sensing if a breach has occurred in the assembly. In one embodiment, two cameras, separated from their respective lenses, are installed on a mounting apparatus disposed in the chamber. The mounting apparatus includes means allowing both longitudinal and lateral movement of the cameras. Moving the cameras longitudinally focuses the cameras, and moving the cameras laterally away from one another effectively converges the cameras so that close objects can be viewed. The assembly further includes means for moving lenses of different magnification forward of the cameras. 17 figs.
Stegmayr, Armin; Fessl, Benjamin; Hörtnagl, Richard; Marcadella, Michael; Perkhofer, Susanne
2013-08-01
The aim of the study was to assess the potential negative impact of cellular phones and digitally enhanced cordless telecommunication (DECT) devices on the quality of static and dynamic scintigraphy to avoid repeated testing in infant and teenage patients to protect them from unnecessary radiation exposure. The assessment was conducted by performing phantom measurements under real conditions. A functional renal-phantom acting as a pair of kidneys in dynamic scans was created. Data were collected using the setup of cellular phones and DECT phones placed in different positions in relation to a camera head to test the potential interference of cellular phones and DECT phones with the cameras. Cellular phones reproducibly interfered with the oldest type of gamma camera, which, because of its single-head specification, is the device most often used for renal examinations. Curves indicating the renal function were considerably disrupted; cellular phones as well as DECT phones showed a disturbance concerning static acquisition. Variable electromagnetic tolerance in different types of γ-cameras could be identified. Moreover, a straightforward, low-cost method of testing the susceptibility of equipment to interference caused by cellular phones and DECT phones was generated. Even though some departments use newer models of γ-cameras, which are less susceptible to electromagnetic interference, we recommend testing examination rooms to avoid any interference caused by cellular phones. The potential electromagnetic interference should be taken into account when the purchase of new sensitive medical equipment is being considered, not least because the technology of mobile communication is developing fast, which also means that different standards of wave bands will be issued in the future.
Tracking a Head-Mounted Display in a Room-Sized Environment with Head-Mounted Cameras
1990-04-01
...poor resolution and a very limited working volume [Wan90]. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. ... [Nor88] Northern Digital, trade literature on OPTOTRAK, Northern Digital's three-dimensional optical motion tracking and analysis system.
Extrinsic Calibration of Camera Networks Based on Pedestrians
Guan, Junzhi; Deboeverie, Francis; Slembrouck, Maarten; Van Haerenborgh, Dirk; Van Cauwelaert, Dimitri; Veelaert, Peter; Philips, Wilfried
2016-01-01
In this paper, we propose a novel extrinsic calibration method for camera networks by analyzing tracks of pedestrians. First of all, we extract the center lines of walking persons by detecting their heads and feet in the camera images. We propose an easy and accurate method to estimate the 3D positions of the head and feet w.r.t. a local camera coordinate system from these center lines. We also propose a RANSAC-based orthogonal Procrustes approach to compute relative extrinsic parameters connecting the coordinate systems of cameras in a pairwise fashion. Finally, we refine the extrinsic calibration matrices using a method that minimizes the reprojection error. While existing state-of-the-art calibration methods explore epipolar geometry and use image positions directly, the proposed method first computes 3D positions per camera and then fuses the data. This results in simpler computations and a more flexible and accurate calibration method. Another advantage of our method is that it can also handle the case of persons walking along straight lines, which cannot be handled by most of the existing state-of-the-art calibration methods since all head and feet positions are co-planar. This situation often happens in real life. PMID:27171080
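The pairwise extrinsic step the authors describe is an orthogonal Procrustes alignment of corresponding 3D head and feet positions expressed in two camera coordinate systems. A minimal NumPy sketch of that core alignment (the Kabsch solution, without the RANSAC outlier loop or the reprojection-error refinement) might look like this; names and the demo data are illustrative.

```python
import numpy as np

def procrustes_rigid(P, Q):
    """Least-squares rigid transform (R, t) such that Q ≈ P @ R.T + t.

    P, Q: (N, 3) corresponding 3-D points, e.g. head and feet positions of
    the same pedestrian expressed in two different camera coordinate systems.
    """
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)          # cross-covariance of centred points
    U, _, Vt = np.linalg.svd(H)
    # reflection guard keeps R a proper rotation (det = +1)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# demo: recover a known rotation about z plus a translation
th = 0.5
Rz = np.array([[np.cos(th), -np.sin(th), 0.0],
               [np.sin(th),  np.cos(th), 0.0],
               [0.0, 0.0, 1.0]])
P = np.random.default_rng(1).normal(size=(8, 3))
Q = P @ Rz.T + np.array([0.3, -0.2, 1.0])
R, t = procrustes_rigid(P, Q)
```

In the paper's setting, `P` and `Q` would be the estimated 3D head/feet points per camera, and a RANSAC loop would call this solver on random subsets to reject badly tracked frames.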
Nair, Akshay Gopinathan; Kamal, Saurabh; Dave, Tarjani Vivek; Mishra, Kapil; Reddy, Harsha S; Della Rocca, David; Della Rocca, Robert C; Andron, Aleza; Jain, Vandana
2015-10-01
To study the utility of a commercially available small, portable ultra-high definition (HD) camera (GoPro Hero 4) for intraoperative recording. A head mount was used to fix the camera on the operating surgeon's head. Due care was taken to protect the patient's identity. The recorded video was subsequently edited and used as a teaching tool. This retrospective, noncomparative study was conducted at three tertiary eye care centers. The surgeries recorded were ptosis correction, ectropion correction, dacryocystorhinostomy, angular dermoid excision, enucleation, blepharoplasty and lid tear repair surgery (one each). The recorded videos were reviewed, edited, and checked for clarity, resolution, and reproducibility. The recorded videos were found to be high quality, which allowed for zooming and visualization of the surgical anatomy clearly. Minimal distortion is a drawback that can be effectively addressed during postproduction. The camera, owing to its lightweight and small size, can be mounted on the surgeon's head, thus offering a unique surgeon point-of-view. In our experience, the results were of good quality and reproducible. A head-mounted ultra-HD video recording system is a cheap, high quality, and unobtrusive technique to record surgery and can be a useful teaching tool in external facial and ophthalmic plastic surgery.
Tanaka, Hirokazu; Chikamori, Taishiro; Hida, Satoshi; Uchida, Kenji; Igarashi, Yuko; Yokoyama, Tsuyoshi; Takahashi, Masaki; Shiba, Chie; Yoshimura, Mana; Tokuuye, Koichi; Yamashina, Akira
2013-01-01
Cadmium-zinc-telluride (CZT) solid-state detectors have been recently introduced into the field of myocardial perfusion imaging. The aim of this study was to prospectively compare the diagnostic performance of the CZT high-speed gamma camera (Discovery NM 530c) with that of the standard 3-head gamma camera in the same group of patients. The study group consisted of 150 consecutive patients who underwent a 1-day stress-rest (99m)Tc-sestamibi or tetrofosmin imaging protocol. Image acquisition was performed first on a standard gamma camera with a 15-min scan time each for stress and for rest. All scans were immediately repeated on a CZT camera with a 5-min scan time for stress and a 3-min scan time for rest, using list mode. The correlations between the CZT camera and the standard camera for perfusion and function analyses were strong within narrow Bland-Altman limits of agreement. Using list mode analysis, image quality for stress was rated as good or excellent in 97% of the 3-min scans, and in 100% of the ≥4-min scans. For CZT scans at rest, similarly, image quality was rated as good or excellent in 94% of the 1-min scans, and in 100% of the ≥2-min scans. The novel CZT camera provides excellent image quality, which is equivalent to standard myocardial single-photon emission computed tomography, despite a short scan time of less than half of the standard time.
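The agreement analysis mentioned above can be reproduced with a standard Bland-Altman computation. The sketch below shows the usual bias and 95% limits-of-agreement formula; the example numbers are invented for the demo, not study data.

```python
import numpy as np

def bland_altman_limits(a, b):
    """Bias and 95% limits of agreement between paired measurements.

    a, b: the same quantity measured by two devices, e.g. a perfusion or
    function parameter from the CZT camera vs. the standard 3-head camera.
    """
    d = np.asarray(a, float) - np.asarray(b, float)
    bias = d.mean()
    sd = d.std(ddof=1)                      # sample standard deviation
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# demo with invented paired values
bias, limits = bland_altman_limits([50, 55, 60, 65], [49, 54, 61, 64])
```

"Narrow limits" in the abstract's sense means the interval `limits` is small relative to the clinically relevant range of the parameter.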
NASA Astrophysics Data System (ADS)
Blackford, Ethan B.; Estepp, Justin R.
2015-03-01
Non-contact, imaging photoplethysmography uses cameras to facilitate measurements including pulse rate, pulse rate variability, respiration rate, and blood perfusion by measuring characteristic changes in light absorption at the skin's surface resulting from changes in blood volume in the superficial microvasculature. Several factors may affect the accuracy of the physiological measurement, including imager frame rate, resolution, compression, lighting conditions, image background, participant skin tone, and participant motion. Before this method can gain wider use outside basic research settings, its constraints and capabilities must be well understood. Recently, we presented a novel approach utilizing a synchronized, nine-camera, semicircular array backed by measurement of an electrocardiogram and fingertip reflectance photoplethysmogram. Twenty-five individuals participated in six five-minute, controlled head motion artifact trials in front of a black and a dynamic color backdrop. Increasing the input channel space for blind source separation using the camera array was effective in mitigating error from head motion artifact. Herein we present the effects of lower frame rates at 60 and 30 (reduced from 120) frames per second and reduced image resolution at 329x246 pixels (one-quarter of the original 658x492-pixel resolution) using bilinear and zero-order downsampling. This is the first time these factors have been examined for a multiple-imager array, and the results align well with previous findings utilizing a single imager. Examining windowed pulse rates, there is little observable difference in mean absolute error or error distributions resulting from reduced frame rates or image resolution, thus lowering the requirements for systems measuring pulse rate over time windows of sufficient length.
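The two resolution-reduction schemes compared here can be sketched as follows. `box_downsample` uses a block average as a simple stand-in for bilinear resampling at integer factors; the authors' exact interpolation kernel is not specified, so treat this as an illustrative assumption.

```python
import numpy as np

def zero_order_downsample(img, f=2):
    """Zero-order (nearest-neighbour) decimation: keep every f-th pixel."""
    return img[::f, ::f]

def box_downsample(img, f=2):
    """Average each f x f block; a simple stand-in for bilinear
    downsampling at integer reduction factors."""
    h = img.shape[0] - img.shape[0] % f
    w = img.shape[1] - img.shape[1] % f
    return img[:h, :w].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

# demo on a tiny 4x4 ramp image
small = np.arange(16.0).reshape(4, 4)
nn = zero_order_downsample(small)   # picks pixels 0, 2, 8, 10
bl = box_downsample(small)          # 2x2 block means
```

Zero-order decimation discards three of every four pixels, while the block average pools them, which is why the two schemes can behave differently under noise even at the same output resolution.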
Rotation and direction judgment from visual images head-slaved in two and three degrees-of-freedom.
Adelstein, B D; Ellis, S R
2000-03-01
The contribution to spatial awareness of adding a roll degree-of-freedom (DOF) to telepresence camera platform yaw and pitch was examined in an experiment in which subjects judged the direction and rotation of stationary target markers in a remote scene. Subjects viewed the scene via head-slaved camera images in a head-mounted display. Elimination of the roll DOF affected rotation judgment, but only at extreme yaw and pitch combinations, and did not affect azimuth and elevation judgment. Systematic azimuth overshoot occurred regardless of roll condition. The observed rotation misjudgments are explained by kinematic models for eye-head direction of gaze.
Computing camera heading: A study
NASA Astrophysics Data System (ADS)
Zhang, John Jiaxiang
2000-08-01
An accurate estimate of the motion of a camera is a crucial first step for the 3D reconstruction of sites, objects, and buildings from video. Solutions to the camera heading problem can be readily applied to many areas, such as robotic navigation, surgical operation, video special effects, multimedia, and lately even Internet commerce. From image sequences of a real-world scene, the problem is to calculate the directions of the camera translations. The presence of rotations makes this problem very hard, because rotations and translations can have similar effects on the images and are thus hard to tell apart. However, the visual angles between the projection rays of point pairs are unaffected by rotations, and their changes over time contain sufficient information to determine the direction of camera translation. We developed a new formulation of the visual angle disparity approach, first introduced by Tomasi, to the camera heading problem. Our new derivation makes theoretical analysis possible. Most notably, a theorem is obtained that locates all possible singularities of the residual function for the underlying optimization problem. This allows all computation trouble spots to be identified beforehand and reliable, accurate computational optimization methods to be designed. A bootstrap-jackknife resampling method simultaneously reduces complexity and tolerates outliers well. Experiments with image sequences show accurate results when compared with the true camera motion as measured with mechanical devices.
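The key invariance the method exploits, that the visual angle between two projection rays is unchanged by a camera rotation but not by a translation, can be demonstrated in a few lines of NumPy. The specific rays and motions below are arbitrary illustrative values, not data from the paper.

```python
import numpy as np

def ray_angle(r1, r2):
    """Visual angle between two projection rays from the camera centre."""
    c = r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2))
    return np.arccos(np.clip(c, -1.0, 1.0))

# rays from the camera centre to two scene points
r1 = np.array([1.0, 0.0, 5.0])
r2 = np.array([0.0, 1.0, 4.0])

# rotating the camera rotates both rays by the same R: the angle is invariant
th = 0.7
R = np.array([[np.cos(th), -np.sin(th), 0.0],
              [np.sin(th),  np.cos(th), 0.0],
              [0.0, 0.0, 1.0]])
a_before = ray_angle(r1, r2)
a_rotated = ray_angle(R @ r1, R @ r2)

# translating the camera by t changes the rays non-uniformly: the angle moves
t = np.array([0.5, 0.0, 0.0])
a_translated = ray_angle(r1 - t, r2 - t)
```

It is exactly this rotation-invariant residual, tracked over time, that lets the heading (translation direction) be estimated without first solving for the rotation.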
Testbed for remote telepresence research
NASA Astrophysics Data System (ADS)
Adnan, Sarmad; Cheatham, John B., Jr.
1992-11-01
Teleoperated robots offer solutions to problems associated with operations in remote and unknown environments, such as space. Teleoperated robots can perform tasks related to inspection, maintenance, and retrieval. A video camera can be used to provide some assistance in teleoperations, but for fine manipulation and control, a telepresence system that gives the operator a sense of actually being at the remote location is more desirable. A telepresence system comprised of a head-tracking stereo camera system, a kinematically redundant arm, and an omnidirectional mobile robot has been developed at the mechanical engineering department at Rice University. This paper describes the design and implementation of this system, its control hardware, and software. The mobile omnidirectional robot has three independent degrees of freedom that permit independent control of translation and rotation, thereby simulating a free flying robot in a plane. The kinematically redundant robot arm has eight degrees of freedom that assist in obstacle and singularity avoidance. The on-board control computers permit control of the robot from the dual hand controllers via a radio modem system. A head-mounted display system provides the user with a stereo view from a pair of cameras attached to the mobile robotics system. The head tracking camera system moves stereo cameras mounted on a three degree of freedom platform to coordinate with the operator's head movements. This telepresence system provides a framework for research in remote telepresence, and teleoperations for space.
The development of automated behavior analysis software
NASA Astrophysics Data System (ADS)
Jaana, Yuki; Prima, Oky Dicky A.; Imabuchi, Takashi; Ito, Hisayoshi; Hosogoe, Kumiko
2015-03-01
The measurement of behavior for participants in a conversation scene involves verbal and nonverbal communication. Measurement validity may vary between observers because of factors such as human error, poorly designed measurement systems, and inadequate observer training. Although some systems have been introduced in previous studies to measure these behaviors automatically, they prevent participants from talking in a natural way. In this study, we propose a software application to automatically analyze behaviors of the participants, including utterances, facial expressions (happy or neutral), head nods, and poses, using only a single omnidirectional camera. The camera is small enough to be embedded into a table, allowing participants to have a spontaneous conversation. The proposed software uses facial feature tracking based on a constrained local model to observe changes in the facial features captured by the camera, and the Japanese female facial expression database to recognize expressions. Our experimental results show significant correlations between measurements made by human observers and by the software.
Instrumentation for Infrared Airglow Clutter.
1987-03-10
...gain, and filter position to the camera head, and monitors these parameters as well as preamp video. GAZER is equipped with a Lenzar wide angle, low... Video sensor specifications: Lenzar Intensicon-8 LLLTV using a 2nd-generation micro-channel intensifier and a proprietary camera tube.
CMOS Imaging Sensor Technology for Aerial Mapping Cameras
NASA Astrophysics Data System (ADS)
Neumann, Klaus; Welzenbach, Martin; Timm, Martin
2016-06-01
In June 2015 Leica Geosystems launched the first large-format aerial mapping camera using CMOS sensor technology, the Leica DMC III. This paper describes the motivation to change from CCD to CMOS sensor technology for the development of this new aerial mapping camera. In 2002 the first-generation DMC was developed by Z/I Imaging. It was the first large-format digital frame sensor designed for mapping applications. In 2009 Z/I Imaging designed the DMC II, the first digital aerial mapping camera to use a single ultra-large CCD sensor, avoiding the stitching of smaller CCDs. The DMC III is now the third generation of large-format frame sensor developed by Z/I Imaging and Leica Geosystems for the DMC camera family. It is an evolution of the DMC II using the same system design, with one large monolithic panchromatic sensor and four multispectral camera heads for R, G, B and NIR. For the first time a 391-megapixel CMOS sensor has been used as the panchromatic sensor, an industry record. A range of technical benefits comes with CMOS technology: the dynamic range of the CMOS sensor is approximately twice that of a comparable CCD sensor, and the signal-to-noise ratio is significantly better than with CCDs. Finally, results from the first DMC III customer installations and test flights are presented and compared with other CCD-based aerial sensors.
Overview of a Hybrid Underwater Camera System
2014-07-01
...meters), in increments of 200 ps. The camera is also equipped with a 6:1 motorized zoom lens and a precision miniature attitude and heading reference system (AHRS). ... LUCIE sub-systems: control and power distribution system, AHRS, pulsed laser, gated camera, sonar transducer.
An automatic markerless registration method for neurosurgical robotics based on an optical camera.
Meng, Fanle; Zhai, Fangwen; Zeng, Bowei; Ding, Hui; Wang, Guangzhi
2018-02-01
Current markerless registration methods for neurosurgical robotics use the facial surface to match the robot space with the image space, and acquisition of the facial surface usually requires manual interaction and constrains the patient to a supine position. To overcome these drawbacks, we propose a registration method that is automatic and does not constrain patient position. An optical camera attached to the robot end effector captures images around the patient's head from multiple views. Then, high coverage of the head surface is reconstructed from the images through multi-view stereo vision. Since the acquired head surface point cloud contains color information, a specific mark that is manually drawn on the patient's head prior to the capture procedure can be extracted to automatically accomplish coarse registration rather than using facial anatomic landmarks. Then, fine registration is achieved by registering the high coverage of the head surface without relying solely on the facial region, thus eliminating patient position constraints. The head surface was acquired by the camera with good repeatability. The average target registration error of 8 different patient positions measured with targets inside a head phantom was [Formula: see text], while the mean surface registration error was [Formula: see text]. The method proposed in this paper achieves automatic markerless registration in multiple patient positions and guarantees registration accuracy inside the head. This method provides a new approach for establishing the spatial relationship between the image space and the robot space.
NASA Astrophysics Data System (ADS)
Tokuoka, Nobuyuki; Miyoshi, Hitoshi; Kusano, Hideaki; Hata, Hidehiro; Hiroe, Tetsuyuki; Fujiwara, Kazuhito; Yasushi, Kondo
2008-11-01
Visualization of explosion phenomena is very important and essential for evaluating the performance of explosives. The phenomena, however, generate blast waves and fragments from casings, and we must protect our visualization equipment from any form of impact. In the tests described here, the front lens was separated from the camera head by means of a fiber-optic cable so that the camera, a Shimadzu Hypervision HPV-1, could be used in severe blast environments, including the filming of explosions. It was possible to obtain clear images of the explosion that were not inferior to images taken with the lens directly coupled to the camera head. This confirms that the system is very useful for visualizing dangerous events, e.g., at an explosion site, and for visualization at angles that would be unachievable under normal circumstances.
Noise Reduction in Brainwaves by Using Both EEG Signals and Frontal Viewing Camera Images
Bang, Jae Won; Choi, Jong-Suk; Park, Kang Ryoung
2013-01-01
Electroencephalogram (EEG)-based brain-computer interfaces (BCIs) have been used in various applications, including human–computer interfaces, diagnosis of brain diseases, and measurement of cognitive status. However, EEG signals can be contaminated with noise caused by the user's head movements. Therefore, we propose a new method that combines an EEG acquisition device and a frontal viewing camera to isolate and exclude the sections of EEG data containing these noises. This method is novel in the following three ways. First, we compare the accuracies of detecting head movements based on the features of EEG signals in the frequency and time domains and on the motion features of images captured by the frontal viewing camera. Second, the features of EEG signals in the frequency domain and the motion features captured by the frontal viewing camera are selected as optimal features. The dimension reduction and selection of the features are performed using linear discriminant analysis. Third, the combined features are used as inputs to a support vector machine (SVM), which improves the accuracy of detecting head movements. The experimental results show that the proposed method can detect head movements with an average error rate of approximately 3.22%, which is smaller than that of other methods. PMID:23669713
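A minimal sketch of the LDA-plus-SVM stage described above might look like the following. It uses synthetic stand-in features rather than real EEG or camera data, and assumes scikit-learn; the feature layout is hypothetical.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 200
# hypothetical combined feature vectors: EEG frequency-band powers plus
# frontal-camera motion magnitudes (synthetic stand-ins, not recordings)
still = rng.normal(0.0, 1.0, size=(n, 6))    # epochs without head movement
moving = rng.normal(2.0, 1.0, size=(n, 6))   # movement raises power/motion
X = np.vstack([still, moving])
y = np.repeat([0, 1], n)

# LDA reduces the combined feature space to one discriminant dimension
lda = LinearDiscriminantAnalysis(n_components=1)
X_r = lda.fit_transform(X, y)

# an SVM then labels each epoch as head movement vs. no movement
clf = SVC(kernel="rbf").fit(X_r, y)
train_acc = clf.score(X_r, y)
```

Epochs labeled as head movement would then be excluded from the EEG stream, which is the noise-isolation step the abstract describes.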
Fast calibration of electromagnetically tracked oblique-viewing rigid endoscopes.
Liu, Xinyang; Rice, Christina E; Shekhar, Raj
2017-10-01
The oblique-viewing (i.e., angled) rigid endoscope is a commonly used tool in conventional endoscopic surgeries. The relative rotation between its two moveable parts, the telescope and the camera head, creates a rotation offset between an object's actual orientation and its projection in the camera image. A calibration method tailored to compensate for this offset is needed. We developed a fast calibration method for oblique-viewing rigid endoscopes suitable for clinical use. In contrast to prior approaches based on optical tracking, we used electromagnetic (EM) tracking as the external tracking hardware to improve compactness and practicality. Two EM sensors were mounted on the telescope and the camera head, respectively, with considerations to minimize EM tracking errors. Single-image calibration was incorporated into the method, and a sterilizable plate, laser-marked with the calibration pattern, was also developed. Furthermore, we proposed a general algorithm to estimate the rotation center in the camera image. Formulas for updating the camera matrix for clockwise and counterclockwise rotations were also developed. The proposed calibration method was validated using a conventional [Formula: see text], 5-mm laparoscope. Freehand calibrations were performed using the proposed method, and the calibration time averaged 2 min and 8 s. The calibration accuracy was evaluated in a simulated clinical setting with several surgical tools present in the magnetic field of EM tracking. The root-mean-square re-projection error averaged 4.9 pixels (range 2.4-8.5 pixels, with an image resolution of [Formula: see text]) for rotation angles ranging from [Formula: see text] to [Formula: see text]. We developed a method for fast and accurate calibration of oblique-viewing rigid endoscopes. The method was also designed to be performed in the operating room and will therefore support clinical translation of many emerging endoscopic computer-assisted surgical systems.
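The in-image rotation offset between telescope and camera head can be modeled as a planar rotation about the estimated rotation center. The homography sketch below illustrates that idea; it is a simplified stand-in, not the paper's camera-matrix update formulas.

```python
import numpy as np

def rotation_about_center(theta, center):
    """3x3 homography rotating image points by theta about a rotation centre.

    Models the in-image rotation offset created when the telescope of an
    oblique-viewing endoscope rotates relative to the camera head; a
    clockwise rotation would use -theta.
    """
    c, s = np.cos(theta), np.sin(theta)
    cx, cy = center
    R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T = np.array([[1.0, 0.0, cx], [0.0, 1.0, cy], [0.0, 0.0, 1.0]])
    T_inv = np.array([[1.0, 0.0, -cx], [0.0, 1.0, -cy], [0.0, 0.0, 1.0]])
    return T @ R @ T_inv   # translate to origin, rotate, translate back

# demo: 90-degree rotation about a hypothetical centre at (320, 240)
H = rotation_about_center(np.pi / 2, (320.0, 240.0))
```

Applied to homogeneous pixel coordinates, the rotation center stays fixed while every other point sweeps around it, which is exactly the offset the calibration must compensate.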
Covariance analysis for evaluating head trackers
NASA Astrophysics Data System (ADS)
Kang, Donghoon
2017-10-01
Existing methods for evaluating the performance of head trackers usually rely on publicly available face databases, which contain facial images and the ground truths of their corresponding head orientations. However, most publicly available face databases are constructed by assuming that a frontal head orientation can be established by compelling the person under examination to look straight ahead at the camera in the first video frame. Since nobody can direct their head toward the camera with perfect accuracy, this assumption may be unrealistic. Rather than reporting estimation errors directly, we present a method for computing the covariance of estimation error rotations to evaluate the reliability of head trackers. As an uncertainty measure of an estimator, the Schatten 2-norm of a square root of the error covariance (or the algebraic average of relative error angles) can be used. The merit of the proposed method is that it does not disturb the person under examination by asking them to direct their head toward specified directions. Experimental results using real data validate the usefulness of our method.
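The uncertainty measure described above can be sketched directly: map each error rotation to a rotation vector, take the covariance of those vectors, and use the fact that the Schatten 2-norm of C^(1/2) equals sqrt(trace(C)). The function names and demo data below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def rot_z(a):
    """Rotation matrix for angle a about the z-axis (demo helper)."""
    c, s = np.cos(a), np.sin(a)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def log_so3(R):
    """Rotation matrix -> rotation vector (axis times angle)."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        return np.zeros(3)
    w = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta / (2.0 * np.sin(theta)) * w

def tracker_uncertainty(estimated, reference):
    """Schatten 2-norm of the square root of the error covariance.

    estimated, reference: sequences of 3x3 head-orientation matrices.
    Error rotations R_est @ R_ref.T become rotation vectors, whose 3x3
    covariance C summarises the tracker; ||C^(1/2)||_S2 = sqrt(trace(C)).
    """
    errs = np.array([log_so3(Re @ Rr.T)
                     for Re, Rr in zip(estimated, reference)])
    C = np.cov(errs.T)   # 3x3 covariance of error rotation vectors
    return np.sqrt(np.trace(C))

# demo: two estimates off by +/-0.1 rad about z from an identity reference
est = [rot_z(0.1), rot_z(-0.1)]
ref = [np.eye(3)] * 2
sigma = tracker_uncertainty(est, ref)
```

Because only relative error rotations enter the covariance, no subject needs to hold an exact frontal pose, which is the practical point of the method.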
JackIn Head: Immersive Visual Telepresence System with Omnidirectional Wearable Camera.
Kasahara, Shunichi; Nagai, Shohei; Rekimoto, Jun
2017-03-01
Sharing one's own immersive experience over the Internet is one of the ultimate goals of telepresence technology. In this paper, we present JackIn Head, a visual telepresence system featuring an omnidirectional wearable camera with image motion stabilization. Spherical omnidirectional video footage taken around the head of a local user is stabilized and then broadcast to others, allowing remote users to explore the immersive visual environment independently of the local user's head direction. We describe the system design of JackIn Head and report the evaluation results of real-time image stabilization and alleviation of cybersickness. Then, through an exploratory observation study, we investigate how individuals can remotely interact, communicate with, and assist each other with our system. We report our observation and analysis of inter-personal communication, demonstrating the effectiveness of our system in augmenting remote collaboration.
Photorefractor ocular screening system
NASA Technical Reports Server (NTRS)
Richardson, John R. (Inventor); Kerr, Joseph H. (Inventor)
1987-01-01
A method and apparatus for detecting human eye defects, particularly refractive error, is presented. Eye reflex is recorded on color film when the eyes are exposed to a flash of light. The photographs are compared with predetermined standards to detect eye defects. The base structure of the ocular screening system is a folding interconnect structure comprising hinged sections. Attached to one end of the structure is a head positioning station, which comprises a vertical support, a head positioning bracket with one end attached to the top of the support, and two head positioning lamps to verify precise head positioning. At the opposite end of the interconnect structure is a camera station with a camera, an electronic flash unit, and a blinking fixation lamp for photographing the eyes of persons being evaluated.
Precise Head Tracking in Hearing Applications
NASA Astrophysics Data System (ADS)
Helle, A. M.; Pilinski, J.; Luhmann, T.
2015-05-01
The paper gives an overview of two research projects, both dealing with optical head tracking in hearing applications. As part of the project "Development of a real-time low-cost tracking system for medical and audiological problems (ELCoT)", a cost-effective single-camera 3D tracking system has been developed which enables the detection of arm and head movements of human patients. Among other uses, the measuring system is designed for a new hearing test (based on the "Mainzer Kindertisch") that analyzes the directional hearing capabilities of children, in cooperation with the research project ERKI (Evaluation of acoustic sound source localization for children). As part of the research project framework "Hearing in everyday life (HALLO)", a stereo tracking system is being used to analyze the head movement of human patients during complex acoustic events. Together with biosignals such as skin conductance, the speech comprehension and listening effort of persons with reduced hearing ability, especially in situations with background noise, are evaluated. For both projects the system design, accuracy aspects and results of practical tests are discussed.
Spatial calibration of an optical see-through head mounted display
Gilson, Stuart J.; Fitzgibbon, Andrew W.; Glennerster, Andrew
2010-01-01
We present here a method for calibrating an optical see-through Head Mounted Display (HMD) using techniques usually applied to camera calibration (photogrammetry). Using a camera placed inside the HMD to take pictures simultaneously of a tracked object and features in the HMD display, we could exploit established camera calibration techniques to recover both the intrinsic and extrinsic properties of the HMD (width, height, focal length, optic centre and principal ray of the display). Our method gives low re-projection errors and, unlike existing methods, involves no time-consuming and error-prone human measurements, nor any prior estimates about the HMD geometry. PMID:18599125
Latha, M; Pari, L
2004-04-01
The effects of an aqueous extract of the plant Scoparia dulcis (200 mg/kg) on the polyol pathway and lipid peroxidation were examined in the liver of streptozotocin-induced diabetic adult male albino Wistar rats. The diabetic control rats (N = 6) presented a significant increase in blood glucose, sorbitol dehydrogenase, glycosylated hemoglobin and lipid peroxidation markers such as thiobarbituric acid reactive substances (TBARS) and hydroperoxides, and a significant decrease in plasma insulin and antioxidant enzymes such as glutathione peroxidase (GPx), glutathione-S-transferase (GST) and reduced glutathione (GSH) compared to normal rats (N = 6). Scoparia dulcis plant extract (SPEt, 200 mg kg-1 day-1) and glibenclamide (600 microg kg-1 day-1), a reference drug, were administered by gavage for 6 weeks to diabetic rats (N = 6 for each group); both significantly reduced blood glucose, sorbitol dehydrogenase, glycosylated hemoglobin, TBARS, and hydroperoxides, and significantly increased plasma insulin, GPx, GST and GSH activities in liver. The effect of the SPEt was compared with that of glibenclamide. The effect of the extract may have been due to decreased influx of glucose into the polyol pathway, leading to increased activities of antioxidant enzymes and plasma insulin and decreased activity of sorbitol dehydrogenase. These results indicate that the SPEt was effective in attenuating hyperglycemia in rats and reducing their susceptibility to oxygen free radicals.
A probabilistic model of overt visual attention for cognitive robots.
Begum, Momotaz; Karray, Fakhri; Mann, George K I; Gosine, Raymond G
2010-10-01
Visual attention is one of the major requirements for a robot to serve as a cognitive companion for humans. Robotic visual attention is mostly concerned with overt attention, which accompanies head and eye movements of a robot. In this case, each movement of the camera head triggers a number of events, namely transformation of the camera and image coordinate systems, change of content of the visual field, and partial appearance of stimuli. All of these events reduce the probability of meaningful identification of the next focus of attention. These events are specific to overt attention with head movement and, therefore, their effects are not addressed in classical models of covert visual attention. This paper proposes a Bayesian model as a robot-centric solution to the overt visual attention problem. The proposed model, while taking inspiration from the primate visual attention mechanism, guides a robot to direct its camera toward behaviorally relevant and/or visually demanding stimuli. A particle filter implementation of this model addresses the challenges involved in overt attention with head movement. Experimental results demonstrate the performance of the proposed model.
Multiview face detection based on position estimation over multicamera surveillance system
NASA Astrophysics Data System (ADS)
Huang, Ching-chun; Chou, Jay; Shiu, Jia-Hou; Wang, Sheng-Jyh
2012-02-01
In this paper, we propose a multi-view face detection system that locates head positions and indicates the direction of each face in 3-D space over a multi-camera surveillance system. To locate 3-D head positions, conventional methods rely on face detection in 2-D images and project the face regions back to 3-D space for correspondence. However, inevitable false face detections and rejections usually degrade system performance. Instead, our system searches for heads and face directions over the 3-D space using a sliding cube. Each searched 3-D cube is projected onto the 2-D camera views to determine the existence and direction of human faces. Moreover, a pre-processing step that estimates the locations of candidate targets is presented to speed up the search over the 3-D space. In summary, our proposed method can efficiently fuse multi-camera information and suppress the ambiguity caused by detection errors. Our evaluation shows that the proposed approach can efficiently indicate head position and face direction in real video sequences, even under serious occlusion.
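The sliding-cube search can be sketched by projecting each candidate 3-D cube center into every calibrated camera view with its 3x4 projection matrix and checking image-bounds visibility. The matrices and image size below are toy values for illustration, not the paper's configuration.

```python
def project(P, X):
    """Project a homogeneous 3-D point X = (x, y, z, 1) with a 3x4 matrix P."""
    u = [sum(P[r][c] * X[c] for c in range(4)) for r in range(3)]
    return (u[0] / u[2], u[1] / u[2])

def visible_in_all(views, X, width=640, height=480):
    """True if the cube center projects inside every camera image."""
    for P in views:
        u, v = project(P, X)
        if not (0 <= u < width and 0 <= v < height):
            return False
    return True

# Toy pinhole camera: focal length 500 px, principal point (320, 240)
P0 = [[500.0, 0.0, 320.0, 0.0],
      [0.0, 500.0, 240.0, 0.0],
      [0.0, 0.0, 1.0, 0.0]]
uv = project(P0, (1.0, 0.5, 2.0, 1.0))
```

In a full system each visible cube would then be classified for face presence and direction in every view before the multi-camera evidence is fused.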
NASA Astrophysics Data System (ADS)
Lin, Chern-Sheng; Ho, Chien-Wa; Chang, Kai-Chieh; Hung, San-Shan; Shei, Hung-Jung; Yeh, Mau-Shiun
2006-06-01
This study describes the design and combination of an eye-controlled and a head-controlled human-machine interface system. This system is a highly effective human-machine interface that detects head movement from the changing positions and numbers of light sources on the head. When the user utilizes the head-mounted display to browse a computer screen, the system captures images of the user's eyes with CCD cameras, which can also measure the angle and position of the light sources. In the eye-tracking system, the computer program locates the center point of each pupil in the images and records moving traces and pupil diameters. In the head gesture measurement system, the user wears a double-source eyeglass frame, and the system captures images of the user's head with a CCD camera in front of the user. The computer program locates the center point of the head and transfers it to screen coordinates, so that the user can control the cursor by head motions. We combine the eye-controlled and head-controlled human-machine interfaces for virtual reality applications.
Photogrammetric accuracy measurements of head holder systems used for fractionated radiotherapy.
Menke, M; Hirschfeld, F; Mack, T; Pastyr, O; Sturm, V; Schlegel, W
1994-07-30
We describe how stereo photogrammetry can be used to determine the immobilization and repositioning accuracies of head holder systems used for fractionated radiotherapy of intracranial lesions. The apparatus consists of two video cameras controlled by a personal computer and a bite-block-based landmark system. Position and spatial orientation of the landmarks are monitored by the cameras and processed for the real-time calculation of a target point's actual position relative to its initializing position. The target's position is assumed to be invariant with respect to the landmark system. We performed two series of 30 correlated head motion measurements on two test persons. One series was done with a thermoplastic device, the other with a cast device developed for stereotactic treatment at the German Cancer Research Center. Immobilization and repositioning accuracies were determined with respect to a target point situated near the base of the skull. The repositioning accuracies were described in terms of the distributions of the mean displacements of the single motion measurements. Movements of the target on the order of 0.05 mm caused by breathing could be detected with a maximum temporal resolution of 12 ms. The data derived from the investigation of the two test persons indicated similar immobilization accuracies for the two devices, but the repositioning errors were larger for the thermoplastic device than for the cast device. Apart from this, we found that for the thermoplastic mask the lateral repositioning error depended on the order in which the mask was closed. The photogrammetric apparatus is a versatile tool for accuracy measurements of head holder devices used for fractionated radiotherapy.
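The real-time target calculation rests on the assumption that the target point is rigid with respect to the bite-block landmark frame: given the tracked pose (R, t) of the landmark system, the target's world position is R·x + t. A minimal sketch (coordinate conventions and units are illustrative, not the paper's):

```python
def target_position(R, t, target_local):
    """Map the invariant target point from the landmark frame to world coordinates."""
    return tuple(sum(R[r][c] * target_local[c] for c in range(3)) + t[r]
                 for r in range(3))

# Breathing shifts the landmark frame by 0.05 mm along z (identity rotation)
I3 = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
p0 = target_position(I3, (0.0, 0.0, 0.0), (0.0, 0.0, 100.0))
p1 = target_position(I3, (0.0, 0.0, 0.05), (0.0, 0.0, 100.0))
disp = tuple(b - a for a, b in zip(p0, p1))
```

Tracking the displacement of such a target point frame by frame yields the immobilization and repositioning distributions the study reports.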
Recording stereoscopic 3D neurosurgery with a head-mounted 3D camera system.
Lee, Brian; Chen, Brian R; Chen, Beverly B; Lu, James Y; Giannotta, Steven L
2015-06-01
Stereoscopic three-dimensional (3D) imaging can present more information to the viewer and further enhance the learning experience over traditional two-dimensional (2D) video. Most 3D surgical videos are recorded from the operating microscope and only feature the crux, or the most important part of the surgery, leaving out other crucial parts of surgery including the opening, approach, and closing of the surgical site. In addition, many other surgeries including complex spine, trauma, and intensive care unit procedures are also rarely recorded. We describe and share our experience with a commercially available head-mounted stereoscopic 3D camera system to obtain stereoscopic 3D recordings of these seldom recorded aspects of neurosurgery. The strengths and limitations of using the GoPro(®) 3D system as a head-mounted stereoscopic 3D camera system in the operating room are reviewed in detail. Over the past several years, we have recorded in stereoscopic 3D over 50 cranial and spinal surgeries and created a library for education purposes. We have found the head-mounted stereoscopic 3D camera system to be a valuable asset to supplement 3D footage from a 3D microscope. We expect that these comprehensive 3D surgical videos will become an important facet of resident education and ultimately lead to improved patient care.
NASA Astrophysics Data System (ADS)
Schonlau, William J.
2006-05-01
An immersive viewing engine providing basic telepresence functionality for a variety of application types is presented. Augmented reality, teleoperation and virtual reality applications all benefit from the use of head-mounted display devices that present imagery appropriate to the user's head orientation at full frame rates. Our primary application is the viewing of remote environments, as with a camera-equipped teleoperated vehicle. The conventional approach, where imagery from a narrow-field camera onboard the vehicle is presented to the user on a small rectangular screen, is contrasted with an immersive viewing system where a cylindrical or spherical format image is received from a panoramic camera on the vehicle, resampled in response to sensed user head orientation and presented via a wide-field eyewear display approaching 180 degrees of horizontal field. Of primary interest is the user's enhanced ability to perceive and understand image content, even when image resolution parameters are poor, due to the innate visual integration and 3-D model generation capabilities of the human visual system. A mathematical model for tracking user head position and resampling the panoramic image to attain distortion-free viewing of the region appropriate to the user's current head pose is presented, and consideration is given to providing the user with stereo viewing generated from depth map information derived using stereo-from-motion algorithms.
Electron-tracking Compton gamma-ray camera for small animal and phantom imaging
NASA Astrophysics Data System (ADS)
Kabuki, Shigeto; Kimura, Hiroyuki; Amano, Hiroo; Nakamoto, Yuji; Kubo, Hidetoshi; Miuchi, Kentaro; Kurosawa, Shunsuke; Takahashi, Michiaki; Kawashima, Hidekazu; Ueda, Masashi; Okada, Tomohisa; Kubo, Atsushi; Kunieda, Etuso; Nakahara, Tadaki; Kohara, Ryota; Miyazaki, Osamu; Nakazawa, Tetsuo; Shirahata, Takashi; Yamamoto, Etsuji; Ogawa, Koichi; Togashi, Kaori; Saji, Hideo; Tanimori, Toru
2010-11-01
We have developed an electron-tracking Compton camera (ETCC) for medical use. Our ETCC has a wide energy dynamic range (200-1300 keV) and a wide field of view (3 sr), and thus has potential for advanced medical use. To evaluate the ETCC, we imaged the head (brain) and bladder of mice that had been administered F-18-FDG. We also imaged the head and thyroid gland of mice using the double tracers F-18-FDG and I-131 ions.
Video Analysis Verification of Head Impact Events Measured by Wearable Sensors.
Cortes, Nelson; Lincoln, Andrew E; Myer, Gregory D; Hepburn, Lisa; Higgins, Michael; Putukian, Margot; Caswell, Shane V
2017-08-01
Wearable sensors are increasingly used to quantify the frequency and magnitude of head impact events in multiple sports. There is a paucity of evidence that verifies head impact events recorded by wearable sensors. To utilize video analysis to verify head impact events recorded by wearable sensors and describe the respective frequency and magnitude. Cohort study (diagnosis); Level of evidence, 2. Thirty male (mean age, 16.6 ± 1.2 years; mean height, 1.77 ± 0.06 m; mean weight, 73.4 ± 12.2 kg) and 35 female (mean age, 16.2 ± 1.3 years; mean height, 1.66 ± 0.05 m; mean weight, 61.2 ± 6.4 kg) players volunteered to participate in this study during the 2014 and 2015 lacrosse seasons. Participants were instrumented with GForceTracker (GFT; boys) and X-Patch sensors (girls). Simultaneous game video was recorded by a trained videographer using a single camera located at the highest midfield location. One-third of the field was framed and panned to follow the ball during games. Videographic and accelerometer data were time synchronized. Head impact counts were compared with video recordings and were deemed valid if (1) the linear acceleration was ≥20 g, (2) the player was identified on the field, (3) the player was in camera view, and (4) the head impact mechanism could be clearly identified. Descriptive statistics of peak linear acceleration (PLA) and peak rotational velocity (PRV) for all verified head impacts ≥20 g were calculated. For the boys, a total of 1063 impacts (2014: n = 545; 2015: n = 518) were logged by the GFT between game start and end times (mean PLA, 46 ± 31 g; mean PRV, 1093 ± 661 deg/s) during 368 player-games. Of these impacts, 690 were verified via video analysis (65%; mean PLA, 48 ± 34 g; mean PRV, 1242 ± 617 deg/s). The X-Patch sensors, worn by the girls, recorded a total of 180 impacts during the course of the games, and 58 (2014: n = 33; 2015: n = 25) were verified via video analysis (32%; mean PLA, 39 ± 21 g; mean PRV, 1664 ± 619 rad/s).
The current data indicate that existing wearable sensor technologies may substantially overestimate head impact events. Further, while the wearable sensors always estimated a head impact location, only 48% of the impacts were a result of direct contact to the head as characterized on video. Using wearable sensors and video to verify head impacts may decrease the inclusion of false-positive impacts during game activity in the analysis.
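The four verification criteria can be sketched as a simple filter over sensor-logged impacts. The field names and data layout below are hypothetical, not the GFT or X-Patch export format; only the ≥20 g threshold and the four criteria come from the study.

```python
def verify_impacts(impacts):
    """Keep sensor-recorded impacts meeting the four video-verification criteria.

    Each impact is a dict with hypothetical keys: 'pla_g' (peak linear
    acceleration, g) plus three booleans set by a video reviewer.
    """
    verified = []
    for imp in impacts:
        if (imp["pla_g"] >= 20.0 and imp["player_identified"]
                and imp["in_camera_view"] and imp["mechanism_clear"]):
            verified.append(imp)
    return verified

sample = [
    {"pla_g": 46.0, "player_identified": True, "in_camera_view": True,
     "mechanism_clear": True},
    {"pla_g": 25.0, "player_identified": True, "in_camera_view": False,
     "mechanism_clear": True},
    {"pla_g": 18.0, "player_identified": True, "in_camera_view": True,
     "mechanism_clear": True},
]
kept = verify_impacts(sample)
```

Filtering like this is what separates the 65% (boys) and 32% (girls) verified impacts from the raw sensor counts.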
The visual system of male scale insects
NASA Astrophysics Data System (ADS)
Buschbeck, Elke K.; Hauser, Martin
2009-03-01
Animal eyes generally fall into two categories: (1) the photoreceptive array is convex, as is typical for camera eyes, including the human eye, or (2) the photoreceptive array is concave, as is typical for the compound eye of insects. There are a few rare examples of the latter eye type having secondarily evolved into the former one. When viewed in a phylogenetic framework, the head morphology of a variety of male scale insects suggests that this group could be one such example. In the Margarodidae (Hemiptera, Coccoidea), males have been described as having compound eyes, while males of some more derived groups only have two single-chamber eyes on each side of the head. Those eyes are situated in the place occupied by the compound eye of other insects. Since male scale insects tend to be rare, little is known about how their visual systems are organized and what anatomical traits are associated with this evolutionary transition. In adult male Margarodidae, one single-chamber eye (stemmateran ocellus) is present in addition to a compound eye-like region. Our histological investigation reveals that the stemmateran ocellus has an extended retina which is formed by discrete clusters of receptor cells that connect to its own first-order neuropil. In addition, we find that the ommatidia of the compound eyes also share several anatomical characteristics with simple camera eyes. These include shallow units with extended retinas, each of which is connected by its own small nerve to the lamina. These anatomical changes suggest that the margarodid compound eye represents a transitional form to the giant unicorneal eyes that have been described in more derived species.
Latha, M; Pari, L
2003-01-01
In light of evidence that diabetes mellitus is associated with oxidative stress and altered antioxidant status, we investigated the effect of Scoparia dulcis plant extracts (SPEt) (aqueous, ethanolic, and chloroform) in streptozotocin diabetic rats. Significant increases in the activities of insulin, superoxide dismutase, catalase, glutathione peroxidase, glutathione-S-transferase, reduced glutathione, vitamin C, and vitamin E were observed in liver, kidney, and brain on treatment with SPEt. In addition, the treated groups also showed significant decreases in blood glucose, thiobarbituric acid-reactive substances, and hydroperoxide formation in tissues, suggesting its role in protection against lipid peroxidation-induced membrane damage. Thus, the results of the present study indicate that extracts of S. dulcis, especially the aqueous extract, showed a modulatory effect by attenuating the above lipid peroxidation in streptozotocin diabetes.
Parallel robot for micro assembly with integrated innovative optical 3D-sensor
NASA Astrophysics Data System (ADS)
Hesselbach, Juergen; Ispas, Diana; Pokar, Gero; Soetebier, Sven; Tutsch, Rainer
2002-10-01
Recent advances in the fields of MEMS and MOEMS often require precise assembly of very small parts with an accuracy of a few microns. In order to meet this demand, a new approach using a robot based on parallel mechanisms in combination with a novel 3D-vision system has been chosen. The planar parallel robot structure with 2 DOF provides a high resolution in the XY-plane. It carries two additional serial axes for linear and rotational movement in/about the z direction. In order to achieve high precision as well as good dynamic capabilities, the drive concept for the parallel (main) axes incorporates air bearings in combination with linear electric servo motors. High-accuracy position feedback is provided by optical encoders with a resolution of 0.1 μm. To allow for visualization and visual control of assembly processes, a camera module fits into the hollow tool head. It consists of a miniature CCD camera and a light source. In addition, a modular gripper support is integrated into the tool head. To increase accuracy, a control loop based on an optoelectronic sensor will be implemented. As a result of an in-depth analysis of different approaches, a photogrammetric system using one single camera and special beam-splitting optics was chosen. A pattern of elliptical marks is applied to the surfaces of the workpiece and gripper. Using a model-based recognition algorithm, the image processing software identifies the gripper and the workpiece and determines their relative position. A deviation vector is calculated and fed into the robot control to guide the gripper.
Lancioni, Giulio E; Bellini, Domenico; Oliva, Doretta; Singh, Nirbhay N; O'Reilly, Mark F; Green, Vanessa A; Furniss, Fred
2014-06-01
This study assessed a new camera-based microswitch technology that did not require the use of color marks on the participants' faces. Two children with extensive multiple disabilities participated. The responses selected for them consisted of small, lateral head movements and mouth closing or opening. The intervention was carried out according to a multiple probe design across responses. The technology involved a computer with a CPU using a 2-GHz clock, a USB video camera with a 16-mm lens, a USB cable connecting the camera and the computer, and a special software program written in ISO C++ language. The new technology was satisfactorily used with both children. Large increases in their responding were observed during the intervention periods (i.e. when the responses were followed by preferred stimulation). The new technology may be an important resource for persons with multiple disabilities and minimal motor behavior.
Registration of an on-axis see-through head-mounted display and camera system
NASA Astrophysics Data System (ADS)
Luo, Gang; Rensing, Noa M.; Weststrate, Evan; Peli, Eli
2005-02-01
An optical see-through head-mounted display (HMD) system integrating a miniature camera that is aligned with the user's pupil is developed and tested. Such an HMD system has potential value in many augmented reality applications, in which registration of the virtual display to the real scene is one of the critical aspects. The camera alignment to the user's pupil results in a simple yet accurate calibration and a low registration error across a wide range of depth. In reality, a small camera-eye misalignment may still occur in such a system due to the inevitable variations of HMD wearing position with respect to the eye. The effects of such errors are measured. Calculation further shows that the registration error as a function of viewing distance behaves nearly the same for different virtual image distances, except for a shift. The impact of the prismatic effect of the display lens on registration is also discussed.
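The reported behaviour of registration error with viewing distance is consistent with a first-order parallax model: a camera-eye offset d, an object at distance D, and a virtual image at distance V give an angular misregistration of roughly d(1/D − 1/V), so changing V only shifts the curve. This small-angle sketch is our illustration, not the authors' derivation.

```python
import math

def registration_error_deg(offset_m, obj_dist_m, virt_img_dist_m):
    """Small-angle parallax error (degrees) from a camera-eye offset.

    Hedged first-order model: offset d between camera and eye, object at
    distance D, virtual display image at distance V give an angular
    misregistration of about d * (1/D - 1/V).
    """
    return math.degrees(offset_m * (1.0 / obj_dist_m - 1.0 / virt_img_dist_m))

e1 = registration_error_deg(0.005, 1.0, 2.0)  # 5 mm offset, object at 1 m
e2 = registration_error_deg(0.005, 1.0, 4.0)  # same offset, farther virtual image
```

Sweeping D with V fixed reproduces the near-identical curves, offset by the constant d/V term, that the abstract describes.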
The Mars Hand Lens Imager (MAHLI) aboard the Mars rover, Curiosity
NASA Astrophysics Data System (ADS)
Edgett, K. S.; Ravine, M. A.; Caplinger, M. A.; Ghaemi, F. T.; Schaffner, J. A.; Malin, M. C.; Baker, J. M.; Dibiase, D. R.; Laramee, J.; Maki, J. N.; Willson, R. G.; Bell, J. F., III; Cameron, J. F.; Dietrich, W. E.; Edwards, L. J.; Hallet, B.; Herkenhoff, K. E.; Heydari, E.; Kah, L. C.; Lemmon, M. T.; Minitti, M. E.; Olson, T. S.; Parker, T. J.; Rowland, S. K.; Schieber, J.; Sullivan, R. J.; Sumner, D. Y.; Thomas, P. C.; Yingst, R. A.
2009-08-01
The Mars Science Laboratory (MSL) rover, Curiosity, is expected to land on Mars in 2012. The Mars Hand Lens Imager (MAHLI) will be used to document martian rocks and regolith with a 2-megapixel RGB color CCD camera with a focusable macro lens mounted on an instrument-bearing turret on the end of Curiosity's robotic arm. The flight MAHLI can focus on targets at working distances of 20.4 mm to infinity. At 20.4 mm, images have a pixel scale of 13.9 μm/pixel. The pixel scale at 66 mm working distance is about the same (31 μm/pixel) as that of the Mars Exploration Rover (MER) Microscopic Imager (MI). MAHLI camera head placement is dependent on the capabilities of the MSL robotic arm, the design for which presently has a placement uncertainty of ~20 mm in 3 dimensions; hence, acquisition of images at the minimum working distance may be challenging. The MAHLI consists of 3 parts: a camera head, a Digital Electronics Assembly (DEA), and a calibration target. The camera head and DEA are connected by a JPL-provided cable which transmits data, commands, and power. JPL is also providing a contact sensor. The camera head will be mounted on the rover's robotic arm turret, the DEA will be inside the rover body, and the calibration target will be mounted on the robotic arm azimuth motor housing. Camera Head. MAHLI uses a Kodak KAI-2020CM interline transfer CCD (1600 x 1200 active 7.4 μm square pixels with RGB filtered microlenses arranged in a Bayer pattern). The optics consist of a group of 6 fixed lens elements, a movable group of 3 elements, and a fixed sapphire window front element. Undesired near-infrared radiation is blocked using a coating deposited on the inside surface of the sapphire window. The lens is protected by a dust cover with a Lexan window through which imaging can be accomplished if necessary, and targets can be illuminated by sunlight or two banks of two white light LEDs. Two 365 nm UV LEDs are included to search for fluorescent materials at night.
DEA and Onboard Processing. The DEA incorporates the circuit elements required for data processing, compression, and buffering. It also includes all power conversion and regulation capabilities for both the DEA and the camera head. The DEA has an 8 GB non-volatile flash memory plus 128 MB volatile storage. Images can be commanded as full-frame or sub-frame and the camera has autofocus and autoexposure capabilities. MAHLI can also acquire 720p, ~7 Hz high definition video. Onboard processing includes options for Bayer pattern filter interpolation, JPEG-based compression, and focus stack merging (z-stacking). Malin Space Science Systems (MSSS) built and will operate the MAHLI. Alliance Spacesystems, LLC, designed and built the lens mechanical assembly. MAHLI shares common electronics, detector, and software designs with the MSL Mars Descent Imager (MARDI) and the 2 MSL Mast Cameras (Mastcam). Pre-launch images of geologic materials imaged by MAHLI are online at: http://www.msss.com/msl/mahli/prelaunch_images/.
A remote camera operation system using a marker attached cap
NASA Astrophysics Data System (ADS)
Kawai, Hironori; Hama, Hiromitsu
2005-12-01
In this paper, we propose a convenient system to control a remote camera according to the eye-gazing direction of the operator, which is approximately obtained by calculating the face direction by means of image processing. The operator puts a marker-attached cap on his head, and the system takes an image of the operator from above with only one video camera. Three markers are set up on the cap, and 'three' is the minimum number needed to calculate the tilt angle of the head. The more markers are used, the more robust the system can be made to occlusion, and the wider the tolerated moving range of the head. The markers must not lie on any single three-dimensional straight line. To compensate for marker color changes due to illumination conditions, the threshold for marker extraction is adaptively decided using a k-means clustering method. The system was implemented with MATLAB on a personal computer, and real-time operation was realized. The experimental results confirmed the robustness of the system, and the tilt and pan angles of the head could be calculated with enough accuracy for use.
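The adaptive threshold selection can be sketched as 1-D 2-means clustering (Lloyd's algorithm) over pixel values, thresholding midway between the marker and background cluster means. This is a simplified grayscale sketch; the paper applies k-means to marker color under varying illumination.

```python
def kmeans_threshold(values, iters=20):
    """Adaptive threshold via 1-D 2-means clustering (Lloyd's algorithm).

    Sketch of the adaptive marker-extraction idea: split pixel values
    into marker/background clusters and threshold midway between the
    two cluster means.
    """
    c_lo, c_hi = min(values), max(values)
    for _ in range(iters):
        lo = [v for v in values if abs(v - c_lo) <= abs(v - c_hi)]
        hi = [v for v in values if abs(v - c_lo) > abs(v - c_hi)]
        if lo:
            c_lo = sum(lo) / len(lo)
        if hi:
            c_hi = sum(hi) / len(hi)
    return (c_lo + c_hi) / 2.0

# Dark background pixels vs. bright marker pixels
t = kmeans_threshold([12, 15, 10, 13, 200, 210, 190, 205])
```

Recomputing the threshold per frame lets the extraction track gradual illumination changes without a fixed cutoff.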
Multi-band infrared camera systems
NASA Astrophysics Data System (ADS)
Davis, Tim; Lang, Frank; Sinneger, Joe; Stabile, Paul; Tower, John
1994-12-01
The program resulted in an IR camera system that utilizes a unique MOS addressable focal plane array (FPA) with full TV resolution, electronic control capability, and windowing capability. Two systems were delivered, each with two different camera heads: a Stirling-cooled 3-5 micron band head and a liquid nitrogen-cooled, filter-wheel-based, 1.5-5 micron band head. Signal processing features include averaging up to 16 frames, flexible compensation modes, gain and offset control, and real-time dither. The primary digital interface is a Hewlett-Packard standard GPIB (IEEE-488) port that is used to upload and download data. The FPA employs an X-Y addressed PtSi photodiode array, CMOS horizontal and vertical scan registers, horizontal signal line (HSL) buffers followed by a high-gain preamplifier, and a depletion NMOS output amplifier. The 640 x 480 MOS X-Y addressed FPA has a high degree of flexibility in operational modes. By changing the digital data pattern applied to the vertical scan register, the FPA can be operated in either an interlaced or noninterlaced format. The thermal sensitivity performance of the second system's Stirling-cooled head was the best of the systems produced.
Pan, Weiyuan; Jung, Dongwook; Yoon, Hyo Sik; Lee, Dong Eun; Naqvi, Rizwan Ali; Lee, Kwan Woo; Park, Kang Ryoung
2016-08-31
Gaze tracking is the technology that identifies a region in space that a user is looking at. Most previous non-wearable gaze tracking systems use a near-infrared (NIR) light camera with an NIR illuminator. Depending on the kind of camera lens used, the viewing angle and depth-of-field (DOF) of a gaze tracking camera can differ, which affects the performance of the gaze tracking system. Nevertheless, to the best of our knowledge, most previous studies implemented gaze tracking cameras without ground truth information for determining the optimal viewing angle and DOF of the camera lens. Eye-tracker manufacturers might also use ground truth information, but they do not make it public. Therefore, researchers and developers of gaze tracking systems cannot refer to such information when implementing a gaze tracking system. We address this problem by providing an empirical study in which we design an optimal gaze tracking camera based on experimental measurements of the amount and velocity of users' head movements. Based on our results and analyses, researchers and developers might be able to more easily implement an optimal gaze tracking system. Experimental results show that our gaze tracking system achieves high performance in terms of accuracy, user convenience and interest.
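One way to see how measured head movement constrains the lens choice is a simple geometric estimate: the viewing angle must cover the face plus the lateral head travel at the camera working distance. The formula and the numbers below are an illustrative assumption, not the paper's design procedure.

```python
import math

def required_fov_deg(face_width_m, head_travel_m, camera_dist_m):
    """Viewing angle needed to keep the eye region in frame.

    Hedged geometric estimate: the lens must cover half the face width
    plus half the measured lateral head travel on each side, at the
    camera working distance.
    """
    half_span = face_width_m / 2.0 + head_travel_m / 2.0
    return math.degrees(2.0 * math.atan(half_span / camera_dist_m))

# Hypothetical numbers: 15 cm face, 20 cm lateral travel, camera at 60 cm
fov = required_fov_deg(0.15, 0.20, 0.60)
```

A wider measured head travel thus pushes the design toward a wider-angle lens, at the cost of pixel resolution on the eye region, which is exactly the trade-off the study quantifies empirically.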
An Orientation Sensor-Based Head Tracking System for Driver Behaviour Monitoring.
Zhao, Yifan; Görne, Lorenz; Yuen, Iek-Man; Cao, Dongpu; Sullman, Mark; Auger, Daniel; Lv, Chen; Wang, Huaji; Matthias, Rebecca; Skrypchuk, Lee; Mouzakitis, Alexandros
2017-11-22
Although at present legislation does not allow drivers in a Level 3 autonomous vehicle to engage in a secondary task, there may come a time when it does. Monitoring the behaviour of drivers engaging in various non-driving activities (NDAs) is crucial to decide how well the driver will be able to take over control of the vehicle. One limitation of the commonly used camera-based face tracking systems is that sufficient features of the face must be visible, which limits the detectable angle of head movement and thereby the measurable NDAs, unless multiple cameras are used. This paper proposes a novel orientation sensor based head tracking system that includes twin devices, one of which measures the movement of the vehicle while the other measures the absolute movement of the head. Measurement errors in the shaking and nodding axes were less than 0.4°, while the error in the rolling axis was less than 2°. Comparison with a camera-based system, through in-house tests and on-road tests, showed that the main advantage of the proposed system is the ability to detect angles larger than 20° in the shaking and nodding axes. Finally, a case study demonstrated that the measurement of the shaking and nodding angles produced by the proposed system can effectively characterise drivers' behaviour while engaged in the NDAs of chatting to a passenger and playing on a smartphone.
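The twin-device principle above can be sketched in a few lines: subtracting the vehicle sensor's absolute orientation from the head sensor's recovers head movement relative to the cabin. This is a hedged illustration; the function name and the (shake, nod, roll) angle convention are assumptions, not from the paper.

```python
def relative_head_angles(head_abs, vehicle_abs):
    """Subtract the vehicle's absolute orientation from the head sensor's
    absolute orientation to recover head movement relative to the cabin.
    Angles are (shake, nod, roll) tuples in degrees."""
    return tuple((h - v + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
                 for h, v in zip(head_abs, vehicle_abs))

# Vehicle turns 10 deg while the driver's head turns 45 deg absolute:
shake, nod, roll = relative_head_angles((45.0, 2.0, 1.0), (10.0, 0.0, 0.0))
```

The wrap into [-180, 180) ensures that a head turn across the +/-180 degree seam still yields the shortest signed relative angle.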
Enhanced video indirect ophthalmoscopy (VIO) via robust mosaicing.
Estrada, Rolando; Tomasi, Carlo; Cabrera, Michelle T; Wallace, David K; Freedman, Sharon F; Farsiu, Sina
2011-10-01
Indirect ophthalmoscopy (IO) is the standard of care for evaluation of the neonatal retina. When recorded on video from a head-mounted camera, IO images have low quality and narrow Field of View (FOV). We present an image fusion methodology for converting a video IO recording into a single, high quality, wide-FOV mosaic that seamlessly blends the best frames in the video. To this end, we have developed fast and robust algorithms for automatic evaluation of video quality, artifact detection and removal, vessel mapping, registration, and multi-frame image fusion. Our experiments show the effectiveness of the proposed methods.
A detailed comparison of single-camera light-field PIV and tomographic PIV
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Ding, Junfei; Atkinson, Callum; Soria, Julio; New, T. H.
2018-03-01
This paper presents a comprehensive study of single-camera light-field particle image velocimetry (LF-PIV) versus multi-camera tomographic particle image velocimetry (Tomo-PIV). Simulation studies were first performed using synthetic light-field and tomographic particle images, extensively examining the difference between these two techniques by varying key parameters such as pixel to microlens ratio (PMR), light-field camera to Tomo-camera pixel ratio (LTPR), particle seeding density and tomographic camera number. Simulation results indicate that single-camera LF-PIV can achieve accuracy consistent with that of multi-camera Tomo-PIV, but requires a greater overall number of pixels. Experimental studies were then conducted by simultaneously measuring a low-speed jet flow with single-camera LF-PIV and four-camera Tomo-PIV systems. The experiments confirm that, given a sufficiently high pixel resolution, a single-camera LF-PIV system can indeed deliver volumetric velocity field measurements for an equivalent field of view with a spatial resolution commensurate with that of a multi-camera Tomo-PIV system, enabling accurate 3D measurements in applications where optical access is limited.
Quality assurance of a gimbaled head swing verification using feature point tracking.
Miura, Hideharu; Ozawa, Shuichi; Enosaki, Tsubasa; Kawakubo, Atsushi; Hosono, Fumika; Yamada, Kiyoshi; Nagata, Yasushi
2017-01-01
To perform dynamic tumor tracking (DTT) for clinical applications safely and accurately, gimbaled head swing verification is important. We propose a quantitative gimbaled head swing verification method for daily quality assurance (QA), which uses feature point tracking and a web camera. The web camera was placed on a couch at the same position for every gimbaled head swing verification, and could move based on a determined input function (sinusoidal patterns; amplitude: ± 20 mm; cycle: 3 s) in the pan and tilt directions at isocenter plane. Two continuous images were then analyzed for each feature point using the pyramidal Lucas-Kanade (LK) method, which is an optical flow estimation algorithm. We used a tapped hole as a feature point of the gimbaled head. The period and amplitude were analyzed to acquire a quantitative gimbaled head swing value for daily QA. The mean ± SD of the period were 3.00 ± 0.03 (range: 3.00-3.07) s and 3.00 ± 0.02 (range: 3.00-3.07) s in the pan and tilt directions, respectively. The mean ± SD of the relative displacement were 19.7 ± 0.08 (range: 19.6-19.8) mm and 18.9 ± 0.2 (range: 18.4-19.5) mm in the pan and tilt directions, respectively. The gimbaled head swing was reliable for DTT. We propose a quantitative gimbaled head swing verification method for daily QA using the feature point tracking method and a web camera. Our method can quantitatively assess the gimbaled head swing for daily QA from baseline values, measured at the time of acceptance and commissioning. © 2016 The Authors. Journal of Applied Clinical Medical Physics published by Wiley Periodicals, Inc. on behalf of American Association of Physicists in Medicine.
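The daily-QA analysis above recovers the period and amplitude of the gimbaled head's sinusoidal swing from a tracked feature point. A minimal zero-crossing estimator can illustrate that step; this is a hedged sketch, not the authors' implementation, which tracks the feature with the pyramidal Lucas-Kanade method.

```python
import math

def period_and_amplitude(samples, dt):
    """Estimate the period (s) and peak amplitude of a roughly sinusoidal,
    zero-mean trajectory sampled every dt seconds. The period is the mean
    spacing between successive upward zero crossings."""
    crossings = [i for i in range(1, len(samples))
                 if samples[i - 1] < 0.0 <= samples[i]]
    gaps = [b - a for a, b in zip(crossings, crossings[1:])]
    period = sum(gaps) / len(gaps) * dt
    amplitude = max(abs(s) for s in samples)
    return period, amplitude

# Synthetic 20 mm amplitude, 3 s cycle swing (as in the QA protocol),
# sampled at an assumed 30 Hz camera frame rate:
dt = 1.0 / 30.0
traj = [20.0 * math.sin(2.0 * math.pi * t * dt / 3.0) for t in range(300)]
period, amp = period_and_amplitude(traj, dt)
```

Comparing the recovered period and amplitude against the commanded input function (3 s cycle, +/-20 mm) is exactly the daily-QA check the abstract describes.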
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ogunmolu, O; Gans, N; Jiang, S
Purpose: We propose a surface-image-guided soft robotic patient positioning system for maskless head-and-neck radiotherapy. The ultimate goal of this project is to utilize a soft robot to realize non-rigid patient positioning and real-time motion compensation. In this proof-of-concept study, we design a position-based visual servoing control system for an air-bladder-based soft robot and investigate its performance in controlling the flexion/extension cranial motion on a mannequin head phantom. Methods: The current system consists of a Microsoft Kinect depth camera, an inflatable air bladder (IAB), a pressured air source, pneumatic valve actuators, custom-built current regulators, and a National Instruments myRIO microcontroller. The performance of the designed system was evaluated on a mannequin head, with a ball joint fixed below its neck to simulate torso-induced head motion along the flexion/extension direction. The IAB is placed beneath the mannequin head. The Kinect camera captures images of the mannequin head, extracts the face, and measures the position of the head relative to the camera. This distance is sent to the myRIO, which runs control algorithms and sends actuation commands to the valves, inflating and deflating the IAB to induce head motion. Results: For a step input, i.e. regulation of the head to a constant displacement, the maximum error was a 6% overshoot, which the system then reduces to 0% steady-state error. In this initial investigation, the settling time to reach the regulated position was approximately 8 seconds, with 2 seconds of delay between the command and start of motion due to capacitance of the pneumatics, for a total of 10 seconds to regulate the error.  Conclusion: The surface image-guided soft robotic patient positioning system can achieve accurate mannequin head flexion/extension motion.
Given this promising initial result, the extension of the current one-dimensional soft robot control to multiple IABs for non-rigid positioning control will be pursued.
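The position-based visual servoing loop described above can be illustrated with a toy proportional controller driving a slow first-order pneumatic model toward a head-displacement setpoint. The gain, time constant, and function name below are illustrative assumptions, not values from the study.

```python
def servo_to_setpoint(setpoint, kp=0.5, tau=2.0, dt=0.1, steps=200):
    """Minimal position-based visual servoing loop: a proportional
    controller on the camera-measured displacement error commands a
    first-order air-bladder model (time constant tau) toward the
    setpoint; returns the trace of positions."""
    position, trace = 0.0, []
    for _ in range(steps):
        error = setpoint - position      # depth-camera measurement vs target
        command = kp * error             # valve actuation command
        position += (command / tau) * dt # sluggish pneumatic response
        trace.append(position)
    return trace

trace = servo_to_setpoint(10.0)  # regulate the head to a 10 mm displacement
```

With this simple model the error decays toward zero without overshoot, mirroring the 0% steady-state error the study reports (the real system's 6% overshoot comes from pneumatic dynamics this sketch omits).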
Ortensi, Andrea; Panunzi, Andrea; Trombetta, Silvia; Cattaneo, Alberto; Sorrenti, Salvatore; D'Orazi, Valerio
2017-05-01
The aim of this study was to test two different video cameras and recording systems used in thyroid surgery in our Department, in an attempt to record the real point of view of the surgeon's magnified vision, so as to make the viewer aware of the difference from naked-eye vision. In this retrospective study, we recorded and compared twenty thyroidectomies performed using loupe magnification and microsurgical technique: ten were recorded with a GoPro ® 4 Session action cam (commercially available) and ten with our new prototype of head-mounted video camera. Settings were selected before surgery for both cameras. The recording time is about 1 to 2 h for the GoPro ® and 3 to 5 h for our prototype. The average time needed to fit the camera on the surgeon's head and set up its functionality is about 5 min for the GoPro ® and 7-8 min for the prototype, mostly due to the HDMI wiring cable. Videos recorded with the prototype require no further editing, which is mandatory for videos recorded with the GoPro ® to highlight the surgical details. The present study showed that our prototype video camera, compared with the GoPro ® 4 Session, guarantees the best results in terms of surgical video recording quality, provides the viewer with the exact perspective of the microsurgeon, and accurately shows his magnified view through the loupes in thyroid surgery. These recordings are surgical aids for teaching and education and might be a method of self-analysis of surgical technique. Copyright © 2017 IJS Publishing Group Ltd. Published by Elsevier Ltd. All rights reserved.
Using the time shift in single pushbroom datatakes to detect ships and their heading
NASA Astrophysics Data System (ADS)
Willburger, Katharina A. M.; Schwenk, Kurt
2017-10-01
The detection of ships from remote sensing data has become an essential task for maritime security. The variety of application scenarios includes piracy, illegal fishery, ocean dumping and ships carrying refugees. While techniques using data from SAR sensors for ship detection are widely common, little literature discusses algorithms based on imagery from optical camera systems. A ship detection algorithm for optical pushbroom data has been developed. It takes advantage of the special detector assembly of most of those scanners, which allows not only the detection of a ship but also the calculation of its heading from a single acquisition. The proposed algorithm for the detection of moving ships was developed with RapidEye imagery. The algorithm consists mainly of three steps: the creation of a land-water mask, object extraction and the deeper examination of each single object. The latter step is built up from several spectral and geometric filters, making heavy use of the inter-channel displacement typical for pushbroom sensors with multiple CCD lines, finally yielding a set of ships and their directions of movement. The working principle of time-shifted pushbroom sensors and the developed algorithm are explained in detail. Furthermore, we present our first results and give an outlook on future improvements.
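The heading computation enabled by the inter-channel time shift can be sketched as follows: a moving ship appears at slightly different positions in two spectral bands acquired a known lag apart, and the displacement yields speed and heading. The 6.5 m ground sampling distance matches RapidEye, but the function, lag, and pixel values below are illustrative assumptions, not from the paper.

```python
import math

def speed_and_heading(pos_band1, pos_band2, lag_s, gsd_m=6.5):
    """Derive a ship's speed (m/s) and heading (degrees clockwise from
    north) from its pixel positions (col, row -> east, north) in two
    spectral bands acquired lag_s seconds apart; gsd_m is the ground
    sampling distance per pixel."""
    dx = (pos_band2[0] - pos_band1[0]) * gsd_m   # eastward displacement
    dy = (pos_band2[1] - pos_band1[1]) * gsd_m   # northward displacement
    speed = math.hypot(dx, dy) / lag_s
    heading = math.degrees(math.atan2(dx, dy)) % 360.0
    return speed, heading

# Ship displaced 3 px east and 4 px north over an assumed 3 s inter-band lag:
speed, heading = speed_and_heading((100, 200), (103, 204), lag_s=3.0)
```

Note the atan2(dx, dy) argument order: compass headings are measured from north, not from the x axis as in the mathematical convention.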
Autocalibration of multiprojector CAVE-like immersive environments.
Sajadi, Behzad; Majumder, Aditi
2012-03-01
In this paper, we present the first method for the geometric autocalibration of multiple projectors on a set of CAVE-like immersive display surfaces including truncated domes and 4 or 5-wall CAVEs (three side walls, floor, and/or ceiling). All such surfaces can be categorized as swept surfaces and multiple projectors can be registered on them using a single uncalibrated camera without using any physical markers on the surface. Our method can also handle nonlinear distortion in the projectors, common in compact setups where a short throw lens is mounted on each projector. Further, when the whole swept surface is not visible from a single camera view, we can register the projectors using multiple pan and tilted views of the same camera. Thus, our method scales well with different size and resolution of the display. Since we recover the 3D shape of the display, we can achieve registration that is correct from any arbitrary viewpoint appropriate for head-tracked single-user virtual reality systems. We can also achieve wallpapered registration, more appropriate for multiuser collaborative explorations. Though much more immersive than common surfaces like planes and cylinders, general swept surfaces are used today only for niche display environments. Even the more popular 4 or 5-wall CAVE is treated as a piecewise planar surface for calibration purposes and hence projectors are not allowed to be overlapped across the corners. Our method opens up the possibility of using such swept surfaces to create more immersive VR systems without compromising the simplicity of having a completely automatic calibration technique. Such calibration allows completely arbitrary positioning of the projectors in a 5-wall CAVE, without respecting the corners.
Patterned Video Sensors For Low Vision
NASA Technical Reports Server (NTRS)
Juday, Richard D.
1996-01-01
Miniature video cameras containing photoreceptors arranged in prescribed non-Cartesian patterns are proposed to compensate partly for some visual defects. The cameras, accompanied by (and possibly integrated with) miniature head-mounted video display units, would restore some visual function in humans whose visual fields are reduced by defects like retinitis pigmentosa.
Pre-impact fall detection system using dynamic threshold and 3D bounding box
NASA Astrophysics Data System (ADS)
Otanasap, Nuth; Boonbrahm, Poonpong
2017-02-01
Fall prevention and detection systems have to overcome many challenges in order to be efficient. Some of the difficult problems in vision-based systems are obtrusion, occlusion and overlay. Other associated issues are privacy, cost, noise, computational complexity and the definition of threshold values. Estimating human motion with vision-based methods usually involves partial overlay, caused by the direction of the viewpoint between objects or body parts and the camera, and these issues have to be taken into consideration. This paper proposes a dynamic-threshold-based bounding box posture analysis method with a multiple-Kinect camera setup for human posture analysis and fall detection. The proposed work uses only two Kinect cameras for acquiring distributed values and differentiating between normal activities and falls. If the peak value of head velocity is greater than the dynamic threshold value, bounding box posture analysis is used to confirm fall occurrence. Furthermore, information captured by multiple Kinects placed at a right angle addresses the skeleton overlay problem of a single Kinect. This work contributes a fusion of multiple Kinect-based skeletons, based on dynamic threshold and bounding box posture analysis, which is the only such research work reported so far.
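The two-stage decision above (a dynamic head-velocity threshold followed by bounding box confirmation) might be sketched like this. The threshold definition (mean + k·std over a recent window) and all names are assumptions, since the abstract does not give the formula.

```python
def detect_fall(head_velocities, heights, widths, k=2.0):
    """Two-stage fall check: flag a candidate when peak head velocity
    exceeds a dynamic threshold (mean + k * std of the recent window),
    then confirm when the subject's bounding box is wider than it is
    tall (a lying posture)."""
    mean = sum(head_velocities) / len(head_velocities)
    var = sum((v - mean) ** 2 for v in head_velocities) / len(head_velocities)
    threshold = mean + k * var ** 0.5          # dynamic, data-driven threshold
    candidate = max(head_velocities) > threshold
    lying = widths[-1] > heights[-1]           # bounding box confirmation
    return candidate and lying

# A velocity spike followed by a wide, low bounding box -> fall confirmed:
fall = detect_fall([0.2] * 20 + [3.0],
                   heights=[1.7] * 20 + [0.5],
                   widths=[0.5] * 20 + [1.6])
```

The two-stage structure is what suppresses false alarms: a fast head movement alone (e.g. sitting down quickly) does not trigger a fall unless the final posture also looks horizontal.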
NASA Astrophysics Data System (ADS)
Masciotti, James M.; Rahim, Shaheed; Grover, Jarrett; Hielscher, Andreas H.
2007-02-01
We present a design for a frequency-domain instrument that allows simultaneous acquisition of magnetic resonance and diffuse optical tomographic imaging data. This small animal imaging system combines the high anatomical resolution of magnetic resonance imaging (MRI) with the high temporal resolution and physiological information provided by diffuse optical tomography (DOT). The DOT hardware comprises laser diodes and an intensified CCD camera, which are modulated up to 1 GHz by radio frequency (RF) signal generators. An optical imaging head is designed to fit inside the 4 cm inner diameter of a 9.4 T MRI system. Graded index fibers are used to transfer light between the optical hardware and the imaging head within the RF coil. Fiducial markers are integrated into the imaging head to allow the determination of the positions of the source and detector fibers on the MR images and to permit co-registration of MR and optical tomographic images. Detector fibers are arranged compactly and focused through a camera lens onto the photocathode of the intensified CCD camera.
Cascianelli, S; Tranfaglia, C; Fravolini, M L; Bianconi, F; Minestrini, M; Nuvoli, S; Tambasco, N; Dottorini, M E; Palumbo, B
2017-01-01
The differential diagnosis of Parkinson's disease (PD) and other conditions, such as essential tremor and drug-induced parkinsonian syndrome or the normal aging brain, represents a diagnostic challenge. 123I-FP-CIT brain SPET is able to contribute to the differential diagnosis. Semiquantitative analysis of radiopharmaceutical uptake in the basal ganglia (caudate nuclei and putamina) is very useful to support the diagnostic process. An automatic classifier using 123I-FP-CIT brain SPET data, a classification tree (CIT), was applied. A CIT is an automatic classifier composed of a set of logical rules, organized as a decision tree, that produces an optimised threshold-based classification of the data to provide discriminative cut-off values. We applied a CIT to 123I-FP-CIT brain SPET semiquantitative data to obtain cut-off values of radiopharmaceutical uptake ratios in the caudate nuclei and putamina, with the aim of diagnosing PD versus other conditions. We retrospectively investigated 187 patients undergoing 123I-FP-CIT brain SPET (Millennium VG, G.E.M.S.) with semiquantitative analysis performed with Basal Ganglia (BasGan) V2 software according to EANM guidelines; among them, 113 were affected by PD (PD group) and 74 (N group) by other non-parkinsonian conditions, such as essential tremor and drug-induced PD. The PD group included 113 subjects (60M and 53F, age range 60-81 yrs) with Hoehn and Yahr (HY) scores of 0.5-1.5 and Unified Parkinson Disease Rating Scale (UPDRS) scores of 6-38; the N group included 74 subjects (36M and 38F, age range 60-80 yrs). All subjects were clinically followed for at least 6-18 months to confirm the diagnosis. To examine the data obtained using the CIT, for each of the 1,000 experiments carried out, 10% of patients were randomly selected as the CIT training set, while the remaining 90% validated the trained CIT, and the percentage of the validation data correctly classified into the two groups of patients was computed.
The expected performance of an "average performance CIT" was evaluated. For the CIT, the probability of correct classification was 84.19±11.67% (mean±SD) in patients with PD and 93.48±6.95% in N patients. The first decision rule provided a value for the right putamen of 2.32±0.16: patients with right putamen values <2.32 were classified as having PD. Patients with putamen values ≥2.32 underwent further analysis. They were classified as N if the right putamen uptake value was ≥3.02, or if the value for the right putamen was <3.02 and the age was ≥67.5 years. Otherwise the patients were classified as having PD. Other similar rules on the values of both caudate nuclei and the left putamen could be used to refine the classification, but in our data their analysis did not significantly contribute to the differential diagnosis. This could be due to an increased number of more severe patients with an initial prevalence of left clinical symptoms having a worsening in right putamen uptake distribution. These results show that the CIT was able to accurately classify PD and non-PD patients by means of 123I-FP-CIT brain SPET data and also provided cut-off values able to differentiate these groups of patients. Right putamen uptake values were the most discriminant feature for correctly classifying our patients, probably due to a certain number of subjects with an initial prevalence of left clinical symptoms. Finally, the selective evaluation of the group of subjects with putamen values ≥2.32 disclosed that age was a further important feature for classifying patients with certain right putamen values.
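The decision rules reported above (cut-offs 2.32 and 3.02 for right putamen uptake, age 67.5 years) translate directly into code. This sketch encodes only the first-level rules quoted in the abstract, not the full trained tree, and the function name is an assumption.

```python
def classify(right_putamen_uptake, age_years):
    """Decision rules reported for the classification tree (CIT):
    right putamen uptake < 2.32 -> PD; >= 3.02 -> non-parkinsonian (N);
    in between, age >= 67.5 years -> N, otherwise PD."""
    if right_putamen_uptake < 2.32:
        return "PD"
    if right_putamen_uptake >= 3.02 or age_years >= 67.5:
        return "N"
    return "PD"
```

Written out this way, the rules make the clinical reading explicit: only the intermediate uptake band (2.32 to 3.02) requires the age criterion to resolve the classification.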
3-dimensional telepresence system for a robotic environment
Anderson, Matthew O.; McKay, Mark D.
2000-01-01
A telepresence system includes a camera pair remotely controlled by a control module affixed to an operator. The camera pair provides three dimensional viewing, and the control module, affixed to the operator, affords hands-free operation of the camera pair. In one embodiment, the control module is affixed to the head of the operator and an initial position is established. A triangulating device is provided to track the head movement of the operator relative to the initial position. A processor module receives input from the triangulating device to determine where the operator has moved relative to the initial position and moves the camera pair in response thereto. The movement of the camera pair is predetermined by a software map having a plurality of operation zones. Each zone therein corresponds to unique camera movement parameters such as speed of movement. Speed parameters include constant, increasing, or decreasing speed. Other parameters include pan, tilt, slide, raising or lowering of the cameras. Other user interface devices are provided to improve the three dimensional control capabilities of an operator in a local operating environment. Such other devices include a pair of visual display glasses, a microphone and a remote actuator. The pair of visual display glasses is provided to facilitate three dimensional viewing, and hence depth perception. The microphone affords hands-free camera movement by utilizing voice commands. The actuator allows the operator to remotely control various robotic mechanisms in the remote operating environment.
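The software map of operation zones described above can be illustrated as a lookup from the operator's head offset to a camera slew speed. The zone boundaries and speeds below are invented for illustration; the patent does not specify numeric values.

```python
def camera_speed(offset_deg, zones=((5.0, 0.0), (15.0, 2.0), (30.0, 5.0))):
    """Map the operator's head offset from the initial position to a
    camera slew speed using concentric operation zones: inside the
    innermost (dead) zone the camera holds still; outer zones slew
    progressively faster. zones is a sequence of
    (max_offset_deg, speed_deg_per_s) pairs, inner to outer."""
    for max_offset, speed in zones:
        if abs(offset_deg) <= max_offset:
            return speed
    return zones[-1][1]  # beyond the outermost zone, clamp to fastest speed

assert camera_speed(3.0) == 0.0   # small head motion: camera stays put
```

A dead zone around the initial position is the usual way such head-tracked rigs avoid jittering the cameras in response to the operator's involuntary micro-movements.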
The Subject Headings of the Morris Swett Library, USAFAS. Revised.
1980-05-15
[Fragmentary OCR of subject-heading entries, including: Royal Armoured Corps; Armored force / troops / units; Mechanized force / units / warfare; Tank companies; Physical training; CAMERA MOUNTS; CAMERAS, AERIAL; CAMOUFLAGE (U 166.3h).]
Using commodity accelerometers and gyroscopes to improve speed and accuracy of JanusVF
NASA Astrophysics Data System (ADS)
Hutson, Malcolm; Reiners, Dirk
2010-01-01
Several critical limitations exist in the currently available commercial tracking technologies for fully-enclosed virtual reality (VR) systems. While several 6DOF solutions can be adapted to work in fully-enclosed spaces, they still include elements of hardware that can interfere with the user's visual experience. JanusVF introduced a tracking solution for fully-enclosed VR displays that achieves comparable performance to available commercial solutions but without artifacts that can obscure the user's view. JanusVF employs a small, high-resolution camera that is worn on the user's head, but faces backwards. The VR rendering software draws specific fiducial markers with known size and absolute position inside the VR scene behind the user but in view of the camera. These fiducials are tracked by ARToolkitPlus and integrated by a single-constraint-at-a-time (SCAAT) filter to update the head pose. In this paper we investigate the addition of low-cost accelerometers and gyroscopes such as those in Nintendo Wii remotes, the Wii Motion Plus, and the Sony Sixaxis controller to improve the precision and accuracy of JanusVF. Several enthusiast projects have implemented these units as basic trackers or for gesture recognition, but none so far have created true 6DOF trackers using only the accelerometers and gyroscopes. Our original experiments were repeated after adding the low-cost inertial sensors, showing considerable improvements and noise reduction.
Prism-based single-camera system for stereo display
NASA Astrophysics Data System (ADS)
Zhao, Yue; Cui, Xiaoyu; Wang, Zhiguo; Chen, Hongsheng; Fan, Heyu; Wu, Teresa
2016-06-01
This paper combines a prism with a single camera and puts forward a low-cost method of stereo imaging. First, according to the principles of geometrical optics, we deduce the relationship between the prism single-camera system and a dual-camera system, and according to the principles of binocular vision we deduce the relationship between binocular vision and the dual-camera system. Thus we can establish the relationship between the prism single-camera system and binocular vision, and obtain the positional relation of prism, camera, and object that gives the best stereo display. Finally, using NVIDIA's active shutter stereo glasses, we realize the three-dimensional (3-D) display of the object. The experimental results show that the proposed approach can use the prism single-camera system to simulate the various observation manners of the eyes. The stereo imaging system designed by the proposed method can faithfully restore the 3-D shape of the photographed object.
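Treating the prism single-camera rig as an equivalent binocular pair, as the paper does, rests on the standard depth-disparity relation z = f·b/d, where the prism creates an effective baseline between two virtual cameras. The sketch below states that relation; the numeric focal length, baseline, and disparity are illustrative assumptions.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Standard binocular relation used to reason about the prism
    single-camera system as an equivalent two-camera rig: the prism
    splits the view into two virtual cameras separated by an effective
    baseline b, and depth follows z = f * b / d."""
    return focal_px * baseline_m / disparity_px

# Assumed 1200 px focal length, 6 cm effective baseline, 24 px disparity:
z = depth_from_disparity(focal_px=1200.0, baseline_m=0.06, disparity_px=24.0)
```

The relation also shows why the prism geometry matters: a larger effective baseline increases disparity for a given depth, improving the depth resolution of the stereo display.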
A study of emergency American football helmet removal techniques.
Swartz, Erik E; Mihalik, Jason P; Decoster, Laura C; Hernandez, Adam E
2012-09-01
The purpose was to compare head kinematics between the Eject Helmet Removal System and manual football helmet removal. This quasi-experimental study was conducted in a controlled laboratory setting. Thirty-two certified athletic trainers (sex, 19 male and 13 female; age, 33 ± 10 years; height, 175 ± 12 cm; mass, 86 ± 20 kg) removed a football helmet from a healthy model under 2 conditions: manual helmet removal and Eject system helmet removal. A 6-camera motion capture system recorded 3-dimensional head position. Our outcome measures consisted of the average angular velocity and acceleration of the head in each movement plane (sagittal, frontal, and transverse), the resultant angular velocity and acceleration, and total motion. Paired-samples t tests compared each variable across the 2 techniques. Manual helmet removal elicited greater average angular velocity in the sagittal and transverse planes and greater resultant angular velocity compared with the Eject system. No differences were observed in average angular acceleration in any single plane of movement; however, the resultant angular acceleration was greater during manual helmet removal. The Eject Helmet Removal System induced greater total head motion. Although the Eject system created more motion at the head, removing a helmet manually resulted in more sudden perturbations as identified by resultant velocity and acceleration of the head. The implications of these findings relate to the care of all cervical spine-injured patients in emergency medical settings, particularly in scenarios where helmet removal is necessary. Copyright © 2012 Elsevier Inc. All rights reserved.
Hoffmann, G; Schmidt, M; Ammon, C
2016-09-01
In this study, a video-based infrared camera (IRC) was investigated as a tool to monitor the body temperature of calves. Body surface temperatures were measured contactless using videos from an IRC fixed at a certain location in the calf feeder. The body surface temperatures were analysed retrospectively at three larger areas: the head area (in front of the forehead), the body area (behind forehead) and the area of the entire animal. The rectal temperature served as a reference temperature and was measured with a digital thermometer at the corresponding time point. A total of nine calves (Holstein-Friesians, 8 to 35 weeks old) were examined. The average maximum temperatures of the area of the entire animal (mean±SD: 37.66±0.90°C) and the head area (37.64±0.86°C) were always higher than that of the body area (36.75±1.06°C). The temperatures of the head area and of the entire animal were very similar. However, the maximum temperatures as measured using IRC increased with an increase in calf rectal temperature. The maximum temperatures of each video picture for the entire visible body area of the calves appeared to be sufficient to measure the superficial body temperature. The advantage of the video-based IRC over conventional IR single-picture cameras is that more than one picture per animal can be analysed in a short period of time. This technique provides more data for analysis. Thus, this system shows potential as an indicator for continuous temperature measurements in calves.
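The per-frame analysis described above (maximum surface temperature of the head area, body area, and entire animal) can be sketched over a 2-D temperature array. The column-based region split and the function name are assumptions; the study defines its regions on the animal's anatomy, not on fixed image columns.

```python
def region_max_temps(frame, split_col):
    """Return the maximum temperature of the entire frame plus the
    head (columns < split_col) and body (columns >= split_col)
    sub-areas of one radiometric video frame, given as a 2-D list of
    temperatures in deg C."""
    head = [row[:split_col] for row in frame]
    body = [row[split_col:] for row in frame]

    def tmax(region):
        return max(max(row) for row in region)

    return tmax(frame), tmax(head), tmax(body)

# Toy 2 x 4 radiometric frame, head pixels in the first two columns:
frame = [[36.1, 36.4, 35.2, 34.9],
         [37.6, 36.9, 35.8, 35.1]]
entire, head, body = region_max_temps(frame, split_col=2)
```

Because the maximum over the entire animal always includes the head pixels, the entire-animal and head maxima coincide whenever the hottest spot lies on the head, which matches the near-identical values the study reports for those two areas.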
A continuous dry 300 mK cooler for THz sensing applications.
Klemencic, G M; Ade, P A R; Chase, S; Sudiwala, R; Woodcraft, A L
2016-04-01
We describe and demonstrate the automated operation of a novel cryostat design that is capable of maintaining an unloaded base temperature of less than 300 mK continuously, without the need to recycle the gases within the final cold head, as is the case for conventional single-shot sorption-pumped ³He cooling systems. This closed dry system uses only 5 l of ³He gas, making this an economical alternative to traditional systems where a long hold time is required. During testing, a temperature of 365 mK was maintained with a constant 20 μW load, simulating the cooling requirement of a far infrared camera.
Single chip camera active pixel sensor
NASA Technical Reports Server (NTRS)
Shaw, Timothy (Inventor); Pain, Bedabrata (Inventor); Olson, Brita (Inventor); Nixon, Robert H. (Inventor); Fossum, Eric R. (Inventor); Panicacci, Roger A. (Inventor); Mansoorian, Barmak (Inventor)
2003-01-01
A totally digital single chip camera includes communications to operate most of its structure in serial communication mode. The digital single chip camera includes a D/A converter for converting an input digital word into an analog reference signal. The chip includes all of the necessary circuitry for operating the chip using a single pin.
19. Walkway on top of caisson and capstan heads. View ...
19. Walkway on top of caisson and capstan heads. View shows water from Puget Sound on right and an empty drydock on left. Camera is pointed E from a raised platform. - Puget Sound Naval Shipyard, Drydock No. 3, Farragut Avenue, Bremerton, Kitsap County, WA
Antihyperlipidemic effect of Scoparia dulcis (sweet broomweed) in streptozotocin diabetic rats.
Pari, Leelavinothan; Latha, Muniappan
2006-01-01
We have investigated Scoparia dulcis, an indigenous plant used in Ayurvedic medicine in India, for its possible antihyperlipidemic effect in rats with streptozotocin-induced experimental diabetes. Oral administration of an aqueous extract of S. dulcis plant (200 mg/kg of body weight) to streptozotocin diabetic rats for 6 weeks resulted in a significant reduction in blood glucose, serum and tissue cholesterol, triglycerides, free fatty acids, phospholipids, 3-hydroxy-3-methylglutaryl (HMG)-CoA reductase activity, and very low-density lipoprotein and low-density lipoprotein cholesterol levels. The decreased serum high-density lipoprotein cholesterol, anti-atherogenic index, and HMG-CoA reductase activity in diabetic rats were also reversed towards normalization after the treatment. Similarly, the administration of S. dulcis plant extract (SPEt) to normal animals resulted in a hypolipidemic effect. The effect was compared with glibenclamide (600 microg/kg of body weight). The results showed that SPEt had antihyperlipidemic action in normal and experimental diabetic rats in addition to its antidiabetic effect.
He, Daoping; Li, Yamei; Ooka, Hideshi; Go, Yoo Kyung; Jin, Fangming; Kim, Sun Hee; Nakamura, Ryuhei
2018-02-14
The development of denitrification catalysts which can reduce nitrate and nitrite to dinitrogen is critical for sustaining the nitrogen cycle. However, regulating the selectivity has proven to be a challenge, due to the difficulty of controlling complex multielectron/proton reactions. Here we report that utilizing sequential proton-electron transfer (SPET) pathways is a viable strategy to enhance the selectivity of electrochemical reactions. The selectivity of an oxo-molybdenum sulfide electrocatalyst toward nitrite reduction to dinitrogen exhibited a volcano-type pH dependence with a maximum at pH 5. The pH-dependent formation of the intermediate species (distorted Mo(V) oxo species) identified using operando electron paramagnetic resonance (EPR) and Raman spectroscopy was in accord with a mathematical prediction that the pKa of the reaction intermediates determines the pH dependence of the SPET-derived product. By utilizing this acute pH dependence, we achieved a Faradaic efficiency of 13.5% for nitrite reduction to dinitrogen, which is the highest value reported to date under neutral conditions.
Help for the Visually Impaired
NASA Technical Reports Server (NTRS)
1995-01-01
The Low Vision Enhancement System (LVES) is a video headset that offers people with low vision a view of their surroundings equivalent to the image on a five-foot television screen four feet from the viewer. It will not make the blind see, but for many people with low vision it eases everyday activities such as reading, watching TV and shopping. LVES was developed over almost a decade of cooperation between Stennis Space Center, the Wilmer Eye Institute of the Johns Hopkins Medical Institutions, the Department of Veterans Affairs, and Visionics Corporation. With the aid of Stennis scientists, Wilmer researchers used NASA technology for computer processing of satellite images and head-mounted vision enhancement systems originally intended for the space station. The unit consists of a head-mounted video display, three video cameras, and a control unit for the cameras. The cameras feed images to the video display in the headset.
Motion Sickness When Driving With a Head-Slaved Camera System
2003-02-01
Van Erp, J.B.F., Padmos, P. & Tenkink, E. ... YPR-765 under armour (Report TM-97-A026). Soesterberg, The Netherlands: TNO Human Factors Research Institute. Van Erp, J.B.F., Van den Dobbelsteen, J.J. & Padmos, P. (1998). Improved camera-monitor system for driving YPR-765 under armour (Report TM-98-...). Soesterberg, The Netherlands: TNO Human Factors Research Institute.
An Intraocular Camera for Retinal Prostheses: Restoring Sight to the Blind
NASA Astrophysics Data System (ADS)
Stiles, Noelle R. B.; McIntosh, Benjamin P.; Nasiatka, Patrick J.; Hauer, Michelle C.; Weiland, James D.; Humayun, Mark S.; Tanguay, Armand R., Jr.
Implantation of an intraocular retinal prosthesis represents one possible approach to the restoration of sight in those with minimal light perception due to photoreceptor degenerating diseases such as retinitis pigmentosa and age-related macular degeneration. In such an intraocular retinal prosthesis, a microstimulator array attached to the retina is used to electrically stimulate still-viable retinal ganglion cells that transmit retinotopic image information to the visual cortex by means of the optic nerve, thereby creating an image percept. We describe herein an intraocular camera that is designed to be implanted in the crystalline lens sac and connected to the microstimulator array. Replacement of an extraocular (head-mounted) camera with the intraocular camera restores the natural coupling of head and eye motion associated with foveation, thereby enhancing visual acquisition, navigation, and mobility tasks. This research is in no small part inspired by the unique scientific style and research methodologies that many of us have learned from Prof. Richard K. Chang of Yale University, and is included herein as an example of the extent and breadth of his impact and legacy.
Enhanced Video-Oculography System
NASA Technical Reports Server (NTRS)
Moore, Steven T.; MacDougall, Hamish G.
2009-01-01
A previously developed video-oculography system has been enhanced for use in measuring vestibulo-ocular reflexes of a human subject in a centrifuge, motor vehicle, or other setting. The system as previously developed included a lightweight digital video camera mounted on goggles. The left eye was illuminated by an infrared light-emitting diode via a dichroic mirror, and the camera captured images of the left eye in infrared light. To extract eye-movement data, the digitized video images were processed by software running in a laptop computer. Eye movements were calibrated by having the subject view a target pattern, fixed with respect to the subject's head, generated by a goggle-mounted laser with a diffraction grating. The system as enhanced includes a second camera for imaging the scene from the subject's perspective, and two inertial measurement units (IMUs) for measuring linear accelerations and rates of rotation for computing head movements. One IMU is mounted on the goggles, the other on the centrifuge or vehicle frame. All eye-movement and head-motion data are time-stamped. In addition, the subject's point of regard is superimposed on each scene image to enable analysis of patterns of gaze in real time.
SFDT-1 Camera Pointing and Sun-Exposure Analysis and Flight Performance
NASA Technical Reports Server (NTRS)
White, Joseph; Dutta, Soumyo; Striepe, Scott
2015-01-01
The Supersonic Flight Dynamics Test (SFDT) vehicle was developed to advance and test technologies of NASA's Low Density Supersonic Decelerator (LDSD) Technology Demonstration Mission. The first flight test (SFDT-1) occurred on June 28, 2014. In order to optimize the usefulness of the camera data, analysis was performed to optimize parachute visibility in the camera field of view during deployment and inflation and to determine the probability of sun-exposure issues with the cameras given the vehicle heading and launch time. This paper documents the analysis, results and comparison with flight video of SFDT-1.
Laser-Directed Ranging System Implementing Single Camera System for Telerobotics Applications
NASA Technical Reports Server (NTRS)
Wells, Dennis L. (Inventor); Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1995-01-01
The invention relates generally to systems for determining the range of an object from a reference point and, in one embodiment, to laser-directed ranging systems useful in telerobotics applications. Digital processing techniques are employed which minimize the complexity and cost of the hardware and software for processing range calculations, thereby enhancing the commercial attractiveness of the system for use in relatively low-cost robotic systems. The system includes a video camera for generating images of the target, image digitizing circuitry, and an associated frame grabber circuit. The circuit first captures one of the pairs of stereo video images of the target, and then captures a second video image of the target as it is partly illuminated by the light beam, suitably generated by a laser. The two video images, taken sufficiently close together in time to minimize camera and scene motion, are converted to digital images and then compared. Common pixels are eliminated, leaving only a digital image of the laser-illuminated spot on the target. The centroid of the laser-illuminated spot is then obtained and compared with a reference point, predetermined by design or calibration, which represents the coordinate at the focal plane of the laser illumination at infinite range. Preferably, the laser and camera are mounted on a servo-driven platform which can be oriented to direct the camera and the laser toward the target. In one embodiment the platform is positioned in response to movement of the operator's head. Position and orientation sensors are used to monitor head movement. The disparity between the digital image of the laser spot and the reference point is calculated for determining range to the target.
Commercial applications for the system relate to active range-determination systems, such as those used with robotic systems in which it is necessary to determine the range to a workpiece or object to be grasped or acted upon by a robot arm end-effector in response to commands generated by an operator. In one embodiment, the system provides a real-time image of the target for the operator as the robot approaches the object. The system is also adapted for use in virtual reality systems in which a remote object or workpiece is to be acted upon by a remote robot arm or other mechanism controlled by an operator.
Protective laser beam viewing device
Neil, George R.; Jordan, Kevin Carl
2012-12-18
A protective laser beam viewing system or device including a camera selectively sensitive to laser light wavelengths and a viewing screen receiving images from the laser sensitive camera. According to a preferred embodiment of the invention, the camera is worn on the head of the user or incorporated into a goggle-type viewing display so that it is always aimed at the area of viewing interest to the user and the viewing screen is incorporated into a video display worn as goggles over the eyes of the user.
Three-dimensional face pose detection and tracking using monocular videos: tool and application.
Dornaika, Fadi; Raducanu, Bogdan
2009-08-01
Recently, we have proposed a real-time tracker that simultaneously tracks the 3-D head pose and facial actions in monocular video sequences that can be provided by low quality cameras. This paper has two main contributions. First, we propose an automatic 3-D face pose initialization scheme for the real-time tracker by adopting a 2-D face detector and an eigenface system. Second, we use the proposed methods-the initialization and tracking-for enhancing the human-machine interaction functionality of an AIBO robot. More precisely, we show how the orientation of the robot's camera (or any active vision system) can be controlled through the estimation of the user's head pose. Applications based on head-pose imitation such as telepresence, virtual reality, and video games can directly exploit the proposed techniques. Experiments on real videos confirm the robustness and usefulness of the proposed methods.
Embodied attention and word learning by toddlers
Yu, Chen; Smith, Linda B.
2013-01-01
Many theories of early word learning begin with the uncertainty inherent to learning a word from its co-occurrence with a visual scene. However, the relevant visual scene for infant word learning is neither from the adult theorist’s view nor the mature partner’s view, but is rather from the learner’s personal view. Here we show that when 18-month old infants interacted with objects in play with their parents, they created moments in which a single object was visually dominant. If parents named the object during these moments of bottom-up selectivity, later forced-choice tests showed that infants learned the name, but did not when naming occurred during a less visually selective moment. The momentary visual input for parents and toddlers was captured via head cameras placed low on each participant’s forehead as parents played with and named objects for their infant. Frame-by-frame analyses of the head camera images at and around naming moments were conducted to determine the visual properties at input that were associated with learning. The analyses indicated that learning occurred when bottom-up visual information was clean and uncluttered. The sensory-motor behaviors of infants and parents were also analyzed to determine how their actions on the objects may have created these optimal visual moments for learning. The results are discussed with respect to early word learning, embodied attention, and the social role of parents in early word learning. PMID:22878116
Biomarker Discovery in Gulf War Veterans: Development of a War Illness Diagnostic Panel
2013-10-01
Med Genomics. 2009;2:12. 11. Sullivan K, Krengel M, Proctor SP, et al. Cognitive functioning in treatment-seeking Gulf War veterans: pyridostigmine bromide use and PTSD. J Psychopathology and Behavioral Assessment. 2003;25:95-103. 12. Toomey R, Alpern R, Vasterling JJ, et al. Neuropsychological ...
SU-G-IeP4-12: Performance of In-111 Coincident Gamma-Ray Counting: A Monte Carlo Simulation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pahlka, R; Kappadath, S; Mawlawi, O
2016-06-15
Purpose: The decay of In-111 results in a non-isotropic gamma-ray cascade, which is normally imaged using a gamma camera. Creating images with a gamma camera using coincident gamma-rays from In-111 has not been previously studied. Our objective was to explore the feasibility of imaging this cascade as coincidence events and to determine the optimal timing resolution and source activity using Monte Carlo simulations. Methods: GEANT4 was used to simulate the decay of the In-111 nucleus and to model the gamma camera. Each photon emission was assigned a timestamp, and the time delay and angular separation for the second gamma-ray in the cascade was consistent with the known intermediate state half-life of 85 ns. The gamma-rays are transported through a model of a Siemens dual head Symbia "S" gamma camera with a 5/8-inch thick crystal and medium energy collimators. A true coincident event was defined as a single 171 keV gamma-ray followed by a single 245 keV gamma-ray within a specified time window (or vice versa). Several source activities (ranging from 10 uCi to 5 mCi) with and without incorporation of background counts were then simulated. Each simulation was analyzed using varying time windows to assess random events. The noise equivalent count rate (NECR) was computed based on the number of true and random counts for each combination of activity and time window. No scatter events were assumed since sources were simulated in air. Results: As expected, increasing the timing window increased the total number of observed coincidences albeit at the expense of true coincidences. A timing window range of 200-500 ns maximizes the NECR at clinically-used source activities. The background rate did not significantly alter the maximum NECR. Conclusion: This work suggests coincident measurements of In-111 gamma-ray decay can be performed with commercial gamma cameras at clinically-relevant activities. Work is ongoing to assess useful clinical applications.
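The trade-off that determines the optimal timing window can be sketched numerically: true coincidences saturate as the window grows past the 85 ns intermediate-state half-life, while random coincidences grow linearly with the window width. A minimal sketch in Python; the singles and true-coincidence rates are illustrative assumptions, not the simulated values, and only the half-life and the standard accidental-coincidence estimate are taken as given:

```python
import numpy as np

T_HALF = 85e-9          # intermediate-state half-life of the In-111 cascade (s)
S1, S2 = 2.0e4, 2.5e4   # assumed singles rates for the 171 and 245 keV lines (cps)
T_MAX = 1.0e3           # assumed true-coincidence rate for an infinite window (cps)

def necr(tau):
    """Noise-equivalent count rate for a coincidence window tau (s)."""
    trues = T_MAX * (1.0 - 2.0 ** (-tau / T_HALF))  # fraction of cascades inside the window
    randoms = 2.0 * tau * S1 * S2                   # standard accidental-coincidence estimate
    return trues ** 2 / (trues + randoms)

windows = np.linspace(50e-9, 1000e-9, 96)
best = windows[np.argmax([necr(t) for t in windows])]
print(f"NECR-optimal window: {best * 1e9:.0f} ns")
```

With these assumed rates the optimum falls inside the 200-500 ns band the abstract reports; the exact peak position shifts with the singles rates.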
Red ball ranging optimization based on dual camera ranging method
NASA Astrophysics Data System (ADS)
Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung
2018-05-01
In this paper, the process of positioning and moving to a target red ball by the NAO robot through its camera system is analyzed and improved using a dual camera ranging method. The single camera ranging method adopted by the NAO robot was first studied and tested experimentally. Since the error of the current NAO robot is not governed by a single variable, the experiments were divided into two parts, forward ranging and backward ranging, to obtain more accurate single camera ranging data. Two USB cameras were used in the experiments; the Hough circle method was applied to identify the ball, and the HSV color space model was used to identify the red color. The results showed that the dual camera ranging method reduced the variance of the ranging error in ball tracking from 0.68 to 0.20.
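For a calibrated, rectified camera pair, the dual camera method reduces to triangulation: range equals focal length times baseline divided by the disparity between the ball's two image positions. A minimal sketch; the focal length, baseline, and pixel coordinates are illustrative, not the NAO's actual parameters:

```python
def stereo_depth(x_left, x_right, focal_px, baseline_m):
    """Depth from horizontal disparity for a rectified camera pair.

    x_left, x_right: ball-centre column (pixels) in each image
    focal_px: focal length expressed in pixels
    baseline_m: distance between the two camera centres (m)
    """
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("target must be in front of both cameras")
    return focal_px * baseline_m / disparity

# Illustrative numbers: 700 px focal length, 12 cm baseline, 35 px disparity
print(f"range: {stereo_depth(410, 375, 700, 0.12):.2f} m")  # 700 * 0.12 / 35 = 2.40 m
```

The formula also shows why a wider baseline improves precision: a given ranging error in pixels maps to a smaller error in metres as the baseline grows.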
A Real-Time Optical 3D Tracker for Head-Mounted Display Systems
1990-03-01
paper. OPTOTRAK [Nor88] uses one camera with two dual-axis CCD infrared position sensors. Each position sensor has a dedicated processor board to ... enhance the usefulness of head-mounted display systems. [Nor88] Northern Digital. Trade literature on OPTOTRAK - Northern Digital's Three Dimensional ...
A guide to SPECT equipment for brain imaging
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hoffer, P.B.; Zubal, G.
1991-12-31
Single photon emission computed tomography (SPECT) was started by Kuhl and Edwards about 30 years ago. Their original instrument consisted of four focused NaI probes mounted on a moving gantry. During the 1980s, clinical SPECT imaging was most frequently performed using single-headed Anger-type cameras which were modified for rotational as well as static imaging. Such instruments are still available and may be useful in settings where there are few patients and SPECT is used only occasionally. More frequently, however, dedicated SPECT devices are purchased which optimize equipment potential while being user-friendly. Modern SPECT instrumentation incorporates improvements in the detector, computers, mathematical formulations, electronics and display systems. A comprehensive discussion of all aspects of SPECT is beyond the scope of this article. The authors, however, discuss general concepts of SPECT, the current state-of-the-art in clinical SPECT instrumentation, and areas of common misunderstanding. 9 refs.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conder, A.; Mummolo, F. J.
The goal of the project was to develop a compact, large active area, high spatial resolution, high dynamic range, charge-coupled device (CCD) camera to replace film for digital imaging of visible light, ultraviolet radiation, and soft to penetrating X-rays. The camera head and controller needed to be capable of operation within a vacuum environment and small enough to be fielded within the small vacuum target chambers at LLNL.
General theory of remote gaze estimation using the pupil center and corneal reflections.
Guestrin, Elias Daniel; Eizenman, Moshe
2006-06-01
This paper presents a general theory for the remote estimation of the point-of-gaze (POG) from the coordinates of the centers of the pupil and corneal reflections. Corneal reflections are produced by light sources that illuminate the eye and the centers of the pupil and corneal reflections are estimated in video images from one or more cameras. The general theory covers the full range of possible system configurations. Using one camera and one light source, the POG can be estimated only if the head is completely stationary. Using one camera and multiple light sources, the POG can be estimated with free head movements, following the completion of a multiple-point calibration procedure. When multiple cameras and multiple light sources are used, the POG can be estimated following a simple one-point calibration procedure. Experimental and simulation results suggest that the main sources of gaze estimation errors are the discrepancy between the shape of real corneas and the spherical corneal shape assumed in the general theory, and the noise in the estimation of the centers of the pupil and corneal reflections. A detailed example of a system that uses the general theory to estimate the POG on a computer screen is presented.
Opportunity Science Using the Juno Magnetometer Investigation Star Trackers
NASA Astrophysics Data System (ADS)
Joergensen, J. L.; Connerney, J. E.; Bang, A. M.; Denver, T.; Oliversen, R. J.; Benn, M.; Lawton, P.
2013-12-01
The magnetometer experiment onboard Juno is equipped with four non-magnetic star tracker camera heads, two of which reside on each of the magnetometer sensor optical benches. These are located 10 and 12 m from the spacecraft body at the end of one of the three solar panel wings. The star tracker camera heads, collectively referred to as the Advanced Stellar Compass (ASC), provide high accuracy attitude information for the magnetometer sensors throughout science operations. The star tracker camera heads are pointed +/- 13 deg off the spin vector, in the anti-sun direction, imaging a 13 x 20 deg field of view every ¼ second as Juno rotates at 1 or 2 rpm. The ASC is a fully autonomous star tracker, producing a time series of attitude quaternions for each camera head, utilizing a suite of internal support functions. These include imaging capabilities, autonomous object tracking, automatic dark-sky monitoring, and related functions, which may be accessed via telecommand. During Juno's cruise phase, this capability can be tapped to provide unique science and engineering data along the Juno trajectory. We present a few examples of the Juno ASC opportunity science here. As the Juno spacecraft approached the Earth-Moon system for the close encounter with the Earth on October 9, 2013, one of the ASC camera heads obtained imagery of the Earth-Moon system while the other three remained in full science (attitude determination) operation. This enabled the first movie of the Earth and Moon obtained by a spacecraft flying past the Earth in a gravity assist. We also use the many artificial satellites in orbit about the Earth as calibration targets for the autonomous asteroid detection system inherent to the ASC autonomous star tracker.
We shall also profile the zodiacal dust disk, using the interstellar image data, and present the outlook for small asteroid body detection and distribution being performed during Juno's passage from Earth flyby to Jovian orbit insertion.
Biomechanical analyses of whiplash injuries using an experimental model.
Yoganandan, Narayan; Pintar, Frank A; Cusick, Joseph F
2002-09-01
Neck pain and headaches are the two most common symptoms of whiplash. The working hypothesis is that pain originates from excessive motions in the upper and lower cervical segments. The research design used an intact human cadaver head-neck complex as an experimental model. The intact head-neck preparation was fixed at the thoracic end with the head unconstrained. Retroreflective targets were placed on the mastoid process, anterior regions of the vertebral bodies, and lateral masses at every spinal level. Whiplash loading was delivered using a mini-sled pendulum device. A six-axis load cell and an accelerometer were attached to the inferior fixation of the specimen. High-speed video cameras were used to obtain the kinematics. During the initial stages of loading, a transient decoupling of the head occurs with respect to the neck exhibiting a lag of the cranium. The upper cervical spine-head undergoes local flexion concomitant with a lag of the head while the lower column is in local extension. This establishes a reverse curvature to the head-neck complex. With continuing application of whiplash loading, the inertia of the head catches up with the neck. Later, the entire head-neck complex is under an extension mode with a single extension curvature. The lower cervical facet joint kinematics demonstrates varying local compression and sliding. While the anterior- and posterior-most regions of the facet joint slide, the posterior-most region of the joint compresses more than the anterior-most region. These varying kinematics at the two ends of the facet joint result in a pinching mechanism. Excessive flexion of the posterior upper cervical regions can be correlated to headaches. The pinching mechanism of the facet joints can be correlated to neck pain. The kinematics of the soft tissue-related structures explain the mechanism of these common whiplash associated disorders.
Multi-Head Very High Power Strobe System For Motion Picture Special Effects
NASA Astrophysics Data System (ADS)
Lovoi, P. A.; Fink, Michael L.
1983-10-01
A very large camera synchronizable strobe system has been developed for motion picture special effects. This system, the largest ever built, was delivered to MGM/UA to be used in the movie "War Games". The system consists of 12 individual strobe heads and a power supply distribution system. Each strobe head operates independently and may be flashed up to 24 times per second under computer control. An energy of 480 Joules per flash is used in six strobe heads and 240 Joules per flash in the remaining six strobe heads. The beam pattern is rectangular with a FWHM of 60° x 48°.
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2016-12-01
A low-cost, easy-to-implement but practical single-camera stereo-digital image correlation (DIC) system using a four-mirror adapter is established for accurate shape and three-dimensional (3D) deformation measurements. The mirror-assisted pseudo-stereo imaging system converts a single camera into two virtual cameras, which view a specimen from different angles and record the surface images of the test object onto two halves of the camera sensor. To enable deformation measurement in non-laboratory conditions or extremely high-temperature environments, an active imaging optical design, combining an actively illuminated monochromatic source with a coupled band-pass optical filter, is compactly integrated into the pseudo-stereo DIC system. The optical design, basic principles and implementation procedures of the established system for 3D profile and deformation measurements are described in detail. The effectiveness and accuracy of the established system are verified by measuring the profile of a regular cylinder surface and the displacements of a translated planar plate. As an application example, the established system is used to determine the tensile strains and Poisson's ratio of a composite solid propellant specimen during a stress relaxation test. Since the established single-camera stereo-DIC system needs only a single camera and presents strong robustness against variations in ambient light or the thermal radiation of a hot object, it demonstrates great potential for determining transient deformation in non-laboratory or high-temperature environments with the aid of a single high-speed camera.
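The correlation step at the core of any DIC system, locating a subset of the reference image inside the deformed image, can be sketched with the zero-normalized cross-correlation (ZNCC) criterion commonly used in DIC. The synthetic arrays and subset size below are illustrative; a real DIC code adds sub-pixel interpolation and shape functions:

```python
import numpy as np

def zncc(a, b):
    """Zero-normalized cross-correlation between two equal-size subsets."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a ** 2).sum() * (b ** 2).sum()))

def match_subset(ref, target, size=11):
    """Integer-pixel shift of a central reference subset found in `target`."""
    h, w = ref.shape
    r0, c0 = (h - size) // 2, (w - size) // 2
    subset = ref[r0:r0 + size, c0:c0 + size]
    best, best_rc = -2.0, (0, 0)
    for r in range(target.shape[0] - size + 1):
        for c in range(target.shape[1] - size + 1):
            s = zncc(subset, target[r:r + size, c:c + size])
            if s > best:
                best, best_rc = s, (r, c)
    return best_rc[0] - r0, best_rc[1] - c0  # (row shift, column shift)

rng = np.random.default_rng(0)
ref = rng.random((40, 40))
shifted = np.roll(np.roll(ref, 3, axis=0), -2, axis=1)  # shift down 3, left 2
print(match_subset(ref, shifted))  # expect (3, -2)
```

In the pseudo-stereo arrangement this matching is performed between the two sensor halves (for triangulation) and between reference and deformed states (for displacement).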
A Robust Camera-Based Interface for Mobile Entertainment
Roig-Maimó, Maria Francesca; Manresa-Yee, Cristina; Varona, Javier
2016-01-01
Camera-based interfaces in mobile devices are starting to be used in games and apps, but few works have evaluated them in terms of usability or user perception. Due to the changing nature of mobile contexts, this evaluation requires extensive studies considering the full spectrum of potential users and contexts. However, previous works usually evaluate these interfaces in controlled environments such as laboratory conditions; therefore, the findings cannot be generalized to real users and real contexts. In this work, we present a robust camera-based interface for mobile entertainment. The interface detects and tracks the user's head by processing the frames provided by the mobile device's front camera, and its position is then used to interact with the mobile apps. First, we evaluate the interface as a pointing device to study its accuracy and different factors to configure, such as the gain or the device's orientation, as well as the optimal target size for the interface. Second, we present an in-the-wild study to evaluate the usage and the user's perception when playing a game controlled by head motion. Finally, the game is published in an application store to make it available to a large number of potential users and contexts, and we register usage data. Results show the feasibility of using this robust camera-based interface for mobile entertainment in different contexts and by different people. PMID:26907288
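A head-position-to-pointer mapping of the kind such an interface uses can be sketched as a gain-scaled displacement from a calibrated neutral pose, clamped to the screen. The gain value, screen size, and coordinates below are illustrative assumptions, not the configuration evaluated in the paper:

```python
def head_to_pointer(head_xy, ref_xy, gain, screen_w=1080, screen_h=1920):
    """Map the tracked head position (camera pixels) to a screen coordinate.

    head_xy: current head-centre position in the front-camera frame
    ref_xy: head position captured at calibration (neutral pose)
    gain: pointer pixels moved per pixel of head displacement
    """
    dx = (head_xy[0] - ref_xy[0]) * gain
    dy = (head_xy[1] - ref_xy[1]) * gain
    # clamp to the screen so the pointer never leaves the display
    x = min(max(screen_w / 2 + dx, 0), screen_w - 1)
    y = min(max(screen_h / 2 + dy, 0), screen_h - 1)
    return x, y

print(head_to_pointer((330, 245), (320, 240), gain=4.0))  # (580.0, 980.0)
```

Tuning the gain trades pointing speed against precision, which is exactly the kind of factor the pointing-device evaluation in the study measures.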
Babcock, Hazen P
2018-01-29
This work explores the use of industrial grade CMOS cameras for single molecule localization microscopy (SMLM). We show that industrial grade CMOS cameras approach the performance of scientific grade CMOS cameras at a fraction of the cost. This makes it more economically feasible to construct high-performance imaging systems with multiple cameras that are capable of a diversity of applications. In particular we demonstrate the use of industrial CMOS cameras for biplane, multiplane and spectrally resolved SMLM. We also provide open-source software for simultaneous control of multiple CMOS cameras and for the reduction of the movies that are acquired to super-resolution images.
NASA Astrophysics Data System (ADS)
Muraishi, Hiroshi; Hara, Hidetake; Abe, Shinji; Yokose, Mamoru; Watanabe, Takara; Takeda, Tohoru; Koba, Yusuke; Fukuda, Shigekazu
2016-03-01
We have developed a heavy-ion computed tomography (IonCT) system using a scintillation screen and an electron-multiplying charge-coupled device (EMCCD) camera that can measure a large object such as a human head. In this study, the objective was to investigate whether the system could be applied to heavy-ion treatment planning, from the point of view of the spatial resolution of the reconstructed image. Experiments were carried out on a rotation phantom using 12C beams accelerated up to 430 MeV/u by the Heavy-Ion Medical Accelerator in Chiba (HIMAC) at the National Institute of Radiological Sciences (NIRS). We demonstrated that the reconstructed image of an object with a water equivalent thickness (WET) of approximately 18 cm was successfully achieved with a spatial resolution of 1 mm, which would make this IonCT system worth applying to heavy-ion treatment planning for head and neck cancers.
NASA Astrophysics Data System (ADS)
Brauchle, Joerg; Berger, Ralf; Hein, Daniel; Bucher, Tilman
2017-04-01
The DLR Institute of Optical Sensor Systems has developed the MACS-Himalaya, a custom-built Modular Aerial Camera System specifically designed for the extreme geometric (steep slopes) and radiometric (high contrast) conditions of high mountain areas. It has an overall field of view of 116° across-track, consisting of a nadir and two oblique-looking RGB camera heads and a fourth nadir-looking near-infrared camera. This design provides the capability to fly along narrow valleys and simultaneously cover ground and steep valley flank topography with similar ground resolution. To compensate for extreme contrasts between fresh snow and dark shadows at high altitudes, a High Dynamic Range (HDR) mode was implemented, which typically takes a sequence of 3 images with graded integration times, each covering 12-bit radiometric depth, resulting in a total dynamic range of 15-16 bits. This enables dense image matching and interpretation for sunlit snow and glaciers as well as for dark shaded rock faces in the same scene. Small, lightweight industrial-grade camera heads are used and operated at a rate of 3.3 frames per second with 3-step HDR, which is sufficient to achieve a longitudinal overlap of approximately 90% per exposure time at 1,000 m above ground at a velocity of 180 km/h. Direct georeferencing and multitemporal monitoring without the need for ground control points is possible due to the use of a high-end GPS/INS system, a stable, calibrated inner geometry of the camera heads and a fully photogrammetric workflow at DLR. In 2014 a survey was performed on the Nepalese side of the Himalayas. The remote sensing system was carried in a wingpod by a Stemme S10 motor glider. Amongst other targets, the Seti Valley, Kali-Gandaki Valley and the Mt. Everest/Khumbu Region were imaged at altitudes up to 9,200 m.
Products such as dense point clouds, DSMs and true orthomosaics with a ground pixel resolution of up to 15 cm were produced in regions and outcrops normally inaccessible to aerial imagery. These data are used in the fields of natural hazards, geomorphology and glaciology (see Thompson et al., CR4.3). In the presentation the camera system is introduced and examples and applications from the Nepal campaign are given.
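The HDR scheme described above, graded integration times with saturated pixels discarded, can be sketched as follows. This is an illustration of the general technique, not the MACS-Himalaya implementation; the exposure times and saturation threshold are assumed values.

```python
# Illustrative HDR merge: each exposure is scaled to a common
# radiance-like axis by its integration time, and saturated pixels
# are excluded so that highlight detail comes from the short exposure
# and shadow detail from the long one.
import numpy as np

def merge_hdr(images, exposure_times, saturation=4000):
    """Combine graded 12-bit exposures into one radiance-like image."""
    acc = np.zeros(images[0].shape, dtype=float)
    weight = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        valid = img < saturation          # drop saturated pixels
        acc[valid] += img[valid] / t      # scale by integration time
        weight[valid] += 1.0
    # Average the valid contributions per pixel.
    return acc / np.maximum(weight, 1.0)
```

With three 12-bit exposures spanning a factor of roughly 8-16 in integration time, the merged result covers the 15-16 bit range the abstract describes.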
Positron emission particle tracking using a modular positron camera
NASA Astrophysics Data System (ADS)
Parker, D. J.; Leadbeater, T. W.; Fan, X.; Hausard, M. N.; Ingram, A.; Yang, Z.
2009-06-01
The technique of positron emission particle tracking (PEPT), developed at Birmingham in the early 1990s, enables a radioactively labelled tracer particle to be accurately tracked as it moves between the detectors of a "positron camera". In 1999 the original Birmingham positron camera, which consisted of a pair of MWPCs, was replaced by a system comprising two NaI(Tl) gamma camera heads operating in coincidence. This system has been successfully used for PEPT studies of a wide range of granular and fluid flow processes. More recently a modular positron camera has been developed using a number of the bismuth germanate (BGO) block detectors from standard PET scanners (CTI ECAT 930 and 950 series). This camera has flexible geometry, is transportable, and is capable of delivering high data rates. This paper presents simple models of its performance, and initial experience of its use in a range of geometries and applications.
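The geometric core of PEPT, estimating the tracer position as the point that minimises the summed squared distance to the measured lines of response (LORs), can be sketched as a small least-squares problem. The real Birmingham algorithm also iteratively discards corrupt (scattered) events, which is omitted in this hedged sketch.

```python
# Least-squares intersection of lines: each LOR is given as a point
# on the line and a direction vector. The normal equations use the
# projector perpendicular to each line.
import numpy as np

def locate_tracer(points, directions):
    """Return the 3-D point minimising squared distance to all lines."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        proj = np.eye(3) - np.outer(d, d)   # projector perpendicular to line
        A += proj
        b += proj @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)
```

In practice the fit is repeated: the LORs farthest from the current estimate are rejected as scattered events and the remainder refitted, which is what allows millimetre-scale tracking from noisy coincidence data.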
A&M. Outdoor turntable. Aerial view of trackage as of 1954. ...
A&M. Outdoor turntable. Aerial view of trackage as of 1954. Camera faces northeast along line of track heading for the IET. Upper set of east/west tracks head for the hot shop; the other, for the cold shop. Date: November 24, 1954. INEEL negative no. 13203 - Idaho National Engineering Laboratory, Test Area North, Scoville, Butte County, ID
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, Silvia
1993-01-01
The pilot's ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these types of displays, commonly used in Apache and Cobra helicopter night operations, originates from a relatively narrow field-of-view forward-looking infrared (FLIR) camera, gimbal-mounted at the nose of the aircraft and slaved to the pilot's line-of-sight in order to obtain a wide-angle field-of-regard. Pilots have encountered considerable difficulties in controlling the aircraft with these devices. Experimental simulator results presented here indicate that part of these difficulties can be attributed to head/camera slaving system phase lags and errors. In the presence of voluntary head rotation, these slaving system imperfections are shown to impair the Control-Oriented Visual Field Information vital in vehicular control, such as the perception of the anticipated flight path or the vehicle yaw rate. Since, in the presence of slaving system imperfections, the pilot will tend to minimize head rotation, the full wide-angle field-of-regard of the line-of-sight-slaved Helmet-Mounted Display is not always fully utilized.
Passive perception system for day/night autonomous off-road navigation
NASA Astrophysics Data System (ADS)
Rankin, Arturo L.; Bergh, Charles F.; Goldberg, Steven B.; Bellutta, Paolo; Huertas, Andres; Matthies, Larry H.
2005-05-01
Passive perception of terrain features is a vital requirement for military-related unmanned autonomous vehicle operations, especially under electromagnetic signature management conditions. As a member of Team Raptor, the Jet Propulsion Laboratory developed a self-contained passive perception system under the DARPA-funded PerceptOR program. An environmentally protected forward-looking sensor head was designed and fabricated in-house to straddle an off-the-shelf pan-tilt unit. The sensor head contained three color cameras for multi-baseline daytime stereo ranging, a pair of cooled mid-wave infrared cameras for nighttime stereo ranging, and supporting electronics to synchronize captured imagery. Narrow-baseline stereo provided improved range data density in cluttered terrain, while wide-baseline stereo provided more accurate ranging for operation at higher speeds in relatively open areas. The passive perception system processed stereo images and output terrain maps containing elevation, terrain type, and detected hazards over a local area network. A novel software architecture was designed and implemented to distribute the data processing on a 533 MHz quad 7410 PowerPC single-board computer under the VxWorks real-time operating system. This architecture, which is general enough to operate on N processors, has subsequently been tested on Pentium-based processors under Windows and Linux, and a Sparc-based processor under Unix. The passive perception system was operated during FY04 PerceptOR program evaluations at Fort A. P. Hill, Virginia, and Yuma Proving Ground, Arizona. This paper discusses the Team Raptor passive perception system hardware and software design, implementation, and performance, and describes a road map to faster and improved passive perception.
Very High-Speed Digital Video Capability for In-Flight Use
NASA Technical Reports Server (NTRS)
Corda, Stephen; Tseng, Ting; Reaves, Matthew; Mauldin, Kendall; Whiteman, Donald
2006-01-01
A digital video camera system has been qualified for use in flight on the NASA supersonic F-15B Research Testbed aircraft. This system is capable of very-high-speed color digital imaging at flight speeds up to Mach 2. The components of this system have been ruggedized and shock-mounted in the aircraft to survive the severe pressure, temperature, and vibration of the flight environment. The system includes two synchronized camera subsystems installed in fuselage-mounted camera pods (see Figure 1). Each camera subsystem comprises a camera controller/recorder unit and a camera head. The two camera subsystems are synchronized by use of an M-Hub™ synchronization unit. Each camera subsystem is capable of recording at a rate up to 10,000 pictures per second (pps). A state-of-the-art complementary metal oxide/semiconductor (CMOS) sensor in the camera head has a maximum resolution of 1,280 × 1,024 pixels at 1,000 pps. Exposure times of the electronic shutter of the camera range from 1/200,000 of a second to full open. The recorded images are captured in a dynamic random-access memory (DRAM) and can be downloaded directly to a personal computer or saved on a compact flash memory card. In addition to the high-rate recording of images, the system can display images in real time at 30 pps. Inter-Range Instrumentation Group (IRIG) time code can be inserted into the individual camera controllers or into the M-Hub unit. The video data could also be used to obtain quantitative, three-dimensional trajectory information. The first use of this system was in support of the Space Shuttle Return to Flight effort. Data were needed to help in understanding how thermally insulating foam is shed from a space shuttle external fuel tank during launch. The cameras captured images of simulated external tank debris ejected from a fixture mounted under the centerline of the F-15B aircraft.
Digital video was obtained at subsonic and supersonic flight conditions, including speeds up to Mach 2 and altitudes up to 50,000 ft (15.24 km). The digital video was used to determine the structural survivability of the debris in a real flight environment and quantify the aerodynamic trajectories of the debris.
Stroup-Benham, C A; Treviño, F M; Treviño, D B
1990-01-01
Data from the southwestern United States sample of the Hispanic Health and Nutrition Examination Survey were employed to compare the patterns of alcohol use among Mexican American mothers and children in female-headed households with use patterns among mothers and children in couple-headed households. Single female heads of household drank more alcoholic beverages on more days than females from dual-headed households. As a whole, the children of single heads of household still living at home did not demonstrate significantly different drinking patterns from their dual-headed household counterparts. While male children of single-headed households drank more days and total drinks than their dual-headed household counterparts, female children of dual-headed households drank more days and total drinks than female children from single-headed households. PMID:9187580
The "Body Mass Index" of Flexible Ureteroscopes.
Proietti, Silvia; Somani, Bhaskar; Sofer, Mario; Pietropaolo, Amelia; Rosso, Marco; Saitta, Giuseppe; Gaboardi, Franco; Traxer, Olivier; Giusti, Guido
2017-10-01
To assess the "body mass index" (BMI) (weight and length) of 12 flexible ureteroscopes (digital and fiber optic), along with their light cables and camera heads, to make the best use of our instruments. Twelve different brand-new flexible ureteroscopes from four different manufacturers, along with eight camera heads and three light cables, were evaluated. Each ureteroscope, camera head, and light cable was weighed; the total length of each ureteroscope and of its shaft, handle, flexible end-tip, and cable was measured. According to our measurements (in grams [g]), the lightest ureteroscope was the LithoVue (277.5 g), while the heaviest was the URF-V2 (942.5 g). The lightest fiber optic endoscope was the Viper (309 g), while the heaviest was the Cobra (351.5 g). Taking into account the entirety of the endoscopes, the lightest ureteroscope was the LithoVue and the heaviest was the Wolf Cobra with the Wolf camera "3 CHIP HD KAMERA KOPF ENDOCAM LOGIC HD" (1474 g). The longest ureteroscope was the URF-P6 (101.6 cm) and the shortest was the LithoVue (95.5 cm), whereas the Viper and Cobra had the longest shaft (69 cm) and the URF-V had the shortest shaft (67.2 cm). The URF-V2 had the longest flexible end-tip (7.6 cm), while the LithoVue had the shortest end-tip (5.7 cm) in both directions (up/down), and the URF-V had the shortest upward deflection (3.7 cm). Newer, more versatile digital endoscopes were lighter than their traditional fiber optic counterparts in their entirety, with the disposable endoscope having a clear advantage over other reusable ureteroscopes. Knowing the "BMI" of our flexible ureteroscopes is important information that every endourologist should take into consideration.
Archer, Hilary A; Smailagic, Nadja; John, Christeena; Holmes, Robin B; Takwoingi, Yemisi; Coulthard, Elizabeth J; Cullum, Sarah
2015-06-23
In the UK, dementia affects 5% of the population aged over 65 years and 25% of those over 85 years. Frontotemporal dementia (FTD) represents one subtype and is thought to account for up to 16% of all degenerative dementias. Although the core of the diagnostic process in dementia rests firmly on clinical and cognitive assessments, a wide range of investigations are available to aid diagnosis. Regional cerebral blood flow (rCBF) single-photon emission computed tomography (SPECT) is an established clinical tool that uses an intravenously injected radiolabelled tracer to map blood flow in the brain. In FTD the characteristic pattern seen is hypoperfusion of the frontal and anterior temporal lobes. This pattern of blood flow is different to patterns seen in other subtypes of dementia and so can be used to differentiate FTD. It has been proposed that a diagnosis of FTD (particularly early stage) should be made not only on the basis of clinical criteria but using a combination of other diagnostic findings, including rCBF SPECT. However, more extensive testing comes at a financial cost, and with a potential risk to patient safety and comfort. To determine the diagnostic accuracy of rCBF SPECT for diagnosing FTD in populations with suspected dementia in secondary/tertiary healthcare settings and in the differential diagnosis of FTD from other dementia subtypes. Our search strategy used two concepts: (a) the index test and (b) the condition of interest. We searched citation databases, including MEDLINE (Ovid SP), EMBASE (Ovid SP), BIOSIS (Ovid SP), Web of Science Core Collection (ISI Web of Science), PsycINFO (Ovid SP), CINAHL (EBSCOhost) and LILACS (Bireme), using structured search strategies appropriate for each database.
In addition, we searched specialised sources of diagnostic test accuracy studies and reviews including: MEDION (Universities of Maastricht and Leuven), DARE (Database of Abstracts of Reviews of Effects) and HTA (Health Technology Assessment) database. We requested a search of the Cochrane Register of Diagnostic Test Accuracy Studies and used the related articles feature in PubMed to search for additional studies. We tracked key studies in citation databases such as Science Citation Index and Scopus to ascertain any further relevant studies. We identified 'grey' literature, mainly in the form of conference abstracts, through the Web of Science Core Collection, including Conference Proceedings Citation Index and Embase. The most recent search for this review was run on 1 June 2013. Following title and abstract screening of the search results, full-text papers were obtained for each potentially eligible study. These papers were then independently evaluated for inclusion or exclusion. We included both case-control and cohort (delayed verification of diagnosis) studies. Where studies used a case-control design we included all participants who had a clinical diagnosis of FTD or other dementia subtype using standard clinical diagnostic criteria. For cohort studies, we included studies where all participants with suspected dementia were administered rCBF SPECT at baseline. We excluded studies of participants from selected populations (e.g. post-stroke) and studies of participants with a secondary cause of cognitive impairment. Two review authors extracted information on study characteristics and data for the assessment of methodological quality and the investigation of heterogeneity. We assessed the methodological quality of each study using the QUADAS-2 (Quality Assessment of Diagnostic Accuracy Studies) tool. We produced a narrative summary describing numbers of studies that were found to have high/low/unclear risk of bias as well as concerns regarding applicability.
To produce 2 x 2 tables, we dichotomised the rCBF SPECT results (scan positive or negative for FTD) and cross-tabulated them against the results for the reference standard. These tables were then used to calculate the sensitivity and specificity of the index test. Meta-analysis was not performed due to the considerable between-study variation in clinical and methodological characteristics. Eleven studies (1117 participants) met our inclusion criteria. These consisted of six case-control studies, two retrospective cohort studies and three prospective cohort studies. Three studies used single-headed camera SPECT, while the remaining eight used multiple-headed camera SPECT. Study design and methods varied widely. Overall, participant selection was not well described and the studies were judged as having either high or unclear risk of bias. Often the threshold used to define a positive SPECT result was not predefined and the results were reported with knowledge of the reference standard. Concerns regarding applicability of the studies to the review question were generally low across all three domains (participant selection, index test and reference standard). Sensitivities and specificities for differentiating FTD from non-FTD ranged from 0.73 to 1.00 and from 0.80 to 1.00, respectively, for the three multiple-headed camera studies. Sensitivities were lower for the two single-headed camera studies; one reported a sensitivity and specificity of 0.40 (95% confidence interval (CI) 0.05 to 0.85) and 0.95 (95% CI 0.90 to 0.98), respectively, and the other a sensitivity and specificity of 0.36 (95% CI 0.24 to 0.50) and 0.92 (95% CI 0.88 to 0.95), respectively. Eight of the 11 studies which used SPECT to differentiate FTD from Alzheimer's disease used multiple-headed camera SPECT. Of these studies, five used a case-control design and reported sensitivities of between 0.52 and 1.00, and specificities of between 0.41 and 0.86.
The remaining three studies used a cohort design and reported sensitivities of between 0.73 and 1.00, and specificities of between 0.94 and 1.00. The three studies that used single-headed camera SPECT reported sensitivities of between 0.40 and 0.80, and specificities of between 0.61 and 0.97. At present, we would not recommend the routine use of rCBF SPECT in clinical practice because there is insufficient evidence from the available literature to support this. Further research into the use of rCBF SPECT for differentiating FTD from other dementias is required. In particular, protocols should be standardised, study populations should be well described, the threshold for 'abnormal' scans predefined and clear details given on how scans are analysed. More prospective cohort studies that verify the presence or absence of FTD during a period of follow-up should be undertaken.
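The 2 x 2 computation described in the review's methods can be illustrated with a short sketch: scan results are cross-tabulated against the reference standard, and sensitivity and specificity follow directly. The counts below are made up for demonstration, not data from the included studies.

```python
# Sensitivity and specificity from a 2 x 2 table:
#   tp = index test positive, reference positive (true positives)
#   fp = index test positive, reference negative (false positives)
#   fn = index test negative, reference positive (false negatives)
#   tn = index test negative, reference negative (true negatives)

def sens_spec(tp, fp, fn, tn):
    """Return (sensitivity, specificity) from 2 x 2 table counts."""
    sensitivity = tp / (tp + fn)   # proportion of cases detected
    specificity = tn / (tn + fp)   # proportion of non-cases excluded
    return sensitivity, specificity
```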
Target Acquisition for Projectile Vision-Based Navigation
2014-03-01
[Front-matter residue: table-of-contents and list-of-figures fragments, including Appendix B, "Derivation of Ground Resolution for a Diffraction-Limited Pinhole Camera," simulation results for visual acquisition and target recognition, and a figure of differential object and image areas for a pinhole camera; a body fragment notes that the geometry between projectile and target depends on target heading.]
Optical measurement of sound using time-varying laser speckle patterns
NASA Astrophysics Data System (ADS)
Leung, Terence S.; Jiang, Shihong; Hebden, Jeremy
2011-02-01
In this work, we introduce an optical technique to measure sound. The technique involves pointing a coherent pulsed laser beam at the surface of the measurement site and capturing the time-varying speckle patterns using a CCD camera. Sound manifests itself as vibrations on the surface, which induce a periodic translation of the speckle pattern over time. Using a parallel speckle detection scheme, the dynamics of the time-varying speckle patterns can be captured and processed to produce spectral information about the sound. One potential clinical application is to measure pathological sounds from the brain as a screening test. We performed experiments to demonstrate the principle of the detection scheme using head phantoms. The results show that the detection scheme can measure the spectra of single-frequency sounds between 100 and 2000 Hz. The detection scheme worked equally well in both a flat geometry and an anatomical head geometry. However, the current detection scheme is too slow for use in living biological tissue, which has a decorrelation time of a few milliseconds. Further improvements have been suggested.
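The spectral-processing idea, recovering the sound frequency from the frame-to-frame translation of the speckle pattern, can be illustrated as follows. A synthetic displacement signal stands in for the tracked speckle motion, and the sampling rate is an assumed value, not the camera rate used in the study.

```python
# Sketch: the speckle-translation time series is mean-subtracted and
# Fourier-transformed; the strongest non-DC bin gives the dominant
# sound frequency.
import numpy as np

def dominant_frequency(displacement, fs):
    """Return the strongest non-DC frequency (Hz) in a displacement trace."""
    centred = displacement - displacement.mean()      # remove DC offset
    spectrum = np.abs(np.fft.rfft(centred))           # one-sided spectrum
    freqs = np.fft.rfftfreq(len(displacement), d=1.0 / fs)
    return freqs[np.argmax(spectrum)]
```

With a frame rate well above twice the acoustic frequency of interest this recovers tones in the 100-2000 Hz band the abstract reports; the limitation noted in the abstract is that tissue decorrelation destroys the speckle pattern faster than the camera can sample it.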
A restraint-free small animal SPECT imaging system with motion tracking
DOE Office of Scientific and Technical Information (OSTI.GOV)
Weisenberger, A.G.; Gleason, S.S.; Goddard, J.
2005-06-01
We report on an approach toward the development of a high-resolution single photon emission computed tomography (SPECT) system to image the biodistribution of radiolabeled tracers such as Tc-99m and I-125 in unrestrained/unanesthetized mice. An infrared (IR)-based position tracking apparatus has been developed and integrated into a SPECT gantry. The tracking system is designed to measure the spatial position of a mouse's head at a rate of 10-15 frames per second with submillimeter accuracy. The high-resolution gamma imaging detectors are based on pixellated NaI(Tl) crystal scintillator arrays, position-sensitive photomultiplier tubes, and novel readout circuitry requiring fewer analog-digital converter (ADC) channels while retaining high spatial resolution. Two SPECT gamma camera detector heads based upon position-sensitive photomultiplier tubes have been built and installed onto the gantry. The IR landmark-based pose measurement and tracking system is under development to provide animal position data during a SPECT scan. The animal position and orientation data acquired by the tracking system will be used for motion correction during the tomographic image reconstruction.
Advanced illumination control algorithm for medical endoscopy applications
NASA Astrophysics Data System (ADS)
Sousa, Ricardo M.; Wäny, Martin; Santos, Pedro; Morgado-Dias, F.
2015-05-01
The CMOS image sensor manufacturer AWAIBA provides the world's smallest digital camera modules for minimally invasive surgery and single-use endoscopic equipment. Based on the world's smallest digital camera head and its evaluation board, the aim of this paper is to demonstrate an advanced, fast-response dynamic control algorithm for the illumination LED source coupled to the camera head, acting through the LED drivers embedded on the evaluation board. Cost-efficient, small-size endoscopic camera modules nowadays embed minimal-size image sensors capable of adjusting not only gain and exposure time but also LED illumination with adjustable illumination power. The LED illumination power has to be adjusted dynamically while navigating the endoscope over illumination conditions that change by several orders of magnitude within fractions of a second, to guarantee a smooth viewing experience. The algorithm is centered on pixel analysis of selected ROIs, enabling it to dynamically adjust the illumination intensity based on the measured pixel saturation level. The control core was developed in VHDL and tested in a laboratory environment over changing light conditions. The obtained results show that it is capable of achieving correction speeds under 1 s while maintaining a static error below 3% relative to the total number of pixels in the image. The result of this work will allow the integration of millimeter-sized high-brightness LED sources on minimal-form-factor cameras, enabling their use in endoscopic surgical robotics or micro-invasive surgery.
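A minimal software sketch of the control idea follows (this is not AWAIBA's VHDL core): the fraction of saturated pixels in a region of interest drives the LED power down when the scene is too bright and up when there is headroom. The saturation threshold, target level and step size are assumed values.

```python
# Saturation-driven illumination control for an 8-bit ROI: dim the LED
# when too many pixels saturate, brighten it when almost none do.
import numpy as np

def adjust_led_power(roi, power, target=0.02, threshold=250, step=0.1):
    """Return an updated LED power in [0, 1] given an 8-bit ROI."""
    saturated = float(np.mean(roi >= threshold))  # saturated fraction
    if saturated > target:
        power *= (1.0 - step)   # scene too bright: dim the LED
    elif saturated < target / 2:
        power *= (1.0 + step)   # headroom available: brighten
    return float(min(max(power, 0.0), 1.0))
```

Called once per frame, a multiplicative step like this converges over a handful of frames even when the illumination conditions change by orders of magnitude, which is the behaviour the sub-second correction figure describes.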
Single exposure three-dimensional imaging of dusty plasma clusters.
Hartmann, Peter; Donkó, István; Donkó, Zoltán
2013-02-01
We have worked out the details of a single-camera, single-exposure method to perform three-dimensional imaging of a finite particle cluster. The procedure is based on the plenoptic imaging principle and utilizes a commercial Lytro light field still camera. We demonstrate the capabilities of our technique on a single-layer particle cluster in a dusty plasma, where the camera is aligned and inclined at a small angle to the particle layer. The reconstruction of the third coordinate (depth) is found to be accurate, and even shadowing particles can be identified.
Single lens 3D-camera with extended depth-of-field
NASA Astrophysics Data System (ADS)
Perwaß, Christian; Wietzke, Lennart
2012-03-01
Placing a micro lens array in front of an image sensor transforms a normal camera into a single lens 3D camera, which also allows the user to change the focus and the point of view after a picture has been taken. While the concept of such plenoptic cameras has been known since 1908, only recently have the increased computing power of low-cost hardware and advances in micro lens array production made the application of plenoptic cameras feasible. This text presents a detailed analysis of plenoptic cameras as well as introducing a new type of plenoptic camera with an extended depth of field and a maximal effective resolution of up to a quarter of the sensor resolution.
Application of single-image camera calibration for ultrasound augmented laparoscopic visualization
NASA Astrophysics Data System (ADS)
Liu, Xinyang; Su, He; Kang, Sukryool; Kane, Timothy D.; Shekhar, Raj
2015-03-01
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious. They require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE) as a measure of camera calibration accuracy for our optical tracking-based AR system was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is promising for application to our augmented reality visualization system for laparoscopic surgery.
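The TRE metric reported above is the distance between corresponding points after the calibrated transform has been applied; the abstract summarises it as mean ± standard deviation. A small sketch with illustrative point sets:

```python
# Target registration error (TRE) statistics: per-point Euclidean
# distances between mapped and reference positions, summarised as
# mean and standard deviation.
import numpy as np

def tre_stats(mapped_points, reference_points):
    """Return (mean, std) of point-pair distances in the input units."""
    diffs = np.asarray(mapped_points, float) - np.asarray(reference_points, float)
    errors = np.linalg.norm(diffs, axis=1)   # one distance per point pair
    return errors.mean(), errors.std()
```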
Improving land vehicle situational awareness using a distributed aperture system
NASA Astrophysics Data System (ADS)
Fortin, Jean; Bias, Jason; Wells, Ashley; Riddle, Larry; van der Wal, Gooitzen; Piacentino, Mike; Mandelbaum, Robert
2005-05-01
U.S. Army Research, Development, and Engineering Command (RDECOM) Communications Electronics Research, Development and Engineering Center (CERDEC) Night Vision and Electronic Sensors Directorate (NVESD) has performed early work to develop a Distributed Aperture System (DAS). The DAS aims at improving the situational awareness of armored fighting vehicle crews under closed-hatch conditions. The concept is based on a plurality of sensors configured to create a day-and-night dome of surveillance, coupled with head-up displays slaved to the operator's head to give a "glass turret" feel. State-of-the-art image processing is used to produce multiple seamless hemispherical views simultaneously available to the vehicle commander, crew members and dismounting infantry. On-the-move automatic cueing of multiple moving/pop-up low-silhouette threats is also performed, with the possibility to save, revisit and share past events. As a first step in this development program, a contract was awarded to United Defense to further develop the Eagle Vision™ system. The second-generation prototype features two camera heads, each comprising four high-resolution (2048 × 1536) color sensors and each covering a field of view of 270° H × 150° V. High-bandwidth digital links interface the camera heads with a field-programmable gate array (FPGA) based custom processor developed by Sarnoff Corporation. The processor computes the hemispherical stitch and warp functions required for real-time, low-latency, immersive viewing (360° H × 120° V, 30° down) and generates up to six simultaneous extended graphics array (XGA) video outputs for independent display either on a helmet-mounted display (with associated head-tracking device) or a flat panel display (and joystick). The prototype is currently in its last stage of development and will be integrated on a vehicle for user evaluation and testing.
Near-term improvements include the replacement of the color camera heads with a pixel-level fused combination of uncooled long wave infrared (LWIR) and low light level intensified imagery. It is believed that the DAS will significantly increase situational awareness by providing the users with a day and night, wide area coverage, immersive visualization capability.
The Trans-Visible Navigator: A See-Through Neuronavigation System Using Augmented Reality.
Watanabe, Eiju; Satoh, Makoto; Konno, Takehiko; Hirai, Masahiro; Yamaguchi, Takashi
2016-03-01
The neuronavigator has become indispensable for brain surgery and works in the manner of point-to-point navigation. Because the positional information is indicated on a personal computer (PC) monitor, surgeons are required to rotate the orientation of the magnetic resonance imaging/computed tomography scans to match the surgical field. In addition, they must frequently alternate their gaze between the surgical field and the PC monitor. To overcome these difficulties, we developed an augmented reality-based navigation system with whole-operation-room tracking. A tablet PC is used for visualization. The patient's head is captured by the back-face camera of the tablet. Three-dimensional images of intracranial structures are extracted from magnetic resonance imaging/computed tomography and are superimposed on the video image of the head. When viewed from various directions around the head, intracranial structures are displayed at corresponding angles as seen from the camera direction, thus giving the surgeon the sensation of seeing through the head. Whole-operation-room tracking is realized using a VICON tracking system with 6 cameras. A phantom study showed a spatial resolution of about 1 mm. The present system was evaluated in 6 patients who underwent tumor resection surgery, and we showed that the system is useful for planning skin incisions as well as craniotomy and the localization of superficial tumors. The main advantage of the present system is that it achieves volumetric navigation in contrast to conventional point-to-point navigation. It extends augmented reality images directly onto real surgical images, thus helping the surgeon to integrate these 2 dimensions intuitively. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
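The abstract does not give implementation details, but the core overlay step of such a system, projecting the extracted 3D structures into the tablet camera's image given the tracked pose, follows the standard pinhole camera model. A minimal sketch; the intrinsics K and pose (R, t) are assumed inputs from calibration and the tracking system, and the values below are illustrative:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project 3D points (N,3) into pixel coordinates (N,2)
    using a pinhole camera model: x ~ K [R|t] X."""
    X_cam = points_3d @ R.T + t          # world -> camera frame
    x = X_cam @ K.T                      # apply intrinsics
    return x[:, :2] / x[:, 2:3]          # perspective divide

# Identity pose and simple intrinsics: a point on the optical axis
# at depth 2 should land at the principal point (320, 240).
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0]])
print(project_points(pts, K, R, t))  # [[320. 240.]]
```

Rendering the projected structure over the live camera frame then yields the see-through effect described above.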
Cheng, Yufeng; Jin, Shuying; Wang, Mi; Zhu, Ying; Dong, Zhipeng
2017-06-20
The linear array push broom imaging mode is widely used for high resolution optical satellites (HROS). Using double cameras attached by a high-rigidity support along with push broom imaging is one method to enlarge the field of view while ensuring high resolution. High accuracy image mosaicking is the key factor in the geometrical quality of complete stitched satellite imagery. This paper proposes a high accuracy image mosaicking approach based on the big virtual camera (BVC) in the double-camera system on the GaoFen2 optical remote sensing satellite (GF2). A big virtual camera can be built according to the rigorous imaging model of a single camera; then, each single image strip obtained by each TDI-CCD detector can be re-projected to the virtual detector of the big virtual camera coordinate system using forward-projection and backward-projection to obtain the corresponding single virtual image. After on-orbit calibration and relative orientation, the complete final virtual image can be obtained by stitching the single virtual images together based on their coordinate information on the big virtual detector image plane. The paper uses the concept of the big virtual camera to obtain a stitched image and the corresponding high accuracy rational function model (RFM) for concurrent post-processing. Experiments verified that the proposed method can achieve seamless mosaicking while maintaining geometric accuracy.
Mitsubishi thermal imager using the 512 x 512 PtSi focal plane arrays
NASA Astrophysics Data System (ADS)
Fujino, Shotaro; Miyoshi, Tetsuo; Yokoh, Masataka; Kitahara, Teruyoshi
1990-01-01
The MITSUBISHI THERMAL IMAGER model IR-5120A is a high-resolution, high-sensitivity infrared television imaging system. It was exhibited at SPIE's 1988 Technical Symposium on Optics, Electro-Optics, and Sensors, held in April 1988 in Orlando, and attracted the interest of many symposium attendees for its high performance. The detector is a platinum silicide Charge Sweep Device (CSD) array containing more than 260,000 individual pixels, manufactured by Mitsubishi Electric Co. The IR-5120A consists of a Camera Head, containing the CSD, a Stirling cycle cooler and support electronics, and a Camera Control Unit containing the pixel fixed-pattern noise corrector, video controller, cooler driver and support power supplies. The Stirling cycle cooler built into the Camera Head keeps the CSD at a temperature of approximately 80 K, and offers light weight, a long life of more than 2000 hours and low acoustical noise. This paper describes an improved Thermal Imager, with lighter weight, more compact size and higher performance, along with its design philosophy, characteristics and field images.
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
American Carrier Air Power at the Dawn of a New Century
2005-01-01
Systems, Office of the Secretary of Defense (Operational Test and Evaluation); then–Commander Calvin Craig, OPNAV N81; Captain Kenneth Neubauer and...TACP Tactical Air Control Party TARPS Tactical Air Reconnaissance Pod System TCS Television Camera System TLAM Tomahawk Land-Attack Missile TST Time...store any video imagery acquired by the aircraft’s systems, including the TARPS pod, the pilot’s head-up display (HUD), the Television Camera System (TCS
NASA Astrophysics Data System (ADS)
Ji, Peng; Song, Aiguo; Song, Zimo; Liu, Yuqing; Jiang, Guohua; Zhao, Guopu
2017-02-01
In this paper, we describe a heading direction correction algorithm for a tracked mobile robot. To conserve hardware resources, the mobile robot's wrist camera, rotated to face the stairs, is used as the only sensor. An ensemble heading deviation detector is proposed to help the mobile robot correct its heading direction. To improve generalization ability, a multi-scale Gabor filter is used to preprocess the input image. The final deviation estimate is obtained by applying a majority-vote strategy to all the classifiers' results. The experimental results show that our detector enables the mobile robot to correct its heading direction adaptively while climbing stairs.
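The majority-vote combination step named in the abstract can be sketched as follows; the label set is illustrative, since the abstract does not list the classifiers' output categories:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier heading-deviation labels
    (e.g. 'left', 'straight', 'right') by majority vote."""
    counts = Counter(predictions)
    label, _ = counts.most_common(1)[0]
    return label

# Five ensemble members vote on the heading deviation:
votes = ['left', 'straight', 'left', 'left', 'right']
print(majority_vote(votes))  # left
```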
ERIC Educational Resources Information Center
Prandota, Joseph
2010-01-01
Anatomic, histopathologic, and MRI/SPET studies of autistic spectrum disorders (ASD) patients' brains confirm existence of very early developmental deficits. In congenital and chronic murine toxoplasmosis several cerebral anomalies also have been reported, and worldwide, approximately two billion people are chronically infected with T. "gondii"…
NASA Astrophysics Data System (ADS)
Yu, Liping; Pan, Bing
2017-08-01
Full-frame, high-speed 3D shape and deformation measurement using the stereo-digital image correlation (stereo-DIC) technique and a single high-speed color camera is proposed. With the aid of a skillfully designed pseudo-stereo-imaging apparatus, color images of a test object surface, composed of blue and red channel images from two different optical paths, are recorded by a high-speed color CMOS camera. The recorded color images can be separated into red and blue channel sub-images using a simple but effective color crosstalk correction method. These separated blue and red channel sub-images are processed by the regular stereo-DIC method to retrieve the full-field 3D shape and deformation of the test object surface. Compared with existing two-camera high-speed stereo-DIC or four-mirror-adapter-assisted single-camera high-speed stereo-DIC, the proposed single-camera high-speed stereo-DIC technique offers the prominent advantage of full-frame measurements using a single high-speed camera without sacrificing spatial resolution. Two real experiments, including shape measurement of a curved surface and vibration measurement of a Chinese double-sided drum, demonstrated the effectiveness and accuracy of the proposed technique.
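The crosstalk correction itself is not detailed in the abstract. A common approach, assumed here for illustration, models the channel mixing as linear and inverts the mixing matrix; the coefficients below are hypothetical and would in practice be estimated from a calibration target:

```python
import numpy as np

# Assumed linear crosstalk model: [measured] = M @ [true].
# The mixing coefficients are illustrative, not from the paper.
M = np.array([[0.95, 0.08],   # blue channel picks up some red light
              [0.06, 0.93]])  # red channel picks up some blue light
M_inv = np.linalg.inv(M)

def correct_crosstalk(blue, red):
    """Recover the two optical-path images from the measured blue
    and red sensor channels (arrays of equal shape)."""
    mixed = np.stack([blue.ravel(), red.ravel()])
    true = M_inv @ mixed
    return true[0].reshape(blue.shape), true[1].reshape(red.shape)

# Mix two known 'true' images, then check they are recovered.
blue_true = np.array([[1.0, 0.2]])
red_true = np.array([[0.0, 0.7]])
blue_meas = 0.95 * blue_true + 0.08 * red_true
red_meas = 0.06 * blue_true + 0.93 * red_true
b, r = correct_crosstalk(blue_meas, red_meas)
print(np.allclose(b, blue_true), np.allclose(r, red_true))  # True True
```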
3D surface pressure measurement with single light-field camera and pressure-sensitive paint
NASA Astrophysics Data System (ADS)
Shi, Shengxian; Xu, Shengming; Zhao, Zhou; Niu, Xiaofu; Quinn, Mark Kenneth
2018-05-01
A novel technique that simultaneously measures three-dimensional model geometry and surface pressure distribution with a single camera is demonstrated in this study. The technique takes advantage of light-field photography, which can capture three-dimensional information with a single light-field camera, and combines it with the intensity-based pressure-sensitive paint method. The proposed single-camera light-field three-dimensional pressure measurement technique (LF-3DPSP) utilises a hardware setup similar to the traditional two-dimensional pressure measurement technique, with the exception that the wind-on, wind-off and model geometry images are captured via an in-house-constructed light-field camera. The proposed LF-3DPSP technique was validated with a Mach 5 flared-cone model test. Results show that the technique is capable of measuring three-dimensional geometry with high accuracy for models of relatively large curvature, and the pressure results compare well with Schlieren tests, analytical calculations, and numerical simulations.
Single-snapshot 2D color measurement by plenoptic imaging system
NASA Astrophysics Data System (ADS)
Masuda, Kensuke; Yamanaka, Yuji; Maruyama, Go; Nagai, Sho; Hirai, Hideaki; Meng, Lingfei; Tosic, Ivana
2014-03-01
Plenoptic cameras enable capture of directional light ray information, thus allowing applications such as digital refocusing, depth estimation, or multiband imaging. One of the most common plenoptic camera architectures contains a microlens array at the conventional image plane and a sensor at the back focal plane of the microlens array. We leverage the multiband imaging (MBI) function of this camera and develop a single-snapshot, single-sensor high color fidelity camera. Our camera is based on a plenoptic system with XYZ filters inserted in the pupil plane of the main lens. To achieve high color measurement precision with this system, we perform an end-to-end optimization of the system model that includes light source information, object information, optical system information, plenoptic image processing and color estimation processing. The optimized system characteristics are exploited to build an XYZ plenoptic colorimetric camera prototype that achieves high color measurement precision. We describe an application of our colorimetric camera to color shading evaluation of a display and show that it achieves color accuracy of ΔE<0.01.
Ranging Apparatus and Method Implementing Stereo Vision System
NASA Technical Reports Server (NTRS)
Li, Larry C. (Inventor); Cox, Brian J. (Inventor)
1997-01-01
A laser-directed ranging system for use in telerobotics applications and other applications involving physically handicapped individuals. The ranging system includes a left and right video camera mounted on a camera platform, and a remotely positioned operator. The position of the camera platform is controlled by three servo motors to orient the roll axis, pitch axis and yaw axis of the video cameras, based upon an operator input such as head motion. A laser is provided between the left and right video camera and is directed by the user to point to a target device. The images produced by the left and right video cameras are processed to eliminate all background images except for the spot created by the laser. This processing is performed by creating a digital image of the target prior to illumination by the laser, and then eliminating common pixels from the subsequent digital image which includes the laser spot. The horizontal disparity between the two processed images is calculated for use in a stereometric ranging analysis from which range is determined.
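For a rectified camera pair, the stereometric ranging analysis in the final sentence reduces to the standard disparity-to-depth relation Z = fB/d. A minimal sketch; the focal length, baseline and disparity values are illustrative, not taken from the patent:

```python
def stereo_range(disparity_px, focal_px, baseline_m):
    """Range to the laser spot from the horizontal disparity between
    the two processed images: Z = f * B / d (rectified pair)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 700-pixel focal length, 12 cm camera baseline, 28 px disparity:
print(stereo_range(28.0, 700.0, 0.12))  # 3.0 (metres)
```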
Complete supine percutaneous nephrolithotomy with GoPro®. Ten steps for success.
Vicentini, Fabio Carvalho; Dos Santos, Hugo Daniel Barone; Batagello, Carlos Alfredo; Amundson, Julia Rothe; Oliveira, Evaristo Peixoto; Marchini, Giovanni Scala; Srougi, Miguel; Nahas, Willian Carlos; Mazzucchi, Eduardo
2018-03-15
To show a video of a complete supine percutaneous nephrolithotomy (csPCNL) performed for the treatment of a staghorn calculus, from the surgeon's point of view. The procedure was recorded with a GoPro® camera, demonstrating the ten essential steps for a successful procedure. The patient was a 38-year-old woman with a 2.4 cm stone burden in the lower pole of the left kidney, who presented with 3 months of lumbar pain and recurrent urinary tract infections. She had a previous diagnosis of polycystic kidney disease and stage 2 chronic renal failure. CT scan showed two 1.2 cm stones in the lower pole (Guy's Stone Score 2). She had a previous ipsilateral double-J insertion due to an obstructive pyelonephritis. The csPCNL was uneventful, with a single access in the lower pole. The surgeon had a Full HD GoPro Hero 4 Session® camera mounted on his head, controlled by the surgical team with a remote control. All of the main steps were recorded. Informed consent was obtained prior to the procedure. The surgical time was 90 minutes. Hemoglobin drop was 0.5 g/dL. A post-operative CT scan showed the patient to be stone-free. The patient was discharged 36 hours after surgery. The camera worked properly and did not cause pain or muscle discomfort to the surgeon. The quality of the recorded movie was excellent. The GoPro® camera proved to be a very interesting tool for documenting surgeries without interfering with the procedure, with great educational potential. More studies should be conducted to evaluate the role of this equipment. Copyright® by the International Brazilian Journal of Urology.
Driver head pose tracking with thermal camera
NASA Astrophysics Data System (ADS)
Bole, S.; Fournier, C.; Lavergne, C.; Druart, G.; Lépine, T.
2016-09-01
Head pose can be seen as a coarse estimate of gaze direction. In the automotive industry, knowledge of gaze direction could optimize Human-Machine Interfaces (HMI) and Advanced Driver Assistance Systems (ADAS). Pose estimation systems are often camera-based when applications have to be contactless. In this paper, we explore uncooled thermal imagery (8-14 μm) for its intrinsic night vision capabilities and for its invariance to lighting variations. Two methods are implemented and compared, both aided by a 3D model of the head. The 3D model, mapped with thermal texture, allows a base of 2D projected models to be synthesized, differently oriented and labeled in yaw and pitch. The first method is based on keypoints: keypoints of the models are matched with those of the query image, and these sets of matchings, aided by the 3D shape of the model, allow the 3D pose to be estimated. The second method is a global appearance approach: among all 2D models of the base, the algorithm searches for the one closest to the query image using a weighted least-squares difference.
1991-04-03
The USML-1 Glovebox (GBX) is a multi-user facility supporting 16 experiments in fluid dynamics, combustion sciences, crystal growth, and technology demonstration. The GBX has an enclosed working space which minimizes the contamination risks to both Spacelab and experiment samples. The GBX supports four charge-coupled device (CCD) cameras (two of which may be operated simultaneously) with three black-and-white and three color camera CCD heads available. The GBX also has a backlight panel, a 35 mm camera, and a stereomicroscope that offers high-magnification viewing of experiment samples. Video data can also be downlinked in real-time. The GBX also provides electrical power for experiment hardware, a time-temperature display, and cleaning supplies.
Skeletal Scintigraphy (Bone Scan)
... The special camera and imaging techniques used in nuclear medicine include the gamma camera and single-photon emission-computed tomography (SPECT). The gamma camera, also called a scintillation camera, detects radioactive energy that is emitted from the patient's body and ...
Optical observations of the AMPTE artificial comet and magnetotail barium releases
NASA Technical Reports Server (NTRS)
Hallinan, T. J.; Stenbaek-Nielsen, H.; Brown, N.
1985-01-01
The first AMPTE artificial comet was observed with a low-light-level television camera operated aboard the NASA CV990 flying out of Moffett Field, California. The comet head, neutral cloud, and comet tail were all observed for four minutes with an unfiltered camera. Brief observations at T + 4 minutes through a 4554 Å Ba(+) filter confirmed the identification of the structures. The ion cloud expanded along with the neutral cloud at a rate of 2.3 km/sec (diameter) until it reached a final diameter of approx. 170 km at approx. T + 90 s. It also drifted with the neutral cloud until approx. 165 s. By T + 190 s it had reached a steady-state velocity of 5.4 km/sec southward. A barium release in the magnetotail was observed from the CV990 in California, from Eagle, Alaska, and from Fairbanks, Alaska. Over a twenty-five-minute period, the center of the barium streak drifted southward (approx. 500 m/sec), upward (24 km/sec) and eastward (approx. 1 km/sec) in a nonrotating reference frame. An all-sky TV at Eagle showed a single auroral arc in the far north during this period.
NASA Astrophysics Data System (ADS)
Nazifah, A.; Norhanna, S.; Shah, S. I.; Zakaria, A.
2014-11-01
This study aimed to investigate the effects of a material filter technique on Tc-99m spectra and on the performance parameters of a Philips ADAC Forte dual-head gamma camera. The thickness of the material filter was selected on the basis of the percentage attenuation of various gamma-ray energies by different thicknesses of zinc. A cylindrical source tank of a NEMA single photon emission computed tomography (SPECT) Triple Line Source Phantom, filled with water and injected with Tc-99m radionuclide, was used for spectra, uniformity and sensitivity measurements. A vinyl plastic tube was used as a line source for spatial resolution. Images for uniformity were reconstructed by the filtered back projection method. A Butterworth filter of order 5 and cutoff frequency 0.35 cycles/cm was selected. Chang's attenuation correction method was applied with a linear attenuation coefficient of 0.13/cm. The material filter decreased the count rate in the Compton region of the Tc-99m energy spectrum, and also in the photopeak region. Spatial resolution was improved. However, the uniformity of the tomographic image was equivocal, and system volume sensitivity was reduced by the material filter. The material filter improved the system's spatial resolution; therefore, the technique may be used in phantom studies to improve image quality.
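The Butterworth window named in the abstract (order 5, cutoff 0.35 cycles/cm) can be sketched as follows. One common parameterisation, B(f) = 1/sqrt(1 + (f/f_c)^(2n)), is assumed here, since the abstract does not specify which variant the reconstruction software uses:

```python
import numpy as np

def butterworth(freq, cutoff=0.35, order=5):
    """Butterworth low-pass window applied during filtered back
    projection. Assumed form: B(f) = 1 / sqrt(1 + (f/f_c)^(2n)),
    with f and f_c in cycles/cm."""
    return 1.0 / np.sqrt(1.0 + (freq / cutoff) ** (2 * order))

print(butterworth(0.0))             # 1.0 (DC passes unattenuated)
print(round(butterworth(0.35), 3))  # 0.707 (half power at the cutoff)
print(round(butterworth(0.7), 3))   # 0.031 (high frequencies suppressed)
```

The steep roll-off above the cutoff is what suppresses the high-frequency noise amplified by the ramp filter in filtered back projection.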
Tabletop computed lighting for practical digital photography.
Mohan, Ankit; Bailey, Reynold; Waite, Jonathan; Tumblin, Jack; Grimm, Cindy; Bodenheimer, Bobby
2007-01-01
We apply simplified image-based lighting methods to reduce the equipment, cost, time, and specialized skills required for high-quality photographic lighting of desktop-sized static objects such as museum artifacts. We place the object and a computer-steered moving-head spotlight inside a simple foam-core enclosure and use a camera to record photos as the light scans the box interior. Optimization, guided by interactive user sketching, selects a small set of these photos whose weighted sum best matches the user-defined target sketch. Unlike previous image-based relighting efforts, our method requires only a single area light source, yet it can achieve high-resolution light positioning to avoid multiple sharp shadows. A reduced version uses only a handheld light and may be suitable for battery-powered field photography equipment that fits into a backpack.
A goggle navigation system for cancer resection surgery
NASA Astrophysics Data System (ADS)
Xu, Junbin; Shao, Pengfei; Yue, Ting; Zhang, Shiwu; Ding, Houzhu; Wang, Jinkun; Xu, Ronald
2014-02-01
We describe a portable fluorescence goggle navigation system for cancer margin assessment during oncologic surgeries. The system consists of a computer, a head mount display (HMD) device, a near infrared (NIR) CCD camera, a miniature CMOS camera, and a 780 nm laser diode excitation light source. The fluorescence and the background images of the surgical scene are acquired by the CCD camera and the CMOS camera respectively, co-registered, and displayed on the HMD device in real-time. The spatial resolution and the co-registration deviation of the goggle navigation system are evaluated quantitatively. The technical feasibility of the proposed goggle system is tested in an ex vivo tumor model. Our experiments demonstrate the feasibility of using a goggle navigation system for intraoperative margin detection and surgical guidance.
Multiple Sensor Camera for Enhanced Video Capturing
NASA Astrophysics Data System (ADS)
Nagahara, Hajime; Kanki, Yoshinori; Iwai, Yoshio; Yachida, Masahiko
Camera resolution has improved drastically in response to the demand for high-quality digital images. For example, digital still cameras now have several megapixels. Although a video camera has a higher frame rate, its resolution is lower than that of a still camera; thus, high resolution and a high frame rate are incompatible in ordinary cameras on the market. This problem is difficult to solve with a single sensor, since it stems from a physical limitation on the pixel transfer rate. In this paper, we propose a multi-sensor camera for capturing resolution- and frame-rate-enhanced video. A common multi-CCD camera, such as a 3CCD color camera, uses identical CCDs to capture different spectral information. Our approach is to use sensors of different spatio-temporal resolution in a single camera cabinet to capture higher-resolution and higher-frame-rate information separately. We built a prototype camera which can capture high-resolution (2588×1958 pixels, 3.75 fps) and high-frame-rate (500×500 pixels, 90 fps) videos. We also propose a calibration method for the camera. As one application of the camera, we demonstrate an enhanced video (2128×1952 pixels, 90 fps) generated from the captured videos, showing the utility of the camera.
RANKING TEM CAMERAS BY THEIR RESPONSE TO ELECTRON SHOT NOISE
Grob, Patricia; Bean, Derek; Typke, Dieter; Li, Xueming; Nogales, Eva; Glaeser, Robert M.
2013-01-01
We demonstrate two ways in which the Fourier transforms of images that consist solely of randomly distributed electrons (shot noise) can be used to compare the relative performance of different electronic cameras. The principle is to determine how closely the Fourier transform of a given image does, or does not, approach that of an image produced by an ideal camera, i.e. one for which single-electron events are modeled as Kronecker delta functions located at the same pixels where the electrons were incident on the camera. Experimentally, the average width of the single-electron response is characterized by fitting a single Lorentzian function to the azimuthally averaged amplitude of the Fourier transform. The reciprocal of the spatial frequency at which the Lorentzian function falls to a value of 0.5 provides an estimate of the number of pixels at which the corresponding line-spread function falls to a value of 1/e. In addition, the excess noise due to stochastic variations in the magnitude of the response of the camera (for single-electron events) is characterized by the amount to which the appropriately normalized power spectrum does, or does not, exceed the total number of electrons in the image. These simple measurements provide an easy way to evaluate the relative performance of different cameras. To illustrate this point we present data for three different types of scintillator-coupled camera plus a silicon-pixel (direct detection) camera. PMID:23747527
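The width metric described above, the reciprocal of the spatial frequency at which the fitted Lorentzian falls to 0.5, can be sketched as follows. The peak-normalised parameterisation L(f) = 1/(1 + (f/w)²) is an assumption, since the abstract only names "a single Lorentzian function"; under it, the half-amplitude point is simply f = w:

```python
def lorentzian(f, width):
    """Assumed peak-normalised Lorentzian model of the azimuthally
    averaged Fourier amplitude of a shot-noise image."""
    return 1.0 / (1.0 + (f / width) ** 2)

def response_width_pixels(width):
    """Reciprocal of the spatial frequency (cycles/pixel) at which
    the fitted Lorentzian falls to 0.5 -- the abstract's estimate of
    the pixel extent of the single-electron response."""
    # 1 / (1 + (f/w)^2) = 0.5  <=>  f = w
    return 1.0 / width

print(lorentzian(0.25, 0.25))       # 0.5 at the half-amplitude point
print(response_width_pixels(0.25))  # 4.0 pixels
```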
An automated calibration method for non-see-through head mounted displays.
Gilson, Stuart J; Fitzgibbon, Andrew W; Glennerster, Andrew
2011-08-15
Accurate calibration of a head mounted display (HMD) is essential both for research on the visual system and for realistic interaction with virtual objects. Yet, existing calibration methods are time consuming and depend on human judgements, making them error prone, and are often limited to optical see-through HMDs. Building on our existing approach to HMD calibration (Gilson et al., 2008), we show here how it is possible to calibrate a non-see-through HMD. A camera is placed inside a HMD displaying an image of a regular grid, which is captured by the camera. The HMD is then removed and the camera, which remains fixed in position, is used to capture images of a tracked calibration object in multiple positions. The centroids of the markers on the calibration object are recovered and their locations re-expressed in relation to the HMD grid. This allows established camera calibration techniques to be used to recover estimates of the HMD display's intrinsic parameters (width, height, focal length) and extrinsic parameters (optic centre and orientation of the principal ray). We calibrated a HMD in this manner and report the magnitude of the errors between real image features and reprojected features. Our calibration method produces low reprojection errors without the need for error-prone human judgements. Copyright © 2011 Elsevier B.V. All rights reserved.
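The reported error magnitudes correspond to a standard reprojection-error computation, which can be sketched as follows; the marker coordinates here are illustrative, not from the paper:

```python
import numpy as np

def reprojection_rmse(observed_px, reprojected_px):
    """RMS pixel distance between detected marker centroids and
    their reprojections through the recovered camera model."""
    d = np.asarray(observed_px) - np.asarray(reprojected_px)
    return float(np.sqrt((d ** 2).sum(axis=1).mean()))

# Illustrative centroids: one marker off by a 3-4-5 pixel error,
# one reprojected exactly.
obs = [[100.0, 200.0], [320.0, 240.0]]
rep = [[103.0, 204.0], [320.0, 240.0]]
print(round(reprojection_rmse(obs, rep), 3))  # 3.536
```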
Satellite markers: a simple method for ground truth car pose on stereo video
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Piantini, Simone; Pierini, Marco
2018-04-01
Predicting the future location of other cars is essential for advanced safety systems. Remote estimation of a car's pose, and particularly its heading angle, is key to predicting its future location. Stereo vision systems make it possible to obtain the 3D information of a scene. Ground truth in this specific context is associated with referential information about the depth, shape and orientation of the objects present in the traffic scene. Creating 3D ground truth is a measurement and data fusion task associated with the combination of different kinds of sensors. The novelty of this paper is a method to generate ground truth car pose from video data alone. When the method is applied to stereo video, it also provides the extrinsic camera parameters for each camera at frame level, which are key to quantifying the performance of a moving stereo vision system, because such a system is subject to undesired vibrations and/or leaning. We developed a video post-processing technique which employs a common camera calibration tool for 3D ground truth generation. In our case study, we focus on accurate heading angle estimation of a moving car under realistic imagery. As outcomes, our satellite marker method provides accurate car pose at frame level, and the instantaneous spatial orientation of each camera at frame level.
A new spherical scanning system for infrared reflectography of paintings
NASA Astrophysics Data System (ADS)
Gargano, M.; Cavaliere, F.; Viganò, D.; Galli, A.; Ludwig, N.
2017-03-01
Infrared reflectography is an imaging technique used to visualize the underdrawings of ancient paintings; it relies on the fact that most pigment layers are quite transparent to infrared radiation in the spectral band between 0.8 μm and 2.5 μm. InGaAs sensor cameras are nowadays the most widely used devices to visualize underdrawings, but due to the small size of the detectors, these cameras are usually mounted on scanning systems to record high resolution reflectograms. This work describes a portable scanning system prototype based on a peculiar spherical scanning geometry, built around a lightweight, low-cost motorized head. The motorized head was built to allow the refocusing adjustment needed to compensate for the variable camera-painting distance during the rotation of the camera. The prototype was tested first in the laboratory and then in situ on the Giotto panel "God the Father with Angels", at a resolution of 256 pixels per inch. The system performance is comparable with that of other reflectographic devices, with the advantage of extending the scanned area up to 1 m × 1 m with a 40 min scanning time. The present configuration can be easily modified to increase the resolution up to 560 pixels per inch or to extend the scanned area up to 2 m × 2 m.
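The refocusing adjustment the motorized head must perform as the camera-painting distance varies can be illustrated with the thin-lens equation, 1/f = 1/d_o + 1/d_i. The focal length and object distances below are illustrative, not taken from the prototype:

```python
def image_distance(focal_mm, object_mm):
    """Thin-lens image distance d_i from 1/f = 1/d_o + 1/d_i."""
    return 1.0 / (1.0 / focal_mm - 1.0 / object_mm)

# As the spherical scan changes the camera-painting distance,
# the lens-to-sensor distance must track d_i to stay in focus:
print(round(image_distance(50.0, 1000.0), 2))  # 52.63 mm at 1.0 m
print(round(image_distance(50.0, 1500.0), 2))  # 51.72 mm at 1.5 m
```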
Yoshida, Eriko; Terada, Shin-Ichiro; Tanaka, Yasuyo H; Kobayashi, Kenta; Ohkura, Masamichi; Nakai, Junichi; Matsuzaki, Masanori
2018-05-29
In vivo wide-field imaging of neural activity with a high spatio-temporal resolution is a challenge in modern neuroscience. Although two-photon imaging is very powerful, high-speed imaging of the activity of individual synapses is mostly limited to a field of approximately 200 µm on a side. Wide-field one-photon epifluorescence imaging can reveal neuronal activity over a field of ≥1 mm² at high speed, but is not able to resolve a single synapse. Here, to achieve a high spatio-temporal resolution, we combine an 8K ultra-high-definition camera with spinning-disk one-photon confocal microscopy. This combination allowed us to image a 1 mm² field with a pixel resolution of 0.21 µm at 60 fps. When we imaged motor cortical layer 1 in a behaving head-restrained mouse, calcium transients were detected in presynaptic boutons of thalamocortical axons sparsely labeled with GCaMP6s, although their density was lower than when two-photon imaging was used. The effects of out-of-focus fluorescence changes on calcium transients in individual boutons appeared minimal. Axonal boutons with highly correlated activity were detected over the 1 mm² field, and were probably distributed on multiple axonal arbors originating from the same thalamic neuron. This new microscopy with an 8K ultra-high-definition camera should serve to clarify the activity and plasticity of widely distributed cortical synapses.
NASA Astrophysics Data System (ADS)
Chi, Yuxi; Yu, Liping; Pan, Bing
2018-05-01
A low-cost, portable, robust and high-resolution single-camera stereo-digital image correlation (stereo-DIC) system for accurate surface three-dimensional (3D) shape and deformation measurements is described. This system adopts a single consumer-grade high-resolution digital Single Lens Reflex (SLR) camera and a four-mirror adaptor, rather than two synchronized industrial digital cameras, for stereo image acquisition. In addition, monochromatic blue light illumination and coupled bandpass filter imaging are integrated to ensure the robustness of the system against ambient light variations. In contrast to conventional binocular stereo-DIC systems, the developed pseudo-stereo-DIC system offers the advantages of low cost, portability, robustness against ambient light variations, and high resolution. The accuracy and precision of the developed single SLR camera-based stereo-DIC system were validated by measuring the 3D shape of a stationary sphere along with in-plane and out-of-plane displacements of a translated planar plate. Application of the established system to thermal deformation measurement of an alumina ceramic plate and a stainless-steel plate subjected to radiation heating was also demonstrated.
ERIC Educational Resources Information Center
Ruiz, Michael J.
1982-01-01
The camera presents an excellent way to illustrate principles of geometrical optics. Basic camera optics of the single-lens reflex camera are discussed, including interchangeable lenses and accessories available to most owners. Several experiments are described and results compared with theoretical predictions or manufacturer specifications.…
Normalized Metadata Generation for Human Retrieval Using Multiple Video Surveillance Cameras.
Jung, Jaehoon; Yoon, Inhye; Lee, Seungwon; Paik, Joonki
2016-06-24
Since it is impossible for surveillance personnel to keep monitoring videos from a multiple camera-based surveillance system, an efficient technique is needed to help recognize important situations by retrieving the metadata of an object-of-interest. In a multiple camera-based surveillance system, an object detected in a camera has a different shape in another camera, which is a critical issue of wide-range, real-time surveillance systems. In order to address the problem, this paper presents an object retrieval method by extracting the normalized metadata of an object-of-interest from multiple, heterogeneous cameras. The proposed metadata generation algorithm consists of three steps: (i) generation of a three-dimensional (3D) human model; (ii) human object-based automatic scene calibration; and (iii) metadata generation. More specifically, an appropriately-generated 3D human model provides the foot-to-head direction information that is used as the input of the automatic calibration of each camera. The normalized object information is used to retrieve an object-of-interest in a wide-range, multiple-camera surveillance system in the form of metadata. Experimental results show that the 3D human model matches the ground truth, and automatic calibration-based normalization of metadata enables a successful retrieval and tracking of a human object in the multiple-camera video surveillance system.
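The foot-to-head direction mentioned in step (ii) is what makes human-based auto-calibration possible: for upright pedestrians, every foot-to-head image segment points toward the camera's vertical vanishing point. A minimal sketch of that sub-step, under the assumption (not from the paper) that several head/foot point pairs are available; the full pipeline would also recover focal length and camera height:

```python
import numpy as np

def vertical_vanishing_point(heads, feet):
    """Estimate the vertical vanishing point as the least-squares
    intersection of head-foot lines of upright people.
    heads, feet: (N, 2) arrays of image points (hypothetical input)."""
    heads = np.asarray(heads, float)
    feet = np.asarray(feet, float)
    d = heads - feet
    d /= np.linalg.norm(d, axis=1, keepdims=True)   # unit line directions
    # normal of each line (perpendicular to its direction)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)
    # the vanishing point v satisfies n_i . v = n_i . feet_i for every line
    b = np.sum(n * feet, axis=1)
    v, *_ = np.linalg.lstsq(n, b, rcond=None)
    return v
```

With noisy detections the least-squares solution averages out localization error across all observed people, which is why more detected pedestrians generally means a better calibration.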
Plenoptic particle image velocimetry with multiple plenoptic cameras
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Thurow, Brian S.
2018-07-01
Plenoptic particle image velocimetry was recently introduced as a viable three-dimensional, three-component velocimetry technique based on light field cameras. One of the main benefits of this technique is its single-camera configuration, allowing the technique to be applied in facilities with limited optical access. The main drawback of this configuration is decreased accuracy in the out-of-plane dimension. This work presents a solution with the addition of a second plenoptic camera in a stereo-like configuration. A framework for reconstructing volumes with multiple plenoptic cameras is presented, including the volumetric calibration procedure and four reconstruction algorithms: integral refocusing, filtered refocusing, multiplicative refocusing, and MART. It is shown that the addition of a second camera improves the reconstruction quality and removes the ‘cigar’-like elongation associated with the single-camera system. In addition, it is found that adding a third camera provides minimal improvement. Further metrics of the reconstruction quality are quantified in terms of reconstruction algorithm, particle density, number of cameras, camera separation angle, voxel size, and the effect of common image noise sources. In addition, a synthetic Gaussian ring vortex is used to compare the accuracy of the single- and two-camera configurations. It was determined that the addition of a second camera reduces the RMSE velocity error from 1.0 to 0.1 voxels in depth and from 0.2 to 0.1 voxels in the lateral spatial directions. Finally, the technique is applied experimentally to a ring vortex and comparisons are drawn among the four presented reconstruction algorithms: MART and multiplicative refocusing produced the cleanest vortex structure and had the least shot-to-shot variability; filtered refocusing produced the desired structure, albeit with more noise and variability, while integral refocusing struggled to produce a coherent vortex ring.
Improving accuracy of Plenoptic PIV using two light field cameras
NASA Astrophysics Data System (ADS)
Thurow, Brian; Fahringer, Timothy
2017-11-01
Plenoptic particle image velocimetry (PIV) has recently emerged as a viable technique for acquiring three-dimensional, three-component velocity field data using a single plenoptic, or light field, camera. The simplified experimental arrangement is advantageous in situations where optical access is limited and/or it is not possible to set up the four or more cameras typically required in a tomographic PIV experiment. A significant disadvantage of a single-camera plenoptic PIV experiment, however, is that the accuracy of the velocity measurement along the optical axis of the camera is significantly worse than in the two lateral directions. In this work, we explore the accuracy of plenoptic PIV when two plenoptic cameras are arranged in a stereo imaging configuration. It is found that the addition of a second camera improves the accuracy in all three directions and nearly eliminates any differences between them. This improvement is illustrated using both synthetic and real experiments conducted on a vortex ring using one and two plenoptic cameras.
Technical and instrumental prerequisites for single-port laparoscopic solo surgery: state of art.
Kim, Say-June; Lee, Sang Chul
2015-04-21
With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides stable operative images that are under the control of the operator. SPSS therefore primarily benefits from preserving the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS requires more actions from the operator than surgery with a human assistant, these difficulties are largely offset by the more stable operative images and the reduced need for lens cleaning and camera repositioning. When the operation is expected to be difficult and demanding, SPSS can be assisted by the addition of another instrument holder besides the camera holder.
Endoscopic fringe projection for in-situ inspection of a sheet-bulk metal forming process
NASA Astrophysics Data System (ADS)
Matthias, Steffen; Kästner, Markus; Reithmeier, Eduard
2015-05-01
Sheet-bulk metal forming is a new production process capable of performing deep-drawing and massive forming steps in a single operation. However, due to the high forming forces of the forming process, continuous process control is required in order to detect wear on the forming tool before production quality is impacted. To be able to measure the geometry of the forming tool in the limited space of forming presses, a new inspection system is being developed within the SFB/TR 73 collaborative research center. In addition to the limited space, the process restricts the amount of time available for inspection. Existing areal optical measurement systems suffer from shadowing when measuring the tool's inner elements, as they cannot be placed in the limited space next to the tool, while tactile measurement systems cannot meet the time restrictions for measuring the areal geometries. The new inspection system uses the fringe projection optical measurement principle to capture areal geometry data from relevant parts of the forming tool in a short time. High-resolution image fibers are used to connect the system's compact sensor head to a base unit containing both camera and projector of the fringe projection system, which can be positioned outside of the moving parts of the press. To enable short measurement times, a high-intensity laser source is used in the projector in combination with a digital micro-mirror device. Gradient index lenses are featured in the sensor head to allow for a very compact design that can be used in the narrow space above the forming tool inside the press. The sensor head is attached to an extended arm, which also guides the image fibers to the base unit. A rotation stage offers the possibility to capture measurements of different functional elements on the circular forming tool by changing the orientation of the sensor head next to the forming tool.
During operation of the press, the arm can be moved out of the moving parts of the forming press. To further reduce the measurement times of the fringe projection system, the inverse fringe projection principle has been adapted to the system to detect geometry deviations in a single camera image. Challenges arise from vibrations of both the forming machine and the positioning stages, which are transferred via the extended arm to the sensor head. Vibrations interfere with the analysis algorithms of both encoded and inverse fringe projection and thus impair measurement accuracy. To evaluate the impact of vibrations on the endoscopic system, results of measurements of simple geometries under the influence of vibrations are discussed. The effect of vibrations is imitated by displacing the measurement specimen during the measurement with a linear positioning stage. The concept of the new inspection system is presented within the scope of the TR 73 demonstration sheet-bulk metal forming process. Finally, the capabilities of the endoscopic fringe projection system are shown by measurements of gearing structures on a forming tool compared to a CAD reference.
Okamura, Hiroyuki; Abe, Hajime; Hasegawa-Baba, Yasuko; Saito, Kenji; Sekiya, Fumiko; Hayashi, Shim-Mo; Mirokuji, Yoshiharu; Maruyama, Shinpei; Ono, Atsushi; Nakajima, Madoka; Degawa, Masakuni; Ozawa, Shogo; Shibutani, Makoto; Maitani, Tamio
2015-01-01
Using the procedure devised by the Joint FAO/WHO Expert Committee on Food Additives (JECFA), we performed safety evaluations of five acetal flavouring substances uniquely used in Japan: acetaldehyde 2,3-butanediol acetal, acetoin dimethyl acetal, hexanal dibutyl acetal, hexanal glyceryl acetal and 4-methyl-2-pentanone propyleneglycol acetal. Although no genotoxicity study data were available in the literature, none of the five substances had chemical structural alerts predicting genotoxicity. Using Cramer's classification, acetoin dimethyl acetal and hexanal dibutyl acetal were categorised as class I, and acetaldehyde 2,3-butanediol acetal, hexanal glyceryl acetal and 4-methyl-2-pentanone propyleneglycol acetal as class III. The estimated daily intakes for all five substances were within the range of 1.45-6.53 µg/person/day using the method of maximised survey-derived intake, based on the annual production data in Japan from 2001, 2005, 2008 and 2010, and 156-720 µg/person/day using the single-portion exposure technique (SPET), based on the average use levels in standard portion sizes of flavoured foods. The daily intakes of the two class I substances were below the threshold of toxicological concern (TTC) of 1800 μg/person/day. The daily intakes of the three class III substances exceeded the TTC of 90 μg/person/day. Two of these, acetaldehyde 2,3-butanediol acetal and hexanal glyceryl acetal, were expected to be metabolised into endogenous products after ingestion. For 4-methyl-2-pentanone propyleneglycol acetal, one of its metabolites was not expected to be metabolised into endogenous products. However, its daily intake level, based on the estimated intake calculated by the SPET method, was about 1/15 000th of the no-observed-effect level. It was thus concluded that all five substances raise no safety concerns when used for flavouring foods at the currently estimated intake levels.
While no information on in vitro and in vivo toxicity for all five substances was available, their metabolites were judged as raising no safety concerns at the current levels of intake.
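For orientation, the single-portion exposure technique used above estimates daily intake from the flavour use level in one standard portion of the food category that yields the highest exposure. A toy calculation with made-up numbers (the values below are illustrative, not figures from the study):

```python
def spet_daily_intake_ug(use_level_mg_per_kg, portion_g):
    """Single-portion exposure technique (SPET), simplified: daily intake
    equals the flavour use level times one standard portion of the food
    category giving the highest exposure. Inputs are hypothetical."""
    # mg flavour per kg food * kg food per portion -> mg; * 1000 -> µg/day
    return use_level_mg_per_kg * (portion_g / 1000.0) * 1000.0

# e.g. a 5 mg/kg use level in a 100 g portion
print(spet_daily_intake_ug(5.0, 100.0))  # 500.0 µg/person/day
```

Comparing such a figure against the Cramer-class TTC (1800 µg/person/day for class I, 90 µg/person/day for class III) is the screening step the abstract describes.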
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mason, John A.; Looman, Marc R.; Poundall, Adam J.
2013-07-01
This paper describes the measurements, testing and performance validation of a sensitive gamma ray camera designed for radiation detection and quantification in the environment and decommissioning and hold-up measurements in nuclear facilities. The instrument, which is known as RadSearch, combines a sensitive and highly collimated LaBr{sub 3} scintillation detector with an optical (video) camera with controllable zoom and focus and a laser range finder in one detector head. The LaBr{sub 3} detector has a typical energy resolution of between 2.5% and 3% at the 662 keV energy of Cs-137, compared to that of NaI detectors with a resolution of typically 7% to 8% at the same energy. At this energy the tungsten shielding of the detector provides a shielding ratio of greater than 900:1 in the forward direction and 100:1 on the sides and from the rear. The detector head is mounted on a pan/tilt mechanism with a range of motion of ±180 degrees (pan) and ±90 degrees (tilt), equivalent to 4 π steradians. The detector head with pan/tilt is normally mounted on a tripod or wheeled cart. It can also be mounted on vehicles or a mobile robot for access to high dose-rate areas and areas with high levels of contamination. Ethernet connects RadSearch to a ruggedized notebook computer from which it is operated and controlled. Power can be supplied either as 24-volts DC from a battery or as 50 volts DC supplied by a small mains (110 or 230 VAC) power supply unit that is co-located with the controlling notebook computer. In this latter case both power and Ethernet are supplied through a single cable that can be up to 80 metres in length. If a local battery supplies power, the unit can be controlled through wireless Ethernet. Both manual operation and automatic scanning of surfaces and objects are available through the software interface on the notebook computer. For each scan element making up a part of an overall scanned area, the unit measures a gamma ray spectrum.
Multiple radionuclides may be selected by the operator and will be identified if present. In scanning operation the unit scans a designated region and superimposes the distribution of measured radioactivity over a video image. For the total scanned area or object, RadSearch determines the total activity of operator-selected radionuclides present and the gamma dose-rate measured at the detector head. Results of hold-up measurements made in a nuclear facility are presented, as are test measurements of point sources distributed arbitrarily on surfaces. These latter results are compared with the results of benchmarked MCNP Monte Carlo calculations. The use of the device for hold-up and decommissioning measurements is validated. (authors)
NASA Astrophysics Data System (ADS)
McIntosh, Benjamin Patrick
Blindness due to Age-Related Macular Degeneration and Retinitis Pigmentosa is unfortunately both widespread and largely incurable. Advances in visual prostheses that can restore functional vision in those afflicted by these diseases have evolved rapidly from new areas of research in ophthalmology and biomedical engineering. This thesis is focused on further advancing the state-of-the-art of both visual prostheses and implantable biomedical devices. A novel real-time system with a high performance head-mounted display is described that enables enhanced realistic simulation of intraocular retinal prostheses. A set of visual psychophysics experiments is presented using the visual prosthesis simulator that quantify, in several ways, the benefit of foveation afforded by an eye-pointed camera (such as an eye-tracked extraocular camera or an implantable intraocular camera) as compared with a head-pointed camera. A visual search experiment demonstrates a significant improvement in the time to locate a target on a screen when using an eye-pointed camera. A reach and grasp experiment demonstrates a 20% to 70% improvement in time to grasp an object when using an eye-pointed camera, with the improvement maximized when the percept is blurred. A navigation and mobility experiment shows a 10% faster walking speed and a 50% better ability to avoid obstacles when using an eye-pointed camera. Improvements to implantable biomedical devices are also described, including the design and testing of VLSI-integrable positive mobile ion contamination sensors and humidity sensors that can validate the hermeticity of biomedical device packages encapsulated by hermetic coatings, and can provide early warning of leaks or contamination that may jeopardize the implant. The positive mobile ion contamination sensors are shown to be sensitive to externally applied contamination. A model is proposed to describe sensitivity as a function of device geometry, and verified experimentally. 
Guidelines are provided on the use of spare CMOS oxide and metal layers to maximize the hermeticity of an implantable microchip. In addition, results are presented on the design and testing of small form factor, very low power, integrated CMOS clock generation circuits that are stable enough to drive commercial image sensor arrays, and therefore can be incorporated in an intraocular camera for retinal prostheses.
Video-rate nanoscopy enabled by sCMOS camera-specific single-molecule localization algorithms
Huang, Fang; Hartwich, Tobias M. P.; Rivera-Molina, Felix E.; Lin, Yu; Duim, Whitney C.; Long, Jane J.; Uchil, Pradeep D.; Myers, Jordan R.; Baird, Michelle A.; Mothes, Walther; Davidson, Michael W.; Toomre, Derek; Bewersdorf, Joerg
2013-01-01
Newly developed scientific complementary metal–oxide–semiconductor (sCMOS) cameras have the potential to dramatically accelerate data acquisition in single-molecule switching nanoscopy (SMSN) while simultaneously increasing the effective quantum efficiency. However, sCMOS-intrinsic pixel-dependent readout noise substantially reduces the localization precision and introduces localization artifacts. Here we present algorithms that overcome these limitations and provide unbiased, precise localization of single molecules at the theoretical limit. In combination with a multi-emitter fitting algorithm, we demonstrate single-molecule localization super-resolution imaging at up to 32 reconstructed images/second (recorded at 1,600–3,200 camera frames/second) in both fixed and living cells. PMID:23708387
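As a hedged sketch of the general idea (not the authors' exact code), the pixel-dependent sCMOS readout noise can be folded into an effective Poisson model: the gain-corrected signal is offset by var/gain², and the same term is added to the fitted model during maximum-likelihood localization. The per-pixel conversion might look like:

```python
import numpy as np

def scmos_to_poisson(adu, offset, gain, read_var):
    """Fold per-pixel sCMOS readout noise into an effective Poisson count:
    gain-correct the raw camera counts, then add read_var/gain**2 (during
    MLE fitting, the same additive term goes into the model). The offset,
    gain and read_var arguments stand for hypothetical per-pixel
    calibration maps obtained from dark and flat-field frames."""
    adu = np.asarray(adu, float)
    return (adu - offset) / gain + read_var / gain**2
```

After this transformation, each pixel behaves approximately like a Poisson variable, so localization code written for EMCCD-style shot-noise statistics can be reused per pixel.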
NASA Astrophysics Data System (ADS)
Hsu, S. C.; Moser, A. L.; Merritt, E. C.; Adams, C. S.
2015-11-01
Over the past four years on the Plasma Liner Experiment (PLX) at LANL, we have studied obliquely and head-on-merging supersonic plasma jets of an argon/impurity or hydrogen/impurity mixture. The jets are formed and launched by pulsed-power-driven railguns. In successive experimental campaigns, we characterized (a) the evolution of plasma parameters of a single plasma jet as it propagated up to ~ 1 m away from the railgun nozzle, (b) the density profiles and 2D morphology of the stagnation layer and oblique shocks that formed between obliquely merging jets, and (c) collisionless interpenetration transitioning to collisional stagnation between head-on-merging jets. Key plasma diagnostics included a fast-framing CCD camera, an 8-chord visible interferometer, a survey spectrometer, and a photodiode array. This talk summarizes the primary results mentioned above, and highlights analyses of inferred post-shock temperatures based on observations of density gradients that we attribute to shock-layer thickness. We also briefly describe more recent PLX experiments on Rayleigh-Taylor-instability evolution with magnetic and viscous effects, and potential future collisionless shock experiments enabled by low-impurity, higher-velocity plasma jets formed by contoured-gap coaxial guns. Supported by DOE Fusion Energy Sciences and LANL LDRD.
Enabling Disabled Persons to Gain Access to Digital Media
NASA Technical Reports Server (NTRS)
Beach, Glenn; OGrady, Ryan
2011-01-01
A report describes the first phase in an effort to enhance the NaviGaze software to enable profoundly disabled persons to operate computers. (Running on a Windows-based computer equipped with a video camera aimed at the user's head, the original NaviGaze software processes the user's head movements and eye blinks into cursor movements and mouse clicks to enable hands-free control of the computer.) To accommodate large variations in movement capabilities among disabled individuals, one of the enhancements was the addition of a graphical user interface for selection of parameters that affect the way the software interacts with the computer and tracks the user's movements. Tracking algorithms were improved to reduce sensitivity to rotations and reduce the likelihood of tracking the wrong features. Visual feedback to the user was improved to provide an indication of the state of the computer system. It was found that users can quickly learn to use the enhanced software, performing single clicks, double clicks, and drags within minutes of first use. Available programs that could increase the usability of NaviGaze were identified. One of these enables entry of text by using NaviGaze as a mouse to select keys on a virtual keyboard.
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han; Li, Hecheng
2016-10-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eye of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize our method of holding the camera, known as "ipsilateral, high, single-hand, sideways", which largely improves the comfort and fluency of surgery.
Single chip camera device having double sampling operation
NASA Technical Reports Server (NTRS)
Fossum, Eric R. (Inventor); Nixon, Robert (Inventor)
2002-01-01
A single chip camera device is formed on a single substrate, including an image acquisition portion, a control portion, and a timing circuit formed on the substrate. The timing circuit also controls the photoreceptors in a double-sampling mode in which a reset level is first read and then, after an integration time, a charge level is read.
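The double-sampling mode described here is correlated double sampling: reading each pixel's reset level first and subtracting it from the post-integration read cancels the per-pixel reset offset. A minimal numerical sketch (names are illustrative, not from the patent):

```python
import numpy as np

def correlated_double_sample(reset_read, signal_read):
    """Subtract the reset-level read from the post-integration read so
    that per-pixel reset (kTC) offsets cancel, leaving the photo signal."""
    return np.asarray(signal_read, float) - np.asarray(reset_read, float)
```

With per-pixel offsets o and accumulated photo-charge s, the two reads are o and o + s, and the difference recovers s regardless of how o varies across the array.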
Streak camera imaging of single photons at telecom wavelength
NASA Astrophysics Data System (ADS)
Allgaier, Markus; Ansari, Vahid; Eigner, Christof; Quiring, Viktor; Ricken, Raimund; Donohue, John Matthew; Czerniuk, Thomas; Aßmann, Marc; Bayer, Manfred; Brecht, Benjamin; Silberhorn, Christine
2018-01-01
Streak cameras are powerful tools for temporal characterization of ultrafast light pulses, even at the single-photon level. However, the low signal-to-noise ratio in the infrared range prevents measurements on weak light sources in the telecom regime. We present an approach to circumvent this problem, utilizing an up-conversion process in periodically poled waveguides in Lithium Niobate. We convert single photons from a parametric down-conversion source in order to reach the point of maximum detection efficiency of commercially available streak cameras. We explore phase-matching configurations to apply the up-conversion scheme in real-world applications.
Benedetti, L. R.; Holder, J. P.; Perkins, M.; ...
2016-02-26
We describe an experimental method to measure the gate profile of an x-ray framing camera and to determine several important functional parameters: relative gain (between strips), relative gain droop (within each strip), gate propagation velocity, gate width, and actual inter-strip timing. Several of these parameters cannot be measured accurately by any other technique. This method is then used to document cross talk-induced gain variations and artifacts created by radiation that arrives before the framing camera is actively amplifying x-rays. Electromagnetic cross talk can cause relative gains to vary significantly as inter-strip timing is varied. This imposes a stringent requirement for gain calibration. If radiation arrives before a framing camera is triggered, it can cause an artifact that manifests as a high-intensity, spatially varying background signal. Furthermore, we have developed a device that can be added to the framing camera head to prevent these artifacts.
1986-03-01
...the remainder of the team composed of functional specialists. These would include a demolitions expert, radio operator, weapons expert, and reconnaissance expert. Cross-training is practiced to various degrees so that if a specialist in one field is lost, another who has cross-trained can take over... vulnerable to assault by Spetznaz. V. SPETZNAZ VULNERABILITIES. One of the first vulnerabilities is the Spetznaz soldier himself. He is
A power propulsion system based on a second-generation thermionic NPS of the ``Topaz'' type
NASA Astrophysics Data System (ADS)
Gryaznov, Georgi M.; Zhabotinski, Eugene E.; Andreev, Pavel V.; Zaritski, Gennadie a.; Koroteev, Anatoly S.; Martishin, Viktor M.; Akimov, Vladimir N.; Ponomarev-Stepnoi, Nikolai N.; Usov, Veniamin A.; Britt, Edward J.
1992-01-01
The paper considers the concept of power propulsion systems: universal space platforms (USPs) based on second-generation thermionic nuclear power systems (NPSs) and stationary plasma electric thrusters (SPETs). The composition and layout principles of such a system are discussed, based on a thermionic NPS with a continuous power of up to 30 kWe, allowing power augmentation by a factor of 2-2.5 for up to a year, together with SPETs with a specific impulse of at least 20 km/s and a propulsion efficiency of 0.6-0.7. The layouts and basic parameters are presented for a power propulsion system ensuring cargo transportation from an initial radiation-safe 800 km orbit to a geostationary one, using the "Zenit" and "Proton" launch systems for injection into the initial orbit. It is shown that the mass of mission-oriented equipment delivered to geostationary orbit in the cases under consideration ranges from 2500 to 5500 kg, provided the flight time is no longer than a year. The power propulsion system can be applied to autonomous power supply of various spacecraft, including remote power delivery, and can also be used for deep space exploration.
NASA Technical Reports Server (NTRS)
Axholt, Magnus; Skoglund, Martin; Peterson, Stephen D.; Cooper, Matthew D.; Schoen, Thomas B.; Gustafsson, Fredrik; Ynnerman, Anders; Ellis, Stephen R.
2010-01-01
Augmented Reality (AR) is a technique by which computer-generated signals synthesize impressions that are made to coexist with the surrounding real world as perceived by the user. Human smell, taste, touch and hearing can all be augmented, but most commonly AR refers to the human vision being overlaid with information otherwise not readily available to the user. A correct calibration is important on an application level, ensuring that e.g. data labels are presented at correct locations, but also on a system level to enable display techniques such as stereoscopy to function properly [SOURCE]. Thus, calibration methodology, vital to AR, is an important research area. While great achievements have already been made, some properties of current calibration methods for augmented vision do not translate from their traditional use in automated camera calibration to their use with a human operator. This paper uses a Monte Carlo simulation of a standard direct linear transformation camera calibration to investigate how user-introduced head-orientation noise affects the parameter estimation during a calibration procedure of an optical see-through head-mounted display.
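The "standard direct linear transformation camera calibration" that the simulation perturbs can be sketched generically: each 3D-2D correspondence contributes two homogeneous equations, and the 3x4 projection matrix is the null vector of the stacked system, obtained via SVD. This is a textbook DLT sketch, not the paper's code; their Monte Carlo study would repeatedly re-run such an estimate with orientation noise injected into the observations:

```python
import numpy as np

def dlt(X, x):
    """Direct linear transformation: estimate the 3x4 projection matrix
    from n >= 6 world points X (n,3) and image points x (n,2)."""
    A = []
    for (Xw, Yw, Zw), (u, v) in zip(X, x):
        A.append([Xw, Yw, Zw, 1, 0, 0, 0, 0, -u*Xw, -u*Yw, -u*Zw, -u])
        A.append([0, 0, 0, 0, Xw, Yw, Zw, 1, -v*Xw, -v*Yw, -v*Zw, -v])
    # the solution is the right singular vector of the smallest singular value
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    return Vt[-1].reshape(3, 4)

def project(P, X):
    """Pinhole projection of world points X (n,3) by matrix P (3,4)."""
    Xh = np.hstack([X, np.ones((len(X), 1))])
    x = (P @ Xh.T).T
    return x[:, :2] / x[:, 2:]
```

In a user-driven calibration the "image points" come from the user aligning a reticle by head motion, so head-orientation jitter enters exactly where image noise would in the automated case.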
A low-cost video-oculography system for vestibular function testing.
Jihwan Park; Youngsun Kong; Yunyoung Nam
2017-07-01
In order to keep vision in focus during head movements, the vestibulo-ocular reflex causes the eyes to move in the opposite direction to the head movement. Disorders of the vestibular system degrade vision, causing abnormal nystagmus and dizziness. To diagnose abnormal nystagmus, various approaches have been reported, including rotating chair tests and videonystagmography. However, these tests are unsuitable for home use due to their high costs. Thus, a low-cost video-oculography system is necessary to obtain clinical features at home. In this paper, we present a low-cost video-oculography system using an infrared camera and a Raspberry Pi board for tracking the pupils and evaluating the vestibular system. Horizontal eye movement is derived from video data obtained from an infrared camera and infrared light-emitting diodes, and the velocity of head rotation is obtained from a gyroscope sensor. Each pupil was extracted using a morphology operation and a contour detection method. Rotatory chair tests were conducted with our developed device. To evaluate our system, gain, asymmetry, and phase were measured and compared with System 2000. The average IQR errors of gain, phase and asymmetry were 0.81, 2.74 and 17.35, respectively. We showed that our system is able to measure clinical features.
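The gain and phase figures in such a rotary-chair test come from comparing eye velocity against head velocity at the chair's stimulus frequency. A simplified numpy sketch of that comparison (function and signal names are illustrative; a clinical analysis would first extract slow-phase velocity from the nystagmus trace and also compute asymmetry):

```python
import numpy as np

def vor_metrics(head_vel, eye_vel, fs, f0):
    """Estimate VOR gain and phase (degrees) at stimulus frequency f0
    from head- and eye-velocity traces sampled at fs, by projecting
    each trace onto a complex exponential at f0."""
    t = np.arange(len(head_vel)) / fs
    ref = np.exp(-2j * np.pi * f0 * t)
    H = np.mean(head_vel * ref)          # head Fourier component at f0
    E = np.mean(eye_vel * ref)           # eye Fourier component at f0
    gain = abs(E) / abs(H)
    phase_deg = np.degrees(np.angle(E / H))
    return gain, phase_deg
```

For a healthy response the gain is near 1 with small phase lead at mid frequencies; a depressed gain or large phase shift is what the comparison against System 2000 checks.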
Remote gaze tracking system for 3D environments.
Congcong Liu; Herrup, Karl; Shi, Bertram E
2017-07-01
Eye tracking systems are typically divided into two categories: remote and mobile. Remote systems, where the eye tracker is located near the object being viewed by the subject, have the advantage of being less intrusive, but are typically used for tracking gaze points on fixed two-dimensional (2D) computer screens. Mobile systems such as eye tracking glasses, where the eye tracker is attached to the subject, are more intrusive, but are better suited for cases where subjects are viewing objects in the three-dimensional (3D) environment. In this paper, we describe how remote gaze tracking systems developed for 2D computer screens can be used to track gaze points in a 3D environment. The system is non-intrusive. It compensates for small head movements by the user, so that the head need not be stabilized by a chin rest or bite bar. The system maps the 3D gaze points of the user onto 2D images from a scene camera and is also located remotely from the subject. Measurement results from this system indicate that it is able to estimate gaze points in the scene camera to within one degree over a wide range of head positions.
2012-08-08
These are the first two full-resolution images of the Martian surface from the Navigation cameras on NASA's Curiosity rover, which are located on the rover's head, or mast. The rim of Gale Crater can be seen in the distance beyond the pebbly ground.
A multi-camera system for real-time pose estimation
NASA Astrophysics Data System (ADS)
Savakis, Andreas; Erhard, Matthew; Schimmel, James; Hnatow, Justin
2007-04-01
This paper presents a multi-camera system that performs face detection and pose estimation in real-time and may be used for intelligent computing within a visual sensor network for surveillance or human-computer interaction. The system consists of a Scene View Camera (SVC), which operates at a fixed zoom level, and an Object View Camera (OVC), which continuously adjusts its zoom level to match objects of interest. The SVC is set to survey the whole field of view. Once a region has been identified by the SVC as a potential object of interest, e.g. a face, the OVC zooms in to locate specific features. In this system, face candidate regions are selected based on skin color, and face detection is accomplished using a Support Vector Machine classifier. The locations of the eyes and mouth are detected inside the face region using neural network feature detectors. Pose estimation is performed based on a geometrical model, where the head is modeled as a spherical object that rotates about the vertical axis. The triangle formed by the mouth and eyes defines a vertical plane that intersects the head sphere. By projecting the eyes-mouth triangle onto a two-dimensional viewing plane, equations were obtained that describe the change in its angles as the yaw pose angle increases. These equations are then combined and used for efficient pose estimation. The system achieves real-time performance for live video input. Testing results assessing system performance are presented for both still images and video.
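A toy illustration of the spherical-head idea: under pure yaw, the projected separation of two facial landmarks placed symmetrically on the head sphere shrinks by cos(yaw), which can be inverted for the pose angle. The eye azimuth and the orthographic projection below are simplifying assumptions of this sketch, not the paper's exact equations.

```python
import math

# Hypothetical spherical-head model: both eyes sit at azimuth +/-alpha on a
# sphere of radius r. Under yaw, an orthographic projection shrinks the
# apparent eye separation by cos(yaw).
r, alpha = 1.0, math.radians(18)   # assumed head radius and eye azimuth

def eye_separation(yaw):
    # x-coordinates of the two projected eyes for a head yawed by `yaw`;
    # equals 2 * r * sin(alpha) * cos(yaw)
    return r * math.sin(yaw + alpha) - r * math.sin(yaw - alpha)

sep_frontal = eye_separation(0.0)

def estimate_yaw(sep):
    """Invert the cosine foreshortening to recover the yaw angle."""
    return math.acos(max(-1.0, min(1.0, sep / sep_frontal)))

measured = eye_separation(math.radians(25))
print(math.degrees(estimate_yaw(measured)))   # recovers ~25 degrees
```

The paper additionally uses the mouth vertex of the triangle, which resolves the left/right sign ambiguity that a pure separation ratio leaves open.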
A four-lens based plenoptic camera for depth measurements
NASA Astrophysics Data System (ADS)
Riou, Cécile; Deng, Zhiyuan; Colicchio, Bruno; Lauffenburger, Jean-Philippe; Kohler, Sophie; Haeberlé, Olivier; Cudel, Christophe
2015-04-01
In previous works, we have extended the principles of "variable homography", defined by Zhang and Greenspan, for measuring the height of emergent fibers on glass and non-woven fabrics. This method was defined for working with fabric samples progressing on a conveyor belt, and triggered acquisition of two successive images was needed to perform the 3D measurement. In this work, we have retained the advantages of variable homography for measurements along the Z axis, but we have reduced the number of acquisitions to a single one by developing an acquisition device characterized by 4 lenses placed in front of a single image sensor. The idea is to obtain four projected sub-images on a single CCD sensor. The device becomes a plenoptic or light field camera, capturing multiple views on the same image sensor. We have adapted the variable homography formulation for this device, and we propose a new formulation to calculate depth with plenoptic cameras. With these results, we have transformed our plenoptic camera into a depth camera, and the first results are very promising.
Effects of Airport Tower Controller Decision Support Tool on Controllers Head-Up Time
NASA Technical Reports Server (NTRS)
Hayashi, Miwa; Cruz Lopez, Jose M.
2013-01-01
Although aircraft positions and movements can easily be monitored on radar displays at major airports nowadays, it is still important for air traffic control tower (ATCT) controllers to look outside the window as much as possible to ensure safe traffic management operations. The present paper investigates whether an introduction of NASA's proposed Spot and Runway Departure Advisor (SARDA), a decision support tool for the ATCT controller, would increase or decrease the controllers' head-up time. SARDA provides the controller with departure-release schedule advisories, i.e., when to release each departure aircraft in order to minimize individual aircraft's fuel consumption on taxiways and simultaneously maximize overall runway throughput. The SARDA advisories were presented on electronic flight strips (EFS). To investigate effects on head-up time, a human-in-the-loop simulation experiment with two retired ATCT controller participants was conducted in a high-fidelity ATCT cab simulator with a 360-degree computer-generated out-the-window view. Each controller participant wore a video camera on the side of the head, facing forward. The video data were later used to calculate their line of sight at each moment and eventually identify their head-up times. Four sessions were run with the SARDA advisories, and four sessions were run without (baseline). Traffic-load levels were varied in each session. The same user interface (EFS and the radar displays) was used in both the advisory and baseline sessions to make them directly comparable. The paper reports the findings and discusses their implications.
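Once a per-frame line of sight has been estimated from the worn camera, the head-up metric itself reduces to simple counting. The sketch below is illustrative only: the pitch threshold and the sample trace are invented, not the study's values.

```python
# Classify each video frame as "head-up" (looking out the window) or
# "head-down" (looking at the EFS / radar displays) from its estimated
# gaze pitch, then report head-up time as a percentage of the session.
HEAD_UP_PITCH_DEG = -10.0   # assumed threshold separating window from console

def head_up_fraction(pitch_trace, threshold=HEAD_UP_PITCH_DEG):
    """Percentage of frames whose gaze pitch is above the threshold."""
    up = sum(1 for p in pitch_trace if p > threshold)
    return 100.0 * up / len(pitch_trace)

# Fake per-frame pitch samples (degrees); negative = looking down.
session = [-30, -25, 5, 8, -2, -40, 12, 3, -15, 6]
print(f"head-up time: {head_up_fraction(session):.0f}%")
```

Comparing this percentage between advisory and baseline sessions, at matched traffic loads, is then a straightforward paired analysis.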
Single-pixel camera with one graphene photodetector.
Li, Gongxin; Wang, Wenxue; Wang, Yuechao; Yang, Wenguang; Liu, Lianqing
2016-01-11
Consumer cameras in the megapixel range are ubiquitous, but their improvement is hindered by the poor performance and high cost of traditional photodetectors. Graphene, a two-dimensional micro-/nano-material, has recently exhibited exceptional properties as a sensing element in a photodetector compared with traditional materials. However, it is difficult to fabricate a large-scale array of graphene photodetectors to replace the traditional photodetector array. To take full advantage of the unique characteristics of the graphene photodetector, in this study we integrated a graphene photodetector into a single-pixel camera based on compressive sensing. To begin with, we introduced a method called laser scribing for fabricating the graphene. It produces graphene components in arbitrary patterns more quickly than traditional methods and without photoresist contamination. Next, we proposed a system for calibrating the optoelectrical properties of micro/nano photodetectors based on a digital micromirror device (DMD), which changes the light intensity by controlling the number of individual micromirrors positioned at +12°. The calibration sensitivity is driven by the sum of all micromirrors of the DMD and can be as high as 10^-5 A/W. Finally, the single-pixel camera integrated with one graphene photodetector was used to recover a static image to demonstrate the feasibility of the single-pixel imaging system with the graphene photodetector. A high-resolution image can be recovered with the camera at a sampling rate much less than the Nyquist rate. This study was the first recorded demonstration of a macroscopic camera with a graphene photodetector. The camera has the potential for high-speed and high-resolution imaging at much lower cost than traditional megapixel cameras.
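The sub-Nyquist recovery that a single-pixel camera relies on can be sketched in a few lines. The ±1 mirror patterns, the problem sizes, and the orthogonal-matching-pursuit solver below are illustrative choices under a sparse-scene assumption, not the paper's actual reconstruction method.

```python
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 64, 48, 4   # pixels, measurements (< n, i.e. sub-Nyquist), sparsity

# Sparse "scene": k bright pixels in an otherwise dark 64-pixel image.
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.uniform(1.0, 2.0, size=k)

# DMD measurement model: each row is one random +/-1 micromirror pattern;
# each measurement is the single photodetector's summed response to it.
Phi = rng.choice([-1.0, 1.0], size=(m, n)) / np.sqrt(m)
y = Phi @ x_true

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy k-sparse recovery from y = Phi @ x."""
    support, resid, coef = [], y.copy(), np.zeros(0)
    for _ in range(k):
        # pick the pattern column most correlated with the current residual
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        sub = Phi[:, support]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

x_hat = omp(Phi, y, k)
print("max reconstruction error:", np.max(np.abs(x_hat - x_true)))
```

For a scene this sparse, greedy recovery typically reproduces the image from well under n measurements, which is the point of replacing a megapixel array with one (graphene) detector.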
Volumetric particle image velocimetry with a single plenoptic camera
NASA Astrophysics Data System (ADS)
Fahringer, Timothy W.; Lynch, Kyle P.; Thurow, Brian S.
2015-11-01
A novel three-dimensional (3D), three-component (3C) particle image velocimetry (PIV) technique based on volume illumination and light field imaging with a single plenoptic camera is described. A plenoptic camera uses a densely packed microlens array mounted near a high resolution image sensor to sample the spatial and angular distribution of light collected by the camera. The multiplicative algebraic reconstruction technique (MART) computed tomography algorithm is used to reconstruct a volumetric intensity field from individual snapshots, and a cross-correlation algorithm is used to estimate the velocity field from a pair of reconstructed particle volumes. This work provides an introduction to the basic concepts of light field imaging with a plenoptic camera and describes the unique implementation of MART in the context of plenoptic image data for 3D/3C PIV measurements. Simulations of a plenoptic camera using geometric optics are used to generate synthetic plenoptic particle images, which are subsequently used to estimate the quality of particle volume reconstructions at various particle number densities. 3D reconstructions using this method produce reconstructed particles that are elongated by a factor of approximately 4 along the optical axis of the camera. A simulated 3D Gaussian vortex is used to test the capability of single-camera plenoptic PIV to produce a 3D/3C vector field, where it was found that displacements could be measured to approximately 0.2 voxel accuracy in the lateral direction and 1 voxel in the depth direction over a 300 × 200 × 200 voxel volume. The feasibility of the technique is demonstrated experimentally using a home-built plenoptic camera based on a 16-megapixel interline CCD camera, a 289 × 193 array of microlenses, and a pulsed Nd:YAG laser. 3D/3C measurements were performed in the wake of a low Reynolds number circular cylinder and compared with measurements made using a conventional 2D/2C PIV system. Overall, single-camera plenoptic PIV is shown to be a viable 3D/3C velocimetry technique.
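The MART update at the core of the volume reconstruction has a compact multiplicative form. The toy system below, with an invented ray-voxel weighting matrix far smaller than a real plenoptic model, shows the iteration converging to a nonnegative intensity field consistent with the recorded pixel values.

```python
import numpy as np

# Toy MART: reconstruct a 4-voxel intensity field from 3 weighted ray sums
# (stand-ins for plenoptic sub-aperture pixel values). Weights and data are
# illustrative, not a real plenoptic camera model.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0, 1.0]])    # ray-voxel weighting matrix
x_ref = np.array([0.0, 2.0, 1.0, 0.0])  # a field consistent with the data
y = A @ x_ref                           # recorded pixel intensities

def mart(A, y, n_iter=500, mu=1.0):
    """Sequential MART: x_j <- x_j * (y_i / (A_i . x)) ** (mu * A_ij)."""
    x = np.ones(A.shape[1])             # strictly positive initial guess
    for _ in range(n_iter):
        for i in range(len(y)):
            proj = A[i] @ x
            if y[i] > 0 and proj > 0:
                x *= (y[i] / proj) ** (mu * A[i])
            elif y[i] == 0:
                x[A[i] > 0] = 0.0       # a zero measurement zeroes its ray
    return x

x_hat = mart(A, y)
print(np.round(x_hat, 3))
```

Because the system is underdetermined, MART converges to one nonnegative solution satisfying the ray constraints (the maximum-entropy one), not necessarily `x_ref` itself; in PIV the subsequent cross-correlation step tolerates this ambiguity.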
An astronomy camera for low background applications in the 1.0 to 2.5 μm spectral region
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaki, S.A.; Bailey, G.C.; Hagood, R.W.
1989-02-01
A short-wavelength (1.0-2.5 μm) 128 x 128 focal plane array forms the heart of this low background astronomy camera system. The camera is designed to accept either a 128 x 128 HgCdTe array for the 1-2.5 μm spectral region or an InSb array for the 3-5 μm spectral region. A cryogenic folded optical system is utilized to control excess stray light, along with a cold eight-position filter wheel for spectral filtering. The camera head and electronics will also accept a 256 x 256 focal plane. Engineering evaluation of the complete system has been completed, along with two engineering runs at the JPL Table Mountain Observatory. System design, engineering performance, and sample imagery are presented in this paper.
Finding Intrinsic and Extrinsic Viewing Parameters from a Single Realist Painting
NASA Astrophysics Data System (ADS)
Jordan, Tadeusz; Stork, David G.; Khoo, Wai L.; Zhu, Zhigang
In this paper we studied the geometry of a three-dimensional tableau from a single realist painting - Scott Fraser’s Three way vanitas (2006). The tableau contains a carefully chosen complex arrangement of objects including a moth, an egg, a cup, a strand of string, a glass of water, a bone, and a hand mirror. Each of the three plane mirrors presents a different view of the tableau from a virtual camera behind each mirror and symmetric to the artist’s viewing point. Our new contribution was to incorporate single-view geometric information extracted from the direct image of the wooden mirror frames in order to obtain the camera models of both the real camera and the three virtual cameras. Both the intrinsic and extrinsic parameters are estimated for the direct image and for the images in the three plane mirrors depicted within the painting.
Reconditioning of Cassini Narrow-Angle Camera
2002-07-23
These five images of single stars, taken at different times with the narrow-angle camera on NASA's Cassini spacecraft, show the effects of haze collecting on the camera optics, followed by the successful removal of the haze by warming treatments.
Gao, Taotao; Xiang, Jie; Jin, Runsen; Zhang, Yajie; Wu, Han
2016-01-01
The camera assistant plays a very important role in uniportal video-assisted thoracoscopic surgery (VATS), acting as the eyes of the surgeon and providing the VATS team with a stable and clear operating view. Thus, a good assistant should cooperate with the surgeon and manipulate the camera expertly, to ensure eye-hand coordination. We have performed more than 100 uniportal VATS procedures in the Department of Thoracic Surgery at Ruijin Hospital. Based on our experience, we summarize the method of holding the camera, known as “ipsilateral, high, single-hand, sideways”, which largely improves the comfort and fluency of surgery. PMID:27867573
Technical and instrumental prerequisites for single-port laparoscopic solo surgery: State of art
Kim, Say-June; Lee, Sang Chul
2015-01-01
With the aid of advanced surgical techniques and instruments, single-port laparoscopic surgery (SPLS) can be accomplished with just two surgical members: an operator and a camera assistant. Under these circumstances, the reasonable replacement of a human camera assistant by a mechanical camera holder has resulted in a new surgical procedure termed single-port solo surgery (SPSS). In SPSS, the fixation and coordinated movement of a camera held by mechanical devices provides fixed and stable operative images that are under the control of the operator. SPSS therefore primarily benefits the operator's eye-to-hand coordination. Because SPSS is an intuitive modification of SPLS, the indications for SPSS are the same as those for SPLS. Though SPSS necessitates more actions than surgery with a human assistant, these difficulties seem to be easily overcome by the greater provision of static operative images and the reduced need for lens cleaning and repositioning of the camera. When the operation is expected to be difficult and demanding, the SPSS process can be assisted by the addition of another instrument holder besides the camera holder. PMID:25914453
Biased Brownian motion mechanism for processivity and directionality of single-headed myosin-VI.
Iwaki, Mitsuhiro; Iwane, Atsuko Hikikoshi; Ikebe, Mitsuo; Yanagida, Toshio
2008-01-01
In its conventional form, a myosin functioning as a vesicle transporter is not a 'single molecule' but a coordinated pair of 'two molecules'. This coordination makes it complicated to reveal the underlying mechanism. To overcome this difficulty, we adopted single-headed myosin-VI as a model protein. Myosin-VI is an intracellular vesicle and organelle transporter that moves along actin filaments in a direction opposite to most other known myosin classes. Myosin-VI was expected to form a dimer and move processively along actin filaments with a hand-over-hand mechanism like other myosin organelle transporters. However, wild-type myosin-VI was demonstrated to be monomeric and single-headed, casting doubt on its processivity. Using single-molecule techniques, we show that green fluorescent protein (GFP)-fused single-headed myosin-VI does not move processively. However, when coupled to a 200 nm polystyrene bead (comparable to an intracellular vesicle in size) at a ratio of one head per bead, single-headed myosin-VI moves processively with large (40 nm) steps. Furthermore, we found that a single-headed myosin-VI-bead complex moved more processively in a high-viscosity solution (40-fold higher than water), similar to the cellular environment. Because diffusion of the bead is 60-fold slower than that of myosin-VI heads alone in water, we propose a model in which the bead acts as a diffusional anchor for the myosin-VI, enhancing the head's rebinding following detachment and supporting processive movement of the bead-monomer complex. This investigation will help us understand how molecular motors utilize Brownian motion in cells.
NASA Astrophysics Data System (ADS)
Viegas, Jaime; Mayeh, Mona; Srinivasan, Pradeep; Johnson, Eric G.; Marques, Paulo V. S.; Farahi, Faramarz
2017-02-01
In this work, a silicon oxynitride-on-silica refractometer is presented, based on sub-wavelength coupled arrayed waveguide interference, and capable of low-cost, high-resolution, large-scale deployment. The sensor has an experimental spectral sensitivity as high as 3200 nm/RIU, covering refractive indices ranging from 1 (air) up to 1.43 (oils). The sensor readout can be performed by standard spectrometer techniques or by pattern projection onto a camera, followed by optical pattern recognition. Positive identification of the refractive index of an unknown species is obtained by pattern cross-correlation with a look-up calibration table based algorithm. Given the lower contrast between core and cladding in such devices, higher mode overlap with single-mode fiber is achieved, leading to larger coupling efficiency and more relaxed alignment requirements compared to the silicon photonics platform. Also, the optical transparency of the sensor in the visible range allows operation with light sources and camera detectors in the visible range, at much lower capital cost for a complete sensor system. Furthermore, the choice of refractive indices of core and cladding in the sensor head with integrated readout allows fabrication of the same device in polymers, for mass-production replication of disposable sensors.
Markerless identification of key events in gait cycle using image flow.
Vishnoi, Nalini; Duric, Zoran; Gerber, Naomi Lynn
2012-01-01
Gait analysis has been an interesting area of research for several decades. In this paper, we propose image-flow-based methods to compute the motion and velocities of different body segments automatically, using a single inexpensive video camera. We then identify and extract different events of the gait cycle (double-support, mid-swing, toe-off and heel-strike) from video images. Experiments were conducted in which four walking subjects were captured from the sagittal plane. Automatic segmentation was performed to isolate the moving body from the background. The head excursion and the shank motion were then computed to identify the key frames corresponding to different events in the gait cycle. Our approach does not require calibrated cameras or special markers to capture movement. We have also compared our method with the Optotrak 3D motion capture system and found our results in good agreement with the Optotrak results. The development of our method has potential use in the markerless and unencumbered video capture of human locomotion. Monitoring gait in homes and communities provides a useful application for the aged and the disabled. Our method could potentially be used as an assessment tool to determine gait symmetry or to establish the normal gait pattern of an individual.
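Once segment velocities have been computed from image flow, key gait events can be read off the velocity traces. The sketch below is illustrative only, not the paper's exact algorithm: it marks toe-off and heel-strike as sign changes of the forward (horizontal) shank velocity.

```python
# Given a per-frame horizontal shank velocity (e.g. pixels/frame estimated
# from image flow in the sagittal plane), mark gait events at sign changes:
# the shank starts swinging forward at toe-off and stops at heel-strike.
def gait_events(shank_vx):
    events = []
    for i in range(1, len(shank_vx)):
        prev, cur = shank_vx[i - 1], shank_vx[i]
        if prev <= 0 < cur:
            events.append((i, "toe-off"))      # forward swing begins
        elif prev >= 0 > cur:
            events.append((i, "heel-strike"))  # forward swing ends
    return events

# Fake velocity trace covering roughly one stride
vx = [0, -1, -2, -1, 1, 3, 4, 2, 1, -1, -2, 0, 2]
print(gait_events(vx))
```

A practical implementation would first low-pass filter the velocity trace so that flow noise does not create spurious zero-crossings.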
NASA Astrophysics Data System (ADS)
Liu, Yongfeng; Zhang, You-tong; Gou, Chenhua; Tian, Hongsen
2008-12-01
Temperature laser-induced fluorescence (LIF) 2-D imaging measurements using a new multi-spectral detection strategy are reported for high-pressure flames in a high-speed diesel engine. A schematic of the experimental set-up is outlined and the experimental data on the diesel engine are summarized. The injection system is a third-generation Bosch high-pressure common rail featuring a maximum pressure of 160 MPa. The injector is equipped with a six-hole nozzle, where each hole has a diameter of 0.124 mm and is slightly offset (by 1.0 mm) from the center of the cylinder axis to allow better cooling of the narrow bridge between the exhaust valves. The measurement system includes a blower, which supplies the intake flow rate, and a prototype single-valve direct-injection diesel engine head modified to lay down the swirl-type injector. 14-bit digital CCD cameras are employed to achieve a greater level of accuracy in comparison to previous measurements. The temperature field spatial distributions in the cylinder for different crank angle degrees are obtained in a single direct-injection diesel engine.
2D temperature field measurement in a direct-injection engine using LIF technology
NASA Astrophysics Data System (ADS)
Liu, Yongfeng; Tian, Hongsen; Yang, Jianwei; Sun, Jianmin; Zhu, Aihua
2011-12-01
A new multi-spectral detection strategy for temperature laser-induced fluorescence (LIF) 2-D imaging measurements is reported for high-pressure flames in a high-speed diesel engine. A schematic of the experimental set-up is outlined and the experimental data on the diesel engine are summarized. The injection system is a third-generation Bosch high-pressure common rail featuring a maximum pressure of 160 MPa. The injector is equipped with a six-hole nozzle, where each hole has a diameter of 0.124 mm and is slightly offset from the center of the cylinder axis to allow better cooling of the narrow bridge between the exhaust valves. The measurement system includes a blower, which supplies the intake flow rate, and a prototype single-valve direct-injection diesel engine head modified to lay down the swirl-type injector. 14-bit digital CCD cameras are employed to achieve a greater level of accuracy in comparison to previous measurements. The temperature field spatial distributions in the cylinder for different crank angle degrees are obtained in a single direct-injection diesel engine.
Situational Awareness from a Low-Cost Camera System
NASA Technical Reports Server (NTRS)
Freudinger, Lawrence C.; Ward, David; Lesage, John
2010-01-01
A method gathers scene information from a low-cost camera system. Existing surveillance systems using sufficient cameras for continuous coverage of a large field necessarily generate enormous amounts of raw data. Digitizing and channeling that data to a central computer and processing it in real time is difficult when using low-cost, commercially available components. A newly developed system is located on a combined power and data wire to form a string-of-lights camera system. Each camera is accessible through this network interface using standard TCP/IP networking protocols. The cameras more closely resemble cell-phone cameras than traditional security camera systems. Processing capabilities are built directly onto the camera backplane, which helps maintain a low cost. The low power requirements of each camera allow the creation of a single imaging system comprising over 100 cameras. Each camera has built-in processing capabilities to detect events and cooperatively share this information with neighboring cameras. The location of the event is reported to the host computer in Cartesian coordinates computed from data correlation across multiple cameras. In this way, events in the field of view present low-bandwidth information to the host rather than high-bandwidth bitmap data constantly being generated by the cameras. This approach offers greater flexibility than conventional systems, without compromising performance, by using many small, low-cost cameras with overlapping fields of view. This means significantly increased viewing coverage without ignoring surveillance areas, which can occur when pan, tilt, and zoom cameras look away. Additionally, because a single cable is shared for power and data, installation costs are lower. The technology is targeted toward 3D scene extraction and automatic target tracking for military and commercial applications. Security systems and environmental/vehicular monitoring systems are also potential applications.
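Reporting an event as Cartesian coordinates from correlated detections can be sketched with a two-camera bearing intersection. This is a hedged illustration of the general idea, not the system's actual correlation algorithm; camera positions and azimuths below are invented.

```python
import math

# Two cameras at known positions each report only a bearing (azimuth, in
# radians) to a detected event; intersecting the two bearing rays yields a
# Cartesian position for the low-bandwidth report to the host.
def triangulate(p1, az1, p2, az2):
    (x1, y1), (x2, y2) = p1, p2
    d1 = (math.cos(az1), math.sin(az1))
    d2 = (math.cos(az2), math.sin(az2))
    # Solve p1 + t*d1 = p2 + s*d2 for t (2x2 system via Cramer's rule)
    den = d1[0] * (-d2[1]) - (-d2[0]) * d1[1]
    t = ((x2 - x1) * (-d2[1]) - (-d2[0]) * (y2 - y1)) / den
    return (x1 + t * d1[0], y1 + t * d1[1])

# Cameras at (0,0) and (10,0); the event actually sits at (5,5).
az1 = math.atan2(5, 5)    # bearing from camera 1 (45 degrees)
az2 = math.atan2(5, -5)   # bearing from camera 2 (135 degrees)
print(triangulate((0, 0), az1, (10, 0), az2))
```

With more than two overlapping cameras, the per-pair intersections can be averaged (or solved jointly in least squares) to reduce the effect of bearing noise.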
FPGA Based Adaptive Rate and Manifold Pattern Projection for Structured Light 3D Camera System
Lee, Sukhan
2018-01-01
The quality of the captured point cloud and the scanning speed of a structured light 3D camera system depend upon their capability of handling object surfaces with large reflectance variation, traded off against the required number of patterns to be projected. In this paper, we propose and implement a flexible embedded framework that is capable of triggering the camera single or multiple times for capturing single or multiple projections within a single camera exposure setting. This allows the 3D camera system to synchronize the camera and projector even for mismatched frame rates, such that the system is capable of projecting different types of patterns for different scan speed applications. This enables the system to capture a high-quality 3D point cloud even for surfaces with large reflectance variation, while achieving a high scan speed. The proposed framework is implemented on a Field Programmable Gate Array (FPGA), where the camera trigger is adaptively generated in such a way that the position and the number of triggers are automatically determined according to camera exposure settings. In other words, the projection frequency is adaptive to different scanning applications without altering the architecture. In addition, the proposed framework is unique in that it does not require any external memory for storage, because pattern pixels are generated in real time, which minimizes the complexity and size of the application-specific integrated circuit (ASIC) design and implementation. PMID:29642506
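The adaptive-trigger arithmetic described above can be sketched in software form. The parameterization below (exposure window, per-pattern time, setup delay) is an assumption for illustration, not the paper's FPGA interface.

```python
# Given a camera exposure window and the projector's per-pattern time,
# compute how many pattern triggers fit within one exposure and when each
# should fire (all times in microseconds; values are illustrative).
def plan_triggers(exposure_us, pattern_us, setup_us=0):
    usable = exposure_us - setup_us
    count = max(0, usable // pattern_us)
    return [setup_us + i * pattern_us for i in range(int(count))]

# e.g. a 10 ms exposure with 2.5 ms patterns yields 4 triggers per exposure
print(plan_triggers(10_000, 2_500))
```

On the FPGA this schedule would be realized by counters clocked against the exposure signal, so the trigger count adapts automatically whenever the exposure setting changes.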
Leung, Joseph W; Wang, Dong; Hu, Bing; Lim, Brian
2011-01-01
Background: ERCP mechanical simulator (EMS) and ex-vivo porcine stomach model (PSM) have been described. No direct comparison has been reported on endoscopists' perception of their efficacy for ERCP training. Objective: Comparative assessment of EMS and PSM. Design: Questionnaire survey before and after practice. Setting: Hands-on practice workshops. Subjects: 22 endoscopists with prior experience of 111±225 (mean±SD) ERCPs. Interventions: Participants performed scope insertion, selective bile duct cannulation with a guide wire, and insertion of a single biliary stent. Simulated fluoroscopy with an external pin-hole camera (EMS), or with additional transillumination (PSM), was used to monitor exchange of accessories. Main outcome measures: Participants rated their understanding and confidence before and after hands-on practice, and the credibility of each simulator for ERCP training. Comparative efficacy of EMS and PSM for ERCP education was scored (1=not, 10=very) based on pre- and post-practice surveys: realism (tissue pliability, papilla anatomy, visual/cannulation realism, wire manipulation, simulated fluoroscopy, overall experience); usefulness (assessment of results, supplementing clinical experience, ease for trainees to learn new skills); and application (overall ease of use, preparing trainees to use real instruments, ease of incorporation into training). Results: Before hands-on practice, both EMS and PSM received high scores. After practice, there was a significantly greater increase in confidence score for EMS than PSM (p<0.003). Participants found EMS more useful for training (p=0.017). Limitations: Subjective scores. Conclusions: Based on head-to-head hands-on comparison, endoscopists considered both EMS and PSM credible options for improving understanding and supplementing clinical ERCP training. EMS is more useful for basic learning. PMID:22163080
Real-time face and gesture analysis for human-robot interaction
NASA Astrophysics Data System (ADS)
Wallhoff, Frank; Rehrl, Tobias; Mayer, Christoph; Radig, Bernd
2010-05-01
Human communication relies on a large number of different communication mechanisms like spoken language, facial expressions, or gestures. Facial expressions and gestures are among the main nonverbal communication mechanisms and pass large amounts of information between human dialog partners. Therefore, to allow for intuitive human-machine interaction, real-time capable processing and recognition of facial expressions and of hand and head gestures are of great importance. We present a system that tackles these challenges. The input features for the dynamic head gestures and facial expressions are obtained from a sophisticated three-dimensional model, which is fitted to the user in a real-time capable manner. Applying this model, different kinds of information are extracted from the image data and afterwards handed over to a real-time capable data-transferring framework, the so-called Real-Time DataBase (RTDB). In addition to the head- and facial-related features, low-level image features regarding the human hand (optical flow, Hu moments) are also stored in the RTDB for the evaluation of hand gestures. In general, the input of a single camera is sufficient for the parallel evaluation of the different gestures and facial expressions. The real-time capable recognition of the dynamic hand and head gestures is performed via different Hidden Markov Models, which have proven to be a quick and real-time capable classification method. For the facial expressions, on the other hand, classical decision trees or more sophisticated support vector machines are used for classification. The results of the classification processes are again handed over to the RTDB, where other processes (like a Dialog Management Unit) can easily access them without any blocking effects. In addition, an adjustable amount of history can be stored by the RTDB buffer unit.
3D Medical Collaboration Technology to Enhance Emergency Healthcare
Welch, Greg; Sonnenwald, Diane H; Fuchs, Henry; Cairns, Bruce; Mayer-Patel, Ketan; Söderholm, Hanna M.; Yang, Ruigang; State, Andrei; Towles, Herman; Ilie, Adrian; Ampalam, Manoj; Krishnan, Srinivas; Noel, Vincent; Noland, Michael; Manning, James E.
2009-01-01
Two-dimensional (2D) videoconferencing has been explored widely in the past 15–20 years to support collaboration in healthcare. Two issues that arise in most evaluations of 2D videoconferencing in telemedicine are the difficulty obtaining optimal camera views and poor depth perception. To address these problems, we are exploring the use of a small array of cameras to reconstruct dynamic three-dimensional (3D) views of a remote environment and of events taking place within. The 3D views could be sent across wired or wireless networks to remote healthcare professionals equipped with fixed displays or with mobile devices such as personal digital assistants (PDAs). The remote professionals’ viewpoints could be specified manually or automatically (continuously) via user head or PDA tracking, giving the remote viewers head-slaved or hand-slaved virtual cameras for monoscopic or stereoscopic viewing of the dynamic reconstructions. We call this idea remote 3D medical collaboration. In this article we motivate and explain the vision for 3D medical collaboration technology; we describe the relevant computer vision, computer graphics, display, and networking research; we present a proof-of-concept prototype system; and we present evaluation results supporting the general hypothesis that 3D remote medical collaboration technology could offer benefits over conventional 2D videoconferencing in emergency healthcare. PMID:19521951
A 3-D mixed-reality system for stereoscopic visualization of medical dataset.
Ferrari, Vincenzo; Megali, Giuseppe; Troia, Elena; Pietrabissa, Andrea; Mosca, Franco
2009-11-01
We developed a simple, light, and cheap 3-D visualization device based on mixed reality that physicians can use to view preoperative radiological exams in a natural way. The system allows the user to see stereoscopic "augmented images," which are created by mixing 3-D virtual models of anatomies, obtained by processing preoperative volumetric radiological images (computed tomography or MRI), with live images of the patient grabbed by cameras. The interface of the system consists of a head-mounted display equipped with two high-definition cameras. The cameras are mounted at the positions of the user's eyes and grab live images of the patient from the user's point of view. The system does not use any external tracker to detect movements of the user or the patient. The movements of the user's head and the alignment of the virtual patient with the real one are handled by machine vision methods applied to pairs of live images. Experimental results concerning frame rate and alignment precision between the virtual and real patient demonstrate that the machine vision methods used for localization are appropriate for this application and that systems based on stereoscopic mixed reality are feasible and can be proficiently adopted in clinical practice.
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-04-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO₁O₂V canonical, (2) SO₂O₁V single-scrambled, and (3) O₁O₂SV double-scrambled order. First pass reading times indicated that the third noun phrase just before the verb in both single- and double-scrambled sentences required longer reading times compared to canonical sentences. Re-reading times (the sum of all fixations minus the first pass reading) showed that all noun phrases including the crucial phrase before the verb in double-scrambled sentences required longer re-reading times than those required for single-scrambled sentences; single-scrambled sentences had no difference from canonical ones. Therefore, a single filler-gap dependency can be resolved in pre-head anticipatory processing whereas two filler-gap dependencies require much greater cognitive loading than a single case. These two dependencies can be resolved in post-head processing using verb agreement information.
The role of passive avian head stabilization in flapping flight
Pete, Ashley E.; Kress, Daniel; Dimitrov, Marina A.; Lentink, David
2015-01-01
Birds improve vision by stabilizing head position relative to their surroundings, while their body is forced up and down during flapping flight. Stabilization is facilitated by compensatory motion of the sophisticated avian head–neck system. While relative head motion has been studied in stationary and walking birds, little is known about how birds accomplish head stabilization during flapping flight. To unravel this, we approximate the avian neck with a linear mass–spring–damper system for vertical displacements, analogous to proven head stabilization models for walking humans. We corroborate the model's dimensionless natural frequency and damping ratios from high-speed video recordings of whooper swans (Cygnus cygnus) flying over a lake. The data show that flap-induced body oscillations can be passively attenuated through the neck. We find that the passive model robustly attenuates large body oscillations, even in response to head mass and gust perturbations. Our proof of principle shows that bird-inspired drones with flapping wings could record better images with a swan-inspired passive camera suspension. PMID:26311316
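The passive neck model summarized above lends itself to a quick numerical sketch. The formula below is the standard base-excitation transmissibility of a linear mass-spring-damper; the parameter values are illustrative assumptions, not the fitted swan values from the paper:

```python
import numpy as np

# Base-excitation transmissibility of a linear mass-spring-damper neck:
# |H| = sqrt((1 + (2*zeta*r)^2) / ((1 - r^2)^2 + (2*zeta*r)^2)),
# where r is the ratio of wingbeat frequency to the neck's natural frequency.
zeta = 0.7        # damping ratio (illustrative assumption)
f_flap = 4.0      # wingbeat frequency, Hz (illustrative assumption)
f_n = 1.0         # neck natural frequency, Hz (illustrative assumption)

r = f_flap / f_n
H = np.sqrt((1 + (2 * zeta * r)**2) / ((1 - r**2)**2 + (2 * zeta * r)**2))
print(f"head/body amplitude ratio at {f_flap} Hz: {H:.2f}")
```

With flapping well above the neck's natural frequency the ratio falls well below one, i.e. the head passively rides out most of the flap-induced body oscillation, which is the attenuation effect the abstract describes.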
NASA Astrophysics Data System (ADS)
Maurer, Calvin R., Jr.; Sauer, Frank; Hu, Bo; Bascle, Benedicte; Geiger, Bernhard; Wenzel, Fabian; Recchi, Filippo; Rohlfing, Torsten; Brown, Christopher R.; Bakos, Robert J.; Maciunas, Robert J.; Bani-Hashemi, Ali R.
2001-05-01
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter. The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
3D digital image correlation using a single 3CCD colour camera and dichroic filter
NASA Astrophysics Data System (ADS)
Zhong, F. Q.; Shao, X. X.; Quan, C.
2018-04-01
In recent years, three-dimensional digital image correlation methods using a single colour camera have been reported. In this study, we propose a simplified system by employing a dichroic filter (DF) to replace the beam splitter and colour filters. The DF can be used to combine two views from different perspectives reflected by two planar mirrors and eliminate their interference. A 3CCD colour camera is then used to capture two different views simultaneously via its blue and red channels. Moreover, the measurement accuracy of the proposed method is higher since the effect of refraction is reduced. Experiments are carried out to verify the effectiveness of the proposed method. It is shown that the interference between the blue and red views is insignificant. In addition, the measurement accuracy of the proposed method is validated on the rigid body displacement. The experimental results demonstrate that the measurement accuracy of the proposed method is higher compared with the reported methods using a single colour camera. Finally, the proposed method is employed to measure the in- and out-of-plane displacements of a loaded plastic board. The re-projection errors of the proposed method are smaller than those of the reported methods using a single colour camera.
High Information Capacity Quantum Imaging
2014-09-19
single-pixel camera [41, 75]. An object is imaged onto a Digital Micromirror Device (DMD), a 2D binary array of individually-addressable mirrors that ... reflect light either to a single detector or a dump. Rows of the sensing matrix A consist of random, binary patterns placed sequentially on the DMD ... The single-pixel camera concept naturally adapts to imaging correlations by adding a second detector. Consider placing separate DMDs in the near-field
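The sensing scheme the excerpt describes (sequential random binary DMD patterns, one bucket-detector reading per pattern) can be sketched in a few lines. The scene size, pattern count, and plain least-squares recovery below are illustrative stand-ins for the compressive-sensing solvers the report cites:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16 * 16                       # a 16x16 scene, flattened (size is an assumption)
x = np.zeros(n)
x[40], x[120] = 1.0, 0.5          # two bright points in an otherwise dark scene

# Rows of the sensing matrix A: random binary DMD patterns, each mirror
# sending light either to the single detector (1) or to the dump (0).
m = 400                           # number of sequential patterns (assumption)
A = rng.integers(0, 2, size=(m, n)).astype(float)
y = A @ x                         # one bucket-detector reading per pattern

# Plain least-squares recovery; with m > n and no noise this is exact,
# standing in for the sparsity-exploiting solvers used in practice.
x_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.abs(x_hat - x).max())
```

Real single-pixel cameras take m well below n and rely on sparsity priors for reconstruction; the over-determined case here is just the simplest demonstration of the measurement model.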
Single-Fiber Optical Link For Video And Control
NASA Technical Reports Server (NTRS)
Galloway, F. Houston
1993-01-01
Single optical fiber carries control signals to remote television cameras and video signals from cameras. Fiber replaces multiconductor copper cable, with consequent reduction in size. Repeaters not needed. System works with either multimode or single-mode fiber. Nonmetallic fiber provides immunity to electromagnetic interference at suboptical frequencies and is much less vulnerable to electronic eavesdropping and lightning strikes. Multigigahertz bandwidth more than adequate for high-resolution television signals.
Telishevesky, Yoel S; Levin, Liran; Ashkenazi, Malka
2012-01-01
The purpose of this study was to evaluate the effect of toothbrush design on the ability of parents to effectively brush their children's teeth. Parents of children (mean age=5.1±0.75 years old) from 4 kindergarten schools were randomly assigned to receive instruction in brushing their children's teeth using a manual single-headed toothbrush (2 schools) or a triple-headed toothbrush (2 schools). The parents' ability to brush their children's teeth was evaluated according to a novel toothbrush performing skill index (Ashkenazi Index), based on 2 criteria: (1) placement of the toothbrush on each tooth segment to be brushed ("reach"); and (2) completion of enough strokes on each segment ("stay"). One month after instruction, tooth-brushing ability was re-evaluated and plaque index of the children's teeth was assessed. One month after instruction, parents using the triple-headed toothbrush received significantly higher scores on the tooth-brushing performance index (~86%), than did those in the single-headed group (~61%; P=.001). The plaque index was significantly higher in the single-headed group (0.97±0.38) vs the triple-headed group (0.72±0.29; P<.01). The tooth-brushing performance index correlated negatively with the plaque index (P<.01). A triple-headed toothbrush promotes more consistent tooth-brushing by parents than does a single-headed toothbrush.
Markerless client-server augmented reality system with natural features
NASA Astrophysics Data System (ADS)
Ning, Shuangning; Sang, Xinzhu; Chen, Duo
2017-10-01
A markerless client-server augmented reality system is presented. In this research, the more extensive and mature virtual-reality head-mounted display is adopted to assist the implementation of augmented reality. The head-mounted display provides the viewer an image in front of their eyes. The front-facing camera is used to capture video signals into the workstation, where the generated virtual scene is merged with the outside-world information received from the camera; the integrated video is then sent to the helmet display system. The distinguishing feature and novelty is the realization of augmented reality with natural features instead of a marker, which addresses the limitations of markers: they are restricted to black and white, are inapplicable under some environmental conditions, and in particular cannot work when the marker is partially blocked. Further, 3D stereoscopic perception of the virtual animation model is achieved. A high-speed, stable native socket communication method is adopted for transmission of the key video-stream data, which reduces the computational burden of the system.
Application of Sensor Fusion to Improve Uav Image Classification
NASA Astrophysics Data System (ADS)
Jabari, S.; Fathollahi, F.; Zhang, Y.
2017-08-01
Image classification is one of the most important tasks of remote sensing projects including the ones that are based on using UAV images. Improving the quality of UAV images directly affects the classification results and can save a huge amount of time and effort in this area. In this study, we show that sensor fusion can improve image quality which results in increasing the accuracy of image classification. Here, we tested two sensor fusion configurations by using a Panchromatic (Pan) camera along with either a colour camera or a four-band multi-spectral (MS) camera. We use the Pan camera to benefit from its higher sensitivity and the colour or MS camera to benefit from its spectral properties. The resulting images are then compared to the ones acquired by a high resolution single Bayer-pattern colour camera (here referred to as HRC). We assessed the quality of the output images by performing image classification tests. The outputs prove that the proposed sensor fusion configurations can achieve higher accuracies compared to the images of the single Bayer-pattern colour camera. Therefore, incorporating a Pan camera on-board in the UAV missions and performing image fusion can help achieving higher quality images and accordingly higher accuracy classification results.
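As an illustration of the kind of Pan/MS fusion discussed above, here is a minimal Brovey-transform sketch. The paper does not state which fusion algorithm it used, so this is a generic stand-in, and `brovey_pansharpen` is a hypothetical helper name:

```python
import numpy as np

def brovey_pansharpen(ms, pan, eps=1e-6):
    """Fuse a multispectral stack (bands, H, W), already resampled to the
    pan grid, with a co-registered panchromatic image (H, W) by scaling
    each band by pan / mean(ms). Illustrative only."""
    intensity = ms.mean(axis=0) + eps      # synthetic intensity per pixel
    return ms * (pan / intensity)          # broadcast ratio over all bands

# Toy 4-band scene; registration and resampling are assumed already done.
rng = np.random.default_rng(1)
ms = rng.uniform(0.2, 0.8, size=(4, 8, 8))
pan = ms.mean(axis=0) * rng.uniform(0.9, 1.1, size=(8, 8))  # pan carries detail
fused = brovey_pansharpen(ms, pan)
print(fused.shape)
```

The ratio injects the Pan camera's high-frequency spatial detail into every band while roughly preserving the band-to-band spectral ratios, which is why such fusion can raise classification accuracy relative to a single Bayer-pattern colour camera.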
Next-generation digital camera integration and software development issues
NASA Astrophysics Data System (ADS)
Venkataraman, Shyam; Peters, Ken; Hecht, Richard
1998-04-01
This paper investigates the complexities associated with the development of next-generation digital cameras due to requirements in connectivity and interoperability. Each successive generation of digital camera improves drastically in cost, performance, resolution, image quality, and interoperability features. This is being accomplished by advancements in a number of areas: research, silicon, standards, etc. As the capabilities of these cameras increase, so do the requirements for both hardware and software. Today, there are two single-chip camera solutions on the market, including the Motorola MPC 823 and the LSI DCAM-101. Real-time constraints for a digital camera may be defined by the maximum time allowable between capture of images. Constraints in the design of an embedded digital camera include processor architecture, memory, processing speed, and the real-time operating system. This paper will present the LSI DCAM-101, a single-chip digital camera solution. It will present an overview of the architecture and the challenges in hardware and software for supporting streaming video in such a complex device. Issues presented include the development of the data-flow software architecture, testing, and integration on this complex silicon device. The strategy for optimizing performance on the architecture will also be presented.
Modification of a medical PET scanner for PEPT studies
NASA Astrophysics Data System (ADS)
Sadrmomtaz, Alireza; Parker, D. J.; Byars, L. G.
2007-04-01
Over the last 20 years, positron emission tomography (PET) has developed as the most powerful functional imaging modality in medicine. Over the same period the University of Birmingham Positron Imaging Centre has applied PET to study engineering processes and developed the alternative technique of positron emission particle tracking (PEPT) in which a single radioactively labelled tracer particle is tracked by detecting simultaneously the pairs of back-to-back photons arising from positron/electron annihilation. Originally PEPT was performed using a pair of multiwire detectors, and more recently using a pair of digital gamma camera heads. In 2002 the Positron Imaging Centre acquired a medical PET scanner, an ECAT 931/08, previously used at Hammersmith Hospital. This scanner has been rebuilt in a flexible geometry for use in PEPT studies. This paper presents initial results from this system. Fast moving tracer particles can be rapidly and accurately located.
Snake spectacle vessel permeability to sodium fluorescein.
Bellhorn, Roy W; Strom, Ann R; Motta, Monica J; Doval, John; Hawkins, Michelle G; Paul-Murphy, Joanne
2018-03-01
Assess vascular permeability of the snake spectacle to sodium fluorescein during resting and shedding phases of the ecdysis cycle. Ball python (Python regius). The snake was anesthetized, and spectral domain optical coherence tomography was performed prior to angiographic procedures. An electronically controlled digital single-lens reflex camera with a dual-head flash equipped with filters suitable for fluorescein angiography was used to make images. Sodium fluorescein (10%) solution was administered by intracardiac injection. Angiographic images were made as fluorescein traversed the vasculature of the iris and spectacle. Individually acquired photographic frames were assessed and sequenced into pseudovideo image streams for further evaluation. CONCLUSIONS: Fluorescein angiograms of the snake spectacle were readily obtained. Vascular permeability varied with the phase of ecdysis. Copious leakage of fluorescein occurred during the shedding phase. This angiographic method may provide diverse opportunities to investigate vascular aspects of snake spectacle ecdysis, dysecdysis, and the integument in general. © 2017 American College of Veterinary Ophthalmologists.
Development of a single-photon-counting camera with use of a triple-stacked micro-channel plate.
Yasuda, Naruomi; Suzuki, Hitoshi; Katafuchi, Tetsuro
2016-01-01
At the quantum-mechanical level, all substances (not merely electromagnetic waves such as light and X-rays) exhibit wave–particle duality. Whereas students of radiation science can easily understand the wave nature of electromagnetic waves, the particle (photon) nature may elude them. Therefore, to assist students in understanding the wave–particle duality of electromagnetic waves, we have developed a photon-counting camera that captures single photons in two-dimensional images. As an image intensifier, this camera has a triple-stacked micro-channel plate (MCP) with an amplification factor of 10⁶. The ultra-low light of a single photon entering the camera is first converted to an electron through the photoelectric effect on the photocathode. The electron is intensified by the triple-stacked MCP and then converted to a visible light distribution, which is measured by a high-sensitivity complementary metal oxide semiconductor image sensor. Because it detects individual photons, the photon-counting camera is expected to provide students with a complete understanding of the particle nature of electromagnetic waves. Moreover, it measures ultra-weak light that cannot be detected by ordinary low-sensitivity cameras. Therefore, it is suitable for experimental research on scintillator luminescence, biophoton detection, and similar topics.
Visual field information in Nap-of-the-Earth flight by teleoperated Helmet-Mounted displays
NASA Technical Reports Server (NTRS)
Grunwald, Arthur J.; Kohn, S.; Merhav, S. J.
1991-01-01
The human ability to derive Control-Oriented Visual Field Information from teleoperated Helmet-Mounted Displays in Nap-of-the-Earth flight is investigated. The visual field with these displays originates from a Forward-Looking Infrared camera, gimbal-mounted at the front of the aircraft and slaved to the pilot's line of sight to obtain wide-angle visual coverage. Although these displays have proved effective in Apache and Cobra helicopter night operations, they demand very high pilot proficiency and workload. Experimental work presented in the paper has shown that part of the difficulties encountered in vehicular control by means of these displays can be attributed to the narrow viewing aperture and to head/camera slaving-system phase lags. Both shortcomings impair visuo-vestibular coordination when voluntary head rotation is present. This may result in errors in estimating the Control-Oriented Visual Field Information vital to vehicular control, such as the vehicle yaw rate or the anticipated flight path, or may even lead to visuo-vestibular conflicts (motion sickness). Since, under these conditions, the pilot will tend to minimize head rotation, the full wide-angle coverage of the Helmet-Mounted Display provided by the line-of-sight slaving system is not always fully utilized.
Khanduja, Sumeet; Sampangi, Raju; Hemlatha, B C; Singh, Satvir; Lall, Ashish
2018-01-01
Purpose: The purpose of this study is to describe the use of a commercial digital single-lens reflex (DSLR) camera for vitreoretinal surgery recording and compare it to a standard 3-chip charge-coupled device (CCD) camera. Methods: Simultaneous recording was done using a Sony A7s2 camera and a Sony high-definition 3-chip camera attached to each side of the microscope. The videos recorded from both camera systems were edited and sequences of similar time frames were selected. The three sequences selected for evaluation were (a) anterior segment surgery, (b) surgery under a direct viewing system, and (c) surgery under an indirect wide-angle viewing system. The videos of each sequence were evaluated and rated on a scale of 0-10 for color, contrast, and overall quality. Results: Most results were rated either 8/10 or 9/10 for both cameras. A noninferiority analysis comparing mean scores of the DSLR camera versus the CCD camera was performed and P values were obtained. The mean scores of the two cameras were comparable on all parameters assessed in the different videos except for color and contrast in the posterior pole view and color in the wide-angle view, which were rated significantly higher (better) for the DSLR camera. Conclusion: Commercial DSLRs are an affordable low-cost alternative for vitreoretinal surgery recording and may be used for documentation and teaching. PMID:29283133
Rapid decrease of radar cross section of meteor head echo observed by the MU radar
NASA Astrophysics Data System (ADS)
Nakamura, T.; Nishio, M.; Sato, T.; Tsutsumi, S.; Tsuda, T.; Fushimi, K.
The meteor head echo observation using the MU (Middle and Upper atmosphere) radar (46.5 MHz, 1 MW) at Shigaraki, Japan, was carried out simultaneously with a high-sensitivity ICCD (image-intensified CCD) camera observation in November 2001. The time records were synchronized using GPS satellite signals in order to compare instantaneous radar and optical meteor magnitudes. 26 faint meteors were successfully observed simultaneously by both instruments. Detailed comparison of the time variation of radar echo intensity and absolute optical magnitude showed that the radar scattering cross section is likely to decrease rapidly by 5-20 dB with no corresponding magnitude variation in the optical data. From a simple model, we concluded that such a decrease of RCS (radar cross section) is probably due to the transition from an overdense head echo to an underdense head echo.
Fast Fiber-Coupled Imaging Devices
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brockington, Samuel; Case, Andrew; Witherspoon, Franklin Douglas
HyperV Technologies Corp. has successfully designed, built, and experimentally demonstrated a full-scale 1024-pixel 100 MegaFrames/s fiber-coupled camera with 12- or 14-bit depth and record lengths of 32K frames, exceeding our original performance objectives. This high-pixel-count, fiber-optically-coupled imaging diagnostic can be used for investigating fast, bright plasma events. In Phase 1 of this effort, a 100-pixel fiber-coupled fast streak camera for imaging plasma jet profiles was constructed and successfully demonstrated. The resulting response from outside plasma physics researchers emphasized development of increased pixel performance as a higher priority than increasing pixel count. In this Phase 2 effort, HyperV therefore focused on increasing the sample rate and bit depth of the photodiode pixel designed in Phase 1, while still maintaining a long record length and holding the cost per channel to levels which allowed up to 1024 pixels to be constructed. Cost per channel was $53.31, very close to our original target of $50 per channel. The system consists of an imaging "camera head" coupled to a photodiode bank with an array of optical fibers. The output of these fast photodiodes is then digitized at 100 Megaframes per second and stored in record lengths of 32,768 samples with bit depths of 12 to 14 bits per pixel. Longer record lengths are possible with additional memory. A prototype imaging system with up to 1024 pixels was designed and constructed and used to successfully take movies of very fast-moving plasma jets as a demonstration of the camera's performance capabilities. Some faulty electrical components on the 64 circuit boards resulted in only 1008 functional channels out of 1024 on this first-generation prototype system. We experimentally observed backlit high-speed fan blades in initial camera testing and then followed that with full movies and streak images of free-flowing high-speed plasma jets (at 30-50 km/s). Jet structure and jet collisions onto metal pillars in the path of the plasma jets were recorded in a single shot. This new fast imaging system is an attractive alternative to conventional fast framing cameras for applications and experiments where imaging events using existing techniques is inefficient or impossible. The development of HyperV's new diagnostic was split into two tracks: a next-generation camera track, in which HyperV built, tested, and demonstrated a prototype 1024-channel camera at its own facility, and a second plasma-community beta test track, where selected plasma physics programs received small systems of a few test pixels to evaluate the expected performance of a full-scale camera on their experiments. These evaluations were performed as part of an unfunded collaboration with researchers at Los Alamos National Laboratory and the University of California at Davis. Results from the prototype 1024-pixel camera are discussed, as well as results from the collaborations with test pixel system deployment sites.
Astronaut Bonnie Dunbar preparing to perform bio-medical test
1985-10-30
61A-18-001A (30 Oct-6 Nov 1985) --- Her head equipped with a sensor device, astronaut Bonnie J. Dunbar, 61-A mission specialist, talks to earthbound investigators while participating in a bio-medical test. A 35mm camera was used to expose the frame.
NASA Technical Reports Server (NTRS)
1991-01-01
When Michael Henry wanted to start an aerial video service, he turned to Johnson Space Center for assistance. Two NASA engineers - one had designed and developed TV systems in the Apollo, Skylab, Apollo-Soyuz and Space Shuttle programs - designed a wing-mounted fiberglass camera pod. Camera head and angles are adjustable, and the pod is shaped to reduce vibration. The controls are located so a solo pilot can operate the system. A microprocessor displays latitude, longitude, and bearing, and a GPS receiver provides position data for possible legal references. The service has been successfully utilized by railroads, oil companies, real estate companies, etc.
Touch And Go Camera System (TAGCAMS) for the OSIRIS-REx Asteroid Sample Return Mission
NASA Astrophysics Data System (ADS)
Bos, B. J.; Ravine, M. A.; Caplinger, M.; Schaffner, J. A.; Ladewig, J. V.; Olds, R. D.; Norman, C. D.; Huish, D.; Hughes, M.; Anderson, S. K.; Lorenz, D. A.; May, A.; Jackman, C. D.; Nelson, D.; Moreau, M.; Kubitschek, D.; Getzandanner, K.; Gordon, K. E.; Eberhardt, A.; Lauretta, D. S.
2018-02-01
NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch And Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample, and document asteroid sample stowage. The cameras were designed and constructed by Malin Space Science Systems (MSSS) based on requirements developed by Lockheed Martin and NASA. All three of the cameras are mounted to the spacecraft nadir deck and provide images in the visible part of the spectrum, 400-700 nm. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. Their boresights are aligned in the nadir direction with small angular offsets for operational convenience. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Its boresight is pointed at the OSIRIS-REx sample return capsule located on the spacecraft deck. All three cameras have at their heart a 2592 × 1944 pixel complementary metal oxide semiconductor (CMOS) detector array that provides up to 12-bit pixel depth. All cameras also share the same lens design and a camera field of view of roughly 44° × 32° with a pixel scale of 0.28 mrad/pixel. The StowCam lens is focused to image features on the spacecraft deck, while both NavCam lens focus positions are optimized for imaging at infinity. A brief description of the TAGCAMS instrument and how it is used to support critical OSIRIS-REx operations is provided.
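The quoted TAGCAMS numbers are easy to cross-check: at 0.28 mrad/pixel, a 2592 × 1944 array spans close to the stated 44° × 32° field. This is a small-angle sketch; the residual difference is plausibly lens distortion and rounding in the published figures:

```python
import math

# Field of view implied by detector size times per-pixel angular scale.
scale_rad = 0.28e-3                    # 0.28 mrad/pixel
fov_h = math.degrees(2592 * scale_rad) # horizontal span, degrees
fov_v = math.degrees(1944 * scale_rad) # vertical span, degrees
print(f"{fov_h:.1f} x {fov_v:.1f} degrees")
```

The computation gives roughly 41.6° × 31.2°, consistent with the "roughly 44° × 32°" stated in the abstract.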
Pham, Quang Duc; Hayasaki, Yoshio
2015-01-01
We demonstrate an optical frequency comb profilometer with a single-pixel camera to measure the position and profile of an object's surface over depths far beyond the light wavelength, without 2π phase ambiguity. The present configuration of the single-pixel camera can perform profilometry with an axial resolution of 3.4 μm at 1 GHz operation, corresponding to a wavelength of 30 cm. The axial dynamic range was therefore increased to 0.87×10⁵. It was found from experiments and computer simulations that the improvement derived from the higher modulation contrast of digital micromirror devices. The frame rate was also increased, to 20 Hz.
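The quoted dynamic range follows directly from the numbers in the abstract: 1 GHz modulation corresponds to a 30 cm wavelength, and dividing that unambiguous range by the 3.4 μm axial resolution reproduces the ~0.87×10⁵ figure:

```python
# Axial dynamic range = unambiguous range / axial resolution.
c = 2.998e8                    # speed of light, m/s
wavelength = c / 1e9           # 1 GHz modulation -> ~0.30 m wavelength
dynamic_range = wavelength / 3.4e-6
print(f"{dynamic_range:.2e}")
```

The result is about 8.8×10⁴, matching the abstract's value to within rounding of the resolution figure.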
A robust approach for a filter-based monocular simultaneous localization and mapping (SLAM) system.
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-07-03
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants: a single camera, moving freely through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used more frequently because they provide a lot of information and are well adapted for embedded systems: they are light, cheap, and power-saving. Nevertheless, unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features, so depth information (range) cannot be obtained in a single step. Special feature-initialization techniques are therefore needed to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filter-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes.
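To see why a projective (bearing-only) sensor needs more than one observation before a feature's depth is defined, which is the motivation for two-step initialization schemes like the one above, consider a minimal 2-D triangulation. This is an illustration of the geometry only, not the paper's actual filtering scheme:

```python
import numpy as np

def triangulate(p1, d1, p2, d2):
    """Locate a landmark from two bearing-only observations: camera
    centres p1, p2 and unit direction vectors d1, d2 (2-D for brevity).
    Solves p1 + t1*d1 = p2 + t2*d2 in the least-squares sense."""
    A = np.stack([d1, -d2], axis=1)
    t, *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    return p1 + t[0] * d1

p1 = np.array([0.0, 0.0]); p2 = np.array([1.0, 0.0])   # camera moves 1 m
landmark = np.array([2.0, 3.0])                        # ground truth
d1 = (landmark - p1) / np.linalg.norm(landmark - p1)   # bearing from pose 1
d2 = (landmark - p2) / np.linalg.norm(landmark - p2)   # bearing from pose 2
print(triangulate(p1, d1, p2, d2))
```

A single bearing constrains the landmark only to a ray; the second bearing from a displaced pose pins down the depth, which is why monocular SLAM must delay or specially parameterize the range estimate of newly observed features.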
Variable-Interval Sequenced-Action Camera (VINSAC). Dissemination Document No. 1.
ERIC Educational Resources Information Center
Ward, Ted
The 16 millimeter (mm) Variable-Interval Sequenced-Action Camera (VINSAC) is designed for inexpensive photographic recording of effective teacher instruction and use of instructional materials for teacher education and research purposes. The camera photographs single frames at preselected time intervals (0.5 second to 20 seconds) which are…
NASA Astrophysics Data System (ADS)
Li-Chee-Ming, J.; Armenakis, C.
2014-11-01
This paper presents the ongoing development of a small unmanned aerial mapping system (sUAMS) that in the future will track its trajectory and perform 3D mapping in near-real time. As both mapping and tracking algorithms require powerful computational capabilities and large data storage facilities, we propose to use the RoboEarth Cloud Engine (RCE) to offload heavy computation and store data in secure computing environments in the cloud. While the RCE's capabilities have been demonstrated with terrestrial robots in indoor environments, this paper explores the feasibility of using the RCE for mapping and tracking applications of small UAMS in outdoor environments. The experiments presented in this work assess the data processing strategies and evaluate the attainable tracking and mapping accuracies using the data obtained by the sUAMS. Testing was performed with an Aeryon Scout quadcopter, which flew over York University at up to approximately 40 metres above the ground. The quadcopter was equipped with a single-frequency GPS receiver providing positioning with about 3-metre accuracy, an AHRS (Attitude and Heading Reference System) estimating the attitude to about 3 degrees, and an FPV (First Person Viewing) camera. Video images captured from the onboard camera were processed using VisualSFM and SURE, which are being restructured as an Application-as-a-Service via the RCE. The 3D virtual building model of York University was used as a known environment to georeference the point cloud generated from the sUAMS' sensor data. The estimated position and orientation parameters of the video camera show increased accuracy when compared to the sUAMS' autopilot solution derived from the onboard GPS and AHRS. The paper presents the proposed approach and the results, along with their accuracies.
1995-10-30
STS073-E-5313 (3 Nov. 1995) --- Typhoon Angela packed winds of 115 knots when this shot was taken with an Electronic Still Camera (ESC) from the Earth-orbiting Space Shuttle Columbia. The storm subsequently strengthened to 155 knots, making it a super typhoon, heading due west toward Luzon in the Philippines.
View of the shuttle Discovery STS 51-D launch
1985-04-12
51D-9092 (12 April 1985) --- The Space Shuttle Discovery ascends from its Florida launch complex and heads through Atlantic skies toward its 51-D mission. The seven-member crew lifted off at 8:59 a.m. (EST), April 12, 1985. This picture was made with a 35mm camera.
STS-57 external tank (ET) falls away from Endeavour, OV-105, after jettison
1993-06-21
STS057-03-017 (21 June 1993) --- The external fuel tank falls toward Earth after being jettisoned from the Space Shuttle Endeavour as the spacecraft headed toward its ten-day stay in Earth orbit. A 35mm camera was used to record the ET jettison.
Marker-less multi-frame motion tracking and compensation in PET-brain imaging
NASA Astrophysics Data System (ADS)
Lindsay, C.; Mukherjee, J. M.; Johnson, K.; Olivier, P.; Song, X.; Shao, L.; King, M. A.
2015-03-01
In PET brain imaging, patient motion can contribute significantly to the degradation of image quality, potentially leading to diagnostic and therapeutic problems. To mitigate the image artifacts resulting from patient motion, motion must be detected and tracked, then provided to a motion correction algorithm. Existing techniques to track patient motion fall into one of two categories: 1) image-derived approaches and 2) external motion tracking (EMT). Typical EMT requires patients to wear markers in a known pattern on a rigid tool attached to their head, which are then tracked by expensive and bulky motion tracking camera systems or stereo cameras. This has made marker-based EMT unattractive for routine clinical application. Our main contribution is the development of a marker-less motion tracking system that uses low-cost, small depth-sensing cameras which can be installed in the bore of the imaging system. Our motion tracking system does not require anything to be attached to the patient and can track the rigid transformation (6 degrees of freedom) of the patient's head at a rate of 60 Hz. We show that our method can not only be used with Multi-frame Acquisition (MAF) PET motion correction, but that precise timing can be employed to determine only the frames needed for correction. This can speed up reconstruction by eliminating the unnecessary subdivision of frames.
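The abstract does not spell out the pose-estimation algorithm, but a standard way to recover a 6-DoF rigid transform (rotation plus translation) from matched 3D surface points, such as those a depth-sensing camera provides, is the SVD-based Kabsch method. The sketch below is a generic illustration of that technique, not the authors' implementation:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares R, t such that dst ~= src @ R.T + t (Kabsch algorithm)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```

Applied per depth frame against a reference head surface, such a fit yields the 6-DoF pose stream a multi-frame correction scheme could consume.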
DeCicco, Anthony E; Sokil, Alexis B; Marhefka, Gregary D; Reist, Kirk; Hansen, Christopher L
2015-04-01
Obesity is not only associated with an increased risk of coronary artery disease, but also decreases the accuracy of many diagnostic modalities pertinent to this disease. Advances in myocardial perfusion imaging (MPI) have somewhat mitigated the effects of obesity, although the feasibility of MPI in the super-obese (defined as a BMI > 50) is currently untested. We undertook this study to assess the practicality of MPI in the super-obese using a multi-headed solid-state gamma camera with attenuation correction. We retrospectively identified consecutive super-obese patients referred for MPI at our institution. The images were interpreted by 3 blinded, experienced readers and graded for quality and diagnosis, and the contribution of attenuation correction was subjectively evaluated. Clinical follow-up was obtained from review of medical records. 72 consecutive super-obese patients were included. Their BMI ranged from 50 to 67 (55.7 ± 5.1). Stress image quality was considered good or excellent in 45 (63%), satisfactory in 24 (33%), poor in 3 (4%), and uninterpretable in 0 patients. Rest images were considered good or excellent in 34 (49%), satisfactory in 23 (33%), poor in 13 (19%), and uninterpretable in 0 patients. Attenuation correction changed the interpretation in 34 (47%) of studies. MPI is feasible and provides acceptable image quality for super-obese patients, although results may be camera and protocol dependent.
Development of biostereometric experiments. [stereometric camera system
NASA Technical Reports Server (NTRS)
Herron, R. E.
1978-01-01
The stereometric camera was designed for close-range techniques in biostereometrics. The camera focusing distance of 360 mm to infinity covers a broad field of close-range photogrammetry. The design provides for a separate unit for the lens system and interchangeable backs on the camera for the use of single frame film exposure, roll-type film cassettes, or glass plates. The system incorporates the use of a surface contrast optical projector.
Head stabilization in whooping cranes
Kinloch, M.R.; Cronin, T.W.; Olsen, Glenn H.; Chavez-Ramirez, Felipe
2005-01-01
The whooping crane (Grus americana) is the tallest bird in North America, yet not much is known about its visual ecology. How these birds overcome their unusual height to identify, locate, track, and capture prey items is not well understood. There have been many studies on head and eye stabilization in large wading birds (herons and egrets), but the pattern of head movement and stabilization during foraging is unclear. Patterns of head movement and stabilization during walking were examined in whooping cranes at Patuxent Wildlife Research Center, Laurel, Maryland USA. Four whooping cranes (1 male and 3 females) were videotaped for this study. All birds were already acclimated to the presence of people and to food rewards. Whooping cranes were videotaped using both digital and Hi-8 Sony video cameras (Sony Corporation, 7-35 Kitashinagawa, 6-Chome, Shinagawa-ku, Tokyo, Japan), placed on a tripod and set at bird height in the cranes' home pens. The cranes were videotaped repeatedly, at different locations in the pens and while walking (or running) at different speeds. Rewards (meal worms, smelt, crickets and corn) were used to entice the cranes to walk across the camera's view plane. The resulting videotape was analyzed at the University of Maryland at Baltimore County. Briefly, we used a computerized reduced graphic model of a crane superimposed over each frame of analyzed tape segments by means of a custom written program (T. W. Cronin, using C++) with the ability to combine video and computer graphic input. The speed of the birds in analyzed segments ranged from 0.30 m/s to 2.64 m/s, and the proportion of time the head was stabilized ranged from 79% to 0%, respectively. The speed at which the proportion reached 0% was 1.83 m/s. The analyses suggest that the proportion of time the head is stable decreases as speed of the bird increases. In all cases, birds were able to reach their target prey with little difficulty. 
Thus, when cranes walk while searching for food, they move at a speed that permits them to keep their heads still and their visual field immobile at least half the time.
Compensation for positioning error of industrial robot for flexible vision measuring system
NASA Astrophysics Data System (ADS)
Guo, Lei; Liang, Yajun; Song, Jincheng; Sun, Zengyu; Zhu, Jigui
2013-01-01
Positioning error of the robot is a main factor limiting the accuracy of a flexible coordinate measuring system consisting of a universal industrial robot and a visual sensor. Existing compensation methods based on the kinematic model of the robot have a significant limitation: they are not effective throughout the whole measuring space. A new compensation method for the positioning error of the robot, based on vision measuring techniques, is presented. One approach is to set global control points in the measured field and attach an orientation camera to the vision sensor. The global control points are then measured by the orientation camera to calculate the transformation from the current position of the sensor system to the global coordinate system, and the positioning error of the robot is compensated. Another approach is to set control points on the vision sensor and two large-field cameras behind the sensor. The three-dimensional coordinates of the control points are then measured, and the pose and position of the sensor are calculated in real time. Experimental results show an RMS spatial positioning error of 3.422 mm with a single camera and 0.031 mm with dual cameras. It is concluded that the algorithm of the single-camera method needs improvement for higher accuracy, while the accuracy of the dual-camera method is adequate for application.
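Once the orientation camera yields the sensor's pose in the global frame, each sensor-frame measurement is mapped into global coordinates with a homogeneous transform. A minimal sketch of that mapping; the rotation, translation, and test point are made-up numbers, not values from the paper:

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T

# Illustrative pose of the vision sensor in the global frame:
# a 90-degree rotation about z, plus a translation.
R = np.array([[0.0, -1.0, 0.0],
              [1.0,  0.0, 0.0],
              [0.0,  0.0, 1.0]])
T_global_sensor = to_homogeneous(R, t=np.array([1.0, 2.0, 3.0]))

p_sensor = np.array([0.1, 0.0, 0.5, 1.0])   # measured point (homogeneous)
p_global = T_global_sensor @ p_sensor        # compensated global coordinates
```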
Image dynamic range test and evaluation of Gaofen-2 dual cameras
NASA Astrophysics Data System (ADS)
Zhang, Zhenhua; Gan, Fuping; Wei, Dandan
2015-12-01
In order to fully understand the dynamic range of Gaofen-2 satellite data and to support data processing, applications, and the development of subsequent satellites, we evaluated the dynamic range by calculating statistics such as the maximum, minimum, average and standard deviation of four images obtained at the same time by the Gaofen-2 dual cameras over the Beijing area. These four statistics were then calculated for each longitudinal overlap of PMS1 and PMS2 to evaluate the dynamic-range consistency within each camera, and for each latitudinal overlap of PMS1 and PMS2 to evaluate the dynamic-range consistency between the two cameras. The results suggest that the images obtained by PMS1 and PMS2 have a wide dynamic range of DN values and contain rich information on ground objects. In general, the dynamic ranges are in close agreement, with only small differences, both within a single camera and between the dual cameras; the consistency within a single camera is better than that between the dual cameras.
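The four statistics used in the evaluation are straightforward to compute per image or per overlap region. A minimal sketch; the test array is synthetic DN data, not Gaofen-2 imagery:

```python
import numpy as np

def band_stats(img):
    """Dynamic-range statistics (DN values) of one image band."""
    return {"min": int(img.min()), "max": int(img.max()),
            "mean": float(img.mean()), "std": float(img.std())}

band = np.array([[0, 100], [200, 1023]], dtype=np.uint16)  # synthetic 10-bit DNs
stats = band_stats(band)
```

Comparing these dictionaries across overlap regions gives the per-camera and cross-camera consistency checks the abstract describes.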
Functional MRI of the Olfactory System in Conscious Dogs
Jia, Hao; Pustovyy, Oleg M.; Waggoner, Paul; Beyers, Ronald J.; Schumacher, John; Wildey, Chester; Barrett, Jay; Morrison, Edward; Salibi, Nouha; Denney, Thomas S.; Vodyanoy, Vitaly J.; Deshpande, Gopikrishna
2014-01-01
We depend upon the olfactory abilities of dogs for critical tasks such as detecting bombs, landmines, other hazardous chemicals and illicit substances. Hence, a mechanistic understanding of the olfactory system in dogs is of great scientific interest. Previous studies explored this aspect at the cellular and behavioral levels; however, the cognitive-level neural substrates linking them have never been explored. This is critical given that behavior is driven by filtered sensory representations in higher-order cognitive areas rather than the raw odor maps of the olfactory bulb. Since sedated dogs cannot sniff, we investigated this using functional magnetic resonance imaging of conscious dogs. We addressed the technical challenges of head motion using a two-pronged strategy: behavioral training to keep the dogs' heads as still as possible, and a single-camera optical head motion tracking system to account for residual jerky movements. We built a custom computer-controlled odorant delivery system which was synchronized with image acquisition, allowing the investigation of brain regions activated by odors. The olfactory bulb and piriform lobes were commonly activated in both awake and anesthetized dogs, while the frontal cortex was activated mainly in conscious dogs. Comparison of responses to low and high odor intensity showed differences in either the strength or spatial extent of activation in the olfactory bulb, piriform lobes, cerebellum, and frontal cortex. Our results demonstrate the viability of the proposed method for functional imaging of the olfactory system in conscious dogs. This could potentially open up a new field of research in detector dog technology. PMID:24466054
'Illinois' and 'New York' Wiped Clean
NASA Technical Reports Server (NTRS)
2004-01-01
This panoramic camera image was taken by NASA's Mars Exploration Rover Spirit on sol 79 after completing a two-location brushing on the rock dubbed 'Mazatzal.' A coating of fine, dust-like material was successfully removed from targets named 'Illinois' (right) and 'New York' (left), revealing the weathered rock underneath. In this image, Spirit's panoramic camera mast assembly, or camera head, can be seen shadowing Mazatzal's surface. This approximate true color image was taken with the 601, 535 and 482 nanometer filters.
The centers of the two brushed spots are approximately 10 centimeters (3.9 inches) apart and will be aggressively analyzed by the instruments on the robotic arm on sol 80. Plans for sol 81 are to grind into the New York target to get past any weathered rock and expose the original, internal rock underneath.
Optical aberration correction for simple lenses via sparse representation
NASA Astrophysics Data System (ADS)
Cui, Jinlin; Huang, Wei
2018-04-01
Simple lenses with spherical surfaces are lightweight, inexpensive, highly flexible, and easily processed. However, they suffer from optical aberrations that limit high-quality photography. In this study, we propose a set of computational photography techniques based on sparse signal representation to remove optical aberrations, thereby allowing the recovery of images captured through a single-lens camera. The primary advantage of the proposed method is that many point spread functions, calibrated in advance at different depths, can be used to restore images in a short time; this approach can be applied generally to non-blind deconvolution methods to address the excessive processing time caused by the large number of point spread functions. The optical design software CODE V is applied to examine the reliability of the proposed method by simulation. The simulation results reveal that the suggested method outperforms traditional methods, and the performance of a single-lens camera is significantly enhanced both qualitatively and perceptually. In particular, the prior information obtained by CODE V can be used for processing real images from a single-lens camera, which provides an alternative approach to conveniently and accurately obtain the point spread functions of single-lens cameras.
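The paper's sparse-representation restoration is not reproduced here, but the underlying task (non-blind deconvolution with a known, pre-calibrated PSF) can be illustrated with a classical Wiener filter; the kernel and regularization constant below are assumptions for the sketch, not the authors' values:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, k=1e-6):
    """Non-blind deconvolution with a known PSF via a Wiener filter."""
    H = np.fft.fft2(psf, s=blurred.shape)
    G = np.fft.fft2(blurred)
    F = np.conj(H) / (np.abs(H) ** 2 + k) * G    # regularized inverse filter
    return np.real(np.fft.ifft2(F))
```

With a well-conditioned PSF and no noise the restoration is nearly exact; in practice k is raised to match the noise level, and the paper's approach replaces this simple regularizer with a learned sparse prior.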
Calibration of asynchronous smart phone cameras from moving objects
NASA Astrophysics Data System (ADS)
Hagen, Oksana; Istenič, Klemen; Bharti, Vibhav; Dhali, Maruf Ahmed; Barmaimon, Daniel; Houssineau, Jérémie; Clark, Daniel
2015-04-01
Calibrating multiple cameras is a fundamental prerequisite for many computer vision applications. Typically this involves using a pair of identical synchronized industrial or high-end consumer cameras. This paper considers an application using a pair of low-cost portable cameras with different parameters, as found in smart phones, and addresses the issues of acquisition, detection of moving objects, dynamic camera registration, and tracking of an arbitrary number of targets. The acquisition of data is performed using two standard smart phone cameras and later processed using detections of moving objects in the scene. The registration of the cameras onto the same world reference frame is performed using a recently developed method for camera calibration based on a disparity space parameterisation and the single-cluster PHD filter.
NASA Astrophysics Data System (ADS)
Cajgfinger, Thomas; Chabanat, Eric; Dominjon, Agnes; Doan, Quang T.; Guerin, Cyrille; Houles, Julien; Barbier, Remi
2011-03-01
Nano-biophotonics applications will benefit from new fluorescent microscopy methods based essentially on super-resolution techniques (beyond the diffraction limit) applied to large biological structures (membranes) at fast frame rates (1000 Hz). This trend pushes photon detectors toward the single-photon counting regime and camera acquisition systems toward real-time dynamic multiple-target tracing. The LUSIPHER prototype presented in this paper takes a different approach from electron-multiplied CCD (EMCCD) technology and attempts to answer the stringent demands of the new nano-biophotonics imaging techniques. The electron-bombarded CMOS (ebCMOS) device has the potential to respond to this challenge, thanks to the linear gain of the accelerating high voltage of the photocathode, the possible ultra-fast frame rate of CMOS sensors, and single-photon sensitivity. We produced a camera system based on a 640 kPixel ebCMOS with its acquisition system. The proof of concept of single-photon-based tracking of multiple single emitters is the main result of this paper.
Beam measurements using visible synchrotron light at NSLS2 storage ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cheng, Weixing, E-mail: chengwx@bnl.gov; Bacha, Bel; Singh, Om
2016-07-27
Visible Synchrotron Light Monitor (SLM) diagnostic beamline has been designed and constructed at the NSLS2 storage ring to characterize the electron beam profile under various machine conditions. Thanks to excellent alignment, the SLM beamline was able to see the first visible light when the beam circulated the ring for the first turn. The beamline has been commissioned over the past year. Besides a normal CCD camera to monitor the beam profile, a streak camera and a gated camera are used to measure the longitudinal and transverse profiles to understand the beam dynamics. Measurement results from these cameras are presented in this paper. A time-correlated single photon counting (TCSPC) system has also been set up to measure the single bunch purity.
NASA Astrophysics Data System (ADS)
Li, Zhengyan; Zgadzaj, Rafal; Wang, Xiaoming; Reed, Stephen; Dong, Peng; Downer, Michael C.
2010-11-01
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index "bubble" in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the "bubble". Frequency Domain Tomography (FDT) generalizes the FDSC by probing the "bubble" from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (temporal multiplexing and angular multiplexing) improve data storage and processing capability, enabling a compact FDT system with a single spectrometer.
Design framework for a spectral mask for a plenoptic camera
NASA Astrophysics Data System (ADS)
Berkner, Kathrin; Shroff, Sapna A.
2012-01-01
Plenoptic cameras are designed to capture different combinations of light rays from a scene, sampling its lightfield. Such camera designs, which capture directional ray information, enable applications such as digital refocusing, rotation, or depth estimation. Few, however, address capturing spectral information of the scene. It has been demonstrated that modifying a plenoptic camera with an array of different spectral filters inserted in the pupil plane of the main lens samples the spectral dimension of the plenoptic function. As a result, the plenoptic camera is turned into a single-snapshot multispectral imaging system that trades off spatial against spectral information captured with a single sensor. Little work has been performed so far on analyzing the effects of diffraction and aberrations of the optical system on the performance of the spectral imager. In this paper we demonstrate simulation of a spectrally-coded plenoptic camera optical system via wave propagation analysis, evaluate the quality of the spectral measurements captured at the detector plane, and demonstrate opportunities for optimization of the spectral mask for a few sample applications.
Liao, Jun; Wang, Zhe; Zhang, Zibang; Bian, Zichao; Guo, Kaikai; Nambiar, Aparna; Jiang, Yutong; Jiang, Shaowei; Zhong, Jingang; Choma, Michael; Zheng, Guoan
2018-02-01
We report the development of a multichannel microscopy platform for whole-slide multiplane, multispectral and phase imaging. We use trinocular heads to split the beam path into 6 independent channels and employ a camera array for parallel data acquisition, achieving a maximum data throughput of approximately 1 gigapixel per second. To perform single-frame rapid autofocusing, we place 2 near-infrared light-emitting diodes (LEDs) at the back focal plane of the condenser lens to illuminate the sample from 2 different incident angles. A hot mirror is used to direct the near-infrared light to an autofocusing camera. For multiplane whole-slide imaging (WSI), we acquire 6 different focal planes of a thick specimen simultaneously. For multispectral WSI, we relay the 6 independent image planes to the same focal position and simultaneously acquire information at 6 spectral bands. For whole-slide phase imaging, we acquire images at 3 focal positions simultaneously and use the transport-of-intensity equation to recover the phase information. We also provide an open-source design to further increase the number of channels from 6 to 15. The reported platform provides a simple solution for multiplexed fluorescence imaging and multimodal WSI. Acquiring an instant focal stack without z-scanning may also enable fast 3-dimensional dynamic tracking of various biological samples. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
NASA Astrophysics Data System (ADS)
Buschinelli, Pedro D. V.; Melo, João. Ricardo C.; Albertazzi, Armando; Santos, João. M. C.; Camerini, Claudio S.
2013-04-01
An axis-symmetrical optical laser triangulation system was developed by the authors to measure the inner geometry of long pipes used in the oil industry. It has a special optical configuration able to acquire shape information of the inner geometry of a pipe section from a single image frame. A collimated laser beam is pointed at the tip of a 45° conical mirror. The laser light is reflected in such a way that a radial light sheet is formed, intercepting the inner surface and forming a bright laser line on a section of the inspected pipe. A camera acquires the image of the laser line through a wide-angle lens. An odometer-based triggering system is used to trigger the camera to acquire a set of equally spaced images at high speed while the device is moved along the pipe's axis. Image processing is done in real time (between image acquisitions) thanks to the use of parallel computing technology. The measured geometry is analyzed to identify corrosion damage, and the geometry and results are graphically presented using virtual reality techniques and devices such as 3D glasses and head-mounted displays. The paper describes the measurement principles, calibration strategies, and laboratory evaluation of the developed device, as well as a practical example of a corroded pipe used in an industrial gas production plant.
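Under a simple pinhole model, with the radial laser sheet perpendicular to the camera axis, the laser ring's pixel radius maps to a physical radius by similar triangles. The function and numbers below are an illustrative sketch of that geometry, not the device's calibration:

```python
def pipe_radius(r_px, focal_px, plane_dist_m):
    """Pinhole similar-triangles mapping from the laser ring's pixel radius
    to a physical radius, assuming the laser sheet sits perpendicular to the
    optical axis at distance plane_dist_m, with focal_px the focal length in
    pixels. A real system would also calibrate for lens distortion."""
    return r_px * plane_dist_m / focal_px

r = pipe_radius(r_px=400, focal_px=800, plane_dist_m=0.30)  # 0.15 m
```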
Laser head for simultaneous optical pumping of several dye lasers. [with single flash lamp
NASA Technical Reports Server (NTRS)
Mumola, P. B.; Mcalexander, B. T. (Inventor)
1975-01-01
The invention is a laser head for simultaneously pumping several dye lasers with a single flash lamp. The laser head consists primarily of a multi-elliptical cylindrical cavity, with a single flash lamp placed along the common focal axis of the cavity and capillary-tube dye cells placed along each of the other focal axes. The inside surface of the cavity is polished, so the single flash lamp supplies the pumping energy to all of the dye cells.
Towards Kilo-Hertz 6-DoF Visual Tracking Using an Egocentric Cluster of Rolling Shutter Cameras.
Bapat, Akash; Dunn, Enrique; Frahm, Jan-Michael
2016-11-01
To maintain a reliable registration of the virtual world with the real world, augmented reality (AR) applications require highly accurate, low-latency tracking of the device. In this paper, we propose a novel method for performing this fast 6-DOF head pose tracking using a cluster of rolling shutter cameras. The key idea is that a rolling shutter camera works by capturing the rows of an image in rapid succession, essentially acting as a high-frequency 1D image sensor. By integrating multiple rolling shutter cameras on the AR device, our tracker is able to perform 6-DOF markerless tracking in a static indoor environment with minimal latency. Compared to state-of-the-art tracking systems, this tracking approach performs at significantly higher frequency, and it works in generalized environments. To demonstrate the feasibility of our system, we present thorough evaluations on synthetically generated data with tracking frequencies reaching 56.7 kHz. We further validate the method's accuracy on real-world images collected from a prototype of our tracking system against ground truth data using standard commodity GoPro cameras capturing at 120 Hz frame rate.
An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks.
Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing
2017-03-20
In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, considering the characteristics and constraints of camera nodes in WCNs, several special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected using a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection as an optimization problem based on remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and limited field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme offers an obvious improvement in balancing energy consumption and tracking accuracy over existing methods.
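The abstract casts CH selection as an optimization over remaining energy and distance-to-target but does not give the objective function; a weighted linear score is one plausible form. The weights and node data below are assumptions for illustration:

```python
import math

def select_cluster_head(nodes, target, w_energy=0.7, w_dist=0.3):
    """Greedy CH choice: reward remaining energy, penalize distance to the
    target. The linear score and weights are illustrative assumptions."""
    return max(nodes, key=lambda n: w_energy * n["energy"]
                                    - w_dist * math.dist(n["pos"], target))

nodes = [
    {"id": 1, "energy": 0.9, "pos": (0.0, 0.0)},
    {"id": 2, "energy": 0.5, "pos": (1.0, 1.0)},
    {"id": 3, "energy": 0.8, "pos": (4.0, 3.0)},
]
ch = select_cluster_head(nodes, target=(1.0, 1.0))  # node 2: closest, enough energy
```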
An Effective and Robust Decentralized Target Tracking Scheme in Wireless Camera Sensor Networks
Fu, Pengcheng; Cheng, Yongbo; Tang, Hongying; Li, Baoqing; Pei, Jun; Yuan, Xiaobing
2017-01-01
In this paper, we propose an effective and robust decentralized tracking scheme based on the square root cubature information filter (SRCIF) to balance energy consumption and tracking accuracy in wireless camera sensor networks (WCNs). More specifically, considering the characteristics and constraints of camera nodes in WCNs, several special mechanisms are put forward and integrated in this tracking scheme. First, a decentralized tracking approach is adopted so that tracking can be implemented energy-efficiently and steadily. Subsequently, task cluster nodes are dynamically selected using a greedy on-line decision approach based on the defined contribution decision (CD), considering the limited energy of camera nodes. Additionally, we design an efficient cluster head (CH) selection mechanism that casts the selection as an optimization problem based on remaining energy and distance-to-target. Finally, we also analyze the target detection probability when selecting the task cluster nodes and their CH, owing to the directional sensing and limited field of view (FOV) of camera nodes in WCNs. Simulation results show that the proposed tracking scheme offers an obvious improvement in balancing energy consumption and tracking accuracy over existing methods. PMID:28335537
Konduru, Anil Reddy; Yelikar, Balasaheb R; Sathyashree, K V; Kumar, Ankur
2018-01-01
Open source technologies and mobile innovations have radically changed the way people interact with technology, and these advancements have already had a significant impact across various disciplines. Microscopy, with its focus on visually appealing contrasting colors for better appreciation of morphology, forms the core of disciplines such as pathology, microbiology, and anatomy. Here, learning happens with the aid of multi-head microscopes and digital camera systems for teaching larger groups and for organizing interactive sessions for students or faculty of other departments. The cost of original equipment manufacturer (OEM) camera systems is a limiting factor in bringing this useful technology to all locations. To avoid this, we have used low-cost technologies such as the Raspberry Pi, Mobile High-Definition Link, and 3D-printed adapters to create portable camera systems. Adopting these open source technologies enabled any binocular or trinocular microscope to be connected to a projector or HD television at a fraction of the cost of the OEM camera systems, with comparable quality. These systems, in addition to being cost-effective, provide the added advantage of portability, offering much-needed flexibility at various teaching locations.
A Robust Approach for a Filter-Based Monocular Simultaneous Localization and Mapping (SLAM) System
Munguía, Rodrigo; Castillo-Toledo, Bernardino; Grau, Antoni
2013-01-01
Simultaneous localization and mapping (SLAM) is an important problem to solve in robotics theory in order to build truly autonomous mobile robots. This work presents a novel method for implementing a SLAM system based on a single camera sensor. SLAM with a single camera, or monocular SLAM, is probably one of the most complex SLAM variants. In this case, a single camera, which moves freely through its environment, represents the sole sensor input to the system. The sensors have a large impact on the algorithm used for SLAM. Cameras are used increasingly often, because they provide a lot of information and are well adapted for embedded systems: they are light, cheap and power-saving. Nevertheless, and unlike range sensors, which provide range and angular information, a camera is a projective sensor providing only angular measurements of image features. Therefore, depth information (range) cannot be obtained in a single step, and special techniques for feature initialization are needed in order to enable the use of angular sensors (such as cameras) in SLAM systems. The main contribution of this work is a novel and robust scheme for incorporating and measuring visual features in filtering-based monocular SLAM systems. The proposed method is based on a two-step technique intended to exploit all the information available in angular measurements. Unlike previous schemes, the values of the parameters used by the initialization technique are derived directly from the sensor characteristics, thus simplifying the tuning of the system. The experimental results show that the proposed method surpasses the performance of previous schemes. PMID:23823972
Márquez, G.; Pinto, A.; Alamo, L.; Baumann, B.; Ye, F.; Winkler, H.; Taylor, K.; Padrón, R.
2014-01-01
Myosin interacting-heads (MIH) motifs are visualized in 3D reconstructions of thick filaments from striated muscle. These reconstructions are calculated by averaging methods using images from electron micrographs of grids prepared from numerous filament preparations. Here we propose an alternative method to calculate the 3D reconstruction of a single thick filament using only tilt series images recorded by electron tomography. Relaxed thick filaments, prepared from tarantula leg muscle homogenates, were negatively stained. Single-axis tilt series of single isolated thick filaments were obtained with the electron microscope at a low electron dose and recorded on a CCD camera by electron tomography. An IHRSR 3D reconstruction was calculated from the tilt series images of a single thick filament. The reconstruction was enhanced by including dual tilt image segments in the search stage (usually only a single tilt along the filament axis is used) and by applying a band-pass filter just before the back projection. The reconstruction from a single filament has a 40 Å resolution and clearly shows the presence of MIH motifs. In contrast, the electron tomogram 3D reconstruction of the same thick filament (calculated without any image averaging or imposition of helical symmetry) only reveals MIH motifs infrequently. This is, to our knowledge, the first application of the IHRSR method to calculate a 3D reconstruction from tilt series images. This single-filament IHRSR reconstruction method (SF-IHRSR) should provide a new tool to assess structural differences between well-ordered thick (or thin) filaments in a grid by recording their electron tomograms separately. PMID:24727133
Relationship between central and peripheral corneal astigmatism in elderly patients
NASA Astrophysics Data System (ADS)
Kawamorita, Takushi; Shimizu, Kimiya; Hoshikawa, Rie; Kamiya, Kazutaka; Shoji, Nobuyuki
2018-03-01
Imaging system for cardiac planar imaging using a dedicated dual-head gamma camera
Majewski, Stanislaw [Morgantown, WV]; Umeno, Marc M [Woodinville, WA]
2011-09-13
A cardiac imaging system employing dual gamma imaging heads co-registered with one another to provide two dynamic simultaneous views of the heart sector of a patient torso. A first gamma imaging head is positioned in a first orientation with respect to the heart sector and a second gamma imaging head is positioned in a second orientation with respect to the heart sector. An adjustment arrangement is capable of adjusting the distance between the separate imaging heads and the angle between the heads. With the angle between the imaging heads set to 180 degrees, operating in a range of 140-159 keV and at a rate of up to 500 kHz, the imaging heads are co-registered to produce simultaneous dynamic recording of two stereotactic views of the heart. The use of co-registered imaging heads maximizes the uniformity of detection sensitivity of blood flow in and around the heart over the whole heart volume and minimizes radiation absorption effects. A normalization/image fusion technique is applied pixel by corresponding pixel to increase signal for any cardiac region viewed in two images obtained from the two opposed detector heads for the same time bin. The imaging system is capable of producing enhanced first-pass studies; blood-pool studies including planar, gated and non-gated EKG studies; planar EKG perfusion studies; and planar hot-spot imaging.
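As a rough illustration of the pixel-by-corresponding-pixel fusion idea (the patent's exact normalization is not specified here; the sensitivity maps and values below are invented):

```python
import numpy as np

def fuse_opposed_views(img_a, img_b, sens_a, sens_b):
    """Pixel-by-pixel fusion of two co-registered opposed-head images
    from the same time bin: normalize each head's counts by its own
    sensitivity map, then sum to increase signal per cardiac region.
    Illustrative sketch only, not the patented algorithm."""
    return img_a / sens_a + img_b / sens_b

a = np.array([[10.0, 20.0]])   # counts from head 1 (invented)
b = np.array([[30.0, 40.0]])   # counts from head 2 (invented)
fused = fuse_opposed_views(a, b, np.full_like(a, 2.0), np.full_like(b, 2.0))
# fused == [[20., 30.]]
```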
People counting and re-identification using fusion of video camera and laser scanner
NASA Astrophysics Data System (ADS)
Ling, Bo; Olivera, Santiago; Wagley, Raj
2016-05-01
We present a system for people counting and re-identification that can be used by transit and homeland security agencies. Under the FTA SBIR program, we have developed a preliminary system for transit passenger counting and re-identification using a laser scanner and video camera. The laser scanner is used to identify the locations of a passenger's head and shoulders in an image, a challenging task in crowded environments. It can also estimate passenger height without prior calibration. Various color models have been applied to form color signatures. Finally, using a statistical fusion and classification scheme, passengers are counted and re-identified.
First stereo video dataset with ground truth for remote car pose estimation using satellite markers
NASA Astrophysics Data System (ADS)
Gil, Gustavo; Savino, Giovanni; Pierini, Marco
2018-04-01
Leading causes of PTW (powered two-wheeler) crashes and near misses in urban areas involve failed or delayed prediction of the changing trajectories of other vehicles. Regrettably, misperception by both car drivers and motorcycle riders results in fatal or serious consequences for riders. Intelligent vehicles could provide early warning about possible collisions, helping to avoid the crash. There is evidence that stereo cameras can be used for estimating the heading angle of other vehicles, which is key to anticipating their imminent location, but there is limited heading ground truth data available in the public domain. Consequently, we employed a marker-based technique for creating ground truth of car pose and created a dataset for computer vision benchmarking purposes. This dataset of a moving vehicle, collected from a statically mounted stereo camera, is a simplification of a complex and dynamic reality, which serves as a test bed for car pose estimation algorithms. The dataset contains the accurate pose of the moving obstacle and realistic imagery including texture-less and non-Lambertian surfaces (e.g., reflectance and transparency).
Why are faces denser in the visual experiences of younger than older infants?
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.
2017-01-01
Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants declines over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested two opposing hypotheses about this observed age-related decline in the frequency of faces in infant views. By the People-input hypothesis, there are more faces in view for younger infants because people are more often physically in front of younger than older infants. This hypothesis predicts that not just faces but views of other body parts will decline with age. By the Face-input hypothesis, the decline is strictly about faces, not people or other body parts in general. Two experiments, one using a time-sampling method (84 infants 3 to 24 months in age) and the other analyses of head camera images (36 infants 1 to 24 months) provide strong support for the Face-input hypothesis. The results suggest developmental constraints on the environment that ensure faces are prevalent early in development. PMID:28026190
Dynamic single photon emission computed tomography—basic principles and cardiac applications
Gullberg, Grant T; Reutter, Bryan W; Sitek, Arkadiusz; Maltz, Jonathan S; Budinger, Thomas F
2011-01-01
The very nature of nuclear medicine, the visual representation of injected radiopharmaceuticals, implies imaging of dynamic processes such as the uptake and wash-out of radiotracers from body organs. For years, nuclear medicine has been touted as the modality of choice for evaluating function in health and disease. This evaluation is greatly enhanced using single photon emission computed tomography (SPECT), which permits three-dimensional (3D) visualization of tracer distributions in the body. However, to fully realize the potential of the technique requires the imaging of in vivo dynamic processes of flow and metabolism. Tissue motion and deformation must also be addressed. Absolute quantification of these dynamic processes in the body has the potential to improve diagnosis. This paper presents a review of advancements toward the realization of the potential of dynamic SPECT imaging and a brief history of the development of the instrumentation. A major portion of the paper is devoted to the review of special data processing methods that have been developed for extracting kinetics from dynamic cardiac SPECT data acquired using rotating detector heads that move as radiopharmaceuticals exchange between biological compartments. Recent developments in multi-resolution spatiotemporal methods enable one to estimate kinetic parameters of compartment models of dynamic processes using data acquired from a single camera head with slow gantry rotation. The estimation of kinetic parameters directly from projection measurements improves bias and variance over the conventional method of first reconstructing 3D dynamic images, generating time–activity curves from selected regions of interest and then estimating the kinetic parameters from the generated time–activity curves. 
Although the potential applications of SPECT for imaging dynamic processes have not been fully realized in the clinic, it is hoped that this review illuminates the potential of SPECT for dynamic imaging, especially in light of new developments that enable measurement of dynamic processes directly from projection measurements. PMID:20858925
On the accuracy potential of focused plenoptic camera range determination in long distance operation
NASA Astrophysics Data System (ADS)
Sardemann, Hannes; Maas, Hans-Gerd
2016-04-01
Plenoptic cameras have found increasing interest in optical 3D measurement techniques in recent years. While their basic principle is 100 years old, developments in digital photography, micro-lens fabrication technology and computer hardware have boosted the development and led to several commercially available ready-to-use cameras. Beyond the popular option of a posteriori image focusing or total-focus image generation, their basic ability to generate 3D information from single-camera imagery represents a very beneficial option for certain applications. The paper will first present some fundamentals on the design and history of plenoptic cameras and will describe depth determination from plenoptic camera image data. It will then present an analysis of the depth determination accuracy potential of plenoptic cameras. While most research on plenoptic camera accuracy so far has focused on close-range applications, we will focus on mid and long ranges of up to 100 m. This range is especially relevant if plenoptic cameras are discussed as potential mono-sensorial range imaging devices in (semi-)autonomous cars or in mobile robotics. The results show the expected deterioration of depth measurement accuracy with depth. At depths of 30-100 m, which may be considered typical in autonomous driving, depth errors on the order of 3% (with peaks up to 10-13 m) were obtained from processing small point clusters on an imaged target. Outliers much higher than these values were observed in single-point analysis, stressing the necessity of spatial or spatio-temporal filtering of the plenoptic camera depth measurements. Despite these obviously large errors, a plenoptic camera may nevertheless be considered a valid option for real-time robotics applications such as autonomous driving or unmanned aerial and underwater vehicles, where accuracy requirements decrease with distance.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Zhengyan; Zgadzaj, Rafal; Wang Xiaoming
2010-11-04
We demonstrate a prototype Frequency Domain Streak Camera (FDSC) that can capture the picosecond time evolution of the plasma accelerator structure in a single shot. In our prototype FDSC, a probe pulse propagates obliquely to a sub-picosecond pump pulse that creates an evolving nonlinear index 'bubble' in fused silica glass, supplementing a conventional Frequency Domain Holographic (FDH) probe-reference pair that co-propagates with the 'bubble'. Frequency Domain Tomography (FDT) generalizes the FDSC by probing the 'bubble' from multiple angles and reconstructing its morphology and evolution using algorithms similar to those used in medical CAT scans. Multiplexing methods (Temporal Multiplexing and Angular Multiplexing) improve data storage and processing capability, demonstrating a compact Frequency Domain Tomography system with a single spectrometer.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+.
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J; Song, David H
2015-02-01
Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons' point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon's perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera's automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video.
Monte-Carlo Simulation for Accuracy Assessment of a Single Camera Navigation System
NASA Astrophysics Data System (ADS)
Bethmann, F.; Luhmann, T.
2012-07-01
The paper describes a simulation-based optimization of an optical tracking system that is used as a 6DOF navigation system for neurosurgery. Compared to classical systems used in clinical navigation, the presented system has two unique properties: firstly, the system will be miniaturized and integrated into an operating microscope for neurosurgery; secondly, due to the miniaturization, a single-camera approach has been designed. Single-camera techniques for 6DOF measurements show a special sensitivity to weak geometric configurations between camera and object. In addition, the achievable accuracy potential depends significantly on the geometric properties of the tracked objects (locators). Besides the quality and stability of the targets used on the locator, their geometric configuration is of major importance. In the following, the development and investigation of a simulation program is presented which allows for the assessment and optimization of the system with respect to accuracy. Different system parameters can be altered, as well as different scenarios representing the operational use of the system. Measurement deviations are estimated based on the Monte-Carlo method. Practical measurements validate the correctness of the numerical simulation results.
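A toy version of such a Monte-Carlo deviation estimate might look as follows; the focal length, target spacing, distance, and noise level are invented, and the simple distance-from-scale model stands in for the paper's full 6DOF simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

f = 2000.0       # focal length in pixels (assumed)
s = 0.05         # known spacing of two locator targets in m (assumed)
Z = 0.30         # true camera-object distance in m (assumed)
sigma_px = 0.1   # Gaussian image-measurement noise in pixels (assumed)

p_true = f * s / Z                                   # noise-free image spacing [px]
p_obs = p_true + rng.normal(0.0, sigma_px, 10_000)   # Monte-Carlo trials
Z_est = f * s / p_obs                                # distance recovered per trial
sigma_Z = np.std(Z_est)                              # empirical measurement deviation
```

Repeating this over different geometries (varying Z, target spacing, or camera parameters) is the essence of assessing a configuration's sensitivity before building the hardware.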
NASA Astrophysics Data System (ADS)
Tsai, Tracy; Rella, Chris; Crosson, Eric
2013-04-01
Quantification of fugitive methane emissions from unconventional natural gas (i.e. shale gas, tight sand gas, etc.) production, processing, and transport is essential for scientists, policy-makers, and the energy industry, because methane has a global warming potential of at least 21 times that of carbon dioxide over a span of 100 years [1]. Therefore, fugitive emissions reduce any environmental benefits to using natural gas instead of traditional fossil fuels [2]. Current measurement techniques involve first locating all the possible leaks and then measuring the emission of each leak. This technique is a painstaking and slow process that cannot be scaled up to the large size of the natural gas industry in which there are at least half a million natural gas wells in the United States alone [3]. An alternative method is to calculate the emission of a plume through dispersion modeling. This method is a scalable approach since all the individual leaks within a natural gas facility can be aggregated into a single plume measurement. However, plume dispersion modeling requires additional knowledge of the distance to the source, atmospheric turbulence, and local topography, and it is a mathematically intensive process. Therefore, there is a need for an instrument capable of simple, rapid, and accurate measurements of fugitive methane emissions on a per well head scale. We will present the "plume camera" instrument, which simultaneously measures methane at different spatial points or pixels. The spatial correlation between methane measurements provides spatial information of the plume, and in addition to the wind measurement collected with a sonic anemometer, the flux can be determined. Unlike the plume dispersion model, this approach does not require knowledge of the distance to the source and atmospheric conditions. Moreover, the instrument can fit inside a standard car such that emission measurements can be performed on a per well head basis. 
In a controlled experiment with known releases from a methane tank, a 2-pixel plume camera measured 496 ± 160 sccm from a release of 650 sccm located 21 m away, and 4,180 ± 962 sccm from a release of 3,400 sccm located 49 m away. These results, in addition to results with a higher-pixel camera, will be discussed. Field campaign data collected with the plume camera pixels mounted onto a vehicle and driven through the natural gas fields in the Uintah Basin (Utah, United States) will also be presented, along with the limitations and advantages of the instrument. References: 1. S. Solomon, D. Qin, M. Manning, Z. Chen, M. Marquis, K.B. Averyt, M. Tignor and H.L. Miller (eds.). IPCC, 2007: Climate Change 2007: The Physical Science Basis of the Fourth Assessment Report. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA. 2. R.W. Howarth, R. Santoro, and A. Ingraffea. "Methane and the greenhouse-gas footprint of natural gas from shale formations." Climatic Change, 106, 679 (2011). 3. U.S. Energy Information Administration. "Number of Producing Wells."
NASA Astrophysics Data System (ADS)
McIntire, John; Geiselman, Eric; Heft, Eric; Havig, Paul
2011-06-01
Designers, researchers, and users of binocular stereoscopic head- or helmet-mounted displays (HMDs) face the tricky issue of what imagery to present in their particular displays, and how to do so effectively. Stereoscopic imagery must often be created in-house with a 3D graphics program or from within a 3D virtual environment, or stereoscopic photos/videos must be carefully captured, perhaps for relaying to an operator in a teleoperative system. In such situations, the question arises as to what camera separation (real or virtual) is appropriate or desirable for end-users and operators. We review some of the relevant literature on stereo-pair camera separation using desk-mounted or larger-scale stereoscopic displays, and apply our findings to potential HMD applications, including command and control, teleoperation, information and scientific visualization, and entertainment.
2004-11-11
NASA's Mars Exploration Rover Opportunity captured this view from the base of "Burns Cliff" during the rover's 280th martian day (Nov. 6, 2004). This cliff in the inner wall of "Endurance Crater" displays multiple layers of bedrock for the rover to examine with its panoramic camera and miniature thermal emission spectrometer. The rover team has decided that the farthest Opportunity can safely advance along the base of the cliff is close to the squarish white rock near the center of this image. After examining the site for a few days from that position, the rover will turn around and head out of the crater. The view is a mosaic of frames taken by Opportunity's navigation camera. The rover was on ground with a slope of about 30 degrees when the pictures were taken, and the view is presented here in a way that corrects for that tilt of the camera. http://photojournal.jpl.nasa.gov/catalog/PIA07039
NASA Astrophysics Data System (ADS)
Talukder, Ashit; Morookian, John M.; Monacos, Steve P.; Lam, Raymond K.; Lebaw, C.; Bond, A.
2004-04-01
Eyetracking is one of the latest technologies that has shown potential in several areas, including human-computer interaction for people with and without disabilities, and for noninvasive monitoring, detection, and even diagnosis of physiological and neurological problems in individuals. Current non-invasive eyetracking methods achieve a 30 Hz rate with possibly low accuracy in gaze estimation, which is insufficient for many applications. We propose a new non-invasive visual eyetracking system that is capable of operating at speeds as high as 6-12 kHz. A new CCD video camera and hardware architecture are used, and a novel fast image processing algorithm leverages specific features of the input CCD camera to yield a real-time eyetracking system. A field programmable gate array (FPGA) is used to control the CCD camera and execute the image processing operations. Initial results show the excellent performance of our system under severe head motion and low contrast conditions.
Label inspection of approximate cylinder based on adverse cylinder panorama
NASA Astrophysics Data System (ADS)
Lin, Jianping; Liao, Qingmin; He, Bei; Shi, Chenbo
2013-12-01
This paper presents a machine vision system for automated label inspection, with the goal of reducing labor cost and ensuring consistent product quality. Firstly, the images captured by each single camera are distorted, since the inspection object is approximately cylindrical. Therefore, this paper proposes an algorithm based on adverse cylinder projection, where label images are rectified by distortion compensation. Secondly, to overcome the limited field of view of each single camera, our method combines the images of all single cameras and builds a panorama for label inspection. Thirdly, considering the shaking of production lines and electronic signal error, we design a real-time image registration step to calculate offsets between the template and inspected images. Experimental results demonstrate that our system is accurate, real-time, and applicable to numerous real-time inspections of approximately cylindrical objects.
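As a hedged sketch of the cylindrical-distortion idea (not the paper's "adverse cylinder projection" algorithm): under an approximately orthographic view of a cylinder of radius R, a surface point at angle theta projects to image column x = R*sin(theta), so a rectified arc-length column s should sample source column R*sin(s/R):

```python
import numpy as np

def cylinder_unwrap_map(radius, n_out):
    """Inverse map for rectifying an orthographic view of a cylinder:
    rectified arc-length positions s in [-pi/2*R, pi/2*R] sample the
    source image column x = R*sin(s/R), measured from the cylinder
    axis. Columns therefore bunch toward the cylinder edges."""
    s = np.linspace(-np.pi / 2, np.pi / 2, n_out) * radius
    return radius * np.sin(s / radius)

x = cylinder_unwrap_map(radius=100.0, n_out=5)
# equal arc steps map to unequal image columns near the silhouette edges
```

In practice such a map would feed a per-column resampling (e.g. interpolation) of each camera's label image before stitching the panorama.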
Penguin head movement detected using small accelerometers: a proxy of prey encounter rate.
Kokubun, Nobuo; Kim, Jeong-Hoon; Shin, Hyoung-Chul; Naito, Yasuhiko; Takahashi, Akinori
2011-11-15
Determining temporal and spatial variation in feeding rates is essential for understanding the relationship between habitat features and the foraging behavior of top predators. In this study we examined the utility of head movement as a proxy of prey encounter rates in medium-sized Antarctic penguins, under the presumption that the birds should move their heads actively when they encounter and peck prey. A field study of free-ranging chinstrap and gentoo penguins was conducted at King George Island, Antarctica. Head movement was recorded using small accelerometers attached to the head, with simultaneous monitoring of prey encounter or body angle. The main prey was Antarctic krill (>99% in wet mass) for both species. Penguin head movement coincided with a slow change in body angle during dives. Active head movements were extracted using a high-pass filter (5 Hz acceleration signals) and the remaining acceleration peaks (higher than a threshold acceleration of 1.0 g) were counted. The timing of head movements coincided well with images of prey taken from the back-mounted cameras: head movement was recorded within ±2.5 s of a prey image on 89.1±16.1% (N=7 trips) of images. The number of head movements varied greatly among dive bouts, suggesting large temporal variations in prey encounter rates. Our results show that head movement is an effective proxy of prey encounter, and we suggest that the method will be widely applicable for a variety of predators.
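The peak-counting scheme described above (high-pass filtering followed by counting supra-threshold acceleration peaks) can be sketched as follows; a moving-average high-pass stands in for the paper's 5 Hz filter, and the signal is synthetic:

```python
import numpy as np

def count_head_movements(acc, fs, win_s=0.2, thresh_g=1.0):
    """Crude proxy of the described method: remove the slow (postural)
    component with a moving-average high-pass, then count entries into
    the supra-threshold region. The window length is an assumption
    standing in for the paper's 5 Hz high-pass filter."""
    win = max(1, int(win_s * fs))
    slow = np.convolve(acc, np.ones(win) / win, mode="same")
    fast = acc - slow
    above = fast > thresh_g
    # count rising edges into the supra-threshold region
    return int(np.sum(above[1:] & ~above[:-1]))

fs = 50.0                                  # sampling rate in Hz (assumed)
t = np.arange(0.0, 10.0, 1.0 / fs)
sig = 0.2 * np.sin(2 * np.pi * 0.3 * t)    # slow body-angle change
sig[100] += 2.0                            # two brief simulated head jerks
sig[300] += 2.5
count = count_head_movements(sig, fs)
# count == 2
```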
Han, Woong Kyu; Tan, Yung K; Olweny, Ephrem O; Yin, Gang; Liu, Zhuo-Wei; Faddegon, Stephen; Scott, Daniel J; Cadeddu, Jeffrey A
2013-04-01
To compare surgeon-assessed ergonomic and workload demands of magnetic anchoring and guidance system (MAGS) laparoendoscopic single-site surgery (LESS) nephrectomy with conventional LESS nephrectomy in a porcine model. Participants included two expert and five novice surgeons who each performed bilateral LESS nephrectomy in two nonsurvival animals using either the MAGS camera or conventional laparoscope. Task difficulty and workload demands of the surgeon and camera driver were assessed using the validated National Aeronautics and Space Administration Task Load Index (NASA-TLX) questionnaire. Surgeons were also asked to score 6 parameters on a Likert scale (range 1=low/easy to 5=high/hard): procedure-associated workload, ergonomics, technical challenge, visualization, accidental events, and instrument handling. Each step of the nephrectomy was also timed and instrument clashing was quantified. Scores for each parameter on the Likert scale were significantly lower for MAGS-LESS nephrectomy. Mean number of internal and external clashes were significantly lower for the MAGS camera (p<0.001). Mean task times for each procedure were shorter for experts than for novices, but this was not statistically significant. NASA-TLX workload ratings by the surgeon and camera driver showed that MAGS resulted in a significantly lower workload than the conventional laparoscope during LESS nephrectomy (p<0.05). The use of the MAGS camera during LESS nephrectomy lowers the task workload for both the surgeon and camera driver when compared to conventional laparoscope use. Subjectively, it appears to also improve surgeons' impressions of ergonomics and technical challenge. Pending approval for clinical use, further evaluation in the clinical setting is warranted.
Precision of FLEET Velocimetry Using High-speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. Also, we compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 m/s in air and 0.2 m/s in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision High Speed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
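Row-wise digital binning as described (summing groups of adjacent pixel rows in post-processing) can be sketched as:

```python
import numpy as np

def bin_rows(frame, n=8):
    """Post-processing ('digital') row-wise binning: sum each group of
    n adjacent pixel rows to raise signal-to-noise, analogous to
    on-sensor binning. Trailing rows that do not fill a full group
    are dropped."""
    rows = (frame.shape[0] // n) * n
    return frame[:rows].reshape(-1, n, frame.shape[1]).sum(axis=1)

frame = np.ones((17, 4))     # toy frame; real frames come from the camera
binned = bin_rows(frame)
# binned.shape == (2, 4); every element is 8.0
```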
Tan, Tai Ho; Williams, Arthur H.
1985-01-01
An optical fiber-coupled detector visible streak camera plasma diagnostic apparatus. Arrays of optical fiber-coupled detectors are placed on the film plane of several types of particle, x-ray and visible spectrometers or directly in the path of the emissions to be measured and the output is imaged by a visible streak camera. Time and spatial dependence of the emission from plasmas generated from a single pulse of electromagnetic radiation or from a single particle beam burst can be recorded.
Neck Strength Imbalance Correlates With Increased Head Acceleration in Soccer Heading
Dezman, Zachary D.W.; Ledet, Eric H.; Kerr, Hamish A.
2013-01-01
Background: Soccer heading is the use of the head to directly contact the ball, often to advance the ball down the field or score. It is a skill fundamental to the game, yet it has come under scrutiny. Repeated subclinical effects of heading may compound over time, resulting in neurologic deficits. Greater head accelerations are linked to brain injury. Developing an understanding of how the neck muscles help stabilize and reduce head acceleration during impact may help prevent brain injury. Hypothesis: Neck strength imbalance correlates with increased head acceleration during impact while heading a soccer ball. Study Design: Observational laboratory investigation. Methods: Sixteen Division I and II collegiate soccer players headed a ball in a controlled indoor laboratory setting while player motions were recorded by a 14-camera Vicon MX motion capture system. Neck flexor and extensor strength of each player was measured using a spring-type clinical dynamometer. Results: Players were served soccer balls by hand at a mean velocity of 4.29 m/s (±0.74 m/s). Players returned the ball to the server using a heading maneuver at a mean velocity of 5.48 m/s (±1.18 m/s). Mean neck strength difference was positively correlated with angular head acceleration (rho = 0.497; P = 0.05), with a trend toward significance for linear head acceleration (rho = 0.485; P = 0.057). Conclusion: This study suggests that symmetrical strength in neck flexors and extensors reduces head acceleration experienced during low-velocity heading in experienced collegiate players. Clinical Relevance: Balanced neck strength may reduce head acceleration and cumulative subclinical injury. Since neck strength is measurable and amenable to strength-training intervention, it may represent a modifiable intrinsic risk factor for injury. PMID:24459547
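The reported rho values are Spearman rank correlations. A minimal sketch of that statistic (assuming untied data; average-rank tie handling is omitted, so this is not a drop-in replacement for a full statistics package):

```python
import numpy as np

def spearman_rho(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    Valid only for data without ties (no average-rank handling)."""
    def rank(a):
        a = np.asarray(a, float)
        order = np.argsort(a)
        r = np.empty(len(a))
        r[order] = np.arange(1, len(a) + 1)   # rank 1 = smallest value
        return r
    rx, ry = rank(x), rank(y)
    rx -= rx.mean()
    ry -= ry.mean()
    return float((rx @ ry) / np.sqrt((rx @ rx) * (ry @ ry)))
```

A strictly monotonic relationship gives rho = 1 (or -1 when decreasing), regardless of how nonlinear the relationship is.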
Evaluation of a video-based head motion tracking system for dedicated brain PET
NASA Astrophysics Data System (ADS)
Anishchenko, S.; Beylin, D.; Stepanov, P.; Stepanov, A.; Weinberg, I. N.; Schaeffer, S.; Zavarzin, V.; Shaposhnikov, D.; Smith, M. F.
2015-03-01
Unintentional head motion during Positron Emission Tomography (PET) data acquisition can degrade PET image quality and lead to artifacts. Poor patient compliance, head tremor, and coughing are examples of movement sources. Head motion due to patient non-compliance can be an issue with the rise of amyloid brain PET in dementia patients. To preserve PET image resolution and quantitative accuracy, head motion can be tracked and corrected in the image reconstruction algorithm. While fiducial markers can be used, a contactless approach is preferable. A video-based head motion tracking system for a dedicated portable brain PET scanner was developed. Four wide-angle cameras organized in two stereo pairs are used for capturing video of the patient's head during the PET data acquisition. Facial points are automatically tracked and used to determine the six-degree-of-freedom head pose as a function of time. The presented work evaluated the newly designed tracking system using a head phantom and a moving American College of Radiology (ACR) phantom. The mean video-tracking error was 0.99±0.90 mm relative to the magnetic tracking device used as ground truth. Qualitative evaluation with the ACR phantom shows the advantage of the motion tracking application. The developed system is able to perform tracking with close-to-millimeter accuracy and can help preserve the resolution of brain PET images in the presence of head movement.
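The mean ± standard deviation tracking error quoted above is a per-sample Euclidean distance against the magnetic ground truth. A minimal sketch of that evaluation (hypothetical 3D point arrays; not the study's actual analysis code):

```python
import numpy as np

def tracking_error(tracked, truth):
    """Per-sample Euclidean distance (e.g. in mm) between video-tracked
    positions and ground-truth positions; returns (mean, std)."""
    d = np.linalg.norm(np.asarray(tracked, float) - np.asarray(truth, float),
                       axis=1)
    return float(d.mean()), float(d.std())
```

For two samples with errors of 0 mm and 5 mm, this reports a mean error of 2.5 mm with a standard deviation of 2.5 mm.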
A new high-speed IR camera system
NASA Technical Reports Server (NTRS)
Travis, Jeffrey W.; Shu, Peter K.; Jhabvala, Murzy D.; Kasten, Michael S.; Moseley, Samuel H.; Casey, Sean C.; Mcgovern, Lawrence K.; Luers, Philip J.; Dabney, Philip W.; Kaipa, Ravi C.
1994-01-01
A multi-organizational team at the Goddard Space Flight Center is developing a new far infrared (FIR) camera system which furthers the state of the art for this type of instrument by incorporating recent advances in several technological disciplines. All aspects of the camera system are optimized for operation at the high data rates required for astronomical observations in the far infrared. The instrument is built around a Blocked Impurity Band (BIB) detector array which exhibits responsivity over a broad wavelength band and which is capable of operating at 1000 frames/sec, and consists of a focal plane dewar, a compact camera head electronics package, and a Digital Signal Processor (DSP)-based data system residing in a standard 486 personal computer. In this paper we discuss the overall system architecture, the focal plane dewar, and advanced features and design considerations for the electronics. This system, or one derived from it, may prove useful for many commercial and/or industrial infrared imaging or spectroscopic applications, including thermal machine vision for robotic manufacturing, photographic observation of short-duration thermal events such as combustion or chemical reactions, and high-resolution surveillance imaging.
Observations of the Perseids 2013 using SPOSH cameras
NASA Astrophysics Data System (ADS)
Margonis, A.; Elgner, S.; Christou, A.; Oberst, J.; Flohrer, J.
2013-09-01
Earth is constantly bombarded by debris, most of which disintegrates in the upper atmosphere. The collision of a dust particle, having a mass of approximately 1 g or larger, with the Earth's atmosphere results in a visible streak of light in the night sky, called a meteor. Comets produce new meteoroids each time they come close to the Sun due to sublimation processes. These fresh particles move around the Sun in orbits similar to that of their parent comet, forming meteoroid streams. For this reason, the intersection of Earth's orbital path with different comet orbits gives rise to a number of meteor showers throughout the year. The Perseids are one of the most prominent annual meteor showers, occurring every summer and having their origin in the Halley-type comet 109P/Swift-Tuttle. The dense core of this stream passes Earth's orbit on the 12th of August, when more than 100 meteors per hour can be seen by a single observer under ideal conditions. The Technical University of Berlin (TUB) and the German Aerospace Center (DLR), together with the Armagh Observatory, organize meteor campaigns every summer observing the activity of the Perseids meteor shower. The observations are carried out using the Smart Panoramic Optical Sensor Head (SPOSH) camera system [2], which has been developed by DLR and Jena-Optronik GmbH under an ESA/ESTEC contract. The camera was designed to image faint, short-lived phenomena on dark planetary hemispheres. The camera is equipped with a highly sensitive back-illuminated CCD chip with a resolution of 1024x1024 pixels. The custom-made fish-eye lens offers a 120°x120° field of view (168° over the diagonal), making the monitoring of nearly the whole night sky possible (Fig. 1). This year the observations will take place between the 3rd and 10th of August to cover the meteor activity of the Perseids just before their maximum. The SPOSH cameras will be deployed at two remote sites located at high altitudes on the Greek Peloponnese peninsula.
The baseline of ∼50 km between the two observing stations ensures a large overlapping area of the cameras' fields of view, allowing the triangulation of nearly every meteor captured by both observing systems. The acquired data will be reduced using dedicated software developed at TUB and DLR. Assuming a successful campaign, statistics, trajectories and photometric properties of the processed double-station meteors will be presented at the conference. Furthermore, a first-order statistical analysis of the meteors processed during the 2012 and the new 2013 campaigns will be presented [1].
Digital dental photography. Part 4: choosing a camera.
Ahmad, I
2009-06-13
With so many cameras and systems on the market, making a choice of the right one for your practice needs is a daunting task. As described in Part 1 of this series, a digital single-lens reflex (DSLR) camera is an ideal choice for dental use, enabling the taking of portraits, close-up or macro images of the dentition and study casts. However, for the sake of completeness, some other camera systems that are used in dentistry are also discussed.
Unmanned Ground Vehicle Perception Using Thermal Infrared Cameras
NASA Technical Reports Server (NTRS)
Rankin, Arturo; Huertas, Andres; Matthies, Larry; Bajracharya, Max; Assad, Christopher; Brennan, Shane; Bellut, Paolo; Sherwin, Gary
2011-01-01
TIR cameras can be used for day/night Unmanned Ground Vehicle (UGV) autonomous navigation when stealth is required. The quality of uncooled TIR cameras has significantly improved over the last decade, making them a viable option at low speeds. Limiting factors for stereo ranging with uncooled LWIR cameras are image blur and low-texture scenes. TIR perception capabilities JPL has explored include: (1) single- and dual-band TIR terrain classification, (2) obstacle detection (pedestrians, vehicles, tree trunks, ditches, and water), and (3) perception through obscurants.
NASA Technical Reports Server (NTRS)
Vaughan, Andrew T. (Inventor); Riedel, Joseph E. (Inventor)
2016-01-01
A single, compact, low-power deep space positioning system (DPS) configured to determine the location of a spacecraft anywhere in the solar system, and to provide state information relative to the Earth, Sun, or any remote object. For example, the DPS includes a first camera and, possibly, a second camera configured to capture a plurality of navigation images to determine the state of a spacecraft in the solar system. The second camera is located behind, or adjacent to, a secondary reflector of the first camera in the body of a telescope.
Infrared-enhanced TV for fire detection
NASA Technical Reports Server (NTRS)
Hall, J. R.
1978-01-01
Closed-circuit television is superior to conventional smoke or heat sensors for detecting fires in large open spaces. Single TV camera scans entire area, whereas many conventional sensors and maze of interconnecting wiring might be required to get same coverage. Camera is monitored by person who would trip alarm if fire were detected, or electronic circuitry could process camera signal for fully-automatic alarm system.
General Model of Photon-Pair Detection with an Image Sensor
NASA Astrophysics Data System (ADS)
Defienne, Hugo; Reichert, Matthew; Fleischer, Jason W.
2018-05-01
We develop an analytic model that relates intensity correlation measurements performed by an image sensor to the properties of photon pairs illuminating it. Experiments using an effective single-photon counting camera, a linear electron-multiplying charge-coupled device camera, and a standard CCD camera confirm the model. The results open the field of quantum optical sensing using conventional detectors.
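An intensity correlation measurement of the kind described above can be sketched as a pixel-pairwise covariance accumulated over a stack of camera frames (a simplified illustration with flattened frames and toy values; the paper's full analytic model relating this quantity to photon-pair properties is not reproduced here):

```python
import numpy as np

def intensity_correlation(frames):
    """Pixel-pair intensity covariance <I_i I_j> - <I_i><I_j>, estimated
    over a stack of frames. `frames` has shape (n_frames, n_pixels)."""
    f = np.asarray(frames, float)
    mean = f.mean(axis=0)                       # <I_i> per pixel
    return f.T @ f / len(f) - np.outer(mean, mean)
```

Correlated photon pairs show up as positive off-diagonal entries: pixels that fire together across frames have covariance above the accidental-coincidence background.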
NASA Astrophysics Data System (ADS)
Reiss, D.; Jaumann, R.
The topographic information provided by the Mars Orbiter Laser Altimeter has been used in combination with Mars Observer Camera imagery to estimate the topographic position of sapping pits and gully heads on the rim of Nirgal Vallis. Nirgal Vallis is understood to have been formed by groundwater sapping (1, 2, 3, 4); an aquifer is proposed as the water supply. Gullies in the northern rim of Nirgal Vallis, as discovered in Mars Observer Camera (MOC) images (5, 6), prove the existence of such an aquifer. Further evidence for sapping in Nirgal Vallis is demonstrated by short hanging tributaries with amphitheater-like heads. The base of these sapping pits defines the contact of aquifer and aquiclude during the valley formation. The gully heads are much deeper below the local surface, and the correlation of their topographic position with the valley depth indicates the subsidence of the groundwater level following the vertical erosion of the valley. This implies the existence of different groundwater tables over time, confined by impermeable layers, whereas the gully head level is the most recent groundwater table, which may still be erosionally active under conditions of increasing water pressure and ice barrier failure (5). The occurrence of more than one tilted sapping level at different topographic positions which are time-correlated with the erosional notching of the valley either indicates different aquifers with lithological aquicludes or a climate-controlled subsidence of the permafrost layer acting as confining layer. References: (1) Baker et al., 1992, In: Mars, Univ. of Arizona Press. (2) Carr, 1995, JGR 100, 7479. (3) Malin and Carr, 1999, Nature, 397, 589. (4) Jaumann and Reiss, 2002, LPSC. (5) Malin and Edgett, 2000, Science, 288, 2330. (6) Malin and Edgett, 2001, JGR 106, 23429.
MonoSLAM: real-time single camera SLAM.
Davison, Andrew J; Reid, Ian D; Molton, Nicholas D; Stasse, Olivier
2007-06-01
We present a real-time algorithm which can recover the 3D trajectory of a monocular camera, moving rapidly through a previously unknown scene. Our system, which we dub MonoSLAM, is the first successful application of the SLAM methodology from mobile robotics to the "pure vision" domain of a single uncontrolled camera, achieving real time but drift-free performance inaccessible to Structure from Motion approaches. The core of the approach is the online creation of a sparse but persistent map of natural landmarks within a probabilistic framework. Our key novel contributions include an active approach to mapping and measurement, the use of a general motion model for smooth camera movement, and solutions for monocular feature initialization and feature orientation estimation. Together, these add up to an extremely efficient and robust algorithm which runs at 30 Hz with standard PC and camera hardware. This work extends the range of robotic systems in which SLAM can be usefully applied, but also opens up new areas. We present applications of MonoSLAM to real-time 3D localization and mapping for a high-performance full-size humanoid robot and live augmented reality with a hand-held camera.
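The "general motion model for smooth camera movement" in MonoSLAM is a constant-velocity model applied in the filter's prediction step. Its state propagation can be sketched as follows (a simplified translational-only illustration that omits orientation, process noise, and the covariance update):

```python
import numpy as np

def predict_camera_state(pos, vel, dt):
    """Constant-velocity motion model (EKF prediction step, mean only):
    position integrates the current velocity; velocity is assumed to
    persist between frames."""
    pos = np.asarray(pos, float) + dt * np.asarray(vel, float)
    return pos, np.asarray(vel, float)
```

At 30 Hz (dt ≈ 1/30 s), this smoothness assumption is what lets the filter keep tracking features between frames even under rapid hand-held motion.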
Non-invasive diagnostics of ion beams in strong toroidal magnetic fields with standard CMOS cameras
NASA Astrophysics Data System (ADS)
Ates, Adem; Ates, Yakup; Niebuhr, Heiko; Ratzinger, Ulrich
2018-01-01
A superconducting Figure-8 stellarator-type magnetostatic Storage Ring (F8SR) is under investigation at the Institute for Applied Physics (IAP) at Goethe University Frankfurt. Besides numerical simulations on an optimized design for beam transport and injection, a scaled-down (0.6 T) experiment with two 30° toroidal magnets is set up for further investigations. A great challenge is the development of a non-destructive, magnetically insensitive and flexible detector for local investigations of an ion beam propagating through the toroidal magnetostatic field. This paper introduces a new way of beam path measurement by residual gas monitoring. It uses a single-board camera connected to a standard single-board computer by a camera serial interface, all placed inside the vacuum chamber. First experiments were done with one camera, and in a next step two cameras arranged at 90° to each other were installed. With the help of the two cameras, which are movable along the beam pipe, the theoretical predictions were successfully verified experimentally. Previous experimental results have been confirmed. The transport of H+ and H2+ ion beams with energies of 7 keV and beam currents of about 1 mA was investigated successfully.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn
Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors, which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail (an increased number of unscattered photons detected with reduced energy). Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic "lesion" was placed inside the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake, and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.).
A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare), followed by a CT scan for attenuation correction (AC). For each experiment, separate images were created, including reconstruction with no corrections (NC), with AC, with attenuation and dual-energy window (DEW) scatter correction (ACSC), with attenuation and partial volume correction (PVC) applied (ACPVC), and with attenuation, scatter, and PVC applied (ACSCPVC). The DEW SC method used was modified to account for the presence of the low-energy tail. Results: T-tests showed that the mean error in absolute activity measurement was reduced significantly for AC and ACSC compared to NC for both (hot and cold) datasets (p < 0.001) and that ACSC, ACPVC, and ACSCPVC showed significant reductions in mean differences compared to AC (p ≤ 0.001) without increasing the uncertainty (p > 0.4). The effect of SC and PVC was significant in reducing errors over AC in both datasets (p < 0.001 and p < 0.01, respectively), resulting in a mean error of 5% ± 4%. Conclusions: Quantitative measurements of cardiac 99mTc activity are achievable using attenuation and scatter corrections with the authors' dedicated cardiac SPECT camera. Partial volume corrections offer improvements in measurement accuracy in AC images and in ACSC images with elevated background activity; however, these improvements are not significant in ACSC images with low background activity.
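The conventional dual-energy-window (DEW) correction referenced above subtracts a scaled scatter-window image from the photopeak image. A minimal sketch follows; the scaling factor k = 0.5 and the window-width ratio are the conventional illustrative assumptions, and the study's CZT-specific modification for the low-energy tail is not reproduced:

```python
import numpy as np

def dew_scatter_correct(photopeak, scatter_window, k=0.5,
                        w_photo=1.0, w_scatter=1.0):
    """Dual-energy-window scatter correction: estimate the scatter in the
    photopeak as k * (width ratio) * (scatter-window counts) and subtract,
    clipping at zero so corrected counts stay non-negative."""
    estimate = k * (w_photo / w_scatter) * np.asarray(scatter_window, float)
    return np.clip(np.asarray(photopeak, float) - estimate, 0.0, None)
```

With CZT detectors the low-energy tail adds unscattered photons to the scatter window, which is why a modified (typically smaller) effective k is needed, as the abstract notes.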
NASA Technical Reports Server (NTRS)
1992-01-01
The IMAX camera system is used to record on-orbit activities of interest to the public. Because of the extremely high resolution of the IMAX camera, projector, and audio systems, the audience is afforded a motion picture experience unlike any other. IMAX and OMNIMAX motion picture systems were designed to create motion picture images of superior quality and audience impact. The IMAX camera is a 65 mm, single lens, reflex viewing design with a 15 perforation per frame horizontal pull across. The frame size is 2.06 x 2.77 inches. Film travels through the camera at a rate of 336 feet per minute when the camera is running at the standard 24 frames/sec.
ERIC Educational Resources Information Center
Tamaoka, Katsuo; Asano, Michiko; Miyaoka, Yayoi; Yokosawa, Kazuhiko
2014-01-01
Using the eye-tracking method, the present study depicted pre- and post-head processing for simple scrambled sentences of head-final languages. Three versions of simple Japanese active sentences with ditransitive verbs were used: namely, (1) SO[subscript 1]O[subscript 2]V canonical, (2) SO[subscript 2]O[subscript 1]V single-scrambled, and (3)…
Sliding movement of single actin filaments on one-headed myosin filaments
NASA Astrophysics Data System (ADS)
Harada, Yoshie; Noguchi, Akira; Kishino, Akiyoshi; Yanagida, Toshio
1987-04-01
The myosin molecule consists of two heads, each of which contains an enzymatic active site and an actin-binding site. The fundamental problem of whether the two heads function independently or cooperatively during muscle contraction has been studied by methods using an actomyosin thread (ref. 1), superprecipitation (refs 2-4) and chemical modification of muscle fibres (ref. 5). No clear conclusion has yet been reached. We have approached this question using an assay system in which sliding movements of fluorescently labelled single actin filaments along myosin filaments can be observed directly (refs 6, 7). Here, we report direct measurement of the sliding of single actin filaments along one-headed myosin filaments in which the density of heads was varied over a wide range. Our results show that cooperative interaction between the two heads of myosin is not essential for inducing the sliding movement of actin filaments.
Measuring SO2 ship emissions with an ultraviolet imaging camera
NASA Astrophysics Data System (ADS)
Prata, A. J.
2014-05-01
Over the last few years fast-sampling ultraviolet (UV) imaging cameras have been developed for use in measuring SO2 emissions from industrial sources (e.g. power plants; typical emission rates ~ 1-10 kg s-1) and natural sources (e.g. volcanoes; typical emission rates ~ 10-100 kg s-1). Generally, measurements have been made from sources rich in SO2 with high concentrations and emission rates. In this work, for the first time, a UV camera has been used to measure the much lower concentrations and emission rates of SO2 (typical emission rates ~ 0.01-0.1 kg s-1) in the plumes from moving and stationary ships. Some innovations and trade-offs have been made so that estimates of the emission rates and path concentrations can be retrieved in real time. Field experiments were conducted at Kongsfjord in Ny Ålesund, Svalbard, where measurements of SO2 emissions from cruise ships were made, and at the port of Rotterdam, Netherlands, where emissions from more than 10 different container and cargo ships were measured. In all cases SO2 path concentrations could be estimated and emission rates determined by measuring ship plume speeds simultaneously using the camera, or by using surface wind speed data from an independent source. Accuracies were compromised in some cases because of the presence of particulates in some ship emissions and the restriction to single-filter UV imagery, a requirement for fast sampling (> 10 Hz) from a single camera. Despite the ease of use and ability to determine SO2 emission rates with the UV camera system, the limitations in accuracy and precision suggest that the system may only be used under rather ideal circumstances and that the technology currently needs further development to serve as a method of monitoring ship emissions for regulatory purposes. A dual-camera system or a single, dual-filter camera is required in order to properly correct for the effects of particulates in ship plumes.
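An emission-rate retrieval of the kind described above combines the path (column) concentrations across a plume transect with the plume speed. The sketch below is a hypothetical simplification with assumed units (mol/m² column densities); real UV-camera retrievals calibrate in ppm·m against gas cells and handle geometry more carefully:

```python
def so2_emission_rate(column_densities, pixel_size_m, plume_speed_m_s,
                      molar_mass_kg_mol=0.064):
    """Emission rate (kg/s) from one transect across the plume:
    integrate SO2 column density (mol/m^2) along the transect to get
    mol/m, then multiply by plume speed (m/s) and molar mass (kg/mol)."""
    integral_mol_per_m = sum(column_densities) * pixel_size_m
    return integral_mol_per_m * plume_speed_m_s * molar_mass_kg_mol
```

The plume speed itself can come from tracking features between successive camera frames, or from independent wind data, as the abstract describes.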
Berge, Jerica M; Hoppmann, Caroline; Hanson, Carrie; Neumark-Sztainer, Dianne
2013-12-01
Cross-sectional and longitudinal research has shown that family meals are protective for adolescent healthful eating behaviors. However, little is known about what parents think of these findings and whether parents from single- vs dual-headed households have differing perspectives about the findings. In addition, parents' perspectives regarding barriers to applying the findings on family meals in their own homes and suggestions for more widespread adoption of the findings are unknown. The current study aimed to identify single- and dual-headed household parents' perspectives regarding the research findings on family meals, barriers to applying the findings in their own homes, and suggestions for helping families have more family meals. The current qualitative study included 59 parents who participated in a substudy of two linked multilevel studies: EAT 2010 (Eating and Activity in Teens) and Families and Eating and Activity in Teens (F-EAT). Parents (91.5% female) were racially/ethnically and socioeconomically diverse. Data were analyzed using a grounded theory approach. Results from the current study suggest that parents from both single- and dual-headed households have similar perspectives regarding why family meals are protective for healthful eating habits for adolescents (eg, provides structure/routine, opportunities for communication, connection), but provide similar and different reasons for barriers to family meals (eg, single-headed=cost vs dual-headed=lack of creativity) and ideas and suggestions for how to increase the frequency of family meals (eg, single-headed=give fewer options vs dual-headed=include children in the meal preparation). Findings can help inform public health intervention researchers and providers who work with adolescents and their families to understand how to approach discussions regarding reasons for having family meals, barriers to carrying out family meals, and ways to increase family meals depending on family structure.
Copyright © 2013 Academy of Nutrition and Dietetics. Published by Elsevier Inc. All rights reserved.
A Silicon SPECT System for Molecular Imaging of the Mouse Brain.
Shokouhi, Sepideh; Fritz, Mark A; McDonald, Benjamin S; Durko, Heather L; Furenlid, Lars R; Wilson, Donald W; Peterson, Todd E
2007-01-01
We previously demonstrated the feasibility of using silicon double-sided strip detectors (DSSDs) for SPECT imaging of the activity distribution of iodine-125 using a 300-micrometer thick detector. Based on this experience, we now have developed fully customized silicon DSSDs and associated readout electronics with the intent of developing a multi-pinhole SPECT system. Each DSSD has a 60.4 mm × 60.4 mm active area and is 1 mm thick. The strip pitch is 59 micrometers, and the readout of the 1024 strips on each side gives rise to a detector with over one million pixels. Combining four high-resolution DSSDs into a SPECT system offers an unprecedented space-bandwidth product for the imaging of single-photon emitters. The system consists of two camera heads with two silicon detectors stacked one behind the other in each head. The collimator has a focused pinhole system with cylindrical-shaped pinholes that are laser-drilled in a 250 μm tungsten plate. The unique ability to collect projection data at two magnifications simultaneously allows for multiplexed data at high resolution to be combined with lower magnification data with little or no multiplexing. With the current multi-pinhole collimator design, our SPECT system will be capable of offering high spatial resolution, sensitivity and angular sampling for small field-of-view applications, such as molecular imaging of the mouse brain.
Incremental Support Vector Machine Framework for Visual Sensor Networks
NASA Astrophysics Data System (ADS)
Awad, Mariette; Jiang, Xianhua; Motai, Yuichi
2006-12-01
Motivated by the emerging requirements of surveillance networks, we present in this paper an incremental multiclassification support vector machine (SVM) technique as a new framework for action classification based on real-time multivideo collected by homogeneous sites. The technique is based on an adaptation of least square SVM (LS-SVM) formulation but extends beyond the static image-based learning of current SVM methodologies. In applying the technique, an initial supervised offline learning phase is followed by a visual behavior data acquisition and an online learning phase during which the cluster head performs an ensemble of model aggregations based on the sensor nodes inputs. The cluster head then selectively switches on designated sensor nodes for future incremental learning. Combining sensor data offers an improvement over single camera sensing especially when the latter has an occluded view of the target object. The optimization involved alleviates the burdens of power consumption and communication bandwidth requirements. The resulting misclassification error rate, the iterative error reduction rate of the proposed incremental learning, and the decision fusion technique prove its validity when applied to visual sensor networks. Furthermore, the enabled online learning allows an adaptive domain knowledge insertion and offers the advantage of reducing both the model training time and the information storage requirements of the overall system which makes it even more attractive for distributed sensor networks communication.
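The LS-SVM formulation the framework adapts replaces the standard SVM's inequality constraints with equalities, so training reduces to solving one linear system. A minimal linear-kernel sketch of that batch solve (a hypothetical illustration; the paper's incremental, multiclass extension is not reproduced):

```python
import numpy as np

def lssvm_train(X, y, gamma=1.0):
    """Least-squares SVM with a linear kernel: solve the KKT system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    X, y = np.asarray(X, float), np.asarray(y, float)
    n = len(y)
    K = X @ X.T                              # linear kernel Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma        # regularized kernel block
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                    # bias b, dual weights alpha

def lssvm_predict(X_train, alpha, b, X_new):
    """Decision values for new samples: f(x) = sum_i alpha_i k(x_i, x) + b."""
    return np.asarray(X_new, float) @ np.asarray(X_train, float).T @ alpha + b
```

Because training is a linear solve rather than a quadratic program, rank-one update formulas can fold in new samples incrementally, which is the property the paper's online learning phase exploits.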
NASA Astrophysics Data System (ADS)
Bechis, K.; Pitruzzello, A.
2014-09-01
This presentation describes our ongoing research into using a ground-based light field camera to obtain passive, single-aperture 3D imagery of LEO objects. Light field cameras are an emerging and rapidly evolving technology for passive 3D imaging with a single optical sensor. The cameras use an array of lenslets placed in front of the camera focal plane, which provides angle of arrival information for light rays originating from across the target, allowing range to target and 3D image to be obtained from a single image using monocular optics. The technology, which has been commercially available for less than four years, has the potential to replace dual-sensor systems such as stereo cameras, dual radar-optical systems, and optical-LIDAR fused systems, thus reducing size, weight, cost, and complexity. We have developed a prototype system for passive ranging and 3D imaging using a commercial light field camera and custom light field image processing algorithms. Our light field camera system has been demonstrated for ground-target surveillance and threat detection applications, and this paper presents results of our research thus far into applying this technology to the 3D imaging of LEO objects. The prototype 3D imaging camera system developed by Northrop Grumman uses a Raytrix R5 C2GigE light field camera connected to a Windows computer with an nVidia graphics processing unit (GPU). The system has a frame rate of 30 Hz, and a software control interface allows for automated camera triggering and light field image acquisition to disk. Custom image processing software then performs the following steps: (1) image refocusing, (2) change detection, (3) range finding, and (4) 3D reconstruction. In Step (1), a series of 2D images are generated from each light field image; the 2D images can be refocused at up to 100 different depths. Currently, steps (1) through (3) are automated, while step (4) requires some user interaction. 
A key requirement for light field camera operation is that the target must be within the near-field (Fraunhofer distance) of the collecting optics. For example, in visible light the near-field of a 1-m telescope extends out to about 3,500 km, while the near-field of the AEOS telescope extends out over 46,000 km. For our initial proof of concept, we have integrated our light field camera with a 14-inch Meade LX600 advanced coma-free telescope, to image various surrogate ground targets at up to tens of kilometers range. Our experiments with the 14-inch telescope have assessed factors and requirements that are traceable and scalable to a larger-aperture system that would have the near-field distance needed to obtain 3D images of LEO objects. The next step would be to integrate a light field camera with a 1-m or larger telescope and evaluate its 3D imaging capability against LEO objects. 3D imaging of LEO space objects with light field camera technology can potentially provide a valuable new tool for space situational awareness, especially for those situations where laser or radar illumination of the target objects is not feasible.
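The near-field extent quoted above grows with the square of the aperture diameter (the Fraunhofer distance is d_F = 2D²/λ). A quick sketch of that quadratic scaling, using the abstract's 1-m figure as the baseline and assuming an AEOS aperture of roughly 3.6 m (an assumption here, not stated in the text):

```python
def near_field_scaled(d_ref_km, D_ref_m, D_m):
    # Fraunhofer distance scales as D^2 at fixed wavelength,
    # so a reference distance can be rescaled to another aperture.
    return d_ref_km * (D_m / D_ref_m) ** 2

# Baseline: 1-m telescope -> ~3,500 km (from the abstract).
aeos_km = near_field_scaled(3500.0, 1.0, 3.6)
print(round(aeos_km))   # -> 45360, close to the quoted "over 46,000 km"
```

The consistency of the two quoted figures under D² scaling is the point; absolute values also depend on the wavelength assumed.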
Compact Autonomous Hemispheric Vision System
NASA Technical Reports Server (NTRS)
Pingree, Paula J.; Cunningham, Thomas J.; Werne, Thomas A.; Eastwood, Michael L.; Walch, Marc J.; Staehle, Robert L.
2012-01-01
Solar System Exploration camera implementations to date have involved either single cameras with a wide field-of-view (FOV) and consequently coarser spatial resolution, cameras on a movable mast, or single cameras necessitating rotation of the host vehicle to afford visibility outside a relatively narrow FOV. These cameras require detailed commanding from the ground or separate onboard computers to operate properly, and are incapable of making decisions based on image content that control pointing and downlink strategy. For color, a filter wheel having selectable positions was often added, which introduced moving parts, size, mass, and power draw, and reduced reliability. A system was developed based on a general-purpose miniature visible-light camera using advanced CMOS (complementary metal oxide semiconductor) imager technology. The baseline camera has a 92° FOV, and six cameras are arranged in an angled-up carousel fashion, with FOV overlaps such that the system has a 360° FOV in azimuth. A seventh camera, also with a 92° FOV, is installed normal to the plane of the other six cameras, giving the system a >90° FOV in elevation and completing the hemispheric vision system. A central unit houses the common electronics box (CEB) controlling the system (power conversion, data processing, memory, and control software). Stereo is achieved by adding a second system on a baseline, and color is achieved by stacking two more systems (for a total of three, each system equipped with its own filter). Two connectors on the bottom of the CEB provide a connection to a carrier (rover, spacecraft, balloon, etc.) for telemetry, commands, and power. This system has no moving parts. The system's onboard software (SW) supports autonomous operations such as pattern recognition and tracking.
The TolTEC Camera for the LMT Telescope
NASA Astrophysics Data System (ADS)
Bryan, Sean
2018-01-01
TolTEC is a new camera being built for the 50-meter Large Millimeter-wave Telescope (LMT) on Sierra Negra in Puebla, Mexico. The instrument will discover and characterize distant galaxies by detecting the thermal emission of dust heated by starlight. The polarimetric capabilities of the camera will measure magnetic fields in star-forming regions in the Milky Way. The optical design of the camera uses mirrors, lenses, and dichroics to simultaneously couple a 4 arcminute diameter field of view onto three single-band focal planes at 150, 220, and 280 GHz. The 7000 polarization-selective detectors are single-band horn-coupled LEKID detectors fabricated at NIST. A rotating half wave plate operates at ambient temperature to modulate the polarized signal. In addition to the galactic and extragalactic surveys already planned, TolTEC installed at the LMT will provide open observing time to the community.
Why Are Faces Denser in the Visual Experiences of Younger than Older Infants?
ERIC Educational Resources Information Center
Jayaraman, Swapnaa; Fausey, Caitlin M.; Smith, Linda B.
2017-01-01
Recent evidence from studies using head cameras suggests that the frequency of faces directly in front of infants "declines" over the first year and a half of life, a result that has implications for the development of and evolutionary constraints on face processing. Two experiments tested 2 opposing hypotheses about this observed…
Crewmember in the middeck beside the Commercial Generic Bioprocessing exp.
1993-01-19
STS054-07-003 (13-19 Jan 1993) --- Astronaut John H. Casper, mission commander, floats near the Commercial Generic Bioprocessing Apparatus (CGBA) station on Endeavour's middeck. A friction car and its accompanying loop -- part of the Toys in Space package onboard -- can be seen just above Casper's head. The photograph was taken with a 35mm camera.
Researching Literacy in Context: Using Video Analysis to Explore School Literacies
ERIC Educational Resources Information Center
Blikstad-Balas, Marte; Sørvik, Gard Ove
2015-01-01
This article addresses how methodological approaches relying on video can be included in literacy research to capture changing literacies. In addition to arguing why literacy is best studied in context, we provide empirical examples of how small, head-mounted video cameras have been used in two different research projects that share a common aim:…
ERIC Educational Resources Information Center
Wilhoit, Elizabeth D.; Kisselburgh, Lorraine G.
2016-01-01
In this article, we introduce participant viewpoint ethnography (PVE), a phenomenological video research method that combines reflexive, interview-based data with video capture of actual experiences. In PVE, participants wear a head-mounted camera to record the phenomena of study from their point of view. The researcher and participant then review…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Park, Mi-Ae; Moore, Stephen C.; McQuaid, Sarah J.
Purpose: The authors have previously reported the advantages of high-sensitivity single-photon emission computed tomography (SPECT) systems for imaging structures located deep inside the brain. DaTscan (ioflupane I-123) is a dopamine transporter (DaT) imaging agent that has shown potential for early detection of Parkinson disease (PD), as well as for monitoring progression of the disease. Realizing the full potential of DaTscan requires efficient estimation of striatal uptake from SPECT images. The authors evaluated two SPECT systems for imaging tasks related to PD: a conventional dual-head gamma camera with low-energy high-resolution collimators (conventional) and a dedicated high-sensitivity multidetector cardiac imaging system (dedicated). Methods: Cramer-Rao bounds (CRB) on the precision of estimates of striatal and background activity concentrations were calculated from high-count, separate acquisitions of the compartments (right striatum, left striatum, background) of a striatal phantom. CRB on striatal and background activity concentration were calculated from essentially noise-free projection datasets, synthesized by scaling and summing the compartment projection datasets, for a range of total detected counts. The authors also calculated variances of estimates of specific-to-nonspecific binding ratios (BR) and asymmetry indices from these values using propagation-of-error analysis, as well as the precision of measuring changes in BR on the order of the average annual decline in early PD. Results: Under typical clinical conditions, the conventional camera detected 2 M counts while the dedicated camera detected 12 M counts. Assuming a normal BR of 5, the standard deviation of BR estimates was 0.042 and 0.021 for the conventional and dedicated system, respectively. For an 8% decrease to BR = 4.6, the signal-to-noise ratios were 6.8 (conventional) and 13.3 (dedicated); for a 5% decrease, they were 4.2 (conventional) and 8.3 (dedicated).
Conclusions: This implies that PD can be detected earlier with the dedicated system than with the conventional system; therefore, earlier identification of PD progression should be possible with the high-sensitivity dedicated SPECT camera.
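The quoted signal-to-noise ratios are consistent with treating a change in BR as the difference of two independent estimates, so the noise on the change is the single-scan standard deviation times √2. A quick check against the abstract's numbers (small discrepancies versus the quoted 6.8 and 13.3 plausibly come from rounding of the reported standard deviations):

```python
import math

def change_snr(delta_br, sigma_br):
    # A measured change is the difference of two independent BR estimates,
    # so its standard deviation is sigma_br * sqrt(2).
    return delta_br / (sigma_br * math.sqrt(2))

# Abstract: baseline BR = 5; sigma = 0.042 (conventional), 0.021 (dedicated).
for label, sigma in [("conventional", 0.042), ("dedicated", 0.021)]:
    for pct in (0.08, 0.05):
        snr = change_snr(5 * pct, sigma)
        print(label, f"{pct:.0%} decrease:", round(snr, 1))
```

Running this reproduces the abstract's values to within rounding (roughly 6.7, 4.2, 13.5, 8.4 versus the quoted 6.8, 4.2, 13.3, 8.3).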
Bone edema of the whole vertebral body: an unusual case of spondyloarthritis.
Ortolan, Augusta; Lazzarin, Paolo; Lorenzin, Mariagrazia; Rampin, Lucia; Ramonda, Roberta
2017-01-01
Spondyloarthritis (SpA) is usually characterized by early inflammatory involvement of the sacroiliac (SI) joints, which constitutes one of the most important classification criteria according to the Assessment of SpondyloArthritis international Society (ASAS). These criteria do not include inflammatory spine lesions detectable on MRI, although spine involvement is very common in axial SpA. This is because the spine MRI lesions often retrieved in SpA are not very specific and can be found in many other diseases such as malignancy and osteoarthritis. Here we present the case of a 33-year-old woman with worsening low back pain, whose thoracic spine MRI showed bone marrow edema (BME) of the whole T8 vertebral body. Owing to this peculiar presentation, together with the unresponsiveness of the pain to nonsteroidal anti-inflammatory drugs (NSAIDs) and a slight increase of the biomarker CA19-9, a malignancy was suspected. The patient therefore underwent bone scintigraphy, single photon emission computed tomography (SPET/CT), positron emission tomography, and repeated MRI without reaching a diagnosis. Finally, when MRI of the SI joints was performed, BME of the SI joints emerged: this was fundamental to formulating the diagnosis of axSpA.
Latha, M; Pari, L
2005-02-01
The influence of Scoparia dulcis, a plant traditionally used for the treatment of diabetes mellitus, on the derangement of glycoprotein levels was examined in streptozotocin diabetic rats. Diabetes was induced in male Wistar rats by a single intraperitoneal injection of streptozotocin. An aqueous extract of the Scoparia dulcis plant was administered orally for 6 weeks. The effect of the Scoparia dulcis extract on blood glucose, plasma insulin, and plasma and tissue glycoproteins was studied in comparison to glibenclamide. The levels of blood glucose and plasma glycoproteins were increased significantly, whereas the level of plasma insulin was significantly decreased in diabetic rats. There was a significant decrease in the level of sialic acid and elevated levels of hexose, hexosamine and fucose in the liver and kidney of streptozotocin diabetic rats. Oral administration of the Scoparia dulcis plant extract (SPEt) to diabetic rats led to decreased levels of blood glucose and plasma glycoproteins. The levels of plasma insulin and tissue sialic acid were increased, whereas the levels of tissue hexose, hexosamine and fucose were near normal. The present study indicates that Scoparia dulcis possesses a significant beneficial effect on glycoproteins in addition to its antidiabetic effect.
Review of intelligent video surveillance with single camera
NASA Astrophysics Data System (ADS)
Liu, Ying; Fan, Jiu-lun; Wang, DianWei
2012-01-01
Intelligent video surveillance has found a wide range of applications in public security. This paper describes the state-of-the-art techniques in video surveillance systems with a single camera. This can serve as a starting point for building practical video surveillance systems in developing regions, leveraging existing ubiquitous infrastructure. In addition, this paper discusses the gap between existing technologies and the requirements in real-world scenarios, and proposes potential solutions to reduce this gap.
Detection of non-classical space-time correlations with a novel type of single-photon camera.
Just, Felix; Filipenko, Mykhaylo; Cavanna, Andrea; Michel, Thilo; Gleixner, Thomas; Taheri, Michael; Vallerga, John; Campbell, Michael; Tick, Timo; Anton, Gisela; Chekhova, Maria V; Leuchs, Gerd
2014-07-14
During the last decades, multi-pixel detectors capable of registering single photons have been developed. The newly developed hybrid photon detector camera has the remarkable property of offering not only spatial but also temporal resolution. In this work, we apply this device to the detection of non-classical light from spontaneous parametric down-conversion and use two-photon correlations for the absolute calibration of its quantum efficiency.
Lensless Photoluminescence Hyperspectral Camera Employing Random Speckle Patterns.
Žídek, Karel; Denk, Ondřej; Hlubuček, Jiří
2017-11-10
We propose and demonstrate a spectrally resolved photoluminescence imaging setup based on the so-called single-pixel camera, a compressive-sensing technique that enables imaging with a single-pixel photodetector. The method relies on encoding an image with a series of random patterns. In our approach, the image encoding was achieved via laser speckle patterns generated by an excitation laser beam scattered on a diffusor. By using a spectrometer as the single-pixel detector, we attained a realization of a spectrally resolved photoluminescence camera with unmatched simplicity. We present reconstructed hyperspectral images of several model scenes. We also discuss parameters affecting the imaging quality, such as the correlation degree of the speckle patterns, pattern fineness, and number of data points. Finally, we compare the presented technique to hyperspectral imaging using sample scanning. The presented method enables photoluminescence imaging for a broad range of coherent excitation sources and detection spectral ranges.
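The single-pixel measurement model underlying this scheme is linear, y = A x, with one pattern per row of A and one detector reading per pattern. A minimal sketch, with random matrices standing in for measured speckle patterns and a fully determined least-squares solve in place of a true compressive (fewer-measurements, sparsity-regularized) reconstruction:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 8                                     # 8x8 scene, N = 64 unknowns
scene = np.zeros((n, n))
scene[2:5, 3:6] = 1.0                     # a simple bright patch
x_true = scene.ravel()

M = 64                                    # one pattern per measurement
A = rng.random((M, n * n))                # surrogate for laser speckle patterns
y = A @ x_true                            # single-detector (spectrometer) readings

# With M = N well-conditioned random patterns, least squares recovers the scene.
x_hat = np.linalg.lstsq(A, y, rcond=None)[0]
err = np.abs(x_hat - x_true).max()
```

A compressive variant would use M < N and replace the solve with an l1-regularized one; per-wavelength readings from the spectrometer would simply repeat this solve for each spectral channel.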
S201 catalog of far-ultraviolet objects
NASA Technical Reports Server (NTRS)
Page, T.; Carruthers, G. K.; Hill, R. E.
1978-01-01
A catalog of star images was compiled from images obtained by an NRL Far-Ultraviolet Camera/Spectrograph operated from 21 to 23 April 1972 on the lunar surface during the Apollo-16 mission. These images were scanned on a microdensitometer, and the output recorded on magnetic tapes. The catalog is divided into 11 parts, covering ten fields in the sky (the Sagittarius field being covered by two parts), and each part is headed by a constellation name and the field center coordinates. The errors in position of the detected images are less than about 3 arc-min. Correlations are given with star numbers in the Smithsonian Astrophysical Observatory catalog. Values are given of the peak density and the density volume. The text includes a discussion of the photometry, corrections thereto due to threshold and saturation effects, and its comparison with theoretical expectation, stellar model atmospheres, and a generalized far-ultraviolet interstellar extinction law. The S201 catalog is also available on a single reel of seven-track magnetic tape.
4D light-field sensing system for people counting
NASA Astrophysics Data System (ADS)
Hou, Guangqi; Zhang, Chi; Wang, Yunlong; Sun, Zhenan
2016-03-01
Counting the number of people is still an important task in social security applications, and several methods based on video surveillance have been proposed in recent years. In this paper, we design a novel optical sensing system to directly acquire the depth map of the scene from one light-field camera. The light-field sensing system can count the number of people crossing a passageway, recording the direction and intensity of rays in a snapshot without any auxiliary light devices. Depth maps are extracted from the raw light-ray sensing data. Our smart sensing system is equipped with a passive imaging sensor, which is able to naturally discern the depth difference between the head and shoulders of each person. A human model is then built. By detecting the human model in light-field images, the number of people passing through the scene can be counted rapidly. We verify the feasibility and accuracy of the sensing system by capturing real-world scenes with single and multiple people passing under natural illumination.
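The head/shoulder cue described above lends itself to a simple rule: viewed from overhead, heads are the locally tallest (closest) blobs in the depth-derived height map. A toy sketch on a synthetic height map (the overhead geometry, heights, and threshold are illustrative assumptions, not the paper's human model):

```python
import numpy as np

def count_people(height_map, head_thresh):
    """Count connected regions taller than head_thresh; heads stand out
    because they are closer to an overhead camera than shoulders."""
    mask = height_map > head_thresh
    seen = np.zeros_like(mask, dtype=bool)
    h, w = mask.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if mask[i, j] and not seen[i, j]:
                count += 1
                stack = [(i, j)]              # flood-fill one head region
                while stack:
                    a, b = stack.pop()
                    if 0 <= a < h and 0 <= b < w and mask[a, b] and not seen[a, b]:
                        seen[a, b] = True
                        stack += [(a + 1, b), (a - 1, b), (a, b + 1), (a, b - 1)]
    return count

# Synthetic overhead height map: floor = 0 m, shoulders ~1.4 m, heads ~1.7 m.
hm = np.zeros((40, 40))
for cy, cx in [(10, 10), (28, 30)]:
    hm[cy - 4:cy + 4, cx - 4:cx + 4] = 1.4    # shoulder blob
    hm[cy - 1:cy + 2, cx - 1:cx + 2] = 1.7    # head on top
print(count_people(hm, head_thresh=1.55))     # -> 2
```

A threshold between shoulder and head height separates the two people cleanly; real depth maps would need smoothing and size filtering first.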
DeSantis, Michael C; DeCenzo, Shawn H; Li, Je-Luen; Wang, Y M
2010-03-29
Standard deviation measurements of intensity profiles of stationary single fluorescent molecules are useful for studying axial localization, molecular orientation, and a fluorescence imaging system's spatial resolution. Here we report on the analysis of the precision of standard deviation measurements of intensity profiles of single fluorescent molecules imaged using an EMCCD camera. We have developed an analytical expression for the standard deviation measurement error of a single image as a function of the total number of detected photons, the background photon noise, and the camera pixel size. The theoretical results agree well with the experimental, simulation, and numerical integration results. Using this expression, we show that single-molecule standard deviation measurements offer nanometer precision for a large range of experimental parameters.
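The photon-count scaling the abstract describes can be illustrated with a Monte Carlo sketch: repeatedly draw photon positions from a Gaussian profile, pixelate, take the sample standard deviation, and look at the spread of those estimates. This is a simplified stand-in for the paper's analytical expression (background noise is omitted and pixelation is modeled by simple rounding; all parameter values are illustrative):

```python
import numpy as np

def sigma_estimate_spread(n_photons, sigma_psf=1.0, pixel=0.5,
                          trials=500, seed=0):
    """Spread (std) of sample-std estimates of a Gaussian intensity
    profile, over many simulated photon-limited images."""
    rng = np.random.default_rng(seed)
    estimates = []
    for _ in range(trials):
        xs = rng.normal(0.0, sigma_psf, n_photons)  # photon arrival positions
        xs = np.round(xs / pixel) * pixel           # crude pixelation
        estimates.append(xs.std(ddof=1))
    return float(np.std(estimates))

few = sigma_estimate_spread(100)
many = sigma_estimate_spread(10000)
print(few > many)   # precision improves with more detected photons
```

The spread shrinks roughly as 1/√N with the number of detected photons, the dominant term one would expect in the analytical error expression.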
Influence of vision on head stabilization strategies in older adults during walking.
Cromwell, Ronita L; Newton, Roberta A; Forrest, Gail
2002-07-01
Maintaining balance during dynamic activities is essential for preventing falls in older adults. Head stabilization contributes to dynamic balance, especially during the functional task of walking. Head stability and the role of vision in this process have not been studied during walking in older adults. Seventeen older adults (76.2 +/- 6.9 years) and 20 young adults (26.0 +/- 3.4 years) walked with their eyes open (EO), with their eyes closed (EC), and with fixed gaze (FG). Participants performed three trials of each condition. Sagittal plane head and trunk angular velocities in space were obtained using an infrared camera system with passive reflective markers. Frequency analyses of head-on-trunk with respect to trunk gains and phases were examined for head-trunk movement strategies used for head stability. Average walking velocity, cadence, and peak head velocity were calculated for each condition. Differences between age groups demonstrated that older adults decreased walking velocity in EO (p = .022), FG (p = .021), and EC (p = .022), and decreased cadence during EC (p = .007). Peak head velocity also decreased across conditions (p < .0001) for older adults. Movement patterns demonstrated increased head stability during EO, diminished head stability with EC, and improved head stability with FG as older adult patterns resembled those of young adults. Increased stability of the lower extremity outcome measures for older adults was indicated by reductions in walking velocity and cadence. Concomitant increases in head stability were related to visual tasks. Increased stability may serve as a protective mechanism to prevent falls. Further, vision facilitates the head stabilization process for older adults to compensate for age-related decrements in other sensory systems subserving dynamic balance.
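The gain and phase of head-on-trunk motion relative to trunk motion described above are transfer-function quantities at the dominant gait frequency. A sketch of that computation on synthetic signals (not real data; the 2 Hz gait frequency, 0.5 gain, and counter-phase relationship are assumed for illustration):

```python
import numpy as np

fs, T = 100.0, 10.0                       # sampling rate (Hz), duration (s)
t = np.arange(0, T, 1 / fs)
f0 = 2.0                                  # assumed dominant gait frequency (Hz)

trunk = np.sin(2 * np.pi * f0 * t)
# Head-on-trunk velocity: attenuated and counter-phased relative to trunk,
# as a compensatory stabilizing strategy would produce.
head_on_trunk = 0.5 * np.sin(2 * np.pi * f0 * t + np.pi)

F_trunk = np.fft.rfft(trunk)
F_head = np.fft.rfft(head_on_trunk)
k = int(np.argmax(np.abs(F_trunk)))       # bin of the dominant frequency
gain = np.abs(F_head[k]) / np.abs(F_trunk[k])
phase = np.angle(F_head[k] / F_trunk[k])
print(round(gain, 2), round(phase, 2))    # gain 0.5, phase ~pi (counter-phase)
```

A gain below 1 with phase near π at the gait frequency is the signature of head-on-trunk motion canceling trunk motion, i.e. a stabilizing strategy.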
Blur spot limitations in distal endoscope sensors
NASA Astrophysics Data System (ADS)
Yaron, Avi; Shechterman, Mark; Horesh, Nadav
2006-02-01
In years past, the picture quality of electronic video systems was limited by the image sensor. At present, the resolution of miniature image sensors, as in medical endoscopy, is typically superior to the resolution of the optical system. This "excess resolution" is utilized by Visionsense to create stereoscopic vision. Visionsense has developed a single-chip stereoscopic camera that multiplexes the horizontal dimension of the image sensor into two (left and right) images, compensates for the blur phenomenon, and provides additional depth resolution without sacrificing planar resolution. The camera is based on a dual-pupil imaging objective and an image sensor coated by an array of microlenses (a plenoptic camera). The camera has the advantage of being compact, providing simultaneous acquisition of left and right images, and offering resolution comparable to a dual-chip stereoscopic camera with low- to medium-resolution imaging lenses. A stereoscopic vision system provides an improved 3-dimensional perspective of intra-operative sites that is crucial for advanced minimally invasive surgery and contributes to surgeon performance. An additional advantage of single-chip stereo sensors is improved tolerance to electronic signal noise.
Strategies for Pre-Emptive Mid-Air Collision Avoidance in Budgerigars
Schiffner, Ingo; Srinivasan, Mandyam V.
2016-01-01
We have investigated how birds avoid mid-air collisions during head-on encounters. Trajectories of birds flying towards each other in a tunnel were recorded using high speed video cameras. Analysis and modelling of the data suggest two simple strategies for collision avoidance: (a) each bird veers to its right and (b) each bird changes its altitude relative to the other bird according to a preset preference. Both strategies suggest simple rules by which collisions can be avoided in head-on encounters by two agents, be they animals or machines. The findings are potentially applicable to the design of guidance algorithms for automated collision avoidance on aircraft. PMID:27680488
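Rule (a) above, each agent veering to its own right, can be checked in a toy two-agent simulation (the speeds, geometry, and veer rate are assumed for illustration, not taken from the recorded trajectories):

```python
import numpy as np

def min_separation(veer):
    """Minimum distance between two agents flying head-on along x,
    each veering toward its own right at `veer` m/s (rule (a))."""
    dt, steps = 0.01, 200
    pa, va = np.array([0.0, 0.0]), np.array([5.0, -veer])   # right of a +x flier is -y
    pb, vb = np.array([10.0, 0.0]), np.array([-5.0, veer])  # right of a -x flier is +y
    dmin = np.inf
    for _ in range(steps):
        pa = pa + va * dt
        pb = pb + vb * dt
        dmin = min(dmin, float(np.linalg.norm(pa - pb)))
    return dmin

# Without veering the agents meet head-on; mutual right-veering opens a gap.
print(min_separation(0.0), min_separation(2.0))
```

Because both agents share the same convention, their lateral offsets add rather than cancel, which is why the rule needs no communication; rule (b), the altitude preference, resolves the residual vertical conflict the same way.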
Obstacles encountered in the development of the low vision enhancement system.
Massof, R W; Rickman, D L
1992-01-01
The Johns Hopkins Wilmer Eye Institute and the NASA Stennis Space Center are collaborating on the development of a new high technology low vision aid called the Low Vision Enhancement System (LVES). The LVES consists of a binocular head-mounted video display system, video cameras mounted on the head-mounted display, and real-time video image processing in a system package that is battery powered and portable. Through a phased development approach, several generations of the LVES can be made available to the patient in a timely fashion. This paper describes the LVES project with major emphasis on technical problems encountered or anticipated during the development process.
A small cable tunnel inspection robot design
NASA Astrophysics Data System (ADS)
Zhou, Xiaolong; Guo, Xiaoxue; Huang, Jiangcheng; Xiao, Jie
2017-04-01
Modern cities increasingly route electricity through underground cable tunnels, which reduces the impact of high-voltage overhead lines on urban appearance and function. To reduce the dangers and high labor intensity of manual cable tunnel inspection, we designed a small tracked chassis combined with a two-degree-of-freedom robotic arm and a two-degree-of-freedom camera pan-tilt for cable tunnel inspection work. The tracked chassis adopts a simple return-roller, damped structure. The robotic arm has three parallel shafts and performs up-down and rotational motions. The two-degree-of-freedom camera pan-tilt monitors the cable tunnel through 360° with no blind angle. The design is simple, practical, and efficient.
LAUNCH (SOLID ROCKET BOOSTER [SRB]) - STS-1
1981-04-12
S81-30505 (12 April 1981) --- Separation of space shuttle Columbia's external tank, photographed by motion picture cameras in the umbilical bays, occurred following the shutdown of the vehicle's three main engines. Columbia's cameras were able to record the bottom side of the tank as the orbiter headed toward its Earth-orbital mission with astronauts John W. Young and Robert L. Crippen aboard and the fuel tank fell toward Earth, passing through the atmosphere rapidly. Liquid oxygen and liquid hydrogen umbilical connectors can be seen at the bottom of the tank. For orientation, the photo should be held with the rounded end at bottom of the frame. Photo credit: NASA
Camera-Only Kinematics for Small Lunar Rovers
NASA Astrophysics Data System (ADS)
Fang, E.; Suresh, S.; Whittaker, W.
2016-11-01
Knowledge of the kinematic state of rovers is critical. Existing methods add sensors and wiring to moving parts, which can fail and adds mass and volume. This research presents a method to optically determine kinematic state using a single camera.
Hydra multiple head star sensor and its in-flight self-calibration of optical heads alignment
NASA Astrophysics Data System (ADS)
Majewski, L.; Blarre, L.; Perrimon, N.; Kocher, Y.; Martinez, P. E.; Dussy, S.
2017-11-01
HYDRA is EADS SODERN's new product line of APS-based autonomous star trackers. The baseline is a multiple-head sensor made of three separated optical heads and one electronic unit. The chosen concept offers more than three single-head star trackers working independently: since HYDRA merges all fields of view, the result is a more accurate, more robust and completely autonomous multiple-head sensor, releasing the AOCS from the need to manage the outputs of independent single-head star trackers. Specific to the multiple-head architecture and the underlying data fusion is the calibration of the relative alignments between the sensor optical heads; the performance of the sensor depends on its estimation of these alignments. The HYDRA design is first recalled in this paper, along with the simplifications it can bring at system (AOCS) level. Self-calibration of optical head alignment is then presented through descriptions and simulation results, demonstrating the performance of a key part of the HYDRA multiple-head concept.
Sentinel Node Detection in Head and Neck Malignancies: Innovations in Radioguided Surgery
Vermeeren, L.; Klop, W. M. C.; van den Brekel, M. W. M.; Balm, A. J. M.; Nieweg, O. E.; Valdés Olmos, R. A.
2009-01-01
Sentinel node mapping is becoming a routine procedure for staging of various malignancies, because it can determine lymph node status more precisely. Due to anatomical problems, localizing sentinel nodes in the head and neck region on the basis of conventional images can be difficult. New diagnostic tools can provide better visualization of sentinel nodes. In an attempt to keep up with possible scientific progress, this article reviews new and innovative tools for sentinel node localization in this specific area. The overview comprises a short introduction of the sentinel node procedure as well as indications in the head and neck region. Then the results of SPECT/CT for sentinel node detection are described. Finally, a portable gamma camera to enable intraoperative real-time imaging with improved sentinel node detection is described. PMID:20016804
High-resolution, continuous field-of-view (FOV), non-rotating imaging system
NASA Technical Reports Server (NTRS)
Huntsberger, Terrance L. (Inventor); Stirbl, Robert C. (Inventor); Aghazarian, Hrand (Inventor); Padgett, Curtis W. (Inventor)
2010-01-01
A high resolution CMOS imaging system especially suitable for use in a periscope head. The imaging system includes a sensor head for scene acquisition, and a control apparatus inclusive of distributed processors and software for device-control, data handling, and display. The sensor head encloses a combination of wide field-of-view CMOS imagers and narrow field-of-view CMOS imagers. Each bank of imagers is controlled by a dedicated processing module in order to handle information flow and image analysis of the outputs of the camera system. The imaging system also includes automated or manually controlled display system and software for providing an interactive graphical user interface (GUI) that displays a full 360-degree field of view and allows the user or automated ATR system to select regions for higher resolution inspection.
Coherent infrared imaging camera (CIRIC)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hutchinson, D.P.; Simpson, M.L.; Bennett, C.A.
1995-07-01
New developments in 2-D, wide-bandwidth HgCdTe (MCT) and GaAs quantum-well infrared photodetectors (QWIP), coupled with Monolithic Microwave Integrated Circuit (MMIC) technology, are now making focal plane array coherent infrared (IR) cameras viable. Unlike conventional IR cameras, which provide only thermal data about a scene or target, a coherent camera based on optical heterodyne interferometry will also provide spectral and range information. Each pixel of the camera, consisting of a single photo-sensitive heterodyne mixer followed by an intermediate frequency amplifier and illuminated by a separate local oscillator beam, constitutes a complete optical heterodyne receiver. Applications of coherent IR cameras are numerous and include target surveillance, range detection, chemical plume evolution, monitoring stack plume emissions, and wind shear detection.
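Each heterodyne pixel works by mixing the scene return with the local oscillator and keeping the difference (intermediate) frequency, which carries the spectral and range information. A scaled-down numeric sketch, with radio-range frequencies standing in for optical ones:

```python
import numpy as np

fs = 1e6                          # sample rate (Hz); a stand-in for optical detection
t = np.arange(0, 1e-2, 1 / fs)    # 10 ms record
f_sig, f_lo = 120e3, 100e3        # "signal" and local-oscillator frequencies

# Square-law mixing produces components at f_sig +/- f_lo; the detector
# and IF amplifier keep only the difference (intermediate) frequency.
mixed = np.cos(2 * np.pi * f_sig * t) * np.cos(2 * np.pi * f_lo * t)

spec = np.abs(np.fft.rfft(mixed))
freqs = np.fft.rfftfreq(len(t), 1 / fs)
low_band = freqs < 50e3                      # look below the sum frequency
f_if = freqs[low_band][np.argmax(spec[low_band])]
print(f_if)   # the 20 kHz beat, carrying the spectral/range information
```

The recovered IF is exactly f_sig − f_lo; per-pixel local-oscillator illumination makes every pixel of the camera an independent receiver of this kind.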
Deep Learning-Based Gaze Detection System for Automobile Drivers Using a NIR Camera Sensor.
Naqvi, Rizwan Ali; Arsalan, Muhammad; Batchuluun, Ganbayar; Yoon, Hyo Sik; Park, Kang Ryoung
2018-02-03
A paradigm shift is required to prevent the increasing automobile accident deaths that are mostly due to the inattentive behavior of drivers. Knowledge of gaze region can provide valuable information regarding a driver's point of attention. Accurate and inexpensive gaze classification systems in cars can improve safe driving. However, monitoring real-time driving behaviors and conditions presents some challenges: dizziness due to long drives, extreme lighting variations, glasses reflections, and occlusions. Past studies on gaze detection in cars have been chiefly based on head movements. The margin of error in gaze detection increases when drivers gaze at objects by moving their eyes without moving their heads. To solve this problem, a pupil center corneal reflection (PCCR)-based method has been considered. However, the error of accurately detecting the pupil center and corneal reflection center is increased in a car environment due to various environment light changes, reflections on glasses surface, and motion and optical blurring of captured eye image. In addition, existing PCCR-based methods require initial user calibration, which is difficult to perform in a car environment. To address this issue, we propose a deep learning-based gaze detection method using a near-infrared (NIR) camera sensor considering driver head and eye movement that does not require any initial user calibration. The proposed system is evaluated on our self-constructed database as well as on open Columbia gaze dataset (CAVE-DB). The proposed method demonstrated greater accuracy than the previous gaze classification methods.
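The classical PCCR baseline that this paper contrasts with maps the pupil-center-to-glint vector to a gaze point through a polynomial fitted during a user calibration phase, exactly the step the proposed deep-learning method removes. A toy sketch of that baseline (the calibration data, feature polynomial, and coefficients are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

def features(v):
    # Second-order polynomial features of the PCCR vector (x, y).
    x, y = v[:, 0], v[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

# Hypothetical calibration set: PCCR vectors and the screen points gazed at.
vecs = rng.uniform(-1, 1, (30, 2))
true_map = np.array([[10, 3.0, 0.5, 0.2, 0.1, 0.0],
                     [5, 0.4, 8.0, 0.1, 0.0, 0.3]]).T   # (6, 2) coefficients
screen = features(vecs) @ true_map

# Calibration = least-squares fit of the polynomial mapping.
coef, *_ = np.linalg.lstsq(features(vecs), screen, rcond=None)
pred = features(np.array([[0.2, -0.3]])) @ coef         # gaze estimate for a new vector
```

The fit is exact on noiseless synthetic data; in a car, glint localization errors and the per-user calibration burden are what motivate the learned, calibration-free approach above.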
Carlson, Jay; Kowalczuk, Jędrzej; Psota, Eric; Pérez, Lance C
2012-01-01
Robotic surgical platforms require vision feedback systems, which often consist of low-resolution, expensive, single-imager analog cameras. These systems are retooled for 3D display by simply doubling the cameras and outboard control units. Here, a fully-integrated digital stereoscopic video camera employing high-definition sensors and a class-compliant USB video interface is presented. This system can be used with low-cost PC hardware and consumer-level 3D displays for tele-medical surgical applications including military medical support, disaster relief, and space exploration.
Estimating tiger abundance from camera trap data: Field surveys and analytical issues
Karanth, K. Ullas; Nichols, James D.; O'Connell, Allan F.
2011-01-01
Automated photography of tigers Panthera tigris for purely illustrative purposes was pioneered by British forester Fred Champion (1927, 1933) in India in the early part of the Twentieth Century. However, it was McDougal (1977) in Nepal who first used camera traps, equipped with single-lens reflex cameras activated by pressure pads, to identify individual tigers and study their social and predatory behaviors. These attempts involved a small number of expensive, cumbersome camera traps, and were not, in any formal sense, directed at “sampling” tiger populations.
A Robust Mechanical Sensing System for Unmanned Sea Surface Vehicles
NASA Technical Reports Server (NTRS)
Kulczycki, Eric A.; Magnone, Lee J.; Huntsberger, Terrance; Aghazarian, Hrand; Padgett, Curtis W.; Trotz, David C.; Garrett, Michael S.
2009-01-01
The need for autonomous navigation and intelligent control of unmanned sea surface vehicles requires a mechanically robust sensing architecture that is watertight, durable, and insensitive to vibration and shock loading. The sensing system developed here comprises four black and white cameras and a single color camera. The cameras are rigidly mounted to a camera bar that can be reconfigured for mounting on multiple vehicles, and act as both navigational cameras and application cameras. The cameras are housed in watertight casings to protect them and their electronics from moisture and wave splashes. Two of the black and white cameras are positioned to provide lateral vision. They are angled away from the front of the vehicle at horizontal angles to provide ideal fields of view for mapping and autonomous navigation. The other two black and white cameras are positioned at an angle into the color camera's field of view to support vehicle applications. These two cameras provide an overlap, as well as a backup to the front camera. The color camera is positioned directly in the middle of the bar, aimed straight ahead. This system is applicable to any sea-going vehicle, both on Earth and in space.
NASA Astrophysics Data System (ADS)
Maj, P.; Kasiński, K.; Gryboś, P.; Szczygieł, R.; Kozioł, A.
2015-12-01
Integrated circuits designed for specific applications generally use non-standard communication methods. Hybrid pixel detector readout electronics produces a huge amount of data as a result of the high number of frames per second. The data need to be transmitted to a higher-level system without limiting the ASIC's capabilities. Nowadays, the Camera Link interface is still one of the fastest communication methods, allowing transmission speeds of up to 800 MB/s. In order to communicate between a higher-level system and the ASIC with a dedicated protocol, an FPGA with dedicated code is required. The configuration data are received from the PC and written to the ASIC. At the same time, the same FPGA should be able to transmit the data from the ASIC to the PC at very high speed. The camera should be an embedded system enabling autonomous operation and self-monitoring. In the presented solution, at least three different hardware platforms are used: an FPGA, a microprocessor with a real-time operating system, and a PC with end-user software. We present the use of a single software platform for high-speed data transfer from a 65k-pixel camera to the personal computer.
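As a rough feasibility check of the figures in this abstract, the link bandwidth bounds the achievable frame rate. A minimal sketch: only the 800 MB/s peak rate and the 65k-pixel format come from the text; the 2 bytes-per-pixel frame size is an assumption.

```python
# Back-of-envelope Camera Link throughput check.
# Assumption: 16-bit (2-byte) data per pixel; 800 MB/s and 65,536 pixels
# are the figures given in the abstract.
LINK_BYTES_PER_S = 800e6
PIXELS = 65_536
BYTES_PER_PIXEL = 2

frame_bytes = PIXELS * BYTES_PER_PIXEL       # 131,072 bytes per frame
max_fps = LINK_BYTES_PER_S / frame_bytes     # upper bound on frame rate
print(round(max_fps))                        # 6104
```

Under these assumptions the link saturates at roughly 6,100 frames per second, which is why the FPGA must stream continuously rather than buffer whole sequences.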
Optical flow and driver's kinematics analysis for state of alert sensing.
Jiménez-Pinto, Javier; Torres-Torriti, Miguel
2013-03-28
Road accident statistics from different countries show that a significant number of accidents occur due to driver's fatigue and lack of awareness to traffic conditions. In particular, about 60% of the accidents in which long haul truck and bus drivers are involved are attributed to drowsiness and fatigue. It is thus fundamental to improve non-invasive systems for sensing a driver's state of alert. One of the main challenges to correctly resolve the state of alert is measuring the percentage of eyelid closure over time (PERCLOS), despite the driver's head and body movements. In this paper, we propose a technique that involves optical flow and driver's kinematics analysis to improve the robustness of the driver's alert state measurement under pose changes using a single camera with near-infrared illumination. The proposed approach infers and keeps track of the driver's pose in 3D space in order to ensure that eyes can be located correctly, even after periods of partial occlusion, for example, when the driver stares away from the camera. Our experiments show the effectiveness of the approach with a correct eyes detection rate of 99.41%, on average. The results obtained with the proposed approach in an experiment involving fifteen persons under different levels of sleep deprivation also confirm the discriminability of the fatigue levels. In addition to the measurement of fatigue and drowsiness, the pose tracking capability of the proposed approach has potential applications in distraction assessment and alerting of machine operators.
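PERCLOS itself is a simple statistic over per-frame eyelid measurements. A minimal sketch of the common P80 variant (the threshold and sample values below are illustrative, not taken from the paper):

```python
def perclos(closure_fractions, threshold=0.8):
    """Fraction of frames in which the eyelid is at least `threshold`
    (e.g. 80%) closed -- the P80 variant of PERCLOS."""
    closed = sum(1 for c in closure_fractions if c >= threshold)
    return closed / len(closure_fractions)

# One sample per video frame; 3 of 10 frames have eyes >= 80% closed.
samples = [0.1, 0.2, 0.9, 1.0, 0.3, 0.85, 0.2, 0.1, 0.4, 0.5]
print(perclos(samples))  # 0.3
```

The difficulty the paper addresses is not this final ratio but keeping the eyes reliably located (despite pose changes and occlusion) so that the per-frame closure values feeding it are valid.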
System for real-time generation of georeferenced terrain models
NASA Astrophysics Data System (ADS)
Schultz, Howard J.; Hanson, Allen R.; Riseman, Edward M.; Stolle, Frank; Zhu, Zhigang; Hayward, Christopher D.; Slaymaker, Dana
2001-02-01
A growing number of law enforcement applications, especially in the areas of border security, drug enforcement and anti-terrorism, require high-resolution wide area surveillance from unmanned air vehicles. At the University of Massachusetts we are developing an aerial reconnaissance system capable of generating high resolution, geographically registered terrain models (in the form of a seamless mosaic) in real-time from a single down-looking digital video camera. The efficiency of the processing algorithms, as well as the simplicity of the hardware, will provide the user with the ability to produce and roam through stereoscopic geo-referenced mosaic images in real-time, and to automatically generate highly accurate 3D terrain models offline in a fraction of the time currently required by conventional softcopy photogrammetry systems. The system is organized around a set of integrated sensor and software components. The instrumentation package is composed of several inexpensive commercial-off-the-shelf components, including a digital video camera, a differential GPS, and a 3-axis heading and reference system. At the heart of the system is a set of software tools for image registration, mosaic generation, geo-location and aircraft state vector recovery. Each process is designed to efficiently handle the data collected by the instrument package. Particular attention is given to minimizing geospatial errors at each stage, as well as modeling propagation of errors through the system. Preliminary results for an urban and forested scene are discussed in detail.
Distributing digital video to multiple computers
Murray, James A.
2004-01-01
Video is an effective teaching tool, and live video microscopy is especially helpful in teaching dissection techniques and the anatomy of small neural structures. Digital video equipment is more affordable now and allows easy conversion from older analog video devices. I here describe a simple technique for bringing digital video from one camera to all of the computers in a single room. This technique allows students to view and record the video from a single camera on a microscope. PMID:23493464
NASA Astrophysics Data System (ADS)
Haubeck, K.; Prinz, T.
2013-08-01
The use of Unmanned Aerial Vehicles (UAVs) for surveying archaeological sites is becoming more and more common due to their advantages in rapidity of data acquisition, cost-efficiency and flexibility. One possible usage is the documentation and visualization of historic geo-structures and -objects using UAV-attached digital small frame cameras. These monoscopic cameras offer the possibility to obtain close-range aerial photographs, but - when an accurate nadir-waypoint flight is not possible due to choppy or windy weather conditions - they also present the problem that two single aerial images do not always meet the overlap required for 3D photogrammetric purposes. In this paper, we present an attempt to replace the monoscopic camera with a calibrated low-cost stereo camera that takes two pictures from a slightly different angle at the same time. Our results show that such a geometrically predefined stereo image pair can be used for photogrammetric purposes, e.g. the creation of digital terrain models (DTMs) and orthophotos or the 3D extraction of single geo-objects. Because of the limited geometric photobase of the applied stereo camera and the resulting base-height ratio, however, the accuracy of the DTM depends directly on the UAV flight altitude.
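The closing remark about the base-height ratio follows the standard normal-case stereo relation, in which depth precision degrades with the square of the distance. A hedged sketch with purely illustrative numbers (none of the values below are from the paper):

```python
def depth_std(Z, B, f_px, sigma_px):
    """Normal-case stereo depth precision:
    sigma_Z = (Z**2 / (B * f_px)) * sigma_px
    Z: object distance (m), B: stereo base (m),
    f_px: focal length in pixels, sigma_px: matching accuracy (px)."""
    return (Z ** 2) / (B * f_px) * sigma_px

# Illustrative (assumed) values: 0.2 m photobase, 3000 px focal length,
# 0.5 px matching accuracy, at 30 m and 60 m flight altitude.
low = depth_std(30.0, 0.2, 3000.0, 0.5)
high = depth_std(60.0, 0.2, 3000.0, 0.5)
print(round(low, 2), round(high, 2))  # 0.75 3.0
```

Doubling the flight altitude quadruples the depth error for a fixed photobase, which is the dependence the abstract points out.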
Preliminary Evaluation of a Commercial 360 Multi-Camera Rig for Photogrammetric Purposes
NASA Astrophysics Data System (ADS)
Teppati Losè, L.; Chiabrando, F.; Spanò, A.
2018-05-01
The research presented in this paper is focused on a preliminary evaluation of a 360 multi-camera rig: the possibilities to use the images acquired by the system in a photogrammetric workflow and for the creation of spherical images are investigated and different tests and analyses are reported. Particular attention is dedicated to different operative approaches for the estimation of the interior orientation parameters of the cameras, both from an operative and theoretical point of view. The consistency of the six cameras that compose the 360 system was in depth analysed adopting a self-calibration approach in a commercial photogrammetric software solution. A 3D calibration field was projected and created, and several topographic measurements were performed in order to have a set of control points to enhance and control the photogrammetric process. The influence of the interior parameters of the six cameras were analyse both in the different phases of the photogrammetric workflow (reprojection errors on the single tie point, dense cloud generation, geometrical description of the surveyed object, etc.), both in the stitching of the different images into a single spherical panorama (some consideration on the influence of the camera parameters on the overall quality of the spherical image are reported also in these section).
Can imaginary head tilt shorten postrotatory nystagmus?
Gianna-Poulin, C C; Voelker, C C; Erickson, B; Black, F O
2001-08-01
In healthy subjects, head tilt upon cessation of a constant-velocity yaw head rotation shortens the duration of postrotatory nystagmus. The presumed mechanism for this effect is that the velocity storage of horizontal semicircular canal inputs is being discharged by otolith organ inputs which signal a constant yaw head position when the head longitudinal axis is no longer earth-vertical. In the present study, normal subjects were rotated head upright in the dark on a vertical-axis rotational chair at 60 degrees/s for 75 s and were required to perform a specific task as soon as the chair stopped. Horizontal position of the right eye was recorded with an infra-red video camera. The average eye velocity (AEV) was measured over a 30-s interval following chair acceleration/deceleration. The ratios (postrotatory AEV/perrotatory AEV) were 1.1 (SD 0.112) when subjects (N=10) kept their head erect, 0.414 (SD 0.083) when subjects tilted their head forward, 1.003 (SD 0.108) when subjects imagined watching a TV show, 1.012 (SD 0.074) when subjects imagined looking at a painting on a wall, and 0.995 (SD 0.074) when subjects imagined floating in a prone position on a lake. Thus, while actual head tilt reduced postrotatory nystagmus, the imagination tasks did not have a statistically significant effect on postrotatory nystagmus. Therefore, velocity storage does not appear to be under the influence of cortical neural signals when subjects imagine that they are floating in a prone orientation.
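The postrotatory/perrotatory AEV ratios reported above are simple means over the 30-s analysis windows. A minimal sketch (the velocity samples are invented for illustration; only the ratio definition follows the abstract):

```python
def average_eye_velocity(velocities):
    """Mean horizontal eye velocity (deg/s) over an analysis interval."""
    return sum(velocities) / len(velocities)

# Hypothetical slow-phase velocity samples after chair acceleration
# (perrotatory) and after deceleration (postrotatory).
per = [40.0, 35.0, 30.0, 25.0]
post = [38.0, 36.0, 31.0, 25.0]
ratio = average_eye_velocity(post) / average_eye_velocity(per)
print(round(ratio, 2))  # 1.0
```

A ratio near 1 corresponds to the head-erect and imagination conditions in the study, while actual forward head tilt drove the ratio down toward 0.4.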
Eye-head coordination during free exploration in human and cat.
Einhäuser, Wolfgang; Moeller, Gudrun U; Schumann, Frank; Conradt, Jörg; Vockeroth, Johannes; Bartl, Klaus; Schneider, Erich; König, Peter
2009-05-01
Eye, head, and body movements jointly control the direction of gaze and the stability of retinal images in most mammalian species. The contribution of the individual movement components, however, will largely depend on the ecological niche the animal occupies and the layout of the animal's retina, in particular its photoreceptor density distribution. Here the relative contribution of eye-in-head and head-in-world movements in cats is measured, and the results are compared to recent human data. For the cat, a lightweight custom-made head-mounted video setup was used (CatCam). Human data were acquired with the novel EyeSeeCam device, which measures eye position to control a gaze-contingent camera in real time. For both species, analysis was based on simultaneous recordings of eye and head movements during free exploration of a natural environment. Despite the substantial differences in ecological niche, photoreceptor density, and saccade frequency, eye-movement characteristics in both species are remarkably similar. Coordinated eye and head movements dominate the dynamics of the retinal input. Interestingly, compensatory (gaze-stabilizing) movements play a more dominant role in humans than they do in cats. This finding was interpreted to be a consequence of substantially different timescales for head movements, with cats' head movements showing roughly 5-fold faster dynamics than humans'. For both species, models and laboratory experiments therefore need to account for this rich input dynamic to obtain validity for ecologically realistic settings.
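The decomposition of gaze into eye-in-head and head-in-world components reduces, for a single horizontal axis, to a sum of angles. A toy sketch of this bookkeeping (a deliberate simplification, not the authors' analysis pipeline):

```python
def gaze_in_world(eye_in_head_deg, head_in_world_deg):
    """Horizontal gaze direction as the sum of the eye-in-head and
    head-in-world angles (single-axis simplification)."""
    return eye_in_head_deg + head_in_world_deg

# A compensatory (gaze-stabilizing) eye movement cancels the head movement,
# keeping the retinal image stable:
print(gaze_in_world(-10.0, 10.0))  # 0.0
# A coordinated gaze shift adds the two components instead:
print(gaze_in_world(15.0, 10.0))  # 25.0
```

The paper's comparison hinges on how much of the recorded eye movement falls into the compensatory (cancelling) versus coordinated (adding) regime in each species.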
Waniewski, Jacek; Antosiewicz, Stefan; Baczynski, Daniel; Poleszczuk, Jan; Pietribiasi, Mauro; Lindholm, Bengt; Wankowicz, Zofia
2017-10-27
The sequential peritoneal equilibration test (sPET) is based on the consecutive performance of the peritoneal equilibration test (PET, 4-hour, glucose 2.27%) and the mini-PET (1-hour, glucose 3.86%), and the estimation of peritoneal transport parameters with the 2-pore model. It enables the assessment of the functional transport barrier for fluid and small solutes. The objective of this study was to check whether the estimated model parameters can serve as better and earlier indicators of the changes in the peritoneal transport characteristics than directly measured transport indices that depend on several transport processes. Seventeen patients were examined using sPET twice, with an interval of about 8 months (230 ± 60 days). There was no difference between the observational parameters measured in the 2 examinations. The indices for solute transport, but not net ultrafiltration (UF), were well correlated between the examinations. Among the estimated parameters, a significant decrease between the 2 examinations was found only for hydraulic permeability (LpS) and osmotic conductance for glucose, whereas the other parameters remained unchanged. These fluid transport parameters did not correlate with D/P for creatinine, although the decrease in LpS values between the examinations was observed mostly for patients with low D/P for creatinine. We conclude that changes in fluid transport parameters, hydraulic permeability and osmotic conductance for glucose, as assessed by the pore model, may precede the changes in small solute transport. The systematic assessment of fluid transport status needs specific clinical and mathematical tools besides the standard PET tests.
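The D/P index referred to above is the dialysate-to-plasma concentration quotient, the standard PET measure of small-solute transport. A minimal sketch (the concentration values are illustrative, not from the study):

```python
def d_over_p(dialysate_conc, plasma_conc):
    """Dialysate-to-plasma concentration ratio, the standard PET index
    of small-solute transport (dimensionless, 0 to 1)."""
    return dialysate_conc / plasma_conc

# Illustrative 4-hour creatinine concentrations (same units in both):
print(round(d_over_p(6.5, 10.0), 2))  # 0.65
```

The study's point is that such directly measured quotients lag behind the model-estimated fluid transport parameters (LpS, osmotic conductance) as indicators of change.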
A zonal wavefront sensor with multiple detector planes
NASA Astrophysics Data System (ADS)
Pathak, Biswajit; Boruah, Bosanta R.
2018-03-01
A conventional zonal wavefront sensor estimates the wavefront from the data captured in a single detector plane using a single camera. In this paper, we introduce a zonal wavefront sensor which comprises multiple detector planes instead of a single detector plane. The proposed sensor is based on an array of custom designed plane diffraction gratings followed by a single focusing lens. The laser beam whose wavefront is to be estimated is incident on the grating array and one of the diffracted orders from each grating is focused on the detector plane. The setup, by employing a beam splitter arrangement, facilitates focusing of the diffracted beams on multiple detector planes where multiple cameras can be placed. The use of multiple cameras in the sensor can offer several advantages in the wavefront estimation. For instance, the proposed sensor can provide superior inherent centroid detection accuracy that cannot be achieved by the conventional system. It can also provide enhanced dynamic range and reduced crosstalk performance. We present here the results from a proof of principle experimental arrangement that demonstrate the advantages of the proposed wavefront sensing scheme.
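The centroid detection such sensors rely on is typically an intensity-weighted center of mass of each focal spot. A minimal sketch (the 3x3 spot is illustrative; real sensors operate on larger sub-apertures with background subtraction):

```python
def centroid(image):
    """Intensity-weighted center of mass of a focal spot.
    `image` is a list of rows of non-negative pixel intensities."""
    total = sx = sy = 0.0
    for y, row in enumerate(image):
        for x, v in enumerate(row):
            total += v
            sx += v * x
            sy += v * y
    return sx / total, sy / total

spot = [[0, 1, 0],
        [1, 4, 1],
        [0, 1, 0]]
print(centroid(spot))  # (1.0, 1.0) -- spot centered on the middle pixel
```

The shift of each spot centroid from its reference position gives the local wavefront slope for that zone; the paper's claim is that multiple detector planes improve how accurately these centroids can be found.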
Geiger-mode APD camera system for single-photon 3D LADAR imaging
NASA Astrophysics Data System (ADS)
Entwistle, Mark; Itzler, Mark A.; Chen, Jim; Owens, Mark; Patel, Ketan; Jiang, Xudong; Slomkowski, Krystyna; Rangwala, Sabbir
2012-06-01
The unparalleled sensitivity of 3D LADAR imaging sensors based on single photon detection provides substantial benefits for imaging at long stand-off distances and minimizing laser pulse energy requirements. To obtain 3D LADAR images with single photon sensitivity, we have demonstrated focal plane arrays (FPAs) based on InGaAsP Geiger-mode avalanche photodiodes (GmAPDs) optimized for use at either 1.06 μm or 1.55 μm. These state-of-the-art FPAs exhibit excellent pixel-level performance and the capability for 100% pixel yield on a 32 x 32 format. To realize the full potential of these FPAs, we have recently developed an integrated camera system providing turnkey operation based on FPGA control. This system implementation enables the extremely high frame-rate capability of the GmAPD FPA, and frame rates in excess of 250 kHz (for 0.4 μs range gates) can be accommodated using an industry-standard CameraLink interface in full configuration. Real-time data streaming for continuous acquisition of 2 μs range gate point cloud data with 13-bit time-stamp resolution at 186 kHz frame rates has been established using multiple solid-state storage drives. Range gate durations spanning 4 ns to 10 μs provide broad operational flexibility. The camera also provides real-time signal processing in the form of multi-frame gray-scale contrast images and single-frame time-stamp histograms, and automated bias control has been implemented to maintain a constant photon detection efficiency in the presence of ambient temperature changes. A comprehensive graphical user interface has been developed to provide complete camera control using a simple serial command set, and this command set supports highly flexible end-user customization.
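The conversion from a pixel's photon time stamp to range follows directly from round-trip time-of-flight. A minimal sketch (only the 2 μs range-gate figure comes from the text; the mapping itself is the standard LADAR relation):

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_timestamp(t_seconds):
    """One-way range from a round-trip photon time-of-flight."""
    return C * t_seconds / 2.0

# A detection time-stamped 2 us after the laser pulse corresponds to a
# target near the far edge of a 2 us range gate:
print(round(range_from_timestamp(2e-6)))  # 300 (metres)
```

This also shows why gate duration trades off against frame rate in the abstract: a longer gate covers more range depth but keeps the FPA armed longer per frame.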
NASA Astrophysics Data System (ADS)
Colbert, Fred
2013-05-01
There has been a significant increase in the number of in-house Infrared Thermographic Predictive Maintenance programs for Electrical/Mechanical inspections as compared to out-sourced programs using hired consultants. In addition, the number of infrared consulting services companies offering out-sourced programs has also grown exponentially. These market segments include: Building Envelope (commercial and residential), Refractory, Boiler Evaluations, etc. These surges are driven by two main factors: 1. The low cost of investment in the equipment (the cost of cameras and peripherals continues to decline). 2. Novel marketing campaigns by the camera manufacturers who are looking to sell more cameras into an otherwise saturated market. The key characteristic of these campaigns is to oversimplify the applications and understate the significance of the technical training, specific skills and experience needed to obtain the risk-lowering information that a facility manager needs. These camera-selling campaigns focus on the simplicity of taking a thermogram, but ignore the critical factors of what it takes to actually perform and manage a credible, valid IR program, which in turn exposes everyone to tremendous liability. As in-house and out-sourced consulting services compete for market share head to head in a constricted market space, the price for out-sourced/consulting services drops as providers try to compete on price for more market share. The consequence of this approach is that something must be compromised to stay competitive on price, and that compromise is the knowledge, technical skills and experience of the thermographer. This also ends up being reflected in the skill sets of the in-house thermographer as well. This oversimplification of skill and experience is producing the "Perfect Storm" for Infrared Thermography, for both in-house and out-sourced programs.
Zahl, D A; Schrader, S M; Edwards, P C
2018-05-01
This exploratory study evaluated student perceptions of their ability to self- and peer-assess (i) interpersonal communication skills and (ii) clinical procedures (a head and neck examination) during standardised patient (SP) interactions recorded by Google Glass compared to a static camera. Students compared the Google Glass and static camera recordings using an instrument consisting of 20 Likert-type items and four open- and closed-text items. The Likert-type items asked students to rate how effectively they could assess specific aspects of interpersonal communication and a head and neck examination in these two different types of recordings. The interpersonal communication items included verbal, paraverbal and non-verbal subscales. The open- and closed-text items asked students to report more globally on the differences between the two types of recordings. Descriptive and inferential statistical analyses were conducted for all survey items. An inductive thematic analysis was conducted to determine qualitative emergent themes from the open-text questions. Students found the Glass videos more effective for assessing verbal (t(22) = 2.091, P = 0.048) and paraverbal communication skills (t(22) = 3.304, P = 0.003), whilst they reported that the static camera video was more effective for assessing non-verbal communication skills (t(22) = -2.132, P = 0.044). Four principal themes emerged from the students' open-text responses comparing Glass to static camera recordings for self- and peer assessment: (1) first-person perspective, (2) assessment of non-verbal communication, (3) audiovisual experience and (4) student operation of Glass. Our findings suggest that students perceive Google Glass to be a valuable tool for facilitating self- and peer assessment of SP examinations because of students' perceived ability to emphasise and illustrate communicative and clinical activities from a first-person perspective. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
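The paired t statistics reported in the abstract above compare each student's two ratings of the same recording conditions. A hedged sketch of how such a statistic is computed (the difference scores below are invented, not the study's data):

```python
import math

def paired_t(diffs):
    """Paired-samples t statistic: mean of the per-subject differences
    divided by the standard error of that mean."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)

# Hypothetical per-student rating differences (Glass minus static camera):
diffs = [1, 0, 2, 1, -1, 1, 0, 2, 1, 1]
print(round(paired_t(diffs), 2))  # 2.75
```

With n subjects the statistic has n - 1 degrees of freedom, which is where the "t(22)" notation for the study's 23 students comes from.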
2016-01-01
Digital single-molecule technologies are expanding diagnostic capabilities, enabling the ultrasensitive quantification of targets, such as viral load in HIV and hepatitis C infections, by directly counting single molecules. Replacing fluorescent readout with a robust visual readout that can be captured by any unmodified cell phone camera will facilitate the global distribution of diagnostic tests, including in limited-resource settings where the need is greatest. This paper describes a methodology for developing a visual readout system for digital single-molecule amplification of RNA and DNA by (i) selecting colorimetric amplification-indicator dyes that are compatible with the spectral sensitivity of standard mobile phones, and (ii) identifying an optimal ratiometric image-processing scheme for a selected dye to achieve a readout that is robust to lighting conditions and camera hardware and provides unambiguous quantitative results, even for colorblind users. We also include an analysis of the limitations of this methodology, and provide a microfluidic approach that can be applied to expand dynamic range and improve reaction performance, allowing ultrasensitive, quantitative measurements at volumes as low as 5 nL. We validate this methodology using SlipChip-based digital single-molecule isothermal amplification with λDNA as a model and hepatitis C viral RNA as a clinically relevant target. The innovative combination of isothermal amplification chemistry in the presence of a judiciously chosen indicator dye and ratiometric image processing with SlipChip technology allowed the sequence-specific visual readout of single nucleic acid molecules in nanoliter volumes with an unmodified cell phone camera. When paired with devices that integrate sample preparation and nucleic acid amplification, this hardware-agnostic approach will increase the affordability and the distribution of quantitative diagnostic and environmental tests. PMID:26900709
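Digital assays of this kind quantify targets with standard Poisson statistics: at higher loadings a positive well may hold more than one molecule, so the mean count per well is recovered from the fraction of wells that turn positive. A hedged sketch (the 5 nL well volume is from the text; the 50% positive fraction is invented):

```python
import math

def molecules_per_well(positive_fraction, well_volume_nl):
    """Poisson-corrected mean molecule count per well, and the implied
    concentration in molecules/nL, from the observed positive fraction."""
    lam = -math.log(1.0 - positive_fraction)  # mean molecules per well
    return lam, lam / well_volume_nl

lam, conc = molecules_per_well(0.5, 5.0)  # 50% positive, 5 nL wells
print(round(lam, 3), round(conc, 4))  # 0.693 0.1386
```

This is the generic digital-PCR/digital-LAMP counting relation, not a formula stated in the paper; the paper's contribution is making each well's positive/negative call readable by an unmodified phone camera.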
1996-05-22
S77-E-5073 (22 May 1996) --- From its position at 175 statute miles above Earth, the Space Shuttle Endeavour has encountered some colorful and attractive scenes heading into sunsets and sunrises. This particular encounter, captured with an Electronic Still Camera (ESC), occurred on flight day four, during which the six-member crew deployed the Passive Aerodynamically Stabilized Magnetically Damped Satellite (PAMS) - Satellite Test Unit (STU).
Eyes in the Back of Your Head: Cameras for Classroom Observation
ERIC Educational Resources Information Center
Szente, Judit; Massey, Claity; Hoot, James L.
2005-01-01
This article discusses a unique distance learning facility that allows people, while they are busy working with individuals in another corner of the room, to see what exactly is going on in a learning center and how children are communicating with one another. The article is divided up into the following sections: A New Window on Learning;…
NASA Technical Reports Server (NTRS)
2004-01-01
[figure removed for brevity, see original site] Figure 1 In the quest to determine if a pebble was jamming the rock abrasion tool on NASA's Mars Exploration Rover Opportunity, scientists and engineers examined this up-close, approximate true-color image of the tool. The picture was taken by the rover's panoramic camera, using filters centered at 601, 535, and 482 nanometers, at 12:47 local solar time on sol 200 (August 16, 2004).
Colored spots have been drawn on this image corresponding to regions where panoramic camera reflectance spectra were acquired (see chart in Figure 1). Those regions are: the grinding wheel heads (yellow); the rock abrasion tool magnets (green); the supposed pebble (red); a sunlit portion of the aluminum rock abrasion tool housing (purple); and a shadowed portion of the rock abrasion tool housing (brown). These spectra demonstrated that the composition of the supposed pebble was clearly different from that of the sunlit and shadowed portions of the rock abrasion tool, while similar to that of the dust-coated rock abrasion tool magnets and grinding heads. This led the team to conclude that the object disabling the rock abrasion tool was indeed a martian pebble.
NASA Astrophysics Data System (ADS)
Nokata, Makoto; Hirai, Wataru; Itatani, Ryosuke
This paper presents a robotic training system that can exercise the user without bodily restraint; neither markers nor sensors are attached to the trainee. We developed a robot system with four mounted components: a laser sensor, a camera, a cushion, and an electric motor. This paper shows the method used for determining whether the trainee was bending forward or backward while walking, and the extent of the tilt, using the recorded image of the back of the trainee's head. A characteristic of our software algorithm is that the image is divided into 9 quadrants, and each quadrant undergoes a Hough transformation. We verified experimentally that, by using our algorithms on the four patterns of forward, backward, diagonal, and crouching movement, the tilt of the trainee's body is accurately determined. We created a flowchart for determining the direction of movement according to the experimental results. By adjusting the values used to make the distinction according to the position and angle of the camera and the width of the back of the trainee's head, we were able to accurately determine the walking condition of the trainee and achieve early detection of the start of a fall.
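The 9-quadrant division described above can be sketched as follows. This is a minimal illustration of the partitioning step only (the per-quadrant Hough transformation is omitted, and the 9x9 test image is invented):

```python
def split_into_quadrants(image, rows=3, cols=3):
    """Divide an image (a list of pixel rows) into a rows x cols grid of
    sub-images; each sub-image would then be fed to a Hough transform."""
    h, w = len(image), len(image[0])
    rh, cw = h // rows, w // cols
    return [[row[c * cw:(c + 1) * cw]
             for row in image[r * rh:(r + 1) * rh]]
            for r in range(rows) for c in range(cols)]

# A synthetic 9x9 image yields nine 3x3 blocks:
image = [[x + 10 * y for x in range(9)] for y in range(9)]
blocks = split_into_quadrants(image)
print(len(blocks), len(blocks[0]), len(blocks[0][0]))  # 9 3 3
```

Analysing each block separately lets line orientation be estimated locally, so the head's position within the frame (which blocks contain edges, and at what angles) can drive the forward/backward/diagonal/crouching distinction.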
Murine fundus fluorescein angiography: An alternative approach using a handheld camera.
Ehrenberg, Moshe; Ehrenberg, Scott; Schwob, Ouri; Benny, Ofra
2016-07-01
In today's modern pharmacologic approach to treating sight-threatening retinal vascular disorders, there is an increasing demand for a compact, mobile, lightweight and cost-effective fluorescein fundus camera to document the effects of antiangiogenic drugs on laser-induced choroidal neovascularization (CNV) in mice and other experimental animals. We have adapted the use of the Kowa Genesis Df Camera to perform Fundus Fluorescein Angiography (FFA) in mice. The 1 kg, 28 cm high camera has built-in barrier and exciter filters to allow digital FFA recording to a Compact Flash memory card. Furthermore, this handheld unit has a steady Indirect Lens Holder that firmly attaches to the main unit, that securely holds a 90 diopter lens in position, in order to facilitate appropriate focus and stability, for photographing the delicate central murine fundus. This easily portable fundus fluorescein camera can effectively record exceptional central retinal vascular detail in murine laser-induced CNV, while readily allowing the investigator to adjust the camera's position according to the variable head and eye movements that can randomly occur while the mouse is optimally anesthetized. This movable image recording device, with efficiencies of space, time, cost, energy and personnel, has enabled us to accurately document the alterations in the central choroidal and retinal vasculature following induction of CNV, implemented by argon-green laser photocoagulation and disruption of Bruch's Membrane, in the experimental murine model of exudative macular degeneration. Copyright © 2016 Elsevier Ltd. All rights reserved.
Investigation into the use of photoanthropometry in facial image comparison.
Moreton, Reuben; Morley, Johanna
2011-10-10
Photoanthropometry is a metric-based facial image comparison technique. Measurements of the face are taken from an image using predetermined facial landmarks. Measurements are then converted to proportionality indices (PIs) and compared to PIs from another facial image. Photoanthropometry has been presented as a facial image comparison technique in UK courts for over 15 years. It is generally accepted that extrinsic factors (e.g. orientation of the head, camera angle and distance from the camera) can cause discrepancies in anthropometric measurements of the face from photographs. However, there has been limited empirical research into quantifying the influence of such variables. The aim of this study was to determine the reliability of photoanthropometric measurements between different images of the same individual taken with different angulations of the camera. The study examined the facial measurements of 25 individuals from high-resolution photographs taken at different horizontal and vertical camera angles in a controlled environment. Results show that the degree of variability in facial measurements of the same individual due to variations in camera angle can be as great as the variability of facial measurements between different individuals. Results suggest that photoanthropometric facial comparison, as it is currently practiced, is unsuitable for elimination purposes. Preliminary investigations into the effects of distance from the camera and image resolution in poor-quality images suggest that such images are not an accurate representation of an individual's face; however, further work is required. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
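The conversion from landmark measurements to proportionality indices can be illustrated with a short sketch. The landmark names and the choice of inter-pupil distance as the normalizing base are hypothetical; the paper only states that distances are divided to form scale-free PIs.

```python
import math

def dist(p, q):
    """Euclidean distance between two 2D landmark coordinates."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def proportionality_indices(landmarks, base=("pupil_l", "pupil_r")):
    """Convert raw inter-landmark distances into proportionality indices
    by dividing each by a base distance. Dividing out a reference length
    makes the indices scale-free, so images taken at different
    magnifications can be compared directly."""
    b = dist(landmarks[base[0]], landmarks[base[1]])
    names = sorted(landmarks)
    pis = {}
    for i, a in enumerate(names):
        for c in names[i + 1:]:
            if {a, c} != set(base):  # the base pair itself is always 1.0
                pis[(a, c)] = dist(landmarks[a], landmarks[c]) / b
    return pis
```

Scale invariance is the whole point: doubling every coordinate (a closer camera) leaves every PI unchanged, which is why the paper's finding that camera *angle* still perturbs the PIs is significant.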
Motion capture for human motion measuring by using single camera with triangle markers
NASA Astrophysics Data System (ADS)
Takahashi, Hidenori; Tanaka, Takayuki; Kaneko, Shun'ichi
2005-12-01
This study aims to realize motion capture of 3D human motion using a single camera. Although motion capture using multiple cameras is widely used in sports, medicine, engineering, and other fields, no optical motion capture method using one camera has been established. In this paper, the authors achieve 3D motion capture with one camera, named Mono-MoCap (MMC), on the basis of two calibration methods and triangle markers whose side lengths are known. The camera calibration produces a 3D coordinate transformation parameter and a lens distortion parameter with the modified DLT method. The triangle markers make it possible to calculate the coordinate value in the depth direction of the camera frame. Experiments measuring 3D position with the MMC in a cubic measurement volume 2 m on a side show an average error in the measured center of gravity of a triangle marker of less than 2 mm. Compared with conventional motion capture using multiple cameras, the MMC has sufficient accuracy for 3D measurement. Also, by placing a triangle marker on each human joint, the MMC was able to capture a walking motion, a standing-up motion, and a bending and stretching motion. In addition, a method using a triangle marker together with conventional spherical markers is proposed. Finally, a method to estimate the position of a marker from its measured velocity is proposed in order to improve the accuracy of the MMC.
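The key idea of recovering depth from a marker of known size can be sketched with the pinhole model. This is a simplified illustration of the principle, not the paper's modified-DLT formulation: it assumes the marker side is roughly parallel to the image plane and ignores lens distortion.

```python
def marker_depth(side_px, side_len, focal_px):
    """Pinhole depth estimate: a side of true length L that projects to
    l pixels under focal length f (in pixels) lies at z = f * L / l.
    A side oblique to the image plane foreshortens, so this single-side
    estimate is a crude upper bound on accuracy."""
    return focal_px * side_len / side_px

def triangle_depth(sides_px, sides_len, focal_px):
    """Average the per-side estimates over the three known sides of the
    triangle marker to reduce noise and foreshortening error."""
    ests = [marker_depth(p, l, focal_px) for p, l in zip(sides_px, sides_len)]
    return sum(ests) / len(ests)
```

This is why a triangle (three known lengths) rather than a single point suffices to pin down the depth coordinate that a lone camera otherwise cannot observe.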
Improved camera for better X-ray powder photographs
NASA Technical Reports Server (NTRS)
Parrish, W.; Vajda, I. E.
1969-01-01
Camera obtains powder-type photographs of single crystals or polycrystalline powder specimens. X-ray diffraction photographs of a powder specimen are characterized by improved resolution and greater intensity. A reasonably good powder pattern of small samples can be produced for identification purposes.
A digital gigapixel large-format tile-scan camera.
Ben-Ezra, M
2011-01-01
Although the resolution of single-lens reflex (SLR) and medium-format digital cameras has increased in recent years, applications for cultural-heritage preservation and computational photography require even higher resolutions. Addressing this issue, the large image plane of a large-format camera can achieve very high resolution without compromising pixel size and thus can provide high-quality, high-resolution images. This digital large-format tile-scan camera can acquire high-quality, high-resolution images of static scenes. It employs unique calibration techniques and a simple algorithm for focal-stack processing of very large images with significant magnification variations. The camera automatically collects overlapping focal stacks and processes them into a high-resolution, extended-depth-of-field image.
Prabhakar, Ramachandran
2012-01-01
Source to surface distance (SSD) plays a very important role in external beam radiotherapy treatment verification. In this study, a simple technique has been developed to verify the SSD automatically with lasers. The study also suggests a methodology for determining the respiratory signal with lasers. Two lasers, red and green, are mounted on the collimator head of a Clinac 2300 C/D linac along with a camera to determine the SSD. Software (SSDLas) was developed to estimate the SSD automatically from the images captured by a 12-megapixel camera. To determine the SSD to a patient surface, the external body contour of the central axis transverse computed tomography (CT) cut is imported into the software. Another important aspect in radiotherapy is the generation of the respiratory signal. The changes in the lasers' separation as the patient breathes are converted to produce a respiratory signal. Multiple frames of laser images were acquired from the camera mounted on the collimator head and each frame was analyzed with SSDLas to generate the respiratory signal. The SSD as observed with the ODI on the machine and the SSD measured by the SSDLas software were found to agree within the tolerance limit. The methodology described for generating the respiratory signals will be useful for the treatment of mobile tumors in sites such as lung, liver, breast, and pancreas. The technique described for determining the SSD and the generation of respiratory signals using lasers is cost effective and simple to implement. Copyright © 2011 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
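The frame-by-frame laser-separation idea can be sketched as below. This is a hypothetical reconstruction of the principle only: the thresholding, centroid clustering, and single-axis separation are assumptions, not details taken from the paper or from SSDLas.

```python
import numpy as np

def spot_separation(frame, thresh=0.5):
    """Horizontal separation of two bright laser spots in a grayscale
    frame normalized to [0, 1]: threshold, find the occupied columns,
    split them at the largest gap, and take centroid-to-centroid distance."""
    mask = frame > thresh
    xs = np.nonzero(mask.any(axis=0))[0]
    gap = np.argmax(np.diff(xs))           # widest gap separates the spots
    left = xs[:gap + 1].mean()
    right = xs[gap + 1:].mean()
    return right - left

def respiratory_signal(frames):
    """Per-frame laser separation: breathing raises and lowers the chest,
    which shifts the projected spots and hence their separation, yielding
    a periodic trace usable as a respiratory signal."""
    return np.array([spot_separation(f) for f in frames])
```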
Changing the Production Pipeline - Use of Oblique Aerial Cameras for Mapping Purposes
NASA Astrophysics Data System (ADS)
Moe, K.; Toschi, I.; Poli, D.; Lago, F.; Schreiner, C.; Legat, K.; Remondino, F.
2016-06-01
This paper discusses the potential of current photogrammetric multi-head oblique cameras, such as the UltraCam Osprey, to improve the efficiency of standard photogrammetric methods for surveying applications like inventory surveys and topographic mapping for public administrations or private customers. In 2015, Terra Messflug (TM), a subsidiary of Vermessung AVT ZT GmbH (Imst, Austria), flew a number of urban areas in Austria, the Czech Republic and Hungary with an UltraCam Osprey Prime multi-head camera system from Vexcel Imaging. In collaboration with FBK Trento (Italy), the data acquired at Imst (a small town in Tyrol, Austria) were analysed and processed to extract precise 3D topographic information. The Imst block comprises 780 images and covers an area of approx. 4.5 km by 1.5 km. Ground truth data are provided in the form of 6 GCPs and several check points surveyed with RTK GNSS. In addition, 3D building data obtained by photogrammetric stereo plotting from a 5 cm nadir flight and a LiDAR point cloud with 10 to 20 measurements per m² are available as reference data or for comparison. The photogrammetric workflow, from flight planning to Dense Image Matching (DIM) and 3D building extraction, is described together with the achieved accuracy. For each step, the differences and innovations with respect to standard photogrammetric procedures based on nadir images are shown, including high overlaps, improved vertical accuracy, and visibility of areas masked in the standard vertical views. Finally, the advantages of using oblique images for inventory surveys are demonstrated.
NASA Astrophysics Data System (ADS)
Tyczka, Dale R.; Wright, Robert; Janiszewski, Brian; Chatten, Martha Jane; Bowen, Thomas A.; Skibba, Brian
2012-06-01
Nearly all explosive ordnance disposal robots in use today employ monoscopic standard-definition video cameras to relay live imagery from the robot to the operator. With this approach, operators must rely on shadows and other monoscopic depth cues in order to judge distances and object depths. Alternatively, they can contact an object with the robot's manipulator to determine its position, but that approach carries with it the risk of detonation from unintentionally disturbing the target or nearby objects. We recently completed a study in which high-definition (HD) and stereoscopic video cameras were used in addition to conventional standard-definition (SD) cameras in order to determine if higher resolutions and/or stereoscopic depth cues improve operators' overall performance of various unmanned ground vehicle (UGV) tasks. We also studied the effect that the different vision modes had on operator comfort. A total of six different head-aimed vision modes were used including normal-separation HD stereo, SD stereo, "micro" (reduced separation) SD stereo, HD mono, and SD mono (two types). In general, the study results support the expectation that higher resolution and stereoscopic vision aid UGV teleoperation, but the degree of improvement was found to depend on the specific task being performed; certain tasks derived notably more benefit from improved depth perception than others. This effort was sponsored by the Joint Ground Robotics Enterprise under Robotics Technology Consortium Agreement #69-200902 T01. Technical management was provided by the U.S. Air Force Research Laboratory's Robotics Research and Development Group at Tyndall AFB, Florida.
Precision of FLEET Velocimetry Using High-Speed CMOS Camera Systems
NASA Technical Reports Server (NTRS)
Peters, Christopher J.; Danehy, Paul M.; Bathel, Brett F.; Jiang, Naibo; Calvert, Nathan D.; Miles, Richard B.
2015-01-01
Femtosecond laser electronic excitation tagging (FLEET) is an optical measurement technique that permits quantitative velocimetry of unseeded air or nitrogen using a single laser and a single camera. In this paper, we seek to determine the fundamental precision of the FLEET technique using high-speed complementary metal-oxide semiconductor (CMOS) cameras. We also compare the performance of several different high-speed CMOS camera systems for acquiring FLEET velocimetry data in air and nitrogen free-jet flows. The precision was defined as the standard deviation of a set of several hundred single-shot velocity measurements. Methods of enhancing the precision of the measurement were explored, such as row-wise digital binning of the signal in adjacent pixels (similar in concept to on-sensor binning, but done in post-processing) and increasing the time delay between successive exposures. These techniques generally improved precision; however, binning provided the greatest improvement to the un-intensified camera systems, which had low signal-to-noise ratio. When binning row-wise by 8 pixels (about the thickness of the tagged region) and using an inter-frame delay of 65 microseconds, precisions of 0.5 meters per second in air and 0.2 meters per second in nitrogen were achieved. The camera comparison included a pco.dimax HD, a LaVision Imager scientific CMOS (sCMOS) and a Photron FASTCAM SA-X2, along with a two-stage LaVision HighSpeed IRO intensifier. Excluding the LaVision Imager sCMOS, the cameras were tested with and without intensification and with both short and long inter-frame delays. Use of intensification and longer inter-frame delay generally improved precision. Overall, the Photron FASTCAM SA-X2 exhibited the best performance in terms of greatest precision and highest signal-to-noise ratio, primarily because it had the largest pixels.
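Row-wise digital binning and the paper's precision metric are both simple to express; the sketch below shows the operations as plain numpy post-processing. The binning factor of 8 matches the paper; everything else is illustrative.

```python
import numpy as np

def bin_rows(img, n=8):
    """Row-wise digital binning: sum groups of n adjacent rows in
    post-processing (similar in concept to on-sensor binning), trading
    resolution along the tagged line for signal-to-noise ratio."""
    h = (img.shape[0] // n) * n          # drop any incomplete final group
    return img[:h].reshape(h // n, n, img.shape[1]).sum(axis=1)

def precision(velocities):
    """Precision as defined in the paper: the standard deviation of a set
    of single-shot velocity measurements."""
    return float(np.std(velocities, ddof=1))
```

Because binning sums photon counts from adjacent rows, shot-noise-limited SNR grows roughly with the square root of the binning factor, which is why it helped the un-intensified (low-SNR) cameras most.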
Impaction Force Influences Taper-Trunnion Stability in Total Hip Arthroplasty.
Danoff, Jonathan R; Longaray, Jason; Rajaravivarma, Raga; Gopalakrishnan, Ananthkrishnan; Chen, Antonia F; Hozack, William J
2018-07-01
This study investigated the influence of femoral head impaction force, number of head strikes, the energy sequence of head strikes, and head offset on the strength of the taper-trunnion junction. Thirty titanium-alloy trunnions were mated with 36-mm zero-offset cobalt-chromium femoral heads of corresponding taper angle. A drop tower impacted the head with 2.5 J or 8.25 J, resulting in 6 kN or 14 kN impaction force, respectively, in a single strike or in combinations of 6 kN + 14 kN or 14 kN + 14 kN. In addition, ten 36-mm heads with -5 and +5 offset were impacted with sequential 14 kN + 14 kN strikes. Heads were subsequently disassembled using a screw-driven mechanical testing frame, and peak distraction force was recorded. Femoral head pull-off force was 45% of the strike force, and heads struck with a single 14 kN impact showed a pull-off force twice that of the 6 kN group. Two head strikes with the same force did not improve pull-off force for either 6 kN (P = .90) or 14 kN (P = .90). If the forces of the 2 impactions varied but either impact measured 14 kN, a 51% higher pull-off force was found compared to impactions of 6 kN or 6 kN + 6 kN. Femoral head offset did not significantly change the pull-off force among -5, 0, and +5 heads (P = .37). Femoral head impaction force influenced trunnion-taper stability, whereas offset did not affect pull-off force. Multiple head strikes did not add additional stability, as long as a single strike achieved 14 kN force at the mallet-head impactor interface. Insufficient impaction force may lead to inadequate engagement of the trunnion-taper junction. Copyright © 2018 Elsevier Inc. All rights reserved.
[Inferior frontal region hypoperfusion in Parkinson disease with dementia].
Ochudło, Stanisław; Opala, Grzegorz; Jasińska-Myga, Barbara; Siuda, Joanna; Nowak, Stanisław
2003-01-01
Dementia is more frequent in patients suffering from Parkinson's disease (PD) than in the general population. The mechanism of mental deterioration in PD remains controversial. The aim of our study was to compare the regional cerebral perfusion quantified by single photon emission computed tomography in patients suffering from idiopathic Parkinson's disease with and without dementia. We examined 49 PD patients: 22 with dementia and 27 without dementia. Dementia was diagnosed according to ICD-10 and DSM-IV criteria. Cognitive functions were assessed by means of the Mini Mental State Examination (MMSE) and neuropsychological assessment. The Unified Parkinson's Disease Rating Scale (UPDRS) and the Modified Hoehn & Yahr Scale were used to quantify the severity of PD. SPECT was performed with a Siemens Diacam single-head rotating gamma camera after intravenous application of technetium-99m hexamethylpropylene amine oxime (99mTc-HMPAO). The perfusion values were expressed as cortical or basal ganglia regions of interest (ROIs)/cerebellum activity ratios. In both examined groups of patients the lowest uptake was in the basal ganglia region, while the highest uptake was in the occipital region. In the subgroup of PD patients with dementia, significant hypoperfusion affecting the inferior frontal cortices was observed. In Parkinson's disease with dementia, hypoperfusion in the inferior frontal region can be found.
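The ROI/cerebellum semiquantitation used here (and in the brain-perfusion studies described in the thesis abstract above) reduces to simple count ratios. The sketch below is illustrative; the asymmetry formula is a common convention, not one stated in this abstract.

```python
import numpy as np

def perfusion_ratio(roi_counts, cerebellum_counts):
    """Semiquantitative perfusion index: mean counts in a cortical or
    basal-ganglia ROI divided by mean counts in the cerebellar reference
    ROI, which normalizes away injected dose and global uptake."""
    return float(np.mean(roi_counts) / np.mean(cerebellum_counts))

def side_asymmetry(right_counts, left_counts):
    """Right-to-left difference expressed as a percentage of the side
    mean, a common way to report interhemispheric perfusion asymmetry
    (hypothetical convention for illustration)."""
    r, l = np.mean(right_counts), np.mean(left_counts)
    return float(100.0 * (r - l) / ((r + l) / 2.0))
```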
Electron density and plasma dynamics of a colliding plasma experiment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiechula, J., E-mail: wiechula@physik.uni-frankfurt.de; Schönlein, A.; Iberler, M.
2016-07-15
We present experimental results of two head-on colliding plasma sheaths accelerated by pulsed-power-driven coaxial plasma accelerators. The measurements have been performed in a small vacuum chamber with a neutral-gas prefill of ArH{sub 2} at gas pressures between 17 Pa and 400 Pa and load voltages between 4 kV and 9 kV. As the plasma sheaths collide, the electron density is significantly increased. The electron density reaches maximum values of ≈8 ⋅ 10{sup 15} cm{sup −3} for a single accelerated plasma and a maximum value of ≈2.6 ⋅ 10{sup 16} cm{sup −3} for the plasma collision. Overall, an increase in the plasma density by a factor of 1.3 to 3.8 has been achieved. A scaling behavior has been derived from the values of the electron density which shows a disproportionately high increase of the electron density in the collisional case for higher applied voltages in comparison to a single accelerated plasma. Sequences of the plasma collision have been taken using a fast framing camera to study the plasma dynamics. These sequences indicate a maximum collision velocity of 34 km/s.
Quantified, Interactive Simulation of AMCW ToF Camera Including Multipath Effects.
Bulczak, David; Lambers, Martin; Kolb, Andreas
2017-12-22
In the last decade, Time-of-Flight (ToF) range cameras have gained increasing popularity in robotics, automotive industry, and home entertainment. Despite technological developments, ToF cameras still suffer from error sources such as multipath interference or motion artifacts. Thus, simulation of ToF cameras, including these artifacts, is important to improve camera and algorithm development. This paper presents a physically-based, interactive simulation technique for amplitude modulated continuous wave (AMCW) ToF cameras, which, among other error sources, includes single bounce indirect multipath interference based on an enhanced image-space approach. The simulation accounts for physical units down to the charge level accumulated in sensor pixels. Furthermore, we present the first quantified comparison for ToF camera simulators. We present bidirectional reference distribution function (BRDF) measurements for selected, purchasable materials in the near-infrared (NIR) range, craft real and synthetic scenes out of these materials and quantitatively compare the range sensor data. PMID:29271888
Integration of image capture and processing: beyond single-chip digital camera
NASA Astrophysics Data System (ADS)
Lim, SukHwan; El Gamal, Abbas
2001-05-01
An important trend in the design of digital cameras is the integration of capture and processing onto a single CMOS chip. Although integrating the components of a digital camera system onto a single chip significantly reduces system size and power, it does not fully exploit the potential advantages of integration. We argue that a key advantage of integration is the ability to exploit the high-speed imaging capability of the CMOS image sensor to enable new applications such as multiple capture for enhancing dynamic range, and to improve the performance of existing applications such as optical flow estimation. Conventional digital cameras operate at low frame rates and it would be too costly, if not infeasible, to operate their chips at high frame rates. Integration solves this problem. The idea is to capture images at much higher frame rates than the standard frame rate, process the high frame rate data on chip, and output the video sequence and the application-specific data at the standard frame rate. This idea is applied to optical flow estimation, where significant performance improvements are demonstrated over methods using standard frame rate sequences. We then investigate the constraints on memory size and processing power that can be integrated with a CMOS image sensor in a 0.18 micrometer process and below. We show that enough memory and processing power can be integrated not only to perform the functions of a conventional camera system but also to perform applications such as real-time optical flow estimation.
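The "multiple capture for enhancing dynamic range" idea can be sketched as follows: take several exposures within one standard frame time and, per pixel, keep the longest unsaturated one. This is a simplified reconstruction of the concept, not the on-chip algorithm; the 10-bit full scale and exposure values are assumptions.

```python
import numpy as np

def fuse_captures(frames, exposures, full_scale=1023.0):
    """Merge several captures of different exposure into one
    high-dynamic-range radiance estimate: for each pixel use the longest
    exposure that did not saturate (best SNR), scaled back to a common
    radiance unit by dividing by exposure time."""
    frames = np.asarray(frames, dtype=float)
    exposures = np.asarray(exposures, dtype=float)
    order = np.argsort(exposures)[::-1]            # longest exposure first
    out = np.full(frames.shape[1:], np.nan)
    for i in order:
        unsat = (frames[i] < full_scale) & np.isnan(out)
        out[unsat] = frames[i][unsat] / exposures[i]
    # pixels saturated at every exposure: report a lower-bound estimate
    out[np.isnan(out)] = full_scale / exposures.min()
    return out
```

The dynamic range extension is the ratio of longest to shortest exposure: here a 4:1 exposure pair adds two stops on top of the sensor's native range.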
External Mask Based Depth and Light Field Camera
2013-12-08
Night Vision and Electro-Optics Technology Transfer, 1972-1981
1981-09-15
Lixiscope offers potential applications as: a handheld instrument for dental radiography giving real-time observation in orthodontic procedures; a portable...laboratory are described below. There are, however, no hard and fast rules. The laboratory's experimentation with different films, brackets, cameras and...good single lens reflex camera; an exposure meter; a tripod; and a custom-built bracket to mate the camera and intensifier (Figure 2-1).
A smart telerobotic system driven by monocular vision
NASA Technical Reports Server (NTRS)
Defigueiredo, R. J. P.; Maccato, A.; Wlczek, P.; Denney, B.; Scheerer, J.
1994-01-01
A robotic system that accepts autonomously generated motion and control commands is described. The system provides images from the monocular vision of a camera mounted on a robot's end effector, eliminating the need for traditional guidance targets that must be predetermined and specifically identified. The telerobotic vision system presents different views of the targeted object relative to the camera, based on a single camera image and knowledge of the target's solid geometry.
Neil A. Clark; Sang-Mook Lee
2004-01-01
This paper demonstrates how a digital video camera with a long lens can be used with pulse laser ranging in order to collect very large-scale tree crown measurements. The long focal length of the camera lens provides the magnification required for precise viewing of distant points with the trade-off of spatial coverage. Multiple video frames are mosaicked into a single...
STS-45 crewmembers during LINHOF camera briefing in JSC's Bldg 4 rm 2026A
1992-01-14
S92-26522 (Feb 1992) --- Crewmembers assigned to NASA's STS-45 mission are briefed on the use of the Linhof camera in the flight operations facility at the Johnson Space Center (JSC). Charles F. Bolden, mission commander, stands at left. Other crewmembers (seated clockwise around the table from lower left) are Dirk Frimout of Belgium representing the European Space Agency as payload specialist; Charles R. (Rick) Chappell, backup payload specialist; Brian Duffy, pilot; Kathryn D. Sullivan, payload commander; David C. Leestma, mission specialist; Byron K. Lichtenberg, payload specialist; and C. Michael Foale, mission specialist. James H. Ragan (far right), head of the flight equipment section of the flight systems branch in JSC's Man Systems Division, briefs the crewmembers. Donald C. Carico, of the crew training staff and Rockwell International, stands near Bolden. The camera, used for out-the-window observations, is expected to be used frequently on the Atmospheric Laboratory for Applications and Science (ATLAS-1) mission, scheduled for a March date with the Space Shuttle Atlantis.
Fast visible imaging of turbulent plasma in TORPEX
DOE Office of Scientific and Technical Information (OSTI.GOV)
Iraji, D.; Diallo, A.; Fasoli, A.
2008-10-15
Fast framing cameras constitute an important recent diagnostic development aimed at monitoring light emission from magnetically confined plasmas, and are now commonly used to study turbulence in plasmas. In the TORPEX toroidal device [A. Fasoli et al., Phys. Plasmas 13, 055902 (2006)], low frequency electrostatic fluctuations associated with drift-interchange waves are routinely measured by means of extensive sets of Langmuir probes. A Photron Ultima APX-RS fast framing camera has recently been acquired to complement Langmuir probe measurements, which allows comparing statistical and spectral properties of visible light and electrostatic fluctuations. A direct imaging system has been developed, which allows viewing the light emitted from microwave-produced plasmas tangentially and perpendicularly to the toroidal direction. The comparison of the probability density function, power spectral density, and autoconditional average of the camera data to those obtained using a multiple head electrostatic probe covering the plasma cross section shows reasonable agreement in the case of perpendicular view and in the plasma region where interchange modes dominate.
Single-Camera Stereoscopy Setup to Visualize 3D Dusty Plasma Flows
NASA Astrophysics Data System (ADS)
Romero-Talamas, C. A.; Lemma, T.; Bates, E. M.; Birmingham, W. J.; Rivera, W. F.
2016-10-01
A setup to visualize and track individual particles in multi-layered dusty plasma flows is presented. The setup consists of a single camera with variable frame rate, and a pair of adjustable mirrors that project the same field of view from two different angles to the camera, allowing for three-dimensional tracking of particles. Flows are generated by inclining the plane in which the dust is levitated using a specially designed setup that allows for external motion control without compromising vacuum. Dust illumination is achieved with an optics arrangement that includes a Powell lens that creates a laser fan with adjustable thickness and with approximately constant intensity everywhere. Both the illumination and the stereoscopy setup allow for the camera to be placed at right angles with respect to the levitation plane, in preparation for magnetized dusty plasma experiments in which there will be no direct optical access to the levitation plane. Image data and analysis of unmagnetized dusty plasma flows acquired with this setup are presented.
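The mirror pair in this setup acts as two virtual cameras, so per-particle depth recovery reduces to standard stereo triangulation. The sketch below assumes rectified pinhole views and a known effective baseline between the two virtual viewpoints; neither assumption is stated in the abstract, so treat this as the textbook principle rather than the authors' calibration.

```python
def depth_from_disparity(x_left, x_right, focal_px, baseline):
    """Stereo depth for a two-mirror single-camera setup: the mirrors
    present the scene from two virtual viewpoints separated by an
    effective baseline B, so a particle with horizontal disparity
    d = x_left - x_right between the two sub-images lies at
    z = f * B / d (f in pixels, B and z in the same length unit)."""
    return focal_px * baseline / (x_left - x_right)
```

Tracking the same particle in both sub-images frame by frame then gives a full 3D trajectory from a single sensor, which is what makes the setup attractive when optical access is limited.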
SVBRDF-Invariant Shape and Reflectance Estimation from a Light-Field Camera.
Wang, Ting-Chun; Chandraker, Manmohan; Efros, Alexei A; Ramamoorthi, Ravi
2018-03-01
Light-field cameras have recently emerged as a powerful tool for one-shot passive 3D shape capture. However, obtaining the shape of glossy objects like metals or plastics remains challenging, since standard Lambertian cues like photo-consistency cannot be easily applied. In this paper, we derive a spatially-varying (SV)BRDF-invariant theory for recovering 3D shape and reflectance from light-field cameras. Our key theoretical insight is a novel analysis of diffuse plus single-lobe SVBRDFs under a light-field setup. We show that, although direct shape recovery is not possible, an equation relating depths and normals can still be derived. Using this equation, we then propose using a polynomial (quadratic) shape prior to resolve the shape ambiguity. Once shape is estimated, we also recover the reflectance. We present extensive synthetic data on the entire MERL BRDF dataset, as well as a number of real examples to validate the theory, where we simultaneously recover shape and BRDFs from a single image taken with a Lytro Illum camera.
Lightweight helmet-mounted eye movement measurement system
NASA Technical Reports Server (NTRS)
Barnes, J. A.
1978-01-01
The helmet-mounted eye movement measuring system weighs 1,530 grams; the weight of the present aviators' helmet in standard form with the visor is 1,545 grams. The optical head is a standard NAC Eye-Mark. This optical head was mounted on a magnesium yoke which in turn was attached to a slide cam mounted on the flight helmet. The slide cam allows one to adjust the eye-to-optics system distance quite easily and to secure it so that the system will remain in calibration. The design of the yoke and slide cam is such that the subject can, in an emergency, move the optical head forward and upward to the stowed and locked position atop the helmet. This feature was necessary for flight safety. The television camera used in the system is a solid-state General Electric TN-2000 with a charge injection device (CID) imager used in place of a vidicon.
A neural-based remote eye gaze tracker under natural head motion.
Torricelli, Diego; Conforto, Silvia; Schmid, Maurizio; D'Alessio, Tommaso
2008-10-01
A novel approach to view-based eye gaze tracking for human computer interfaces (HCI) is presented. The proposed method combines different techniques to address the problems of head motion, illumination and usability in the framework of low cost applications. Feature detection and tracking algorithms have been designed to obtain an automatic setup and strengthen the robustness to light conditions. An extensive analysis of neural solutions has been performed to deal with the non-linearity associated with gaze mapping under free-head conditions. No specific hardware, such as infrared illumination or high-resolution cameras, is needed; rather, a simple commercial webcam working in the visible light spectrum suffices. The system is able to classify the gaze direction of the user over a 15-zone graphical interface, with a success rate of 95% and a global accuracy of around 2 degrees, comparable with the vast majority of existing remote gaze trackers.
Holder, J P; Benedetti, L R; Bradley, D K
2016-11-01
Single hit pulse height analysis is applied to National Ignition Facility x-ray framing cameras to quantify gain and gain variation in a single micro-channel-plate-based instrument. This method allows the separation of gain from detectability in these photon-detecting devices. While pulse heights measured by standard DC calibration methods follow the expected exponential distribution at the limit of a compound-Poisson process, gain-gated pulse heights follow a more complex distribution that may be approximated as a weighted sum of a few exponentials. We can reproduce this behavior with a simple statistical-sampling model.
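The gated pulse-height behavior described above can be mimicked with a small sampling sketch. The two-component mixture below (weights and mean gains are invented) stands in for the "weighted sum of a few exponentials"; it is illustrative only, not the authors' statistical-sampling model.

```python
import random

def sample_pulse_height(weights, scales, rng):
    """Draw one pulse height from a weighted mixture of exponential
    distributions (the approximating form for gain-gated spectra)."""
    r, acc = rng.random(), 0.0
    for w, mean_gain in zip(weights, scales):
        acc += w
        if r <= acc:
            return rng.expovariate(1.0 / mean_gain)
    return rng.expovariate(1.0 / scales[-1])  # guard against rounding

rng = random.Random(0)
weights, scales = [0.6, 0.4], [50.0, 200.0]   # invented mixture parameters
samples = [sample_pulse_height(weights, scales, rng) for _ in range(200_000)]
mean_height = sum(samples) / len(samples)     # approaches 0.6*50 + 0.4*200
```

A fit of the measured spectrum to such a mixture would proceed in the opposite direction, estimating the weights and scales from recorded pulse heights.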
The Effect of Transition Type in Multi-View 360° Media.
MacQuarrie, Andrew; Steed, Anthony
2018-04-01
360° images and video have become extremely popular formats for immersive displays, due in large part to the technical ease of content production. While many experiences use a single camera viewpoint, an increasing number of experiences use multiple camera locations. In such multi-view 360° media (MV360M) systems, a visual effect is required when the user transitions from one camera location to another. This effect can take several forms, such as a cut or an image-based warp, and the choice of effect may impact many aspects of the experience, including issues related to enjoyment and scene understanding. To investigate the effect of transition types on immersive MV360M experiences, a repeated-measures experiment was conducted with 31 participants. Wearing a head-mounted display, participants explored four static scenes, for which multiple 360° images and a reconstructed 3D model were available. Three transition types were examined: teleport, a linear move through a 3D model of the scene, and an image-based transition using a Möbius transformation. The metrics investigated included spatial awareness, users' movement profiles, transition preference and the subjective feeling of moving through the space. Results indicate that there was no significant difference between transition types in terms of spatial awareness, while significant differences were found for users' movement profiles, with participants taking 1.6 seconds longer to select their next location following a teleport transition. The model and Möbius transitions were significantly better in terms of creating the feeling of moving through the space. Preference was also significantly different, with model and teleport transitions being preferred over Möbius transitions. Our results indicate that trade-offs between transitions will require content creators to think carefully about what aspects they consider to be most important when producing MV360M experiences.
Hemispherical Field-of-View Above-Water Surface Imager for Submarines
NASA Technical Reports Server (NTRS)
Hemmati, Hamid; Kovalik, Joseph M.; Farr, William H.; Dannecker, John D.
2012-01-01
A document discusses solutions to the problem of submarines having to rise above water to detect airplanes in the general vicinity. Two solutions are provided, in which a sensor is located either just under the water surface or at a depth of a few to tens of meters. The first option is a Fish Eye Lens (FEL) digital-camera combination, situated just under the water surface, that will have a near-full-hemisphere (360° azimuth and 90° elevation) field of view for detecting objects on the water surface. This sensor can provide a three-dimensional picture of the airspace both in the marine and in the land environment. The FEL is coupled to a camera and can continuously look at the entire sky above it. The camera can have an Active Pixel Sensor (APS) focal plane array that allows logic circuitry to be built directly into the sensor. The logic circuitry allows data processing to occur on the sensor head without the need for any other external electronics. In the second option, a single-photon-sensitive (photon counting) detector array is used at depth, without the need for any optics in front of it, since at this location optical signals are scattered and arrive over a wide (tens of degrees) range of angles. Beam scattering through clouds and seawater effectively negates optical imaging at depths below a few meters under cloudy or turbulent conditions. Under those conditions, maximum collection efficiency can be achieved by using a non-imaging photon-counting detector behind narrowband filters. In either case, signals from these sensors may be fused and correlated or decorrelated with other sensor data to get an accurate picture of the object(s) above the submarine. These devices can complement traditional submarine periscopes that have a limited field of view in the elevation direction. Also, these techniques circumvent the need for exposing the entire submarine or its periscopes to the outside environment.
Thermal Imaging with Novel Infrared Focal Plane Arrays and Quantitative Analysis of Thermal Imagery
NASA Technical Reports Server (NTRS)
Gunapala, S. D.; Rafol, S. B.; Bandara, S. V.; Liu, J. K.; Mumolo, J. M.; Soibel, A.; Ting, D. Z.; Tidrow, Meimei
2012-01-01
We have developed a single long-wavelength infrared (LWIR) quantum well infrared photodetector (QWIP) camera for thermography. This camera has been used to measure the temperature profile of patients. A pixel-coregistered, simultaneously reading mid-wavelength infrared (MWIR)/LWIR dual-band QWIP camera was developed to improve the accuracy of temperature measurements, especially for objects of unknown emissivity. Even the dual-band measurement can provide inaccurate results because emissivity is a function of wavelength. Thus we have been developing a four-band QWIP camera for accurate temperature measurement of remote objects.
Polarizing aperture stereoscopic cinema camera
NASA Astrophysics Data System (ADS)
Lipton, Lenny
2012-03-01
The art of stereoscopic cinematography has been held back because of the lack of a convenient way to reduce the stereo camera lenses' interaxial to less than the distance between the eyes. This article describes a unified stereoscopic camera and lens design that allows for varying the interaxial separation to small values using a unique electro-optical polarizing aperture design for imaging left and right perspective views onto a large single digital sensor (the size of the standard 35mm frame) with the means to select left and right image information. Even with the added stereoscopic capability the appearance of existing camera bodies will be unaltered.
Spatial capture–recapture with partial identity: An application to camera traps
Augustine, Ben C.; Royle, J. Andrew; Kelly, Marcella J.; Satter, Christopher B.; Alonso, Robert S.; Boydston, Erin E.; Crooks, Kevin R.
2018-01-01
Camera trapping surveys frequently capture individuals whose identity is only known from a single flank. The most widely used methods for incorporating these partial identity individuals into density analyses discard some of the partial identity capture histories, reducing precision and, while not previously recognized, introducing bias. Here, we present the spatial partial identity model (SPIM), which uses the spatial location where partial identity samples are captured to probabilistically resolve their complete identities, allowing all partial identity samples to be used in the analysis. We show that the SPIM outperforms other analytical alternatives. We then apply the SPIM to an ocelot data set collected on a trapping array with double-camera stations and a bobcat data set collected on a trapping array with single-camera stations. The SPIM improves inference in both cases and, in the ocelot example, individual sex is determined from photographs used to further resolve partial identities, one of which is resolved to near certainty. The SPIM opens the door to the investigation of trapping designs that deviate from the standard two-camera design and to the combination of other data types between which identities cannot be deterministically linked, and it can be extended to the problem of partial genotypes.
Pirie, Chris G; Pizzirani, Stefano
2011-12-01
To describe a digital single lens reflex (dSLR) camera adaptor for posterior segment photography. A total of 30 normal canine and feline animals were imaged using a dSLR adaptor which mounts between a dSLR camera body and lens. Posterior segment viewing and imaging were performed with the aid of an indirect lens ranging from 28-90D. Coaxial illumination for viewing was provided by a single white light-emitting diode (LED) within the adaptor, while illumination during exposure was provided by the pop-up flash or an accessory flash. Corneal and/or lens reflections were reduced using a pair of linear polarizers with their azimuths perpendicular to one another. Quality high-resolution, reflection-free digital images of the retina were obtained. Subjective image evaluation demonstrated the same amount of detail as compared to a conventional fundus camera. A wide range of magnifications [1.2-4X] and/or fields of view [31-95 degrees, horizontal] were obtained by altering the indirect lens utilized. The described adaptor may provide an alternative to existing fundus camera systems. Quality images were obtained, and the adaptor proved to be versatile, portable and low cost.
Fischer, Andreas; Kupsch, Christian; Gürtler, Johannes; Czarske, Jürgen
2015-09-21
Non-intrusive, fast 3D measurements of volumetric velocity fields are necessary for understanding complex flows. Using high-speed cameras and spectroscopic measurement principles, where the Doppler frequency of scattered light is evaluated within the illuminated plane, each pixel allows one measurement and, thus, planar measurements with high data rates are possible. While scanning is one standard technique for adding the third dimension, the volumetric data are not acquired simultaneously. In order to overcome this drawback, a high-speed light field camera is proposed for obtaining volumetric data with each single frame. The high-speed light field camera approach is applied to a Doppler global velocimeter with sinusoidal laser frequency modulation. As a result, a frequency multiplexing technique is required in addition to the plenoptic refocusing for eliminating the crosstalk between the measurement planes. However, the plenoptic refocusing is still necessary in order to achieve a large refocusing range for a high numerical aperture that minimizes the measurement uncertainty. Finally, two spatially separated measurement planes with 25×25 pixels each are simultaneously acquired at a measurement rate of 0.5 kHz with a single high-speed camera.
Application of real-time single camera SLAM technology for image-guided targeting in neurosurgery
NASA Astrophysics Data System (ADS)
Chang, Yau-Zen; Hou, Jung-Fu; Tsao, Yi Hsiang; Lee, Shih-Tseng
2012-10-01
In this paper, we propose an application of augmented reality technology for targeting tumors or anatomical structures inside the skull. The application is a combination of the technologies of MonoSLAM (Single Camera Simultaneous Localization and Mapping) and computer graphics. A stereo vision system is developed to construct geometric data of the human face for registration with CT images. Reliability and accuracy of the application are enhanced by the use of fiduciary markers fixed to the skull. The MonoSLAM keeps track of the current location of the camera with respect to an augmented reality (AR) marker using the extended Kalman filter. The fiduciary markers provide a reference when the AR marker is invisible to the camera. The relationship between the markers on the face and the augmented reality marker is obtained through a registration procedure using the stereo vision system and is updated on-line. A commercially available Android-based tablet PC equipped with a 320×240 front-facing camera was used for implementation. The system is able to provide a live view of the patient overlaid by the solid models of tumors or anatomical structures, as well as the missing part of the tool inside the skull.
Multiple-camera/motion stereoscopy for range estimation in helicopter flight
NASA Technical Reports Server (NTRS)
Smith, Phillip N.; Sridhar, Banavar; Suorsa, Raymond E.
1993-01-01
It is desirable to aid the pilot, improving safety and reducing workload, by detecting obstacles and planning obstacle-free flight paths during low-altitude helicopter flight. Computer vision techniques provide an attractive method of obstacle detection and range estimation for objects within a large field of view ahead of the helicopter. Previous research has had considerable success using an image sequence from a single moving camera to solve this problem. The major limitations of single-camera approaches are that no range information can be obtained near the instantaneous direction of motion or in the absence of motion. These limitations can be overcome through the use of multiple cameras. This paper presents a hybrid motion/stereo algorithm which allows range refinement through recursive range estimation while avoiding loss of range information in the direction of travel. A feature-based approach is used to track objects between image frames. An extended Kalman filter combines knowledge of the camera motion and measurements of a feature's image location to recursively estimate the feature's range and to predict its location in future images. Performance of the algorithm is illustrated using an image sequence, motion information, and independent range measurements from a low-altitude helicopter flight experiment.
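The recursive range estimation described in this abstract can be illustrated with a scalar extended Kalman filter. The measurement model below (image displacement proportional to camera translation divided by range) is a common textbook simplification, and all numbers are invented; the paper's actual filter tracks feature locations in full image coordinates.

```python
def ekf_range(measurements, baselines, f, rho0, P0, R, Q=1e-4):
    """Scalar EKF: the state is the feature range rho; the simplified
    measurement is image displacement z = f * b / rho for a known
    sideways camera translation b accumulated since the first frame."""
    rho, P = rho0, P0
    for z, b in zip(measurements, baselines):
        P += Q                          # predict: range quasi-static
        H = -f * b / rho**2             # Jacobian of h(rho) = f*b/rho
        S = H * P * H + R               # innovation variance
        K = P * H / S                   # Kalman gain
        rho += K * (z - f * b / rho)    # update with the innovation
        P *= (1.0 - K * H)
    return rho

f_px = 800.0                            # focal length in pixels (invented)
true_range = 50.0                       # meters
baselines = [0.5 * k for k in range(1, 20)]          # accumulated motion (m)
z_meas = [f_px * b / true_range for b in baselines]  # noise-free for clarity
est_range = ekf_range(z_meas, baselines, f_px, rho0=30.0, P0=100.0, R=0.25)
```

Each update both refines the range estimate and shrinks its variance, which is what lets the filter predict the feature's location in future images.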
Measurement of the timing behaviour of off-the-shelf cameras
NASA Astrophysics Data System (ADS)
Schatz, Volker
2017-04-01
This paper presents a measurement method suitable for investigating the timing properties of cameras. A single light source illuminates the camera detector starting with a varying defined delay after the camera trigger. Pixels from the recorded camera frames are summed up and normalised, and the resulting function is indicative of the overlap between illumination and exposure. This allows one to infer the trigger delay and the exposure time with sub-microsecond accuracy. The method is therefore of interest when off-the-shelf cameras are used in reactive systems or synchronised with other cameras. It can supplement radiometric and geometric calibration methods for cameras in scientific use. A closer look at the measurement results reveals deviations from the ideal camera behaviour of constant sensitivity limited to the exposure interval. One of the industrial cameras investigated retains a small sensitivity long after the end of the nominal exposure interval. All three investigated cameras show non-linear variations of sensitivity on the order of 10^-3 to 10^-2 during exposure. Due to its sign, the latter effect cannot be described by a sensitivity function depending on the time after triggering, but represents non-linear pixel characteristics.
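The delay-scan idea is easy to sketch: for an ideal camera, the normalized pixel sum traces the overlap between the light pulse and the exposure window, so the trigger delay and exposure time fall out of the half-maximum crossings of the response curve. The timing values below are invented for illustration.

```python
def overlap(delay, pulse_len, trig_delay, exposure):
    """Fractional overlap of the light pulse [delay, delay+pulse_len]
    with the exposure window [trig_delay, trig_delay+exposure]."""
    lo = max(delay, trig_delay)
    hi = min(delay + pulse_len, trig_delay + exposure)
    return max(0.0, hi - lo) / pulse_len

trig, expo, pulse = 12.0, 100.0, 1.0    # microseconds, invented values
delays = [0.5 * i for i in range(300)]  # scanned pulse delays
response = [overlap(d, pulse, trig, expo) for d in delays]

# half-maximum crossings of the normalized response give the timing
rising = next(d for d, s in zip(delays, response) if s >= 0.5)
falling = next(d for d, s in zip(delays, response) if d > rising and s < 0.5)
est_trigger_delay = rising + pulse / 2.0
est_exposure = falling - rising
```

A real camera's deviations from this ideal rectangle (residual sensitivity after the window, non-constant sensitivity within it) are exactly what the paper's measurements expose.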
Chen, Brian R; Poon, Emily; Alam, Murad
2017-08-01
Photographs are an essential tool for the documentation and sharing of findings in dermatologic surgery, and various camera types are available. To evaluate the currently available camera types in view of the special functional needs of procedural dermatologists. Mobile phone, point-and-shoot, digital single-lens reflex (DSLR), digital medium format, and three-dimensional cameras were compared in terms of their usefulness for dermatologic surgeons. For each camera type, the image quality, as well as the other practical benefits and limitations, was evaluated with reference to a set of ideal camera characteristics. Based on these assessments, recommendations were made regarding the specific clinical circumstances in which each camera type would likely be most useful. Mobile photography may be adequate when ease of use, availability, and accessibility are prioritized. Point-and-shoot cameras and DSLR cameras provide sufficient resolution for a range of clinical circumstances, while providing the added benefit of portability. Digital medium format cameras offer the highest image quality, with accurate color rendition and greater color depth. Three-dimensional imaging may be optimal for the definition of skin contour. The selection of an optimal camera depends on the context in which it will be used.
Applications of digital image acquisition in anthropometry
NASA Technical Reports Server (NTRS)
Woolford, B.; Lewis, J. L.
1981-01-01
A description is given of a video kinesimeter, a device for the automatic real-time collection of kinematic and dynamic data. Based on the detection of a single bright spot by three TV cameras, the system provides automatic real-time recording of three-dimensional position and force data. It comprises three cameras, two incandescent lights, a voltage comparator circuit, a central control unit, and a mass storage device. The control unit determines the signal threshold for each camera before testing, sequences the lights, synchronizes and analyzes the scan voltages from the three cameras, digitizes force from a dynamometer, and codes the data for transmission to a floppy disk for recording. Two of the three cameras face each other along the 'X' axis; the third camera, which faces the center of the line between the first two, defines the 'Y' axis. An image from the 'Y' camera and either 'X' camera is necessary for determining the three-dimensional coordinates of the point.
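The three-dimensional coordinates described above come from intersecting the viewing rays of two cameras. A generic sketch of that step, assuming calibrated rays are already available (the positions and directions below are invented), takes the midpoint of the shortest segment between the two rays:

```python
def triangulate(p1, d1, p2, d2):
    """Midpoint of the shortest segment between rays p + t*d, a
    standard two-view construction; the paper's own geometry is the
    X/Y camera pair described above."""
    def dot(a, b): return sum(x * y for x, y in zip(a, b))
    def sub(a, b): return [x - y for x, y in zip(a, b)]
    def add(a, b): return [x + y for x, y in zip(a, b)]
    def mul(a, s): return [x * s for x in a]
    r = sub(p1, p2)
    a, b, c = dot(d1, d1), dot(d1, d2), dot(d2, d2)
    d, e = dot(d1, r), dot(d2, r)
    denom = a * c - b * b              # zero only for parallel rays
    t1 = (b * e - c * d) / denom       # closest point on ray 1
    t2 = (a * e - b * d) / denom       # closest point on ray 2
    q1 = add(p1, mul(d1, t1))
    q2 = add(p2, mul(d2, t2))
    return mul(add(q1, q2), 0.5)

# one camera on the X axis, one on the Y axis, viewing a spot at (1, 2, 3)
spot = triangulate([-10.0, 0.0, 0.0], [11.0, 2.0, 3.0],
                   [0.0, -10.0, 0.0], [1.0, 12.0, 3.0])
```

With noisy pixel detections the two rays rarely intersect exactly, which is why the midpoint (rather than a true intersection) is used.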
Arain, Nabeel A; Cadeddu, Jeffrey A; Best, Sara L; Roshek, Thomas; Chang, Victoria; Hogg, Deborah C; Bergs, Richard; Fernandez, Raul; Webb, Erin M; Scott, Daniel J
2012-04-01
This study aimed to evaluate the surgeon performance and workload of a next-generation magnetically anchored camera compared with laparoscopic and flexible endoscopic imaging systems for laparoscopic and single-site laparoscopy (SSL) settings. The cameras included a 5-mm 30° laparoscope (LAP), a magnetically anchored (MAGS) camera, and a flexible endoscope (ENDO). The three camera systems were evaluated using standardized optical characteristic tests. Each system was used in random order for visualization during performance of a standardized suturing task by four surgeons. Each participant performed three to five consecutive repetitions as a surgeon and also served as a camera driver for other surgeons. Ex vivo testing was conducted in a laparoscopic multiport and SSL layout using a box trainer. In vivo testing was performed only in the multiport configuration and used a previously validated live porcine Nissen model. Optical testing showed superior resolution for MAGS at 5 and 10 cm compared with LAP or ENDO. The field of view ranged from 39 to 99°. The depth of focus was almost three times greater for MAGS (6-270 mm) than for LAP (2-88 mm) or ENDO (1-93 mm). Both ex vivo and in vivo multiport combined surgeon performance was significantly better for LAP than for ENDO, but no significant differences were detected for MAGS. For multiport testing, workload ratings were significantly less ex vivo for LAP and MAGS than for ENDO and less in vivo for LAP than for MAGS or ENDO. For ex vivo SSL, no significant performance differences were detected, but camera drivers rated the workload significantly less for MAGS than for LAP or ENDO. The data suggest that the improved imaging element of the next-generation MAGS camera has optical and performance characteristics that meet or exceed those of the LAP or ENDO systems and that the MAGS camera may be especially useful for SSL. Further refinements of the MAGS camera are encouraged.
Aqueous Foam Stabilized by Tricationic Amphiphilic Surfactants
NASA Astrophysics Data System (ADS)
Heerschap, Seth; Marafino, John; McKenna, Kristin; Caran, Kevin; Feitosa, Klebert; Kevin Caran's Research Group Collaboration
2015-03-01
The unique surface properties of amphiphilic molecules have made them widely used in applications where foaming, emulsifying or coating processes are needed. The development of novel architectures with multi-cephalic/tailed molecules has enhanced their anti-bacterial activity in connection with tail length and the nature of the head group. Here we report on the foamability of two triple-head, double-tail cationic surfactants (M-1,14,14 and M-P,14,14) and a triple-head, single-tail cationic surfactant (M-1,1,14), and compare them with commercially available single-headed, single-tailed anionic and cationic surfactants (SDS, CTAB and DTAB). The results show that the bubble rupture rate decreases with the length of the carbon chain irrespective of head structure. The growth rate of bubbles with short-tailed surfactants (SDS) and longer, single-tailed tricationic surfactants (M-1,1,14) was shown to be twice as high as that with longer-tailed surfactants (CTAB, M-P,14,14, M-1,14,14). This fact was related to the size variation of bubbles: the foams made with short-tail surfactants exhibited higher polydispersity than those with long tails. This suggests that the properties of foams made with tricationic amphiphiles are closely linked to their tail length and generally insensitive to their head structure.
Using turbulence scintillation to assist object ranging from a single camera viewpoint.
Wu, Chensheng; Ko, Jonathan; Coffaro, Joseph; Paulson, Daniel A; Rzasa, John R; Andrews, Larry C; Phillips, Ronald L; Crabbs, Robert; Davis, Christopher C
2018-03-20
Image distortions caused by atmospheric turbulence are often treated as unwanted noise or errors in many image processing studies. Our study, however, shows that in certain scenarios the turbulence distortion can be very helpful in enhancing image processing results. This paper describes a novel approach that uses the scintillation traits recorded on a video clip to perform object ranging with reasonable accuracy from a single camera viewpoint. Conventionally, a single camera would be confused by the perspective viewing problem, where a large object far away looks the same as a small object close by. When the atmospheric turbulence phenomenon is considered, the edge or texture pixels of an object tend to scintillate and vary more with increased distance. This turbulence induced signature can be quantitatively analyzed to achieve object ranging with reasonable accuracy. Despite the inevitable fact that turbulence will cause random blurring and deformation of imaging results, it also offers convenient solutions to some remote sensing and machine vision problems, which would otherwise be difficult.
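The quantity commonly used to quantify such flicker is the scintillation index, the normalized variance of intensity. The sketch below computes it for two synthetic edge-pixel time series and shows only the qualitative trend (stronger flicker for the more distant object); the noise levels are invented, and the paper's actual range calibration is not reproduced here.

```python
import random

def scintillation_index(series):
    """Normalized intensity variance: sigma_I^2 = <I^2> / <I>^2 - 1."""
    n = len(series)
    mean = sum(series) / n
    mean_sq = sum(x * x for x in series) / n
    return mean_sq / mean**2 - 1.0

rng = random.Random(1)
# synthetic edge-pixel brightness over video frames: the farther
# object flickers more strongly under turbulence (levels invented)
near_pixel = [100.0 + rng.gauss(0.0, 2.0) for _ in range(5000)]
far_pixel = [100.0 + rng.gauss(0.0, 8.0) for _ in range(5000)]
si_near = scintillation_index(near_pixel)
si_far = scintillation_index(far_pixel)   # larger index -> larger range
```

In practice a calibration curve from scintillation index to distance, measured under known turbulence conditions, would supply the final range estimate.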
3D digital image correlation using single color camera pseudo-stereo system
NASA Astrophysics Data System (ADS)
Li, Junrui; Dan, Xizuo; Xu, Wan; Wang, Yonghong; Yang, Guobiao; Yang, Lianxiang
2017-10-01
Three dimensional digital image correlation (3D-DIC) has been widely used by industry to measure the 3D contour and whole-field displacement/strain. In this paper, a novel single color camera 3D-DIC setup, using a reflection-based pseudo-stereo system, is proposed. Compared to the conventional single camera pseudo-stereo system, which splits the CCD sensor into two halves to capture the stereo views, the proposed system achieves both views using the whole CCD chip and without reducing the spatial resolution. In addition, similarly to the conventional 3D-DIC system, the center of the two views stands in the center of the CCD chip, which minimizes the image distortion relative to the conventional pseudo-stereo system. The two overlapped views in the CCD are separated by the color domain, and the standard 3D-DIC algorithm can be utilized directly to perform the evaluation. The system's principle and experimental setup are described in detail, and multiple tests are performed to validate the system.
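The color-domain separation can be sketched in a few lines: if the two overlapped views are encoded in distinct color channels (red and blue are assumed here for illustration; the paper's exact channel assignment may differ), splitting the channels recovers two monochrome views ready for standard correlation:

```python
def compose(view_a, view_b):
    """Pack two grayscale views into one RGB frame: A in the red
    channel, B in the blue channel (assumed assignment)."""
    return [[(a, 0, b) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(view_a, view_b)]

def separate(rgb):
    """Split the color channels to recover the two overlapped views,
    each of which can then feed a standard DIC correlation step."""
    view_a = [[px[0] for px in row] for row in rgb]
    view_b = [[px[2] for px in row] for row in rgb]
    return view_a, view_b

left = [[10, 20], [30, 40]]     # toy 2x2 grayscale views
right = [[50, 60], [70, 80]]
rec_left, rec_right = separate(compose(left, right))
```

Because both views occupy the full sensor, no spatial resolution is sacrificed, which is the advantage over sensor-splitting pseudo-stereo setups.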
Schindler, Andreas; Bartels, Andreas
2018-05-15
Our phenomenological experience of the stable world is maintained by continuous integration of visual self-motion with extra-retinal signals. However, due to conventional constraints of fMRI acquisition in humans, neural responses to visuo-vestibular integration have only been studied using artificial stimuli, in the absence of voluntary head motion. We here circumvented these limitations and let participants move their heads during scanning. The slow dynamics of the BOLD signal allowed us to acquire neural signal related to head motion after the observer's head was stabilized by inflatable air cushions. Visual stimuli were presented on head-fixed display goggles and updated in real time as a function of head motion that was tracked using an external camera. Two conditions simulated forward translation of the participant. During physical head rotation, the congruent condition simulated a stable world, whereas the incongruent condition added arbitrary lateral motion. Importantly, both conditions were precisely matched in visual properties and head rotation. By comparing congruent with incongruent conditions, we found evidence consistent with the multi-modal integration of visual cues with head motion into a coherent "stable world" percept in the parietal operculum and in an anterior part of parieto-insular cortex (aPIC). In the visual motion network, human regions MST, a dorsal part of VIP, the cingulate sulcus visual area (CSv) and a region in precuneus (Pc) showed differential responses to the same contrast. The results demonstrate for the first time neural multimodal interactions between precisely matched congruent versus incongruent visual and non-visual cues during physical head movement in the human brain. The methodological approach opens the path to a new class of fMRI studies with unprecedented temporal and spatial control over visuo-vestibular stimulation.
Automated Meteor Fluxes with a Wide-Field Meteor Camera Network
NASA Technical Reports Server (NTRS)
Blaauw, R. C.; Campbell-Brown, M. D.; Cooke, W.; Weryk, R. J.; Gill, J.; Musci, R.
2013-01-01
Within NASA, the Meteoroid Environment Office (MEO) is charged to monitor the meteoroid environment in near-Earth space for the protection of satellites and spacecraft. The MEO has recently established a two-station system to calculate automated meteor fluxes in the millimeter size range. The cameras each consist of a 17 mm focal length Schneider lens on a Watec 902H2 Ultimate CCD video camera, producing a 21.7 x 16.3 degree field of view. This configuration has a red-sensitive limiting meteor magnitude of about +5. The stations are located in the southeastern USA, 31.8 kilometers apart, and are aimed at a location 90 km above a point 50 km equidistant from each station, which optimizes the common volume. Both single-station and double-station fluxes are found, each having benefits; more meteors will be detected in a single camera than will be seen in both cameras, producing a better determined flux, but double-station detections allow for non-ambiguous shower associations and permit speed/orbit determinations. Video from the cameras is fed into Linux computers running the ASGARD (All Sky and Guided Automatic Real-time Detection) software, created by Rob Weryk of the University of Western Ontario Meteor Physics Group. ASGARD performs the meteor detection/photometry, and invokes the MILIG and MORB codes to determine the trajectory, speed, and orbit of the meteor. A subroutine in ASGARD allows for approximate shower identification in single-station meteors. The ASGARD output is used in routines to calculate the flux in units of meteors per square kilometer per hour. The flux algorithm employed here differs from others currently in use in that it does not assume a single height for all meteors observed in the common camera volume. In the MEO system, the volume is broken up into a set of height intervals, with the collecting areas determined by the radiant of the active shower or sporadic source. The flux per height interval is summed to obtain the total meteor flux.
As ASGARD also computes the meteor mass from the photometry, a mass flux can also be calculated. Weather conditions in the southeastern United States are seldom ideal, which introduces the difficulty of a variable sky background. First, a weather algorithm indicates whether sky conditions are clear enough to calculate fluxes, at which point a limiting magnitude algorithm is employed. The limiting magnitude algorithm performs a fit of stellar magnitudes versus camera intensities. The stellar limiting magnitude is derived from this fit and easily converted to a limiting meteor magnitude for the active shower or sporadic source.
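The height-interval flux summation described in this abstract reduces to a short loop. The counts, collecting areas, and observing time below are invented placeholders:

```python
def total_flux(counts, areas_km2, hours):
    """Sum per-height-interval fluxes (meteors / km^2 / hour), rather
    than assuming a single height for all meteors in the volume."""
    return sum(n / (area * hours) for n, area in zip(counts, areas_km2))

# invented numbers: one 4-hour clear night, three height intervals
counts = [12, 30, 8]            # meteors detected per interval
areas = [950.0, 1200.0, 800.0]  # collecting area per interval (km^2)
flux = total_flux(counts, areas, hours=4.0)
```

The per-interval collecting areas would in practice be computed from the radiant geometry of the active shower or sporadic source, as the abstract notes.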
rf streak camera based ultrafast relativistic electron diffraction.
Musumeci, P; Moody, J T; Scoby, C M; Gutierrez, M S; Tran, T
2009-01-01
We theoretically and experimentally investigate the possibility of using an rf streak camera to time-resolve, in a single shot, structural changes at the sub-100 fs time scale via relativistic electron diffraction. We experimentally tested this novel concept at the UCLA Pegasus rf photoinjector. Time-resolved diffraction patterns from a thin Al foil are recorded. Averaging over 50 shots is required in order to get statistics sufficient to uncover a variation in time of the diffraction patterns. In the absence of an external pump laser, this is explained as being due to the energy chirp on the beam out of the electron gun. With further improvements to the electron source, rf streak camera based ultrafast electron diffraction has the potential to yield truly single-shot measurements of ultrafast processes.
NASA Technical Reports Server (NTRS)
Bendura, R. J.; Renfroe, P. G.
1974-01-01
A detailed discussion of the application of a previously developed method to determine vehicle flight attitude using a single camera onboard the vehicle is presented, with emphasis on the digital computer program format and data reduction techniques. Application requirements include film and earth-related coordinates of at least two landmarks (or features), the location of the flight vehicle with respect to the earth, and camera characteristics. Included in this report are a detailed discussion of the program input and output format, a computer program listing, a discussion of modifications made to the initial method, a step-by-step basic data reduction procedure, and several example applications. The computer program is written in FORTRAN IV for the Control Data 6000 series digital computer.
A Portable Shoulder-Mounted Camera System for Surgical Education in Spine Surgery.
Pham, Martin H; Ohiorhenuan, Ifije E; Patel, Neil N; Jakoi, Andre M; Hsieh, Patrick C; Acosta, Frank L; Wang, Jeffrey C; Liu, John C
2017-02-07
The past several years have demonstrated an increased recognition of operative videos as an important adjunct for resident education. Currently lacking, however, are effective methods to record video for the purposes of illustrating the techniques of minimally invasive (MIS) and complex spine surgery. We describe here our experiences developing and using a shoulder-mounted camera system for recording surgical video. Our requirements for an effective camera system included wireless portability to allow for movement around the operating room, camera mount location for comfort and loupes/headlight usage, battery life for long operative days, and sterile control of on/off recording. With this in mind, we created a shoulder-mounted camera system utilizing a GoPro HERO3+, its Smart Remote (GoPro, Inc., San Mateo, California), a high-capacity external battery pack, and a commercially available shoulder-mount harness. This shoulder-mounted system was more comfortable to wear for long periods of time in comparison to existing head-mounted and loupe-mounted systems. Without requiring any wired connections, the surgeon was free to move around the room as needed. Over the past several years, we have recorded numerous MIS and complex spine surgeries for the purposes of surgical video creation for resident education. Surgical videos serve as a platform to distribute important operative nuances in rich multimedia. Effective and practical camera system setups are needed to encourage the continued creation of videos to illustrate the surgical maneuvers in minimally invasive and complex spinal surgery. We describe here a novel portable shoulder-mounted camera system setup specifically designed to be worn and used for long periods of time in the operating room.
Video Capture of Plastic Surgery Procedures Using the GoPro HERO 3+
Graves, Steven Nicholas; Shenaq, Deana Saleh; Langerman, Alexander J.
2015-01-01
Background: Significant improvements can be made in recording surgical procedures, particularly in capturing high-quality video recordings from the surgeons’ point of view. This study examined the utility of the GoPro HERO 3+ Black Edition camera for high-definition, point-of-view recordings of plastic and reconstructive surgery. Methods: The GoPro HERO 3+ Black Edition camera was head-mounted on the surgeon and oriented to the surgeon’s perspective using the GoPro App. The camera was used to record 4 cases: 2 fat graft procedures and 2 breast reconstructions. During cases 1-3, an assistant remotely controlled the GoPro via the GoPro App. For case 4 the GoPro was linked to a WiFi remote and controlled by the surgeon. Results: Camera settings for case 1 were as follows: 1080p video resolution; 48 fps; Protune mode on; wide field of view; 16:9 aspect ratio. The lighting contrast due to the overhead lights resulted in limited washout of the video image. Camera settings were adjusted for cases 2-4 to a narrow field of view, which enabled the camera’s automatic white balance to better compensate for bright lights focused on the surgical field. Cases 2-4 captured video sufficient for teaching or presentation purposes. Conclusions: The GoPro HERO 3+ Black Edition camera enables high-quality, cost-effective video recording of plastic and reconstructive surgery procedures. When set to a narrow field of view and automatic white balance, the camera is able to sufficiently compensate for the contrasting light environment of the operating room and capture high-resolution, detailed video. PMID:25750851
Free-form reflective optics for mid-infrared camera and spectrometer on board SPICA
NASA Astrophysics Data System (ADS)
Fujishiro, Naofumi; Kataza, Hirokazu; Wada, Takehiko; Ikeda, Yuji; Sakon, Itsuki; Oyabu, Shinki
2017-11-01
SPICA (Space Infrared Telescope for Cosmology and Astrophysics) is an astronomical mission optimized for mid- and far-infrared astronomy with a cryogenically cooled 3-m class telescope, envisioned for launch in the early 2020s. The Mid-infrared Camera and Spectrometer (MCS) is a focal plane instrument for SPICA with imaging and spectroscopic observing capabilities in the mid-infrared wavelength range of 5-38μm. MCS consists of two relay optical modules and the following four scientific optical modules: WFC (Wide Field Camera; 5'x 5' field of view, f/11.7 and f/4.2 cameras), LRS (Low Resolution Spectrometer; 2'.5 long slits, prism dispersers, f/5.0 and f/1.7 cameras, spectral resolving power R ∼ 50-100), MRS (Mid Resolution Spectrometer; echelles, integral field units by image slicer, f/3.3 and f/1.9 cameras, R ∼ 1100-3000) and HRS (High Resolution Spectrometer; immersed echelles, f/6.0 and f/3.6 cameras, R ∼ 20000-30000). Here, we present the optical design and expected optical performance of MCS. Most parts of the MCS optics adopt an off-axis reflective design, both to cover the wide wavelength range of 5-38μm without chromatic aberration and to minimize problems due to changes in the shapes and refractive indices of materials between room temperature and cryogenic temperature. In order to achieve the demanding specification requirements of wide field of view, small F-number and large spectral resolving power in a compact size, we employed the paraxial and aberration analysis of off-axial optical systems (Araki 2005 [1]), a design method using free-form surfaces for compact reflective optics such as head-mounted displays. As a result, we have successfully designed compact reflective optics for MCS with as-built performance of diffraction-limited image resolution.
High-speed imaging system for observation of discharge phenomena
NASA Astrophysics Data System (ADS)
Tanabe, R.; Kusano, H.; Ito, Y.
2008-11-01
A thin metal electrode tip instantly changes its shape into a sphere or a needlelike shape in a single electrical discharge of high current. These changes occur within several hundred microseconds. To observe these high-speed phenomena in a single discharge, an imaging system using a high-speed video camera and a high repetition rate pulse laser was constructed. A nanosecond laser, the wavelength of which was 532 nm, was used as the illuminating source of a newly developed high-speed video camera, HPV-1. The time resolution of our system was determined by the laser pulse width and was about 80 nanoseconds. The system can take one hundred pictures at 16- or 64-microsecond intervals in a single discharge event. A band-pass filter at 532 nm was placed in front of the camera to block the emission of the discharge arc at other wavelengths. Therefore, clear images of the electrode were recorded even during the discharge. If the laser was not used, only images of plasma during discharge and thermal radiation from the electrode after discharge were observed. These results demonstrate that the combination of a high repetition rate and a short pulse laser with a high speed video camera provides a unique and powerful method for high speed imaging.
Solid state replacement of rotating mirror cameras
NASA Astrophysics Data System (ADS)
Frank, Alan M.; Bartolick, Joseph M.
2007-01-01
Rotating mirror cameras have been the mainstay of mega-frame per second imaging for decades. There is still no electronic camera that can match a film based rotary mirror camera for the combination of frame count, speed, resolution and dynamic range. The rotary mirror cameras are predominantly used in the range of 0.1 to 100 micro-seconds per frame, for 25 to more than a hundred frames. Electron tube gated cameras dominate the sub-microsecond regime but are frame count limited. Video cameras are pushing into the microsecond regime but are resolution limited by the high data rates. An all solid state architecture, dubbed 'In-situ Storage Image Sensor' or 'ISIS', by Prof. Goji Etoh has made its first appearance in the market, and its evaluation is discussed. Recent work at Lawrence Livermore National Laboratory has concentrated both on evaluation of the presently available technologies and on exploring the capabilities of the ISIS architecture. It is clear that, although there is presently no single-chip camera that can simultaneously match the rotary mirror cameras, the ISIS architecture has the potential to approach their performance.
Geometric rectification of camera-captured document images.
Liang, Jian; DeMenthon, Daniel; Doermann, David
2008-04-01
Compared to typical scanners, handheld cameras offer convenient, flexible, portable, and non-contact image capture, which enables many new applications and breathes new life into existing ones. However, camera-captured documents may suffer from distortions caused by non-planar document shape and perspective projection, which lead to failure of current OCR technologies. We present a geometric rectification framework for restoring the frontal-flat view of a document from a single camera-captured image. Our approach estimates 3D document shape from texture flow information obtained directly from the image without requiring additional 3D/metric data or prior camera calibration. Our framework provides a unified solution for both planar and curved documents and can be applied in many, especially mobile, camera-based document analysis applications. Experiments show that our method produces results that are significantly more OCR compatible than the original images.
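The paper's texture-flow shape estimation is beyond a short snippet, but the rectification step it feeds is a standard projective warp. As a hedged illustration (not the authors' method), the sketch below estimates a homography from four hypothetical page-corner correspondences using the direct linear transform, then verifies that it maps the distorted corners onto a frontal rectangle:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (4 point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.array(A))   # null-space vector = homography entries
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_homography(H, pts):
    """Apply H to an (N, 2) array of points in homogeneous coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical corners of a perspectively distorted page, mapped to a
# frontal A4-like rectangle (210 x 297 units).
src = np.array([[12.0, 8.0], [305.0, 30.0], [290.0, 420.0], [25.0, 400.0]])
dst = np.array([[0.0, 0.0], [210.0, 0.0], [210.0, 297.0], [0.0, 297.0]])
H = homography_from_points(src, dst)
print(np.allclose(apply_homography(H, src), dst, atol=1e-6))  # True
```

In practice the warp would be applied to the whole image (e.g. with OpenCV's warpPerspective); for curved documents, as in the paper, a single homography is not sufficient and the estimated 3D shape must drive the flattening.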
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, Arthur; van Beuzekom, Martin; Bouwens, Bram
Here, we demonstrate a coincidence velocity map imaging apparatus equipped with a novel time-stamping fast optical camera, Tpx3Cam, whose high sensitivity and nanosecond timing resolution allow for simultaneous position and time-of-flight detection. This single detector design is simple, flexible, and capable of highly differential measurements. We show detailed characterization of the camera and its application in strong field ionization experiments.
2017-11-07
Generalized assorted pixel camera: postcapture control of resolution, dynamic range, and spectrum.
Yasuma, Fumihito; Mitsunaga, Tomoo; Iso, Daisuke; Nayar, Shree K
2010-09-01
We propose the concept of a generalized assorted pixel (GAP) camera, which enables the user to capture a single image of a scene and, after the fact, control the tradeoff between spatial resolution, dynamic range and spectral detail. The GAP camera uses a complex array (or mosaic) of color filters. A major problem with using such an array is that the captured image is severely under-sampled for at least some of the filter types. This leads to reconstructed images with strong aliasing. We make four contributions in this paper: 1) we present a comprehensive optimization method to arrive at the spatial and spectral layout of the color filter array of a GAP camera. 2) We develop a novel algorithm for reconstructing the under-sampled channels of the image while minimizing aliasing artifacts. 3) We demonstrate how the user can capture a single image and then control the tradeoff of spatial resolution to generate a variety of images, including monochrome, high dynamic range (HDR) monochrome, RGB, HDR RGB, and multispectral images. 4) Finally, the performance of our GAP camera has been verified using extensive simulations that use multispectral images of real world scenes. A large database of these multispectral images has been made available at http://www1.cs.columbia.edu/CAVE/projects/gap_camera/ for use by the research community.
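The paper's optimized reconstruction algorithm is not reproduced here; as a minimal, hedged sketch of the underlying idea (filling an under-sampled channel from its sparse samples), the following normalized box filter interpolates a channel sampled on a hypothetical checkerboard mosaic:

```python
import numpy as np

def interpolate_sparse_channel(samples, mask):
    """Fill an under-sampled channel by normalized 3x3 box filtering.

    samples: 2D array, valid only where mask is True (zero elsewhere).
    mask:    2D bool array marking pixels where this filter type was sampled.
    """
    def conv2(img):
        # 3x3 box convolution with zero padding, implemented with shifts.
        padded = np.pad(img, 1)
        out = np.zeros_like(img, dtype=float)
        for dy in range(3):
            for dx in range(3):
                out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
        return out
    # Ratio of (sum of valid samples) to (count of valid samples) in each
    # window gives a local average over whatever samples are present.
    num = conv2(np.where(mask, samples, 0.0))
    den = conv2(mask.astype(float))
    return np.where(mask, samples, num / np.maximum(den, 1e-9))

# A flat channel sampled on a checkerboard is recovered exactly.
mask = np.indices((6, 6)).sum(axis=0) % 2 == 0
truth = np.full((6, 6), 5.0)
rec = interpolate_sparse_channel(np.where(mask, truth, 0.0), mask)
print(np.allclose(rec, truth))  # True
```

A real GAP pipeline would replace this local averaging with the paper's anti-aliasing reconstruction, since plain interpolation is exactly what produces the aliasing artifacts the authors set out to minimize.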
Jordt, Anne; Zelenka, Claudius; von Deimling, Jens Schneider; Koch, Reinhard; Köser, Kevin
2015-12-05
Several acoustic and optical techniques have been used for characterizing natural and anthropogenic gas leaks (carbon dioxide, methane) from the ocean floor. Here, single-camera based methods for bubble stream observation have become an important tool, as they help estimate flux and bubble sizes under certain assumptions. However, they record only a projection of a bubble into the camera and therefore cannot capture the full 3D shape, which is particularly important for larger, non-spherical bubbles. The unknown distance of the bubble to the camera (making it appear larger or smaller than expected) as well as refraction at the camera interface introduce extra uncertainties. In this article, we introduce our wide baseline stereo-camera deep-sea sensor bubble box that overcomes these limitations, as it observes bubbles from two orthogonal directions using calibrated cameras. Besides the setup and the hardware of the system, we discuss appropriate calibration and the different automated processing steps deblurring, detection, tracking, and 3D fitting that are crucial to arrive at a 3D ellipsoidal shape and rise speed of each bubble. The obtained values for single bubbles can be aggregated into statistical bubble size distributions or fluxes for extrapolation based on diffusion and dissolution models and large scale acoustic surveys. We demonstrate and evaluate the wide baseline stereo measurement model using a controlled test setup with ground truth information.
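The aggregation step described above can be sketched as follows. This is an illustrative assumption, not the bubble box software: per-bubble ellipsoid fits are reduced to sphere-equivalent radii and a volumetric flux, with semi-axes in mm and a known observation time.

```python
import numpy as np

def bubble_statistics(semi_axes_mm, duration_s):
    """Aggregate 3D-fitted ellipsoids into sizes and a volume flux.

    semi_axes_mm: (N, 3) array of ellipsoid semi-axes a, b, c per bubble.
    duration_s:   observation time over which the N bubbles were counted.
    """
    a, b, c = semi_axes_mm.T
    volumes = 4.0 / 3.0 * np.pi * a * b * c                    # mm^3 per bubble
    r_equiv = (3.0 * volumes / (4.0 * np.pi)) ** (1.0 / 3.0)   # sphere-equivalent radius
    flux = volumes.sum() / duration_s                          # mm^3 / s
    return r_equiv, flux

# A unit sphere and a flattened ellipsoid with the same volume.
axes = np.array([[1.0, 1.0, 1.0], [2.0, 1.0, 0.5]])
r, flux = bubble_statistics(axes, duration_s=10.0)
print(np.round(r, 3))      # [1. 1.] : both bubbles have the volume of a unit sphere
print(np.round(flux, 3))   # 0.838
```

The resulting radii could then be histogrammed into the size distributions mentioned in the abstract, or the flux fed into dissolution models for extrapolation.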
NASA Astrophysics Data System (ADS)
Yonai, J.; Arai, T.; Hayashida, T.; Ohtake, H.; Namiki, J.; Yoshida, T.; Etoh, T. Goji
2012-03-01
We have developed an ultrahigh-speed CCD camera that can capture instantaneous phenomena not visible to the human eye and impossible to capture with a regular video camera. The ultrahigh-speed CCD was specially constructed so that the CCD memory between the photodiode and the vertical transfer path of each pixel can store 144 frames each. For every one-frame shot, the electric charges generated from the photodiodes are transferred in one step to the memory of all the parallel pixels, making ultrahigh-speed shooting possible. Earlier, we experimentally manufactured a 1M-fps ultrahigh-speed camera and tested it for broadcasting applications. Through those tests, we learned that there are cases that require shooting speeds (frame rate) of more than 1M fps; hence we aimed to develop a new ultrahigh-speed camera that will enable much faster shooting speeds than what is currently possible. Since shooting at speeds of more than 200,000 fps results in decreased image quality and abrupt heating of the image sensor and drive circuit board, faster speeds cannot be achieved merely by increasing the drive frequency. We therefore had to improve the image sensor wiring layout and the driving method to develop a new 2M-fps, 300k-pixel ultrahigh-speed single-chip color camera for broadcasting purposes.
Jordt, Anne; Zelenka, Claudius; Schneider von Deimling, Jens; Koch, Reinhard; Köser, Kevin
2015-01-01
PMID:26690168
Binocular Multispectral Adaptive Imaging System (BMAIS)
2010-07-26
system for pilots that adaptively integrates shortwave infrared (SWIR), visible, near-IR (NIR), off-head thermal, and computer symbology/imagery into... respective areas. BMAIS is a binocular helmet-mounted imaging system that features dual shortwave infrared (SWIR) cameras, embedded image processors and... algorithms and fusion of other sensor sites such as forward looking infrared (FLIR) and other aircraft subsystems. BMAIS is attached to the helmet
MS Foale performs maintenance on middeck
1999-12-21
S103-E-5184 (21 December 1999) --- Astronaut C. Michael Foale, mission specialist, performs a minor maintenance task on the mid deck of the Earth-orbiting Space Shuttle Discovery. The long rectangular structure near Foale's head is the escape pole, which has been standard equipment on the shuttle fleet since 1988. The photo was recorded with an electronic still camera (ESC) at 10:39:31 GMT, Dec. 21, 1999.
Innovative Airborne Sensors for Disaster Management
NASA Astrophysics Data System (ADS)
Altan, M. O.; Kemper, G.
2016-06-01
Modern disaster management systems rest on three pillars: crisis preparedness, early warning and the final crisis management. All three require specialized data in order to analyze existing structures, to feed the early warning system and to update information after a disaster so that crisis management organizations can act. How can new and innovative sensors assist in these tasks? Aerial images have frequently been used in the past for generating spatial data; in urban structures, however, not all information can be extracted easily. Modern oblique camera systems already assist in the evaluation of building structures to define rescue paths, analyze building structures and also provide information on the stability of the urban fabric. For this application a highly accurate geometric sensor is not needed; even SLC-camera-based oblique systems such as the OI X5, which uses Nikon cameras, do a proper job. Such a camera also delivers valuable information after a disaster, allowing the degree of deformation to be assessed in order to estimate the stability and usability of buildings for the population. Thermal data in combination with RGB give further information on building structure, damage and potential water intrusion. An oblique thermal sensor with 9 heads, enabling nadir and oblique thermal data acquisition, is under development. Besides its application in searching for people, thermal anomalies can reveal humidity in constructions (transpiration effects), damaged power lines, burning gas pipes and many other hazards. A major task lies in the data analysis, which should be automatic and fast. This requires a good initial orientation and a proper relative adjustment of the single sensors; with these in place, many modern software tools enable rapid data extraction. Automated analysis of the data before and after a disaster can highlight areas of significant change; detecting anomalies is the way to focus attention on the priority areas.
Lidar also supports disaster management, by analyzing changes in the DSM before and after the "event". An advantage of lidar is that, apart from rain and clouds, no weather conditions limit its use; as an active sensor, it can also be flown at night. The new mid-format cameras that use CMOS sensors (e.g. Phase One IXU1000) can capture data even under poor and difficult light conditions and may well become the first choice for remotely sensed data acquisition from aircraft and UAVs. UAVs will surely play an ever larger part in disaster management at the detailed level. Equipped today with live RGB and thermal-IR video cameras, they assist in looking inside and behind buildings, and can thus continue the survey where airborne anomalies have been detected.
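The before/after DSM comparison mentioned above can be sketched, under the assumption of two co-registered elevation grids, as a simple elevation-difference threshold. This is an illustrative simplification, not the authors' workflow:

```python
import numpy as np

def dsm_change_mask(dsm_before, dsm_after, threshold_m=2.0):
    """Flag cells whose elevation dropped by more than threshold_m
    (e.g. collapsed structures) between two co-registered DSMs."""
    drop = dsm_before - dsm_after
    return drop > threshold_m

# Hypothetical 2x3 grids: the middle column (a building) loses height.
before = np.array([[10.0, 10.0, 3.0],
                   [10.0, 10.0, 3.0]])
after = np.array([[10.0, 4.0, 3.0],
                  [10.0, 3.5, 3.0]])
mask = dsm_change_mask(before, after)
print(mask.sum())  # 2 cells flagged as significant elevation loss
```

A production pipeline would add co-registration, noise filtering and a minimum-area constraint before flagging a region for the crisis management focus described in the text.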
A pixellated γ-camera based on CdTe detectors: clinical interests and performances
NASA Astrophysics Data System (ADS)
Chambron, J.; Arntz, Y.; Eclancher, B.; Scheiber, Ch; Siffert, P.; Hage Hali, M.; Regal, R.; Kazandjian, A.; Prat, V.; Thomas, S.; Warren, S.; Matz, R.; Jahnke, A.; Karman, M.; Pszota, A.; Nemeth, L.
2000-07-01
A mobile gamma camera dedicated to nuclear cardiology, based on a 15 cm×15 cm detection matrix of 2304 CdTe detector elements, 2.83 mm×2.83 mm×2 mm, has been developed with European Community support by academic and industrial research centres. The intrinsic properties of the semiconductor crystals - low ionisation energy, high energy resolution, high attenuation coefficient - are potentially attractive for improving γ-camera performance. But their use as γ detectors for medical imaging at high resolution requires the production of high-grade materials and large quantities of sophisticated read-out electronics. The decision was taken to use CdTe rather than CdZnTe, because the manufacturer (Eurorad, France) has long experience in producing high-grade materials with good homogeneity and stability, whose transport properties, characterised by the mobility-lifetime product, are at least 5 times greater than those of CdZnTe. The detector matrix is divided into 9 square units; each unit is composed of 256 detectors shared among 16 modules. Each module consists of a thin ceramic plate holding a line of 16 detectors, in four groups of four for easy replacement, together with a special 16-channel integrated circuit designed by CLRC (UK). A detection and acquisition logic based on a DSP card and a PC has been programmed by Eurorad for spectral and counting acquisition modes. LEAP and LEHR collimators of commercial design, the mobile gantry and the clinical software were provided by Siemens (Germany). The γ-camera head housing, its general mounting and the electric connections were produced by Phase Laboratory (CNRS, France). The compactness of the γ-camera head - thin detector matrix, electronic readout and collimator - facilitates the detection of close γ sources with the advantage of high spatial resolution. Such equipment is intended for bedside explorations.
There is a growing clinical requirement in nuclear cardiology to assess early the extent of an infarct in intensive care units, as well as in neurology to grade a cerebral vascular insult, in pregnancy to detect a pulmonary capillary embolism, and in presurgical oncology to identify sentinel lymph nodes. The physical tests and the clinical imaging trials of the experimental device, performed by IPB (France) and SHC (Hungary), show that it meets its expected performance, which is better than that of a conventional cardiac γ-camera except for dynamic studies.
Development of a PET/Cerenkov-light hybrid imaging system
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yamamoto, Seiichi, E-mail: s-yama@met.nagoya-u.ac.jp; Hamamura, Fuka; Kato, Katsuhiko
2014-09-15
Purpose: Cerenkov-light imaging is a new molecular imaging technology that detects visible photons from high-speed electrons using a high-sensitivity optical camera. However, the merit of Cerenkov-light imaging remains unclear. If a PET/Cerenkov-light hybrid imaging system were developed, the merit of Cerenkov-light imaging could be clarified by directly comparing the two imaging modalities. Methods: The authors developed and tested a PET/Cerenkov-light hybrid imaging system that consists of a dual-head PET system, a reflection mirror located above the subject, and a high-sensitivity charge coupled device (CCD) camera. The authors installed these systems inside a black box for imaging the Cerenkov light. The dual-head PET system employed 1.2 × 1.2 × 10 mm³ GSO crystals arranged in a 33 × 33 matrix that was optically coupled to a position-sensitive photomultiplier tube to form a GSO block detector. The authors arranged two GSO block detectors 10 cm apart and positioned the subject between them. The Cerenkov light above the subject is reflected by the mirror, changes its direction to the side of the PET system, and is imaged by the high-sensitivity CCD camera. Results: The dual-head PET system had a spatial resolution of ∼1.2 mm FWHM and a sensitivity of ∼0.31% at the center of the FOV. The Cerenkov-light imaging system's spatial resolution was ∼275 μm for a ²²Na point source. Using the combined PET/Cerenkov-light hybrid imaging system, the authors successfully obtained fused images from simultaneously acquired images. In the Cerenkov-light images, the distributions sometimes differ from the PET images due to light transmission and absorption in the subject's body. In simultaneous imaging of a rat, the authors found that ¹⁸F-FDG accumulation was observed mainly in the Harderian gland on the PET image, while the distribution of Cerenkov light was observed in the eyes.
Conclusions: The authors conclude that their developed PET/Cerenkov-light hybrid imaging system is useful for evaluating the merits and limitations of Cerenkov-light imaging in molecular imaging research.
Overview of Digital Forensics Algorithms in DSLR Cameras
NASA Astrophysics Data System (ADS)
Aminova, E.; Trapeznikov, I.; Priorov, A.
2017-05-01
The widespread use of mobile technologies and the improvement of digital photo devices have led to more frequent cases of image falsification, including in judicial practice. Consequently, an important task for up-to-date digital image processing tools is the development of algorithms for determining the source and model of a DSLR (Digital Single Lens Reflex) camera, and for improving image-formation algorithms. Most research in this area is based on the observation that a unique sensor trace of a DSLR camera can be extracted at a certain stage of the imaging process inside the camera. This study focuses on the problem of determining unique features of DSLR cameras based on optical-subsystem artifacts and sensor noise.
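Sensor-noise source identification of the kind alluded to above is commonly done with PRNU (photo-response non-uniformity) fingerprints. The sketch below is a hedged toy version, not the paper's method: a crude mean filter stands in for a proper denoiser, and the fingerprint is simulated. It checks that the noise residual of an image from the "same" sensor correlates more strongly with the fingerprint than an image from a different sensor:

```python
import numpy as np

def noise_residual(img):
    """Noise residual: image minus a crude 3x3 mean-filter 'denoiser'."""
    padded = np.pad(img, 1, mode='edge')
    smooth = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                 for dy in range(3) for dx in range(3)) / 9.0
    return img - smooth

def ncc(a, b):
    """Normalized cross-correlation between two residual patterns."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

rng = np.random.default_rng(0)
fingerprint = rng.normal(0, 1, (64, 64))       # stands in for a camera's PRNU pattern
scene = rng.normal(100, 5, (64, 64))
img_same = scene * (1 + 0.05 * fingerprint)    # image taken with the 'same' sensor
img_other = scene * (1 + 0.05 * rng.normal(0, 1, (64, 64)))  # different sensor
print(ncc(noise_residual(img_same), fingerprint) >
      ncc(noise_residual(img_other), fingerprint))  # True
```

Real forensic pipelines average residuals over many images, use wavelet denoising, and apply peak-to-correlation-energy statistics rather than a single correlation value.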
NASA Technical Reports Server (NTRS)
Miller, Chris; Mulavara, Ajitkumar; Bloomberg, Jacob
2001-01-01
To confidently report any data collected from a video-based motion capture system, its functional characteristics must be determined, namely accuracy, repeatability and resolution. Many researchers have examined these characteristics with motion capture systems, but they used only two cameras, positioned 90 degrees to each other. Everaert used 4 cameras, but all were aligned along major axes (two in x, one in y and z). Richards compared the characteristics of different commercially available systems set-up in practical configurations, but all cameras viewed a single calibration volume. The purpose of this study was to determine the accuracy, repeatability and resolution of a 6-camera Motion Analysis system in a split-volume configuration using a quasistatic methodology.
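The study's quasistatic protocol is not detailed here, but accuracy and repeatability are conventionally computed from repeated measurements of a known reference length (resolution requires a separate smallest-detectable-displacement test). A minimal sketch with hypothetical trial data:

```python
import numpy as np

def characterize(measured_mm, true_mm):
    """Accuracy and repeatability from repeated measurements of a known length.

    accuracy:      mean absolute deviation from the reference value.
    repeatability: sample standard deviation across repeated trials.
    """
    measured = np.asarray(measured_mm, dtype=float)
    accuracy = float(np.abs(measured - true_mm).mean())
    repeatability = float(measured.std(ddof=1))
    return accuracy, repeatability

# Hypothetical repeated measurements of a 500 mm calibration bar.
trials = [499.8, 500.1, 500.0, 499.9, 500.2]
acc, rep = characterize(trials, true_mm=500.0)
print(round(acc, 3), round(rep, 3))  # 0.12 0.158
```

In a split-volume configuration such as the one studied, these figures would be reported per region of the capture volume, since camera coverage differs across it.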
Single photon detection imaging of Cherenkov light emitted during radiation therapy
NASA Astrophysics Data System (ADS)
Adamson, Philip M.; Andreozzi, Jacqueline M.; LaRochelle, Ethan; Gladstone, David J.; Pogue, Brian W.
2018-03-01
Cherenkov imaging during radiation therapy has been developed as a tool for dosimetry, which could have applications in patient delivery verification or in regular quality audit. The cameras used are intensified imaging sensors, either ICCD or ICMOS cameras, which provide two key features: (1) nanosecond time gating and (2) amplification by 10³-10⁴. Together these allow (1) real-time capture at 10-30 frames per second, (2) sensitivity at the single-photon-event level, and (3) the ability to suppress background light from the ambient room. However, the capability to achieve single-photon imaging has not been fully analyzed to date, and as such was the focus of this study. How a single photon event appears in amplified camera imaging was quantitatively characterized from the Cherenkov images with image processing. The signal seen at normal gain levels appears as a blur of about 90 counts in the CCD detector, after going through the chain of photocathode detection, amplification through a microchannel-plate PMT, excitation of a phosphor screen, and imaging onto the CCD. The analysis of single photon events requires careful interpretation of the fixed-pattern noise, statistical quantum noise distributions, and the spatial spread of each pulse through the ICCD.
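The single-photon event analysis described above can be sketched as: subtract a fixed-pattern (dark) frame, threshold, and sum each connected blob, so that a per-event total (like the ~90-count blur the text mentions) can be measured. The frames and thresholds below are synthetic assumptions, not the study's data:

```python
import numpy as np

def photon_events(frame, dark, threshold=20):
    """Return summed counts of connected above-threshold blobs after
    fixed-pattern (dark frame) subtraction."""
    signal = frame.astype(float) - dark
    above = signal > threshold
    seen = np.zeros_like(above, dtype=bool)
    sums = []
    for y, x in zip(*np.nonzero(above)):
        if seen[y, x]:
            continue
        stack, total = [(y, x)], 0.0
        seen[y, x] = True
        while stack:                       # flood-fill one connected blob
            cy, cx = stack.pop()
            total += signal[cy, cx]
            for ny, nx in ((cy + 1, cx), (cy - 1, cx), (cy, cx + 1), (cy, cx - 1)):
                if (0 <= ny < above.shape[0] and 0 <= nx < above.shape[1]
                        and above[ny, nx] and not seen[ny, nx]):
                    seen[ny, nx] = True
                    stack.append((ny, nx))
        sums.append(total)
    return sums

dark = np.full((8, 8), 10.0)
frame = dark.copy()
frame[2:4, 2:4] += 25          # one 4-pixel photon blob, 100 counts total
frame[6, 6] += 30              # an isolated single-pixel event
print(sorted(photon_events(frame, dark)))  # [30.0, 100.0]
```

Histogramming these per-event sums over many gated frames is one way to separate genuine single-photon pulses from readout noise and pile-up.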
Effect of Booster Seat Design on Children’s Choice of Seating Positions During Naturalistic Riding
Andersson, Marianne; Bohman, Katarina; Osvalder, Anna-Lisa
2010-01-01
The purpose of this naturalistic study was to investigate the effect of booster seat design on the choice of children’s seating positions during naturalistic riding. Data was collected through observations of children during in-vehicle riding by means of a film camera. The children were positioned in high back boosters in the rear seat while a parent drove the car. The study included two different booster designs: one with large head and torso side supports, and one with small head side supports and no torso side supports. Six children between three and six years of age participated in the study. Each child was observed in both boosters. The duration of the seating positions that each child assumed was quantified. The design with large side head supports resulted more often in seating positions without head and shoulder contact with the booster’s back. There was shoulder-to-booster back contact during an average of 45% of riding time in the seat with the large head side supports compared to 75% in the seat with the small head supports. The children in the study were seated with the head in front of the front edge of the head side supports more than half the time, in both boosters. Laterally, the children were almost constantly positioned between the side supports of the booster in both seats. The observed seating positions probably reduce the desired protective effect by the side supports in side impact, and may increase the probability of head impact with the vehicle interior in frontal impact. PMID:21050601
2011-01-01
Background and purpose We noticed that our instruments were often too hot to touch after preparing the femoral head for resurfacing, and questioned whether the heat generated could exceed temperatures known to cause osteonecrosis. Patients and methods Using an infra-red thermal imaging camera, we measured real-time femoral head temperatures during femoral head reaming in 35 patients undergoing resurfacing hip arthroplasty. 7 patients received an ASR, 8 received a Cormet, and 20 received a Birmingham resurfacing arthroplasty. Results The maximum temperature recorded was 89°C. The temperature exceeded 47°C in 28 patients and 70°C in 11. The mean duration of most stages of head preparation was less than 1 min. The mean time exceeded 1 min only on peripheral head reaming of the ASR system. At temperatures lower than 47°C, only 2 femoral heads were exposed long enough to cause osteonecrosis. The highest mean maximum temperatures recorded were 54°C when the proximal femoral head was resected with an oscillating saw and 47°C during peripheral reaming with the crown drill. The modified new Birmingham resurfacing proximal femoral head reamer substantially reduced the maximum temperatures generated. Lavage reduced temperatures to a mean of 18°C. Interpretation 11 patients were subjected to temperatures sufficient to cause osteonecrosis secondary to thermal insult, regardless of the duration of reaming. In 2 cases only, the length of reaming was long enough to induce damage at lower temperatures. Lavage and sharp instruments should reduce the risk of thermal insult during hip resurfacing. PMID:22066558
Converting aerial imagery to application maps
USDA-ARS?s Scientific Manuscript database
Over the last couple of years in Agricultural Aviation and at the 2014 and 2015 NAAA conventions, we have written about and presented both single-camera and two-camera imaging systems for use on agricultural aircraft. Many aerial applicators have shown a great deal of interest in the imaging systems...
Harry E. Brown
1962-01-01
The canopy camera is a device of new design that takes wide-angle, overhead photographs of vegetation canopies, cloud cover, topographic horizons, and similar subjects. Since the entire hemisphere is photographed in a single exposure, the resulting photograph is circular, with the horizon forming the perimeter and the zenith the center. Photographs of this type provide...
NASA Astrophysics Data System (ADS)
Chang, Jiaqing; Liu, Yaxin; Huang, Bo
2017-07-01
In inkjet applications, it is common to search for an optimal drive waveform when dispensing a fresh fluid or adjusting a newly fabricated print-head. To test trial waveforms with different dwell times, a camera and a strobe light were used to image the protruding or retracting liquid tongues without ejecting any droplets. An edge detection method was used to calculate the lengths of the liquid tongues and draw the meniscus movement curves. The meniscus movement is determined by the time-domain response of the acoustic pressure at the nozzle of the print-head. Starting from the inverse piezoelectric effect, a mathematical model that accounts for liquid viscosity in acoustic propagation is constructed to study the acoustic pressure response at the nozzle of the print-head. The liquid viscosity retards the propagation speed and dampens the harmonic amplitude. The pressure response, which is the combined effect of the acoustic pressures generated during the rising time and the falling time and after their propagations and reflections, explains the meniscus movements well. Finally, the optimal dwell time for droplet ejection is discussed.
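The paper's specific edge detector is not given; as an illustrative assumption, a backlit 1D intensity profile taken along the nozzle axis (dark pixels = liquid) can simply be thresholded to measure the tongue length:

```python
import numpy as np

def tongue_length(profile, pixel_um, intensity_threshold):
    """Length of the liquid tongue from a backlit 1D intensity profile
    along the nozzle axis, where dark pixels correspond to liquid."""
    liquid = np.asarray(profile) < intensity_threshold
    idx = np.flatnonzero(liquid)
    if idx.size == 0:
        return 0.0
    # Span from first to last liquid pixel, scaled by the pixel pitch.
    return float((idx[-1] - idx[0] + 1) * pixel_um)

# Bright background (200) with a dark liquid column spanning 12 pixels.
profile = np.full(50, 200.0)
profile[5:17] = 40.0
print(tongue_length(profile, pixel_um=2.5, intensity_threshold=120))  # 30.0
```

Repeating this measurement across strobe delays yields the meniscus movement curve that the paper compares against the modeled acoustic pressure response.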
Young Stars Emerge from Orion's Head
2007-05-17
This image from NASA's Spitzer Space Telescope shows infant stars "hatching" in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's "head," just north of the massive star Lambda Orionis. Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight. http://photojournal.jpl.nasa.gov/catalog/PIA09412
Young Stars Emerge from Orion's Head
NASA Technical Reports Server (NTRS)
2007-01-01
This image from NASA's Spitzer Space Telescope shows infant stars 'hatching' in the head of the hunter constellation, Orion. Astronomers suspect that shockwaves from a supernova explosion in Orion's head, nearly three million years ago, may have initiated this newfound birth. The region featured in this Spitzer image is called Barnard 30. It is located approximately 1,300 light-years away and sits on the right side of Orion's 'head,' just north of the massive star Lambda Orionis. Wisps of red in the cloud are organic molecules called polycyclic aromatic hydrocarbons. These molecules are formed anytime carbon-based materials are burned incompletely. On Earth, they can be found in the sooty exhaust from automobile and airplane engines. They also coat the grills where charcoal-broiled meats are cooked. This image shows infrared light captured by Spitzer's infrared array camera. Light with wavelengths of 8 and 5.8 microns (red and orange) comes mainly from dust that has been heated by starlight. Light of 4.5 microns (green) shows hot gas and dust; and light of 3.6 microns (blue) is from starlight.
Plenoptic camera image simulation for reconstruction algorithm verification
NASA Astrophysics Data System (ADS)
Schwiegerling, Jim
2014-09-01
Plenoptic cameras have emerged in recent years as a technology for capturing light field data in a single snapshot. A conventional digital camera can be modified with the addition of a lenslet array to create a plenoptic camera. Two distinct camera forms have been proposed in the literature. The first has the camera image focused onto the lenslet array. The lenslet array is placed over the camera sensor such that each lenslet forms an image of the exit pupil onto the sensor. The second plenoptic form has the lenslet array relaying the image formed by the camera lens to the sensor. We have developed a raytracing package that can simulate images formed by a generalized version of the plenoptic camera. Several rays from each sensor pixel are traced backwards through the system to define a cone of rays emanating from the entrance pupil of the camera lens. Objects that lie within this cone are integrated to yield a color and exposure level for that pixel. To speed processing, three-dimensional objects are approximated as a series of planes at different depths. Repeating this process for each pixel in the sensor leads to a simulated plenoptic image on which different reconstruction algorithms can be tested.
Laparoscopic female sterilisation by a single port through monitor--a better alternative.
Sewta, Rajender Singh
2011-04-01
Female sterilisation by tubal occlusion with a laparocator is the most widely used and accepted family planning technique worldwide. With the spread of monitor-guided laparoscopic surgery across surgical specialties, laparoscopic female sterilisation is now commonly performed under monitor control through two ports: one for the laparoscope and a second for the ring applicator. The technique described here has been modified to use a single port with a monitor, with the camera fitted on the eyepiece of the laparocator (the same laparocator that has long been used without a monitor in camps in India). Over a period of about 2 years, a total of 2011 cases were operated upon. A camera and monitor were used through a single laparocator port both to visualise the fallopian tubes and to apply the rings. The results were excellent, and the approach is a better alternative to conventional laparoscopic sterilisation and to the double-puncture technique with a camera, which leaves two scars and requires an extra assistant. There were no failures, and the strain on the surgeon's eyes was minimal. The single-port approach is easier, safe, equally effective and better accepted.
NASA Astrophysics Data System (ADS)
Hatala, J.; Sonnentag, O.; Detto, M.; Runkle, B.; Vargas, R.; Kelly, M.; Baldocchi, D. D.
2009-12-01
Ground-based, visible light imagery has been used for different purposes in agricultural and ecological research. A series of recent studies explored the utilization of networked digital cameras to continuously monitor vegetation by taking oblique canopy images at fixed view angles and time intervals. In our contribution, we combine high temporal resolution digital camera imagery, eddy-covariance, and meteorological measurements with weekly field-based hyperspectral and LAI measurements to gain new insights into temporal changes in canopy structure and functioning of two managed ecosystems in California’s Sacramento-San Joaquin River Delta: a pasture infested by the invasive perennial pepperweed (Lepidium latifolium) and a rice plantation (Oryza sativa). Specific questions we address are: a) how does year-round grazing affect pepperweed canopy development, b) is it possible to identify phenological key events of managed ecosystems (pepperweed: flowering; rice: heading) from the limited spectral information of digital camera imagery, c) is a simple greenness index derived from digital camera imagery sufficient to track leaf area index and canopy development of managed ecosystems, and d) what are the scales of temporal correlation between digital camera signals and carbon and water fluxes of managed ecosystems? Preliminary results for the pasture-pepperweed ecosystem show that year-round grazing inhibits the accumulation of dead stalks, causing earlier green-up, and that digital camera imagery is well suited to capture the onset of flowering and the associated decrease in photosynthetic CO2 uptake. Results from our analyses are of great relevance from both a global environmental change and a land management perspective.
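The "simple greenness index" referred to above is typically the green chromatic coordinate, G/(R+G+B), computed over mean channel values of a canopy region of interest. A minimal sketch (the function name is illustrative, not from the study):

```python
def gcc(r, g, b):
    """Green chromatic coordinate: G / (R + G + B) for the mean
    digital numbers of a canopy region of interest."""
    total = r + g + b
    if total == 0:
        raise ValueError("all channels are zero")
    return g / total
```

Tracking this ratio through a season dampens illumination changes that affect all channels equally, which is why it is a common proxy for canopy green-up.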
NASA Astrophysics Data System (ADS)
Daly, Michael J.; Muhanna, Nidal; Chan, Harley; Wilson, Brian C.; Irish, Jonathan C.; Jaffray, David A.
2014-02-01
A freehand, non-contact diffuse optical tomography (DOT) system has been developed for multimodal imaging with intraoperative cone-beam CT (CBCT) during minimally-invasive cancer surgery. The DOT system is configured for near-infrared fluorescence imaging with indocyanine green (ICG) using a collimated 780 nm laser diode and a near-infrared CCD camera (PCO Pixelfly USB). Depending on the intended surgical application, the camera is coupled to either a rigid 10 mm diameter endoscope (Karl Storz) or a 25 mm focal length lens (Edmund Optics). A prototype flat-panel CBCT C-Arm (Siemens Healthcare) acquires low-dose 3D images with sub-mm spatial resolution. A 3D mesh is extracted from CBCT for finite-element DOT implementation in NIRFAST (Dartmouth College), with the capability for soft/hard imaging priors (e.g., segmented lymph nodes). A stereoscopic optical camera (NDI Polaris) provides real-time 6D localization of reflective spheres mounted to the laser and camera. Camera calibration combined with tracking data is used to estimate intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) lens parameters. Source/detector boundary data is computed from the tracked laser/camera positions using radiometry models. Target registration errors (TRE) between real and projected boundary points are ~1-2 mm for typical acquisition geometries. Pre-clinical studies using tissue phantoms are presented to characterize 3D imaging performance. This translational research system is under investigation for clinical applications in head-and-neck surgery including oral cavity tumour resection, lymph node mapping, and free-flap perforator assessment.
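The reported target registration error is, in the usual definition, the RMS distance between paired real and projected boundary points. A minimal sketch of that computation (assuming plain 3D point lists; not the system's actual code):

```python
import math

def target_registration_error(real_pts, projected_pts):
    """RMS Euclidean distance (same units as the input, e.g. mm)
    between paired real and projected boundary points."""
    if len(real_pts) != len(projected_pts) or not real_pts:
        raise ValueError("point lists must be non-empty and paired")
    sq = 0.0
    for (x1, y1, z1), (x2, y2, z2) in zip(real_pts, projected_pts):
        sq += (x1 - x2) ** 2 + (y1 - y2) ** 2 + (z1 - z2) ** 2
    return math.sqrt(sq / len(real_pts))
```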
NASA Astrophysics Data System (ADS)
Chen, Hai-Wen; McGurr, Mike
2016-05-01
We have developed a new way for detection and tracking of human full-body and body-parts with color (intensity) patch morphological segmentation and adaptive thresholding for security surveillance cameras. An adaptive threshold scheme has been developed for dealing with body size changes, illumination condition changes, and cross camera parameter changes. Tests with the PETS 2009 and 2014 datasets show that we can obtain high probability of detection and low probability of false alarm for full-body. Test results indicate that our human full-body detection method can considerably outperform the current state-of-the-art methods in both detection performance and computational complexity. Furthermore, in this paper, we have developed several methods using color features for detection and tracking of human body-parts (arms, legs, torso, and head, etc.). For example, we have developed a human skin color sub-patch segmentation algorithm by first conducting a RGB to YIQ transformation and then applying a Subtractive I/Q image Fusion with morphological operations. With this method, we can reliably detect and track human skin color related body-parts such as face, neck, arms, and legs. Reliable body-parts (e.g. head) detection allows us to continuously track the individual person even in the case that multiple closely spaced persons are merged. Accordingly, we have developed a new algorithm to split a merged detection blob back to individual detections based on the detected head positions. Detected body-parts also allow us to extract important local constellation features of the body-parts positions and angles related to the full-body. These features are useful for human walking gait pattern recognition and human pose (e.g. standing or falling down) estimation for potential abnormal behavior and accidental event detection, as evidenced with our experimental tests. 
Furthermore, based on the reliable head (face) tracking, we have applied a super-resolution algorithm to enhance the face resolution for improved human face recognition performance.
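The RGB-to-YIQ step above uses the standard NTSC transform; the "Subtractive I/Q image fusion" is paper-specific, so the fusion shown here (I minus Q as a per-pixel skin score) is only a plausible reading of that step, not the authors' exact algorithm:

```python
def rgb_to_yiq(r, g, b):
    """NTSC RGB -> YIQ for channel values in [0, 1]."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    i = 0.596 * r - 0.274 * g - 0.322 * b
    q = 0.211 * r - 0.523 * g + 0.312 * b
    return y, i, q

def skin_score(r, g, b):
    """Hypothetical subtractive I/Q fusion: skin tones sit at strongly
    positive I and small Q, so I - Q tends to be large for skin."""
    _, i, q = rgb_to_yiq(r, g, b)
    return i - q
```

Thresholding such a score, followed by morphological cleanup, yields candidate skin-colored body-part regions.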
Using a smartphone as a tool to measure compensatory and anomalous head positions.
Farah, Michelle de Lima; Santinello, Murillo; Carvalho, Luis Eduardo Morato Rebouças de; Uesugui, Carlos Fumiaki; Barcellos, Ronaldo Boaventura
2018-01-01
To describe a new method for measuring anomalous head positions (AHPs) by using a cell phone. The photo rotation feature of the iPhone® PHOTOS application was used. With the patient seated on a chair, a horizontal stripe was fixed on the wall in the background and a sagittal stripe was fixed on the seat. Photographs were obtained in the following views: front view (photographs A and B; with the head tilted over one shoulder) and upper axial view (photographs C and D; viewing the forehead and nose) (A and C are without camera rotation, and B and D are with camera rotation). A blank sheet of paper with two straight lines making a 32-degree angle was also photographed. Thirty examiners were instructed to measure the rotation required to align the reference points with the orthogonal axes. In order to set benchmarks to be compared with the measurements obtained by the examiners, blue lines were digitally added to the front and upper view photographs. In the photograph of the sheet of paper (p=0.380, α=5%), the observed values did not differ statistically from the known value of 32 degrees. Mean measurements were as follows: front view photograph A, 22.8 ± 2.77; front view B, 21.4 ± 1.61; upper view C, 19.6 ± 2.36; and upper view D, 20.1 ± 2.33 degrees. The mean difference in measurements for the front view photograph A was -1.88 (95% CI -2.88 to -0.88), front view B was -0.37 (95% CI -0.97 to 0.17), upper view C was 1.43 (95% CI 0.55 to 2.24), and upper view D was 1.87 (95% CI 1.02 to 2.77). The method used in this study for measuring AHPs is reproducible, with maximum variations of 2.88 degrees around the X-axis and 2.77 degrees around the Y-axis.
NASA Astrophysics Data System (ADS)
Moon, Sunghwan
2017-06-01
A Compton camera has been introduced for use in single photon emission computed tomography to improve on the low efficiency of a conventional gamma camera. In general, the data acquired by a Compton camera are modeled by the conical Radon transform. Here we consider a conical Radon transform with the vertices on a rotation-symmetric set with respect to a coordinate axis. We show that this conical Radon transform can be decomposed into two transforms: the spherical sectional transform and the weighted fan beam transform. After finding inversion formulas for these two transforms, we provide an inversion formula for the conical Radon transform.
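For reference, the conical Radon transform is commonly written as a surface integral over a cone with vertex u (the Compton-camera detection point), central axis β, and half-opening angle ψ set by the Compton scattering angle. This is the generic unweighted form; the paper's exact weighting and parametrization may differ:

```latex
Cf(\mathbf{u},\boldsymbol{\beta},\psi)
  = \int_{\{\mathbf{x}\,:\,(\mathbf{x}-\mathbf{u})\cdot\boldsymbol{\beta}
      \,=\,|\mathbf{x}-\mathbf{u}|\cos\psi\}} f(\mathbf{x})\,\mathrm{d}S(\mathbf{x}),
\qquad \mathbf{u}\in\mathbb{R}^3,\ |\boldsymbol{\beta}|=1,\ \psi\in(0,\pi/2).
```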
Control system for several rotating mirror camera synchronization operation
NASA Astrophysics Data System (ADS)
Liu, Ningwen; Wu, Yunfeng; Tan, Xianxiang; Lai, Guoji
1997-05-01
This paper introduces a single-chip microcomputer control system for the synchronized operation of several rotating mirror high-speed cameras. The system consists of four parts: the microcomputer control unit (including the synchronization, precise measurement and time delay parts), the shutter control unit, the motor driving unit and the high voltage pulse generator unit. The control system has been used to control the synchronized operation of GSI cameras (driven by a motor) and FJZ-250 rotating mirror cameras (driven by a gas-driven turbine). We have obtained films of the same object from different directions at the same or different speeds.
Development of a Compact & Easy-to-Use 3-D Camera for High Speed Turbulent Flow Fields
2013-12-05
…resolved. Also, in the case of a single camera system, the use of an aperture greatly reduces the amount of collected light. The combination of these… …a study on wall-bounded turbulence [Sheng_2006]. Nevertheless, these techniques are limited to small measurement volumes, while maintaining a high… …It has also been adapted to kHz rates using high-speed cameras for aeroacoustic studies (see Violato et al. [17, 18]). Tomo-PIV, however, has some…
Radiation-Triggered Surveillance for UF6 Monitoring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Curtis, Michael M.
2015-12-01
This paper recommends the use of radiation detectors, singly or in sets, to trigger surveillance cameras. Ideally, the cameras will monitor cylinders transiting the process area as well as the process area itself. The general process area will be surveyed to record how many cylinders have been attached to and detached from the process between inspections. Rad-triggered cameras can dramatically reduce the quantity of recorded images, because the movement of personnel and equipment not involving UF6 cylinders will not generate a surveillance review file.
Video sensor with range measurement capability
NASA Technical Reports Server (NTRS)
Howard, Richard T. (Inventor); Briscoe, Jeri M. (Inventor); Corder, Eric L. (Inventor); Broderick, David J. (Inventor)
2008-01-01
A video sensor device is provided which incorporates a rangefinder function. The device includes a single video camera and a fixed laser spaced a predetermined distance from the camera for, when activated, producing a laser beam. A diffractive optic element divides the beam so that multiple light spots are produced on a target object. A processor calculates the range to the object based on the known spacing and angles determined from the light spots on the video images produced by the camera.
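The range computation from "known spacing and angles" is plain triangulation. A minimal sketch for the simplest geometry, a laser beam parallel to the optical axis offset by the baseline; the patent's multi-spot diffractive geometry is more general:

```python
import math

def range_from_spot(baseline_m, spot_angle_rad):
    """Triangulated range to the target, assuming the laser beam runs
    parallel to the camera's optical axis at offset `baseline_m`, and
    `spot_angle_rad` is the angle between the optical axis and the ray
    to the detected light spot (derived from its pixel position and
    the camera's focal length). Illustrative geometry only."""
    if spot_angle_rad <= 0:
        raise ValueError("spot must appear off-axis")
    return baseline_m / math.tan(spot_angle_rad)
```

With multiple diffracted spots, each spot gives an independent range sample, which the processor can average or use to estimate surface tilt.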
Multi-Angle Snowflake Camera Instrument Handbook
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stuefer, Martin; Bailey, J.
2016-07-01
The Multi-Angle Snowflake Camera (MASC) takes 9- to 37-micron resolution stereographic photographs of free-falling hydrometeors from three angles, while simultaneously measuring their fall speed. Information about hydrometeor size, shape, orientation, and aspect ratio is derived from MASC photographs. The instrument consists of three commercial cameras separated by angles of 36º. Each camera field of view is aligned to have a common single focus point about 10 cm distant from the cameras. Two near-infrared emitter pairs are aligned with the cameras' fields of view within a 10º angular ring and detect hydrometeor passage, with the lower emitters configured to trigger the MASC cameras. The sensitive IR motion sensors are designed to filter out slow variations in ambient light. Fall speed is derived from successive triggers along the fall path. The camera exposure times are extremely short, in the range of 1/25,000th of a second, enabling the MASC to capture snowflake sizes ranging from 30 micrometers to 3 cm.
The Work and Family Responsibilities of Black Women Single Parents. Working Paper No. 148.
ERIC Educational Resources Information Center
Malson, Michelene R.; Woody, Bette
One aspect of the general rise in the number of single parent households is the high proportion of them that are headed by black women. Black families headed by women tend to be larger and are more likely to be impoverished. Contrary to popular belief, many black single mothers considered poor are employed women, not recipients of welfare. An…
OSIRIS-REx Asteroid Sample Return Mission Image Analysis
NASA Astrophysics Data System (ADS)
Chevres Fernandez, Lee Roger; Bos, Brent
2018-01-01
NASA’s Origins Spectral Interpretation Resource Identification Security-Regolith Explorer (OSIRIS-REx) mission constitutes the “first-of-its-kind” project to thoroughly characterize a near-Earth asteroid. The selected asteroid is (101955) 1999 RQ36 (a.k.a. Bennu). The mission launched in September 2016, and the spacecraft will reach its asteroid target in 2018 and return a sample to Earth in 2023. The spacecraft that will travel to, and collect a sample from, Bennu has five integrated instruments from national and international partners. NASA's OSIRIS-REx asteroid sample return mission spacecraft includes the Touch-And-Go Camera System (TAGCAMS) three camera-head instrument. The purpose of TAGCAMS is to provide imagery during the mission to facilitate navigation to the target asteroid, confirm acquisition of the asteroid sample and document asteroid sample stowage. Two of the TAGCAMS cameras, NavCam 1 and NavCam 2, serve as fully redundant navigation cameras to support optical navigation and natural feature tracking. The third TAGCAMS camera, StowCam, provides imagery to assist with and confirm proper stowage of the asteroid sample. Analysis of spacecraft imagery acquired by the TAGCAMS during cruise to the target asteroid Bennu was performed using custom codes developed in MATLAB. Assessment of the TAGCAMS in-flight performance using flight imagery was done to characterize camera performance. One specific area of investigation that was targeted was bad pixel mapping. A recent phase of the mission, known as the Earth Gravity Assist (EGA) maneuver, provided images that were used for the detection and confirmation of “questionable” pixels, possibly under-responsive, using image segmentation analysis. Ongoing work on point spread function morphology and camera linearity and responsivity will also be used for calibration purposes and further analysis in preparation for proximity operations around Bennu.
These analyses will provide a broader understanding of the camera system's functionality, which will in turn aid the descent to the asteroid by enabling selection of a suitable landing and sampling location.
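Bad-pixel mapping of the kind described is often done by flagging pixels that deviate strongly from the median of their neighbours in a nominally smooth scene. The sketch below is a generic stand-in for the mission's MATLAB segmentation analysis, not its actual code:

```python
def bad_pixel_map(img, thresh):
    """Flag interior pixels whose value deviates from the median of
    their 8 neighbours by more than `thresh`. `img` is a rectangular
    list of rows of pixel values; returns a set of (row, col) tuples."""
    h, w = len(img), len(img[0])
    bad = set()
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nbrs = sorted(img[y + dy][x + dx]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if (dy, dx) != (0, 0))
            med = 0.5 * (nbrs[3] + nbrs[4])  # median of the 8 neighbours
            if abs(img[y][x] - med) > thresh:
                bad.add((y, x))
    return bad
```

Repeating this over many frames and intersecting the flagged sets helps separate genuinely defective pixels from one-off hits such as cosmic rays.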
NASA Astrophysics Data System (ADS)
Carlà, Marcello; Orlando, Antonio
2018-07-01
This paper describes the implementation of an axisymmetric drop shape apparatus for measuring the surface or interfacial tension of a hanging liquid drop, using only inexpensive resources: a common web camera and a single-board microcomputer. The mechanical frame of the apparatus is built from stubs of commonly available aluminium bar, with all other mechanical parts manufactured with an amateur 3D printer. All of the required software, whether for handling the camera and taking the images or for processing the drop images to extract the drop profile and fit it with the Bashforth and Adams equation, is freely available under an open source license. Despite the very limited cost of the whole setup, extensive testing has demonstrated an overall accuracy of ±0.2% or better.
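Fitting the drop profile with the Bashforth and Adams equation means integrating the dimensionless Young-Laplace system for an axisymmetric drop and adjusting the Bond number until the computed profile matches the image. A minimal Euler-integration sketch of the profile step only (the published software uses its own solver and fitting loop):

```python
import math

def pendant_profile(beta, ds=1e-3, s_max=3.0):
    """Integrate the dimensionless Bashforth-Adams equations for an
    axisymmetric drop (apex at the origin, lengths scaled by the apex
    radius of curvature):
        dx/ds   = cos(phi)
        dz/ds   = sin(phi)
        dphi/ds = 2 + beta*z - sin(phi)/x
    where s is arc length, phi the turning angle and beta the Bond
    number. Returns the (x, z) profile points."""
    x, z, phi, s = 0.0, 0.0, 0.0, 0.0
    pts = [(x, z)]
    while s < s_max:
        if x < 1e-9:
            dphi = 1.0  # apex limit: sin(phi)/x -> dphi/ds, so dphi/ds = 1
        else:
            dphi = 2.0 + beta * z - math.sin(phi) / x
        x += math.cos(phi) * ds
        z += math.sin(phi) * ds
        phi += dphi * ds
        s += ds
        pts.append((x, z))
    return pts
```

A handy sanity check: with beta = 0 the profile reduces to an arc of a unit sphere.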
Performance evaluation of a two detector camera for real-time video.
Lochocki, Benjamin; Gambín-Regadera, Adrián; Artal, Pablo
2016-12-20
Single pixel imaging can be the preferred method over traditional 2D-array imaging in spectral ranges where conventional cameras are not available. However, when it comes to real-time video imaging, single pixel imaging cannot compete with the framerates of conventional cameras, especially when high-resolution images are desired. Here we evaluate the performance of an imaging approach using two detectors simultaneously. First, we present theoretical results on how low SNR affects final image quality, followed by experimentally determined results. Obtained video framerates were doubled compared to state-of-the-art systems, resulting in framerates from 22 Hz at 32×32 resolution down to 0.75 Hz at 128×128 resolution. Additionally, the two-detector imaging technique enables the acquisition of images with a resolution of 256×256 in less than 3 s.
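The steep framerate drop with resolution follows because a single-pixel (or two-detector) camera needs one structured-pattern measurement per basis element, i.e. N² measurements for an N×N image. A toy Hadamard-basis measurement and reconstruction sketch illustrating generic single-pixel imaging, not the authors' system:

```python
def hadamard(n):
    """Sylvester-construction Hadamard matrix; n must be a power of two."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +
             [row + [-v for v in row] for row in H])
    return H

def single_pixel_measure(H, scene):
    """One bucket-detector reading per pattern: y = H @ scene,
    where `scene` is the flattened image."""
    return [sum(h * s for h, s in zip(row, scene)) for row in H]

def reconstruct(H, y):
    """Recover the scene using H^T H = n I for Hadamard matrices."""
    n = len(H)
    return [sum(H[k][i] * y[k] for k in range(n)) / n for i in range(n)]
```

Each doubling of linear resolution quadruples the measurement count, so at a fixed pattern-display rate the framerate falls by roughly 4x; measuring with two detectors (pattern and its complement) halves the acquisition time.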
Spatial and Angular Resolution Enhancement of Light Fields Using Convolutional Neural Networks
NASA Astrophysics Data System (ADS)
Gul, M. Shahzeb Khan; Gunturk, Bahadir K.
2018-05-01
Light field imaging extends the traditional photography by capturing both spatial and angular distribution of light, which enables new capabilities, including post-capture refocusing, post-capture aperture control, and depth estimation from a single shot. Micro-lens array (MLA) based light field cameras offer a cost-effective approach to capture light field. A major drawback of MLA based light field cameras is low spatial resolution, which is due to the fact that a single image sensor is shared to capture both spatial and angular information. In this paper, we present a learning based light field enhancement approach. Both spatial and angular resolution of captured light field is enhanced using convolutional neural networks. The proposed method is tested with real light field data captured with a Lytro light field camera, clearly demonstrating spatial and angular resolution improvement.
Compact fluorescence and white-light imaging system for intraoperative visualization of nerves
NASA Astrophysics Data System (ADS)
Gray, Dan; Kim, Evgenia; Cotero, Victoria; Staudinger, Paul; Yazdanfar, Siavash; tan Hehir, Cristina
2012-02-01
Fluorescence image guided surgery (FIGS) allows intraoperative visualization of critical structures, with applications spanning neurology, cardiology and oncology. An unmet clinical need is prevention of iatrogenic nerve damage, a major cause of post-surgical morbidity. Here we describe the advancement of FIGS imaging hardware, coupled with a custom nerve-labeling fluorophore (GE3082), to bring FIGS nerve imaging closer to clinical translation. The instrument is comprised of a 405nm laser and a white light LED source for excitation and illumination. A single 90 gram color CCD camera is coupled to a 10mm surgical laparoscope for image acquisition. Synchronization of the light source and camera allows for simultaneous visualization of reflected white light and fluorescence using only a single camera. The imaging hardware and contrast agent were evaluated in rats during in situ surgical procedures.
Quantifying external focus of attention in sailing by means of action sport cameras.
Pluijms, Joost P; Cañal-Bruland, Rouwen; Hoozemans, Marco J M; Van Beek, Morris W; Böcker, Kaj; Savelsbergh, Geert J P
2016-08-01
The aim of the current study was twofold: (1) to validate the use of action sport cameras for quantifying focus of visual attention in sailing and (2) to apply this method to examine whether an external focus of attention is associated with better performance in upwind sailing. To test the validity of this novel quantification method, we first calculated the agreement between gaze location measures and head orientation measures in 13 sailors sailing upwind during training regattas using a head mounted eye tracker. The results confirmed that for measuring visual focus of attention in upwind sailing, the agreement for the two measures was high (intraclass correlation coefficient (ICC) = 0.97) and the 95% limits of agreement were acceptable (between -8.0% and 14.6%). In a next step, we quantified the focus of visual attention in sailing upwind as fast as possible by means of an action sport camera. We captured sailing performance, operationalised as boat speed in the direction of the wind, and environmental conditions using a GPS, compass and wind meter. Four trials, each lasting 1 min, were analysed for each of 15 sailors, resulting in a total of 30 upwind speed trials on port tack and 30 upwind speed trials on starboard tack. The results revealed that in sailing - within constantly changing environments - the focus of attention is not a significant predictor of better upwind sailing performance. This implies that neither an external nor an internal focus of attention was per se correlated with better performance. Rather, relatively large interindividual differences seem to indicate that different visual attention strategies can lead to similar performance outcomes.
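The quoted "95% limits of agreement" are Bland-Altman limits: the mean difference between the two paired measures plus or minus 1.96 times the standard deviation of the differences. A minimal sketch:

```python
import math

def limits_of_agreement(a, b):
    """Bland-Altman 95% limits of agreement between two paired
    measurement methods: mean difference +/- 1.96 * SD of differences."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean - 1.96 * sd, mean + 1.96 * sd
```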
Advantages of semiconductor CZT for medical imaging
NASA Astrophysics Data System (ADS)
Wagenaar, Douglas J.; Parnham, Kevin; Sundal, Bjorn; Maehlum, Gunnar; Chowdhury, Samir; Meier, Dirk; Vandehei, Thor; Szawlowski, Marek; Patt, Bradley E.
2007-09-01
Cadmium zinc telluride (CdZnTe, or CZT) is a room-temperature semiconductor radiation detector that has been developed in recent years for a variety of applications. CZT has been investigated for many potential uses in medical imaging, especially in the field of single photon emission computed tomography (SPECT). CZT can also be used in positron emission tomography (PET) as well as photon-counting and integration-mode x-ray radiography and computed tomography (CT). The principal advantages of CZT are 1) direct conversion of x-ray or gamma-ray energy into electron-hole pairs; 2) energy resolution; 3) high spatial resolution and hence high space-bandwidth product; 4) room temperature operation, stable performance, high density, and small volume; 5) depth-of-interaction (DOI) available through signal processing. These advantages will be described in detail with examples from our own CZT systems. The ability to operate at room temperature, combined with DOI and very small pixels, make the use of multiple, stationary CZT "mini-gamma cameras" a realistic alternative to today's large Anger-type cameras that require motion to obtain tomographic sampling. The compatibility of CZT with Magnetic Resonance Imaging (MRI)-fields is demonstrated for a new type of multi-modality medical imaging, namely SPECT/MRI. For pre-clinical (i.e., laboratory animal) imaging, the advantages of CZT lie in spatial and energy resolution, small volume, automated quality control, and the potential for DOI for parallax removal in pinhole imaging. For clinical imaging, the imaging of radiographically dense breasts with CZT enables scatter rejection and hence improved contrast. Examples of clinical breast images with a dual-head CZT system are shown.
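The "direct conversion" and energy-resolution advantages can be made concrete: each absorbed gamma creates roughly E/ε electron-hole pairs, and counting statistics set a Fano-limited resolution floor. The ε and Fano values below are typical literature figures for CZT, stated here as assumptions rather than taken from this paper:

```python
import math

EPSILON_EV = 4.6   # approx. e-h pair creation energy in CZT (assumed)
FANO = 0.1         # approx. Fano factor for CZT (assumed)

def eh_pairs(energy_ev):
    """Mean number of electron-hole pairs from one absorbed photon."""
    return energy_ev / EPSILON_EV

def fano_limited_fwhm_ev(energy_ev):
    """Statistical (Fano-limited) energy-resolution floor, FWHM in eV.
    Real CZT detectors are further broadened by charge trapping and
    electronic noise, so measured resolution is worse than this."""
    return 2.355 * math.sqrt(FANO * EPSILON_EV * energy_ev)
```

At the 140.5 keV Tc-99m line this floor is well under 1 keV, which is why the practical resolution of CZT SPECT detectors is dominated by charge collection and readout rather than by pair statistics.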
Astronomy education through hands-on photography workshops
NASA Astrophysics Data System (ADS)
Schofield, I.; Connors, M. G.; Holmberg, R.
2013-12-01
Athabasca University (AU), the Athabasca University Geophysical and Geo-Space Observatories (AUGO/AUGSO), the Rotary Club of Athabasca and Science Outreach Athabasca have designed a three-day science workshop entitled Photography and the Night Sky. This pilot workshop, aimed primarily at high-school-aged students, serves as an introduction to observational astronomy as seen in the western Canadian night sky, using digital astrophotography without a telescope or tracking mount. Participants learn the layout of the night sky by photographing it using digital single lens reflex (DSLR) camera kits including telephoto and wide-angle lenses, tripod and cable release. The kits are assembled from entry-level consumer-grade camera gear so as to be affordable to participants who wish to purchase their own equipment after the workshop. Basic digital photo editing is covered using free photo editing software (IrfanView). Students are given an overview of observational astronomy using interactive planetarium software (Stellarium) before heading outdoors to shoot the night sky. Photography is conducted at AU's auroral observatories, both of which possess dark open sky that is ideal for night sky viewing. If space weather conditions are favorable, there are opportunities to photograph the aurora borealis and then compare results with imagery generated by the all-sky auroral imagers located at the Geo-Space observatory. The aim of this program is to develop awareness of the science and beauty of the night sky, while promoting photography as a rewarding, lifelong hobby. Moreover, emphasis is placed on western Canada's unique subauroral location, which makes aurora watching highly accessible and rewarding in 2013, the maximum of the current solar cycle.
Extended spectrum SWIR camera with user-accessible Dewar
NASA Astrophysics Data System (ADS)
Benapfl, Brendan; Miller, John Lester; Vemuri, Hari; Grein, Christoph; Sivananthan, Siva
2017-02-01
Episensors has developed a series of extended short-wavelength infrared (eSWIR) cameras based on high-Cd-concentration Hg1-xCdxTe absorbers. The cameras have a bandpass extending to a 3-micron cutoff wavelength, opening new applications relative to traditional InGaAs-based cameras. Applications and uses are discussed and examples given. A liquid-nitrogen pour-filled version was initially developed. This was followed by a compact Stirling-cooled version with detectors operating at 200 K. Each camera has unique sensitivity and performance characteristics. The cameras' size, weight, and power specifications are presented along with images captured with bandpass filters and eSWIR sources to demonstrate spectral response beyond 1.7 microns. The soft-seal Dewars of the cameras are designed for accessibility and can be opened and modified in a standard laboratory environment. This modular approach allows user flexibility for swapping internal components such as cold filters and cold stops. The core electronics of the Stirling-cooled camera are based on a single commercial field-programmable gate array (FPGA) that performs on-board non-uniformity correction and bad-pixel replacement and directly drives any standard HDMI display.
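The on-board non-uniformity correction mentioned above is commonly implemented as a two-point (gain/offset) calibration from flat-field frames; the sketch below shows the generic technique, not Episensors' actual FPGA implementation, and all pixel values are invented:

```python
# Illustrative two-point non-uniformity correction (NUC): per-pixel gain and
# offset are derived from responses to two uniform illumination levels, so
# that every pixel maps the same flux to the same output.

def two_point_nuc(raw, low, high, target_low=0.0, target_high=1.0):
    corrected = []
    for r, lo, hi in zip(raw, low, high):
        gain = (target_high - target_low) / (hi - lo)   # per-pixel gain
        corrected.append(target_low + gain * (r - lo))  # per-pixel offset removal
    return corrected

low_frame  = [10.0, 12.0, 9.0]     # pixel responses to a uniform low flux
high_frame = [110.0, 132.0, 99.0]  # pixel responses to a uniform high flux
raw        = [60.0, 72.0, 54.0]    # a scene frame at half flux
result = two_point_nuc(raw, low_frame, high_frame)  # each pixel maps to ~0.5
```

Despite very different raw gains, all three pixels correct to the same value, which is the point of the technique.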
Data-Acquisition Software for PSP/TSP Wind-Tunnel Cameras
NASA Technical Reports Server (NTRS)
Amer, Tahani R.; Goad, William K.
2005-01-01
Wing-Viewer is a computer program for acquisition and reduction of image data acquired by any of five different scientific-grade commercial electronic cameras used at Langley Research Center to observe wind-tunnel models coated with pressure- or temperature-sensitive paints (PSP/TSP). Wing-Viewer provides full automation of camera operation and acquisition of image data, and has limited data-preprocessing capability for quick viewing of the results of PSP/TSP test images. Wing-Viewer satisfies a requirement for a standard interface between all the cameras and a single personal computer. Written using Microsoft Visual C++ and the Microsoft Foundation Class Library as a framework, Wing-Viewer has the ability to communicate with the C/C++ software libraries that run on the controller circuit cards of all five cameras.
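A standard interface over several vendor-specific camera libraries, as Wing-Viewer provides, is essentially a facade over a common abstract API. A minimal sketch in Python (the actual program is C++/MFC; every class and method name here is invented for illustration):

```python
# Hypothetical sketch of a uniform camera abstraction: each vendor-specific
# driver implements the same interface, so acquisition code is camera-agnostic.
from abc import ABC, abstractmethod

class Camera(ABC):
    """Common facade over vendor-specific camera control libraries."""

    @abstractmethod
    def set_exposure(self, milliseconds: float) -> None: ...

    @abstractmethod
    def acquire_frame(self) -> bytes: ...

class MockCamera(Camera):
    """Stand-in for one vendor's driver, for demonstration only."""
    def __init__(self) -> None:
        self.exposure_ms = 0.0

    def set_exposure(self, milliseconds: float) -> None:
        self.exposure_ms = milliseconds

    def acquire_frame(self) -> bytes:
        return b"\x00" * 16  # placeholder image data

# Application code sees only the Camera interface:
cam: Camera = MockCamera()
cam.set_exposure(33.0)
frame = cam.acquire_frame()
```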
Software for Acquiring Image Data for PIV
NASA Technical Reports Server (NTRS)
Wernet, Mark P.; Cheung, H. M.; Kressler, Brian
2003-01-01
PIV Acquisition (PIVACQ) is a computer program for acquisition of data for particle-image velocimetry (PIV). In the PIV system for which PIVACQ was developed, small particles entrained in a flow are illuminated with a sheet of light from a pulsed laser. The illuminated region is monitored by a charge-coupled-device camera that operates in conjunction with a data-acquisition system that includes a frame grabber and a counter/timer board, both installed in a single computer. The camera operates in "frame-straddle" mode, in which a pair of images can be obtained closely spaced in time (on the order of microseconds). The frame grabber acquires image data from the camera and stores the data in the computer memory. The counter/timer board triggers the camera and synchronizes the pulsing of the laser with acquisition of data from the camera. PIVACQ coordinates all of these functions and provides a graphical user interface through which the user can control the PIV data-acquisition system. PIVACQ enables the user to acquire a sequence of single-exposure images, display the images, process the images, and then save the images to the computer hard drive. PIVACQ works in conjunction with the PIVPROC program, which processes the images of particles into the velocity field in the illuminated plane.
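The frame-straddle timing described above works by firing one laser pulse late in the first frame's exposure and the second pulse just after the second frame opens, so the two images are separated by far less than a frame period. A small sketch of that timing arithmetic (all numbers and names are illustrative, not PIVACQ's actual parameters):

```python
# Illustrative frame-straddle timing: place the two laser pulses symmetrically
# around the boundary between consecutive camera frames, so the inter-pulse
# separation dt can be microseconds even though frames are milliseconds apart.

def straddle_pulse_times_us(frame_end_us, dt_us):
    """Return (pulse1, pulse2) times in microseconds around the frame boundary."""
    pulse1 = frame_end_us - dt_us / 2.0  # late in frame A's exposure
    pulse2 = frame_end_us + dt_us / 2.0  # early in frame B's exposure
    return pulse1, pulse2

# A 30 fps camera has a ~33 ms frame period; a 10 us pulse separation
# straddles the boundary between two frames:
p1, p2 = straddle_pulse_times_us(frame_end_us=33000.0, dt_us=10.0)
```

Each image thus records one pulse, and the particle displacement between the two images corresponds to a 10-microsecond interval.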
Photon collider: a four-channel autoguider solution
NASA Astrophysics Data System (ADS)
Hygelund, John C.; Haynes, Rachel; Burleson, Ben; Fulton, Benjamin J.
2010-07-01
The "Photon Collider" uses a compact array of four off-axis autoguider cameras positioned with independent filtering and focus. The photon collider is two-way symmetric and robustly mounted, with the off-axis light crossing the science field, which allows the compact single-frame construction to have extremely small relative deflections between guide and science CCDs. The photon collider provides four independent guiding signals with a total of 15 square arcminutes of sky coverage. These signals allow for simultaneous altitude, azimuth, field-rotation, and focus guiding. Guide cameras read out without exposure overhead, increasing the tracking cadence. The independent focus allows the photon collider to maintain in-focus guide stars when the main science camera is taking defocused exposures, as well as to track telescope focus changes. Independent filters allow autoguiding in the science camera's wavelength bandpass. The four cameras are controlled with a custom web-services interface from a single Linux-based industrial PC, and the autoguider mechanism and telemetry are built around a uClinux-based Analog Devices Blackfin embedded microprocessor. Off-axis light is corrected with a custom meniscus correcting lens. Guide CCDs are cooled with ethylene glycol, with an advanced leak-detection system. The photon collider was built for use on Las Cumbres Observatory's 2-meter Faulkes telescopes and is currently used to guide the alt-az mount.
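With several guide stars spread across the field, a common pointing drift and a field rotation can be separated by a small-angle least-squares fit; the sketch below shows that generic geometry, not the instrument's actual control algorithm, and all coordinates are invented:

```python
# Small-angle model: each guide star at field position (x, y) shows offset
# (dx, dy) = (tx, ty) + theta * (-y, x). Multiple off-axis guiders let us
# solve for the common translation and the field rotation separately.

def solve_drift_and_rotation(positions, offsets):
    """positions: guide-star (x, y) in field units; offsets: measured (dx, dy).
    Returns (tx, ty, theta_rad): common drift plus field rotation."""
    n = len(positions)
    tx = sum(d[0] for d in offsets) / n          # mean offset = translation
    ty = sum(d[1] for d in offsets) / n
    cx = sum(p[0] for p in positions) / n        # field centroid
    cy = sum(p[1] for p in positions) / n
    num = den = 0.0
    for (x, y), (dx, dy) in zip(positions, offsets):
        rx, ry = x - cx, y - cy
        num += rx * (dy - ty) - ry * (dx - tx)   # cross term senses rotation
        den += rx * rx + ry * ry
    theta = num / den if den else 0.0
    return tx, ty, theta

# Four guide stars, a 0.001 rad rotation plus a (0.5, 0.2) drift:
pos = [(1, 0), (0, 1), (-1, 0), (0, -1)]
off = [(0.5, 0.201), (0.499, 0.2), (0.5, 0.199), (0.501, 0.2)]
tx, ty, theta = solve_drift_and_rotation(pos, off)
```

A single on-axis guider cannot make this separation, which is one reason multiple off-axis guide signals enable simultaneous rotation guiding.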
An efficient multiple exposure image fusion in JPEG domain
NASA Astrophysics Data System (ADS)
Hebbalaguppe, Ramya; Kakarala, Ramakrishna
2012-01-01
In this paper, we describe a method to fuse multiple images taken with varying exposure times in the JPEG domain. The proposed algorithm finds application in HDR image acquisition and image stabilization for hand-held devices such as mobile phones, music players with cameras, and digital cameras. Image acquisition in low light typically results in blurry and noisy images for hand-held cameras. Altering camera settings such as ISO sensitivity, exposure time, and aperture for low-light image capture results in noise amplification, motion blur, and reduction of depth of field, respectively. The purpose of fusing multiple exposures is to combine the sharp details of the shorter-exposure images with the high signal-to-noise ratio (SNR) of the longer-exposure images. The algorithm requires only a single pass over all images, making it efficient. It comprises sigmoidal boosting of shorter-exposed images, image fusion, artifact removal, and saturation detection. The algorithm needs no more memory than a single JPEG macroblock, making it feasible to implement as part of a digital camera's hardware image-processing engine. The artifact-removal step reuses JPEG's built-in frequency analysis and hence benefits from the considerable optimization and design experience available for JPEG.
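Two of the pipeline's steps, sigmoidal boosting of the short exposure and per-pixel weighted fusion, can be sketched as follows; the particular curve and weighting below are invented for illustration and are not the authors' exact formulas:

```python
import math

# Illustrative sketch: boost the dark short exposure with an S-curve, then
# blend per pixel, distrusting the long exposure where it nears saturation.
# gain, mid, and sat are made-up parameters, not those from the paper.

def sigmoid_boost(v, gain=10.0, mid=0.5):
    """Map a normalized [0, 1] pixel value through an S-curve to lift mid-tones."""
    return 1.0 / (1.0 + math.exp(-gain * (v - mid)))

def fuse_pixel(short_v, long_v, sat=0.95):
    """Weighted blend: prefer the high-SNR long exposure except near saturation."""
    boosted = sigmoid_boost(short_v)
    w_long = 0.0 if long_v >= sat else 1.0 - long_v  # distrust bright pixels
    w_short = 1.0 - w_long
    return w_short * boosted + w_long * long_v
```

Where the long exposure saturates, the fused pixel falls back entirely on the boosted short exposure, preserving highlight detail.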
Embedded Augmented Reality Training System for Dynamic Human-Robot Cooperation
2009-10-01
optical see-through (OST) head-mounted displays (HMDs) still lack in usability and ergonomics because of their size, weight, resolution, and the hard-to-realize...with addressable focal planes [10], for example. Accurate and easy-to-use calibration routines for OST HMDs remain a challenging task; established...methods are based on matching of virtual over real objects [11]; newer approaches use cameras looking directly through the HMD optics to exploit both